efficiency problem - HKG's lower than target genes - (Dec/06/2005)
This question is for Aimikins, although all answers/suggestions are welcome!!
FYI: I am new (3 months) to RT-PCR.
I read in one of your previous posts (unfortunately I can't seem to find it again), that you spent a lot of time and effort on optimizing your system so that your efficiencies finally fell within 2.5% of each other. What was your systematic approach to reaching this level?
I ask because I first tried optimizing by varying primer and probe concentrations and selecting those that gave me the lowest Ct's and highest delta Rn's. Once that selection was made, I made serial dilutions of my template and analyzed the efficiency - only to find that my efficiencies were worse than when I started!! Would it be better to try all the different primer/probe concentrations on all the serial dilutions? (hope that makes sense)
Also, you mentioned that once you obtained your good efficiencies, you got repeatable results. I will be doing RT-PCR on human liver tissue from donors, so I will not be working with something as consistent as a single cell system. I am worried that, since my gene expression (possibly even of my HKG) can vary based on environmental chemical exposure or drugs, I will need to go through the efficiency testing every time. Is that true? Would it be better in my case to use the standard curve method?
Thank you so much!
I'm not sure if I can answer your last question, but I can tell you two important things that might help you come to a decision: from what I have seen, the two main factors in efficiency are primer concentration (the leader by a large margin) and then template concentration.
more template (lower Ct) is like more primer... not necessarily better, just more... and too much template can definitely trash your efficiency. I don't worry about minimizing Ct; I just worry about the efficiency level
What I did to get my efficiencies tight: I ran a plate with varying dilutions of template and primer, but not nearly as many as they tell you to on ABI's website (after all, our lab is po' and I was hoping to get lucky). I did not mess around with more of the forward and less of the reverse primer, and all that crap... this is essentially how I set it up, and please note that I stabbed in the dark with my initial concentrations of template with my housekeeper until I got a range that gave me good amplification, then compared the others:
I ran a range of template dilutions (everyone quantifies this differently, but mine are based on ng total RNA added to the RT reaction to make cDNA) from approximately 1 ng up to 30 ng, with at least 5 or 6 dilutions for each primer set. I found, with my system, that my primers are all good if I add between 10 and 20 ng by the RNA calculation (are you with me? this is a PIA to follow). If I am outside that range, I see dimers, poor amplification, irreproducible results, etc. Some primer pairs are efficient over a wider range, but they are all good within that window, so I adjust all my templates to be within that range every time I do a qPCR run. When I am adding a new primer pair, I run about 5 different primer concentrations (I use 200 nM each as a 'start', like you would probably do with regular PCR, and dilute up or down from about 0.25X to about 2X). I take the primer concentration that yields good efficiency within that magic 10-20 ng of template.
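For anyone setting up a plate like the one described above, the layout is just a handful of primer concentrations crossed with a handful of template inputs. A rough sketch (all concentrations and ng values here are illustrative assumptions, not the poster's exact protocol):

```python
# Rough sketch of the optimization plate: primer concentrations from 0.25X to
# 2X of a 200 nM starting point, crossed with template inputs (ng RNA into
# the RT reaction). Numbers are illustrative, not a prescription.
base_primer_nM = 200
primer_scales = [0.25, 0.5, 1.0, 1.5, 2.0]
template_ng = [1, 3, 10, 15, 20, 30]

grid = [(base_primer_nM * scale, ng)
        for scale in primer_scales
        for ng in template_ng]

for primer_nM, ng in grid[:3]:
    print(f"{primer_nM:.0f} nM primers, {ng} ng template")
print(f"{len(grid)} conditions before replicates")
```

Even this pared-down grid is 30 wells before replicates, which is why it helps to nail down a working template range with the housekeeper first and then only scan primer concentrations for each new pair.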
Also, my mRNA levels rarely change by more than a factor of 10 or so. If the genes you look at will vary more than that, you might have to work harder to find the 'magic template dilution' at which your primers always give good efficiency.
am I making any sense, or did I just make it worse?
Thanks so much for your reply! I think it all makes sense, but I would like clarification on how you determine efficiencies. From what I have seen so far, it seems that efficiency can be determined in 2 ways (please correct me if I'm wrong!):
1. Within a single sample, using the log-linear phase of the PCR amplification curve or
2. Using multiple samples diluted over a range of 5-6 logs.
I have been using the latter method to calculate my efficiencies. That means I test, for example, a range from 0.2 ng to 100 ng cDNA (also based on starting ng RNA) using 300 nM primers and 250 nM probe. Then, I test it again using 900 nM primers and 250 nM probe. I calculate efficiency by plotting Ct vs. log ng cDNA and obtaining the slope. I calculate relative efficiency by plotting delta Ct (Ct of target - Ct of HKG) vs. log ng cDNA.
If I am calculating efficiency this way, then varying the amount of template isn't really helpful is it? Because each point on my efficiency curve has a different amount of template by definition. What is your opinion on the method I'm using?
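For concreteness, the slope-based calculation in method 2 can be sketched as follows: fit Ct against log10(input) and convert the slope with E = 10^(-1/slope) - 1, where a slope of about -3.32 corresponds to ~100% efficiency (perfect doubling). The dilution series and Ct values below are hypothetical, not real data:

```python
import math

def efficiency_from_slope(slope):
    """Convert the slope of a Ct vs. log10(input) standard curve to
    amplification efficiency; -3.32 corresponds to ~100% (perfect doubling)."""
    return 10 ** (-1.0 / slope) - 1.0

def fit_slope(xs, ys):
    """Ordinary least-squares slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

ng = [0.2, 1, 5, 25, 100]            # hypothetical dilution series (ng RNA)
ct = [30.1, 27.8, 25.5, 23.2, 21.2]  # hypothetical Ct values
m = fit_slope([math.log10(x) for x in ng], ct)
print(f"slope {m:.2f}, efficiency {efficiency_from_slope(m) * 100:.0f}%")
# slope -3.30, efficiency 101%
```

For the relative-efficiency check (the delta Ct vs. log input plot), ABI's User Bulletin #2 guideline is that the slope of that line should stay below about 0.1 for the ddCt method to be considered valid.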
I don't know. I would be looking at a lot of bad amplification if I looked at my samples over such a large range.
I got ABI's "User Bulletin #2" and followed their little Raji DNA example; the values I use I found empirically.
I use multiple samples (your method 2), but I do not dilute them across as large a range as you do. I do probably 8-10 dilutions in that smaller range to plot my efficiencies.
Ah, maybe that is my problem - my range is too large! I will try again with a smaller range.
Thanks for your help!
Ok, I have tried again with a smaller range (1 to 30 ng) and got poor efficiencies. (I am trying to optimize 2 target genes and my reference gene all at once - maybe that is too ambitious?) I think I need a lot of template for one of my target genes because it is in low abundance. But is it acceptable to use one amount of template for the reference gene and another amount for the target gene? It doesn't seem like that is OK...
I also tried again using a smaller range of dilutions, but with higher amounts of template (10 to 100 ng - still too large a range??). I also tried 3 different primer concentration sets for 2 different target genes in this experiment. All crap!
The frustrating thing is that I got decent efficiencies (although too far apart for ddCt) in the experiment I ran before optimizing. Now, I can't even seem to repeat that! That makes me worried that none of my data are believable, that one of my target genes is simply expressed in too low amounts to give me consistent results (it ranges around 33-39 Ct), and that I can't pipette.
Any suggestions? And thanks for letting me vent. No one around here has a clue what I'm trying to do.
I just want to add that I re-read some of your previous advice, and I think I should start by optimizing my HKG before anything else...
So now as I work on that, I was wondering - I believe using 1-30 ng is too low an amount (I got very high Ct's last time for my HKG with these amounts), and I want to try using a dilution range of 30-80 ng. Do you think it is a problem that 1-30 ng covers about 1.5 logs, while 30-80 ng covers only about 0.4 logs?
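For what it's worth, the log coverage of a dilution range is just log10(high/low), so the two ranges mentioned above work out as:

```python
import math

def log_span(low_ng, high_ng):
    """Number of log10 units a dilution range covers."""
    return math.log10(high_ng / low_ng)

print(round(log_span(1, 30), 2))   # 1.48
print(round(log_span(30, 80), 2))  # 0.43
```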
Ok - another efficiency question for anyone:
Once you determine that your efficiencies are good within your system, do you need to test it now and again? How often? Do you ever find that it changes?
Soluene - I do not purposely re-test my efficiencies on a schedule, but it sort of happens anyway. Every so many months, my PI or I will decide to add another gene to the repertoire of what we are studying... when I optimize the 'new' primer set, I always repeat the housekeeper alongside and calculate the efficiencies for both. If I found a big discrepancy in the housekeeper compared to prior runs, I think I would get pretty nervous... I have been lucky so far, and I consider it to be a pretty good check of the system
Another thing, I can't remember if I've already told you this... I realize it's different because I don't use probes, I use SYBR green, but out of the 9 genes we have tested and the two housekeepers, there have been three where it was very difficult to find a good primer set. By 'very difficult' I mean that I ran like two entire plates trying to find a good combination and got nothing but crap... at that point I designed and ordered up a new primer set for two of our test genes and started over... for the one housekeeper, I just scrapped it and stuck with the one that worked better for me
OH, and a question that I did not read before... I do not think it is OK to use different amounts of template for different genes; I don't think you can do relative quantification if you set it up that way. I do not have a literature reference or anything, but it seems like it would not be reliable... I would like to know if anyone has tried it and gotten reproducible results? I feel like so much of this relative stuff is pretty arbitrary anyway; every little piddly thing you can control makes your results better, and anything that is not as rigid will likely skew your results
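To see why equal template inputs matter, here is a bare-bones ddCt sketch (all Ct values are hypothetical, and both assays are assumed to run at ~100% efficiency). The housekeeper Ct is supposed to absorb differences in input amount, which only works if target and housekeeper see the same template:

```python
def ddct_fold_change(ct_target, ct_hkg, ct_target_cal, ct_hkg_cal):
    """Relative expression by the ddCt method, assuming ~100% efficiency for
    both assays and identical template input for target and housekeeper."""
    ddct = (ct_target - ct_hkg) - (ct_target_cal - ct_hkg_cal)
    return 2.0 ** -ddct

# Hypothetical numbers: treated sample vs. calibrator (untreated) sample.
print(ddct_fold_change(28.0, 18.0, 30.0, 18.0))  # 4.0 -> ~4-fold up in treated
```

If the target were run on a different template amount than the housekeeper, the subtraction inside `ddct` would carry a constant offset that masquerades as a fold change.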
I have two genes that amplify in a much lower range and give pretty high Ct's, but as long as the efficiency checks out I try not to let it stress me... but I do notice that to get good statistics and reproducibility over time, it is good to do lots of replicates. It seems that if there is only a small amount of expression to begin with, a little change makes a larger difference in your standard deviations and such
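That effect can be put into numbers: since fold change goes as roughly 2 per Ct (at ~100% efficiency), scatter in Ct turns into multiplicative scatter in the apparent expression, and high-Ct, low-abundance genes tend to have noisier Ct's to begin with. A quick sketch with illustrative values:

```python
def fold_spread(ct_sd, efficiency=1.0):
    """Multiplicative spread in apparent expression for a given Ct standard
    deviation, assuming an amplification factor of (1 + efficiency) per cycle."""
    return (1.0 + efficiency) ** ct_sd

print(round(fold_spread(0.25), 2))  # 1.19 -> a tight +/-0.25 Ct is ~1.2x spread
print(round(fold_spread(1.0), 2))   # 2.0  -> +/-1 Ct doubles/halves the estimate
```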
"So now as I work on that, I was wondering - I believe using 1-30 ng is too low an amount (I got very high Ct's last time for my HKG with these amounts), and I want to try using a dilution range of 30-80 ng. Do you think it is a problem that 1-30 ng covers about 1.5 logs, while 30-80 ng covers only about 0.4 logs?"
IMHO, I would say it is not such an issue, as long as that 30-80 ng range for your housekeeper corresponds to the amount of cDNA you can test for your targets...does that make sense? My efficiencies are tightest between 8 and 25 ng RNA (calculated by how much goes into the cDNA reaction and how much cDNA I use as template...I figure this number is different for everyone because it is hardest to control the RT reaction). I do my damndest to make everything exactly the same as far as how my samples are treated, from the time I treat my cells to the time my qPCR plate goes into the machine, and I know what Ct value for my housekeeper to expect if everything worked OK.
does this make sense? does this help?
Thanks for your long reply! Yes, it does make sense, and yes, it does help! I'm starting efficiency experiments again next week (hopefully), and I will go over your advice again on Monday.
You've been so generous with your answers - I really appreciate it! I have one more question: How long did it take you to optimize? I have been working on this for 2-1/2 months now, and I think my boss is getting anxious that it is taking so long! It seems as though optimizing the system takes a while, and on top of that I have had 2 problems: one is that, like I said in another post, I only recently had a way to test my RNA quality, and it had been poor; two is that I have been learning RT-PCR from scratch along the way! Can anyone tell me if my time frame is normal? Thanks!