Does curve height/range of absorbance values matter? - (Sep/25/2012 )
I've been running an assay where the highest standard typically has an absorbance value of 3.0 (using eight standards total and a standard curve fitted as a 4-parameter logistic). I received a new batch of some of the reagents I use, and the high standard now reads an absorbance value below 2.0; the whole curve has shifted down, with all the standards reading lower. Will this have a negative effect on my results if all my controls are giving me the same values? Does this make a difference, and why?
In theory it shouldn't matter at all: as long as your sample values (not the standard values) are predicted from an equation fitted to the standards, and fall within the range of the curve, the results should remain consistent. This is because absorbance is relative to concentration, not an absolute value, so a uniform shift in the curve shifts the samples along with it.
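The point above can be demonstrated numerically. This is a minimal sketch in Python using SciPy; the standard concentrations, 4PL parameters, and the 0.6× "new batch" scaling factor are all hypothetical, chosen only to mimic a whole-curve shift like the one described:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # 4-parameter logistic: a = response as x -> 0, d = response as x -> inf,
    # c = inflection point (EC50), b = slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    # Back-calculate concentration from an observed response
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical 8-point standard series (concentrations are illustrative)
conc = np.array([0.5, 1, 2, 4, 8, 16, 32, 64], dtype=float)

# Absorbances from an assumed "true" curve; the new reagent batch reads the
# same standards uniformly lower (here modelled as a 0.6x response scaling)
old_abs = four_pl(conc, 0.05, 1.2, 10.0, 3.2)
new_abs = old_abs * 0.6

popt_old, _ = curve_fit(four_pl, conc, old_abs, p0=[0.05, 1.0, 10.0, 3.0], maxfev=10000)
popt_new, _ = curve_fit(four_pl, conc, new_abs, p0=[0.05, 1.0, 10.0, 2.0], maxfev=10000)

# A sample's absorbance scales with the batch too, so back-calculating it
# against the matching curve recovers the same concentration either way
true_conc = 5.0
x_old = inverse_four_pl(four_pl(true_conc, *popt_old), *popt_old)
x_new = inverse_four_pl(0.6 * four_pl(true_conc, *popt_old), *popt_new)
print(round(x_old, 2), round(x_new, 2))  # both ~5.0
```

Both curves return essentially the same back-calculated concentration, which is why a downward shift alone is not a problem as long as samples and standards shift together.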
For your controls: are they giving you the same raw absorbance values as before, even though the standards have shifted? If so, you have a problem, to which the only solution will be to get (yet another) batch of reagents.
However, you can check the error in the system by running a known concentration sample with your new standards and seeing what you get.
Yes this can be problematic
During validation of immunoassays for regulated studies, precision and accuracy are assessed over the full quantitative range of the assay. The lower limit of quantitation is defined as the lowest concentration that can be measured with acceptable precision and accuracy. Precision and accuracy are assessed by running quality controls multiple times within one assay, and across several assays. "Acceptable" usually means mean back-calculated accuracy within +/- 20% of nominal and precision within 20% (% CV) across all replicates and assays.

The concern when the response shifts is that precision and accuracy may no longer hold at the validated low end of the assay (for a response decrease) or at the validated high end (for a response increase). To protect against this, new critical reagents should be qualified prior to switching, ideally in a head-to-head comparison against the old reagent; if the old reagent isn't available, perform a full precision and accuracy assessment with the new reagent. It may be necessary to re-titre the new reagent to get it to perform as per the original reagent or validation data. Things like stability, selectivity, linearity etc. are most likely not affected by changing to new lots of the same reagents.
Only change one reagent at a time, or you end up in an "experimental ammunition in an experimental gun" situation, where you can't tell which change caused any difference you see.
In a non-regulated lab environment, this approach may constitute significant overkill, but just check that your independently prepared (separate from the curve) quality controls at upper and lower limits of the assay range still work well across a few assays.
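The +/- 20% accuracy and 20% CV acceptance check described above is simple to compute. A minimal sketch, assuming the QC replicate values and nominal concentration below are purely illustrative:

```python
import numpy as np

def qc_stats(replicates, nominal):
    # replicates: back-calculated concentrations for one QC level,
    # pooled across assays; nominal: the QC's prepared concentration
    reps = np.asarray(replicates, dtype=float)
    mean = reps.mean()
    bias = 100.0 * (mean - nominal) / nominal      # % accuracy bias vs nominal
    cv = 100.0 * reps.std(ddof=1) / mean           # % CV (sample SD)
    return bias, cv

# Hypothetical low-QC back-calculated values across a few assays (nominal 2.0)
low_qc = [1.8, 2.1, 2.2, 1.7, 2.0, 1.9]
bias, cv = qc_stats(low_qc, 2.0)
print(f"bias {bias:+.1f}%, CV {cv:.1f}%")
print("pass" if abs(bias) <= 20.0 and cv <= 20.0 else "fail")
```

If the low and high QCs still pass this kind of check over a few runs with the new reagent batch, the response shift is unlikely to be hurting your results.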
If you are using TMB, common practice is to monitor development at 650 nm and stop the plate when the high standard reaches a particular level (usually about 1.0 absorbance units at 650 nm); the 450 nm reading after stopping will then be at about 2.5 units.
Hope this is helpful