Calman Prussin raised the question of whether MESF values for isotype controls should be subtracted from values for specifically stained cells, pointing out that this procedure has little effect on stained cells with high MESF values but seems to introduce greater variation when applied to cells with low MESF values. I am joining in after several others have discussed the pros and cons.

It is fairly common practice in analytical chemistry to subtract a blank value from a measured value, and it is also generally accepted that values at the low end of a scale are less precise than those at the high end. Yes, correction does introduce greater variation, but one must expect quantification of numbers of antibody molecules (or anything else) to become less precise, i.e., to exhibit greater variance, as the number of objects quantified becomes smaller. Even in a flow cytometer with no noise at all, the variance of measurements is substantially higher in the lower decades of the log scale than in the higher ones, because one is typically collecting only dozens to hundreds of photoelectrons at the PMT outputs.

Poisson statistics, which apply here, tell us that the standard deviation (SD) associated with a measurement of n photoelectrons is the square root of n. Thus, if 16 photoelectrons are collected, the SD is 4, and the coefficient of variation (CV), which, expressed as a percentage, is 100 times the SD divided by the mean (and the mean here is n), is 25%. If 100 photoelectrons are collected, the SD is 10, and the CV is 100 * (10/100), or 10%. At the high end of a four-decade scale running from 16 photoelectrons to 160,000, the CV in a perfect system, based only on Poisson statistics, is 100 * (400/160,000), or 0.25%.

Noise, drift, and offsets only make things at the low end worse; the "MESF threshold" of an instrument is essentially the number of molecules of fluorochrome measured when there aren't any molecules of fluorochrome there. When you subtract the low MESF value of the isotype control from a series of low measured values for a stained cell, there is bound to be more variation in the "corrected" values than in the uncorrected ones, but you should remember that neither the uncorrected nor the corrected values are high-quality measurements.

Exactly the same problem arises in fluorescence compensation, even if it is done by software in the most mathematically correct fashion. Sometimes the correction for a low fluorescence value is greater than the value itself, giving a negative result. Whether compensation is done by hardware or software, what has to happen here is that a negative or zero result is "clipped," electronically or algorithmically, to a low positive value, which has the effect of piling negatives up against the axes. This doesn't look good, and some compensation software actually adds random numbers to the data to make the negatives look prettier, i.e., more like what one is used to seeing. That definitely does degrade the data, but it is generally inadvisable to draw conclusions from so-called "quantitative" data in the first decade anyway.

-Howard
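
[For readers who want to reproduce the Poisson arithmetic quoted above, here is a minimal Python sketch. It is not part of the original post; the photoelectron counts 16, 100, and 160,000 are the ones cited in it.]

    import math

    def poisson_cv_percent(n_photoelectrons: float) -> float:
        """CV (%) of an ideal measurement collecting n photoelectrons.

        Under Poisson statistics the SD of a count n is sqrt(n),
        so CV(%) = 100 * sqrt(n) / n.
        """
        return 100.0 * math.sqrt(n_photoelectrons) / n_photoelectrons

    # The counts quoted in the post: low end, mid-scale, and the top
    # of a four-decade scale starting at 16 photoelectrons.
    for n in (16, 100, 160_000):
        print(f"n = {n:>7}: SD = {math.sqrt(n):g}, CV = {poisson_cv_percent(n):.2f}%")

This prints CVs of 25.00%, 10.00%, and 0.25%, matching the figures in the post.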
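[Likewise, a toy illustration of the clipping behavior described for compensation. The function name and the floor value are hypothetical; this is not any instrument vendor's actual algorithm, just a sketch of the effect.]

    def compensate_and_clip(measured: float, spillover: float, floor: float = 1.0) -> float:
        """Subtract the estimated spillover; clip zero or negative results to `floor`.

        Clipping to a small positive floor is what piles "negative" events
        up against the axis on a log-scale display.
        """
        corrected = measured - spillover
        return corrected if corrected > 0 else floor

    # Example: the correction exceeds the measured value, so the event
    # is clipped to the floor rather than reported as negative.
    print(compensate_and_clip(measured=30.0, spillover=45.0))  # -> 1.0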