I wanted to confirm that the data rate would be proportional to the
concentration of beads in a sample. I started with a bead sample at 1.3 x 10^8
and did serial 1:1 (two-fold) dilutions down to 2.08 x 10^6. The chart shows what happened.
Concentration   Actual data rate   Theoretical data rate
1.3  x 10^8     ND                 32,000
6.5  x 10^7     4,000              16,000
3.25 x 10^7     ND                  8,000
1.67 x 10^7     2,400               4,000
8.35 x 10^6     1,800               2,000
4.17 x 10^6     1,000               1,000
2.08 x 10^6       500                 500
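For what it's worth, the theoretical column is just that proportionality worked out. A quick sketch (assuming a constant sample flow rate, and calibrating the rate-per-bead constant from the lowest dilution, where the observed rate matched expectation):

```python
# Sanity check: if event rate is directly proportional to bead
# concentration (constant pressures, tubing, sample flow rate, etc.),
# a 1:1 (two-fold) serial dilution should halve the rate at each step.

# Seven samples: the stock plus six two-fold dilutions.
concentrations = [1.3e8 / 2**i for i in range(7)]

# Calibrate the proportionality constant from the most dilute sample,
# where the observed rate (500) matched the expected one.
k = 500 / concentrations[-1]

theoretical_rates = [round(k * c) for c in concentrations]
print(theoretical_rates)  # 32000, 16000, 8000, 4000, 2000, 1000, 500
```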
It seems to me that, assuming the pressures, tubing lengths, etc. remain
constant, the data rate should follow the same pattern as the concentrations.
Is this not reasonable? Has anyone tried a similar experiment? Is anyone
willing to try it and post the results? I'm well aware that my dilutions may
be somewhat off, but I don't think that can adequately explain the results.
A few more details:
I did this with 2 micron microspheres on an Elite, with the sheath pressure at
12 psi and the laser power at 15 mW. (I need the small particles because the
ultimate goal is to do this with bacteria.)
Thanks for any and all suggestions!
Kris Weber
P.S. Next week I will try this again, maybe with larger particles at lower
concentrations, and will also repeat the experiment while varying the sheath
pressure.