It will measure the time between two particle detections: if that interval is longer than the prior interval, the bit is a "1"; if it is shorter, a "0"; and if the two are equal, the pair is thrown out. Quantum mechanics tells us that while the average rate of decay can be calculated, the time between any two individual decays cannot.
Is this a good algorithm?
I know that what seems intuitive is often wrong when dealing with things like this, so I may not be thinking this through correctly...
It would seem that, while you cannot know how long it will be until the next detection, there will still be an oscillating tendency in the output bits.
Any time you get a "0", it implies that the interval was shorter than the previous one. While that is not a guarantee the interval is shorter than the average, it is an indicator that it is more likely to be shorter than the average. (If you average all the intervals that produced a "0" and compare that to the average of all intervals, the "0" intervals should average out shorter, shouldn't they?)
The reverse can be said about any instance where you get a "1". This would seem to imply that after a "1", there is a higher-than-average chance that your next bit will be a "0" (and vice versa).
I suppose for these purposes the bias might not be significant enough to be a problem, but I can't help but wonder if there isn't a better solution.
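To put a rough number on that concern, here is a minimal Monte Carlo sketch (my own, not part of the design) that assumes the intervals are independent and exponentially distributed, as they would be for Poisson-distributed decays. It applies the compare-each-interval-to-the-previous-one rule described above and estimates how often a "1" is followed by a "0":

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Draw an exponentially distributed interval with unit mean. */
static double next_interval(void)
{
    return -log((rand() + 1.0) / (RAND_MAX + 2.0));
}

int main(void)
{
    const long N = 1000000;          /* number of simulated intervals */
    double prev_interval = next_interval();
    int prev_bit = -1;               /* no bit produced yet */
    long ones_seen = 0, zeros_after_one = 0;

    for (long i = 0; i < N; i++) {
        double cur_interval = next_interval();
        int bit = (cur_interval > prev_interval);  /* 1 = longer than prior */

        if (prev_bit == 1) {
            ones_seen++;
            if (bit == 0)
                zeros_after_one++;
        }
        prev_bit = bit;
        prev_interval = cur_interval;
    }

    printf("P(next bit = 0 | current bit = 1) ~= %.3f\n",
           (double)zeros_after_one / ones_seen);
    return 0;
}

For independent, identically distributed intervals the middle of any three intervals is the largest with probability 1/3, so a "1" should be followed by a "0" about two times in three, and the simulation does land near 0.667. Under those assumptions the bias I am describing is real and quite large.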
That is a good point. What I wrote didn't accurately describe the conversion method, and you are right: as described, it probably would lead to some alternating-bit bias. A better description is that each bit requires two independent intervals.
Imagine 3 particle detections a, b, c,
Interval A is the time between a & b
Interval B is the time between b & c
if (A > B)
    bit = 1
else if (B > A)
    bit = 0
// if A == B, no bit is recorded and both intervals are discarded
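For concreteness, here is a firmware-style C sketch of that loop; read_detection_ticks() is a hypothetical placeholder for whatever timer-capture or interrupt routine reports the time of each pulse. The last detection of one pair is reused as the start of the next, so each bit costs two new counts:

#include <stdint.h>

/* Hypothetical hardware hook: blocks until the next particle is
 * detected and returns the timer-capture value in clock ticks. */
extern uint32_t read_detection_ticks(void);

/* Generate bits indefinitely, handing each one to emit(). */
void random_bit_loop(void (*emit)(int bit))
{
    uint32_t prev = read_detection_ticks();

    for (;;) {
        uint32_t mid  = read_detection_ticks();
        uint32_t last = read_detection_ticks();

        uint32_t interval_A = mid - prev;    /* time between detections a and b */
        uint32_t interval_B = last - mid;    /* time between detections b and c */

        if (interval_A > interval_B)
            emit(1);
        else if (interval_B > interval_A)
            emit(0);
        /* if the intervals tie, no bit is recorded */

        prev = last;    /* two fresh detections consumed per attempted bit */
    }
}

Unsigned subtraction also keeps the interval math correct across timer wraparound.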
This requires two counts (events) per bit, so the bit rate will be roughly CPM / 120 bits per second (CPM = counts per minute). The real rate will be somewhat lower still, because a portion of the counts has to be dropped whenever the two intervals match; the size of that loss depends on the resolution of the clock relative to the average interval.
Some ballpark numbers:
Producing a throughput of 1 kbps (about a 10 MB stream of random data per day) would require a source-and-tube combination capable of 120,000 CPM and a real-time clock with at least 10 microsecond resolution. Initially I am going to aim much lower, around 100 bps, using the microcontroller clock with roughly 1 ms resolution.
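As a sanity check on those figures, here is a small back-of-the-envelope calculator (my own; it assumes Poisson counts, i.e. exponentially distributed intervals, and a timer that truncates each interval to its resolution, in which case the chance of two independent intervals tying works out to tanh(resolution / (2 * mean interval))):

#include <stdio.h>
#include <math.h>

/* Estimate throughput for a given count rate and timer resolution. */
static void estimate(double cpm, double clock_resolution_s)
{
    double counts_per_s  = cpm / 60.0;
    double mean_interval = 1.0 / counts_per_s;            /* seconds        */
    double raw_bps       = counts_per_s / 2.0;            /* 2 counts / bit */
    double p_tie         = tanh(clock_resolution_s / (2.0 * mean_interval));
    double net_bps       = raw_bps * (1.0 - p_tie);

    printf("%8.0f CPM, %7.0f us clock: %6.0f bps raw, %4.1f%% ties, ~%5.0f bps net\n",
           cpm, clock_resolution_s * 1e6, raw_bps, 100.0 * p_tie, net_bps);
}

int main(void)
{
    estimate(120000.0, 10e-6);   /* the 1 kbps target                          */
    estimate(12000.0,  1e-3);    /* implied ~12,000 CPM for the ~100 bps start */
    return 0;
}

Under those assumptions the 120,000 CPM / 10 microsecond combination loses only about 1% of bit attempts to ties (roughly 990 bps net), while the ~100 bps / 1 ms starting point loses about 10% (roughly 90 bps net).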