I wrote that calculator.

...

If anyone has a better understanding of the math and sees a mistake, let me know!

Lachesis, I wasn't trying to suggest any problem with your calculator, just with the way that some people interpret its output.

Marketmaker, I didn't mean to come across as hostile or unhappy. I wouldn't be surprised if there is some flaw - like I said, stats isn't my thing. I agree that some people don't realize that you're never "making progress" towards success - each roll of the dice has the same probability as the one before it.

NewLibertyStandard, khps should be fully accounted for by my math. The per-hash probability is very simple (p=target/2^256).

Over a small enough timespan (say, one second), the probability is approximately linear (the principle of local linearity):

p = (hashes / second) * (p of winning with one hash)

p = (khps*1000) * (target/2^256)
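To make the linear approximation concrete, here is a tiny sketch of that per-second probability. The hashrate and target values are made-up example numbers, not anything from the calculator itself:

```python
khps = 1000                          # example hashrate in khash/s
target = 2 ** 224                    # example target (roughly difficulty 1)

p_hash = target / 2 ** 256           # p of winning with one hash
p_second = khps * 1000 * p_hash      # linear approximation: p of a block in one second

print(p_second)
```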

Over a longer timespan, the Poisson distribution comes into play, giving us that exponential term. The "Average" time on the calculator treats the problem as if it were linear, so it will never be as accurate as the 50% and 95% numbers.

Basically, if you _don't_ generate at least one block in the 95% timespan, then you're very, very unlucky. The chances of generating a block in the 50% timespan are the same as flipping a coin and getting heads - even odds.
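Putting those two pieces together: if block finding is a Poisson process with rate R (expected blocks per second), then P(at least one block in t seconds) = 1 - e^(-Rt), and you can solve that for the 50% and 95% timespans. A minimal sketch (the hashrate and target are example values, and this is my reading of the math, not the calculator's actual code):

```python
import math

def block_times(khps, target):
    """Average, 50%, and 95% times (in seconds) to find at least one block,
    modeling block discovery as a Poisson process."""
    p_hash = target / 2 ** 256        # p of winning with one hash
    rate = khps * 1000 * p_hash       # expected blocks per second
    average = 1 / rate                # mean of the exponential waiting time
    t50 = math.log(2) / rate          # solves 1 - e^(-rate*t) = 0.50
    t95 = math.log(20) / rate         # solves 1 - e^(-rate*t) = 0.95
    return average, t50, t95

avg, t50, t95 = block_times(1000, 2 ** 224)
print(avg, t50, t95)   # about 4295 s average; t50 is shorter, t95 much longer
```

Note that the 50% time is shorter than the average: ln(2) ≈ 0.693, so half the time you find a block well before the "Average" figure, which is why the author warns against reading the average as a deadline.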

I'm not familiar with the higher-level math, but the idea is something like this: suppose you have two people, one with a single hundred-sided die and the other with 100,000 hundred-sided dice. Give them both 100 seconds to be the first to roll a 00, then repeat the exercise over and over again. The single die has the same probability of rolling a 00 as any other individual die, so he should win on average once every 100,001 iterations, but in practice he will win much less often. I can't explain the specific mathematics, but I'm quite sure it is the case. It's certainly within the realm of testability if you had two machines that rolled at the same rate and recorded the results. The person with the many dice will roll at least one 00 on pretty much every throw, so even if the person with the single die rolls a 00 on his first throw, he'll only have one chance out of however many total dice stopped on 00 of having his die stop before one of his competitor's dice stops on 00. The scale is totally off from Bitcoin, but I believe that the principle still applies. A client with 100,000 khash/s will generate more blocks per hash than a single client with 1 khash/s.

Edit: Changed the last sentence.
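Since the experiment is described as testable, here is a quick Monte Carlo sketch of the race as described above. I'm assuming both players roll once per second and that a simultaneous 00 counts as a win for the larger roller; the parameter values are the ones from the example:

```python
import random

def race(n_dice, sides=100, seconds=100):
    """One iteration: does the lone die show 00 strictly before
    at least one of the competitor's n_dice does?"""
    p_single = 1 / sides                        # the single die shows 00
    p_many = 1 - (1 - 1 / sides) ** n_dice      # at least one of n_dice shows 00
    for _ in range(seconds):
        single_hit = random.random() < p_single
        many_hit = random.random() < p_many
        if single_hit and not many_hit:
            return "single"
        if many_hit:
            return "many"        # ties go to the larger roller here
    return "neither"

random.seed(1)
trials = [race(100_000) for _ in range(10_000)]
print(trials.count("single"), "single wins,", trials.count("many"), "many wins")
```

With 100,000 dice the chance of at least one 00 per throw is so close to certain that the single die essentially never wins the race outright, which matches the intuition that he wins far less often than one iteration in 100,001.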

Part of the concept that I'm trying to express is that the probability of a combined series of events, taken as a whole, is calculated differently from the probability of a single equivalent event. Because Bitcoin is doing millions of hashes every second or every few seconds, that needs to be taken into account.
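That difference is easy to see numerically: for n independent trials of probability p each, the combined probability is 1 - (1-p)^n, not the naive n*p, and the two diverge badly once n*p stops being small. A quick illustration (p and n are arbitrary example values):

```python
# Probability of at least one success in n independent trials of
# probability p each, versus the naive linear estimate n * p.
p = 1e-6
for n in (1_000, 1_000_000, 10_000_000):
    exact = 1 - (1 - p) ** n
    linear = n * p
    print(f"{n:>10}  exact={exact:.6f}  linear={linear:.6f}")
```

At n = 1,000 the two agree closely, but at n = 1,000,000 the exact answer is about 0.63 while the linear estimate says 1.0, and at n = 10,000,000 the linear estimate gives a nonsensical 10.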