Topic: Hash rate spikes and difficulty analysis (Read 2125 times)

full member
Activity: 136
Merit: 100
May 23, 2014, 10:53:55 PM
#6
Quote
Actually that spread is smaller than I would have expected intuitively.

I'm now pretty happy with the simulation I mentioned: it simulates 2 years of mining in which the hashing rate remains completely constant. Some of the difficulty changes that happen are quite surprising, because compound "good luck" and compound "bad luck" can both build up, so that when the system snaps back closer to its anticipated behaviour we get surprisingly large difficulty changes. I suspect that this may account for what you were expecting to see?
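
If anyone wants to play with the idea, the sketch below shows the general shape of such a simulation. It's illustrative only, not the actual program referred to in this thread: the 52-retarget horizon, the plain rand()-based sampling and the normalised starting difficulty of 1.0 are just choices made to keep the sketch short.

Code:
/* constant_hashrate_sim.c
 * Illustrative sketch (not the program behind the numbers in this thread):
 * two years of mining at a perfectly constant hash rate, with a
 * Bitcoin-style 2016-block difficulty retarget applied after each period.
 * Inter-block times are sampled from an exponential distribution, so block
 * arrivals form a Poisson process.
 *
 * Build: cc constant_hashrate_sim.c -o sim -lm
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

#define BLOCKS_PER_RETARGET 2016
#define TARGET_SPACING      600.0                      /* seconds         */
#define TARGET_PERIOD       (BLOCKS_PER_RETARGET * TARGET_SPACING)
#define RETARGETS           52                         /* roughly 2 years */

/* Exponentially distributed time with the given mean. */
static double exp_sample(double mean)
{
    double u;
    do {
        u = (double) rand() / RAND_MAX;
    } while (u <= 0.0);                                /* avoid log(0)    */
    return -mean * log(u);
}

int main(void)
{
    srand((unsigned) time(NULL));

    /* Normalised so that difficulty 1.0 exactly matches the constant hash
     * rate: the mean block time is 600 s * difficulty.                   */
    double difficulty = 1.0;

    for (int r = 0; r < RETARGETS; r++) {
        double period = 0.0;
        for (int b = 0; b < BLOCKS_PER_RETARGET; b++)
            period += exp_sample(TARGET_SPACING * difficulty);

        /* Retarget: scale difficulty by how fast the last 2016 blocks
         * actually arrived compared with the 14-day target.              */
        double change = TARGET_PERIOD / period;
        difficulty *= change;

        printf("retarget %2d: %6.2f days, change %+6.2f%%, difficulty %.4f\n",
               r + 1, period / 86400.0, (change - 1.0) * 100.0, difficulty);
    }
    return 0;
}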

I'm going to try to publish this tomorrow.
full member
Activity: 136
Merit: 100
Quote
Actually that spread is smaller than I would have expected intuitively. How did you calculate it? Did you base it on the assumption that on average a block is found every x minutes (and then applying a Poisson distribution), or did you crunch the numbers, calculating for every GH/s the chance of finding a block? The result might (should?) be the same, but I'm a little hesitant to accept that as fact.

I believe the spread is correct, but this one is based on a Poisson distribution. The calculation comes from a C program I wrote that uses the MPFR library (I couldn't find an online calculator that could handle 2016! without a huge loss of precision :-))
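
For anyone curious what the arbitrary-precision part can look like, here is a rough sketch of evaluating the Poisson pmf for a mean of 2016 with MPFR. It isn't the program used for the numbers in this thread; the MPFR calls are the library's standard ones, but the 256-bit precision and the range of k printed are purely illustrative.

Code:
/* poisson_2016.c
 * Hypothetical sketch: evaluates the Poisson pmf
 *   P(k) = lambda^k * e^(-lambda) / k!
 * for lambda = 2016. Both lambda^k and k! overflow ordinary doubles even
 * though their ratio is perfectly tame, hence the arbitrary precision.
 *
 * Build (assumes GMP and MPFR are installed):
 *   cc poisson_2016.c -o poisson_2016 -lmpfr -lgmp
 */
#include <stdio.h>
#include <mpfr.h>

#define LAMBDA 2016UL   /* expected blocks per 14-day retarget period */
#define PREC   256      /* bits of mantissa precision                 */

/* P(k; lambda) computed entirely in arbitrary precision. */
static double poisson_pmf(unsigned long k)
{
    mpfr_t lam, num, fact, p;
    mpfr_inits2(PREC, lam, num, fact, p, (mpfr_ptr) 0);

    mpfr_set_ui(lam, LAMBDA, MPFR_RNDN);

    mpfr_ui_pow_ui(num, LAMBDA, k, MPFR_RNDN);   /* lambda^k        */
    mpfr_fac_ui(fact, k, MPFR_RNDN);             /* k!              */
    mpfr_div(p, num, fact, MPFR_RNDN);           /* lambda^k / k!   */

    mpfr_neg(lam, lam, MPFR_RNDN);               /* -lambda         */
    mpfr_exp(lam, lam, MPFR_RNDN);               /* e^(-lambda)     */
    mpfr_mul(p, p, lam, MPFR_RNDN);              /* final pmf value */

    double result = mpfr_get_d(p, MPFR_RNDN);
    mpfr_clears(lam, num, fact, p, (mpfr_ptr) 0);
    return result;
}

int main(void)
{
    /* Probability of seeing exactly k blocks in 14 days when 2016 is the
     * expectation; summing over ranges of k gives the spread figures.    */
    for (unsigned long k = 1916; k <= 2116; k += 50)
        printf("P(%lu blocks) = %.6f\n", k, poisson_pmf(k));
    return 0;
}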

I've been building another simulation and wasn't overly happy with the way I derived some predictions from the table, so this one is now based on running random simulations of mining with exponentially distributed inter-block times (which should give the same Poisson behaviour but lets me be more precise about the 2016-block chunks).

While I've not run a statistical correlation of the two approaches, the simulation results are the same both ways.
legendary
Activity: 980
Merit: 1040
Actually that spread is smaller than I would have expected intuitively. How did you calculate it? Did you base it on the assumption that on average a block is found every x minutes (and then applying a Poisson distribution), or did you crunch the numbers, calculating for every GH/s the chance of finding a block? The result might (should?) be the same, but I'm a little hesitant to accept that as fact.
legendary
Activity: 1258
Merit: 1027
Quote
The full analysis is over at: http://hashingit.com/16-hash-rate-headaches

Great article, thanks!

Quote
Hash Rate Calculators

Hash rate calculators have a huge problem as a result of the randomness shown by the statistics. All they can do is measure the block finding rate and derive an estimate of the hash rate from it. They have no way of telling whether the statistics for any given period of time were normal, low, high, very low, very high, etc.
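
To make that concrete, the estimate such a calculator produces is essentially just a block count scaled by the expected work per block. The toy sketch below shows that calculation; every number in main() is made up for illustration, and the 1/sqrt(N) figure covers only the Poisson counting noise.

Code:
/* hashrate_estimate.c
 * Rough sketch of what a hash rate "calculator" can actually do: count
 * blocks over a window and back out a rate, using the usual approximation
 * that the expected hashes per block = difficulty * 2^32. The difficulty,
 * window and block count below are illustrative values, not real data.
 *
 * Build: cc hashrate_estimate.c -o est -lm
 */
#include <stdio.h>
#include <math.h>

/* Estimated network hash rate (hashes/second) from blocks seen in a window. */
static double estimate_hashrate(double difficulty, unsigned blocks,
                                double window_seconds)
{
    return blocks * difficulty * 4294967296.0 / window_seconds;
}

int main(void)
{
    double difficulty = 1.0e10;          /* illustrative difficulty      */
    unsigned blocks = 36;                /* blocks seen in ~6 hours      */
    double window = 6.0 * 3600.0;

    double est = estimate_hashrate(difficulty, blocks, window);

    /* With only N blocks the count itself is Poisson, so the relative
     * uncertainty is roughly 1/sqrt(N) -- about 17% for 36 blocks.      */
    printf("estimate: %.3e H/s  (+/- ~%.0f%% from counting noise alone)\n",
           est, 100.0 / sqrt((double) blocks));
    return 0;
}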
sr. member
Activity: 291
Merit: 250
+1. Every conspiracy theorist and "hashrate just jumped!" analyst should print that chart. The pity is that the problem is too mathematical, so 95% of the population has no idea what it is about anyway Cheesy
full member
Activity: 136
Merit: 100
I finally got a few hours to finish up some work on block finding probabilities and how they affect any attempt to estimate the instantaneous hash rate. The results surprised me, as they show that we're very likely to underestimate or overestimate hash rates over any fairly short period of time (e.g. a few hours or even a day). Even when we get out to looking at a 14 day period, statistical randomness can end up having a much bigger impact on individual difficulty changes than we might expect. It's actually quite likely that individual difficulty changes could be "wrong" by several percent, and it's equally possible that it could take quite a number of subsequent changes to correct for it.

The full analysis is over at: http://hashingit.com/16-hash-rate-headaches

Here's the probability curve for a 14 day period in which the hash rate actually stays constant:

[chart: probability curve for a 14-day period with constant hash rate]

It's surprisingly likely that randomness will leave us above the 90th centile or below the 10th (roughly once every 5 difficulty changes), so there's far more noise in the difficulty numbers than we might hope. Long-term trends are pretty good, but short-term changes have far more random volatility in them. As an example, even if the hash rate were totally static there's a very high probability that once every 2 years there would be a +5% or -5% difficulty change, and that roughly every 5 difficulty changes we would expect to see a +2.8% or -2.8% difficulty change.
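
As a rough way to reproduce that kind of spread, the sketch below runs a Monte Carlo over many single 2016-block periods at a genuinely constant hash rate and prints centiles of the implied difficulty change. Again this is only an illustrative sketch, not the analysis behind the chart; the 20,000-trial count and the plain rand() generator are arbitrary choices.

Code:
/* retarget_spread.c
 * Illustrative Monte Carlo of how much a single 2016-block difficulty
 * change can move when the hash rate is constant. Each trial sums 2016
 * exponential block times and converts the period into the difficulty
 * change it would trigger.
 *
 * Build: cc retarget_spread.c -o spread -lm
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

#define BLOCKS  2016
#define TARGET  (2016 * 600.0)             /* 14 days in seconds */
#define TRIALS  20000

static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *) a - *(const double *) b;
    return (d > 0) - (d < 0);
}

int main(void)
{
    static double change[TRIALS];
    srand((unsigned) time(NULL));

    for (int t = 0; t < TRIALS; t++) {
        double period = 0.0;
        for (int b = 0; b < BLOCKS; b++) {
            double u;
            do { u = (double) rand() / RAND_MAX; } while (u <= 0.0);
            period += -600.0 * log(u);      /* exponential block time */
        }
        /* Difficulty change implied by this period at constant hash rate. */
        change[t] = (TARGET / period - 1.0) * 100.0;
    }

    qsort(change, TRIALS, sizeof change[0], cmp_double);

    printf("10th centile: %+.2f%%   90th centile: %+.2f%%\n",
           change[TRIALS / 10], change[TRIALS - TRIALS / 10]);
    printf("~1-in-52 tails (about once per 2 years of retargets each way): "
           "%+.2f%% / %+.2f%%\n",
           change[TRIALS / 52], change[TRIALS - TRIALS / 52]);
    return 0;
}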