
Topic: Audit your pool with better stats (Read 974 times)

hero member
Activity: 836
Merit: 1021
bits of proof
August 28, 2015, 02:49:00 PM
#6
1/2500 = 0.04% is far from impossible. Not even really that unlikely.

*Lots of pointless drivel*

There is a 1 in 2500 chance that any block a pool solves is at the 99.96% CDF or worse.  That is a fact.  Slush has solved just under 25,000 blocks.  Over an infinite number of samples of pools with 25,000 blocks solved, you would expect the mean number of times the pool encountered a 99.96% CDF or worse block to be 10.

Now if you go through slush's history and find 20 instances of 99.96% CDF (or worse) blocks, then you might have something.  Or maybe if you find that it has happened 5 times in the last 5,000 rounds, you might have something.

Otherwise you're trying to claim something shouldn't happen when it clearly SHOULD and WILL happen multiple times in the lifetime of that particular pool.

I have not claimed it should not happen, but that the event is less likely than temporary technical problems.
I also presented a model for the appropriate trigger levels of investigation, since a pool operator who has to explain production outcomes to investors needs a model more precise than ... your drivel.


legendary
Activity: 1750
Merit: 1007
August 27, 2015, 01:12:28 PM
#5
1/2500 = 0.04% is far from impossible. Not even really that unlikely.

*Lots of pointless drivel*

There is a 1 in 2500 chance that any block a pool solves is at the 99.96% CDF or worse.  That is a fact.  Slush has solved just under 25,000 blocks.  Over an infinite number of samples of pools with 25,000 blocks solved, you would expect the mean number of times the pool encountered a 99.96% CDF or worse block to be 10.

Now if you go through slush's history and find 20 instances of 99.96% CDF (or worse) blocks, then you might have something.  Or maybe if you find that it has happened 5 times in the last 5,000 rounds, you might have something.

Otherwise you're trying to claim something shouldn't happen when it clearly SHOULD and WILL happen multiple times in the lifetime of that particular pool.
hero member
Activity: 836
Merit: 1021
bits of proof
August 27, 2015, 02:17:04 AM
#4
1/2500 = 0.04% is far from impossible. Not even really that unlikely.

Not only is it not unlikely, it's extremely likely and SHOULD have happened multiple times.

It's not a 0.04% chance of it happening ever in the lifetime of the universe.  It's a 0.04% chance *on any block* that the pool solves.

Yes, it is 1/2500, which means it is expected about once in 7 years of continuous operation of a pool of that size (a new two-day window starts each day, so roughly 2,500 days, or about 7 years, between occurrences).

No, it is not linked with the lifetime of the universe or with *any block*.

That gap was confirmed by slush; it is not a reporting problem on his website.

This was an improbable event; let's not argue about the adjectives.

What I claim is:
The hypothesis that it was linked to e.g. a technical problem, a withholding attack, etc. is much stronger than the hypothesis that it was bad luck,
because having e.g. technical problems more often than once in 7 years is quite plausible.

It would also be disappointing if the discussion of this topic were about adjectives and Slush's operation, and not about using more advanced metrics for pool audits.

I used the above probability metrics while running the mining operation of CoinTerra (which was 3-5% of the total network in 2014), and they were helpful for defining alert levels at which an investigation of the infrastructure was triggered.

The graph below shows the alert levels we used. The trigger was no block for n hours (y axis) at a given market share (x axis).

The lines are:
- blue (watch): check systems
- red (alert): elevated manual checks
- yellow (panic): search for a problem until you find it (you can see that slush's example is deep in that range)

One might set different levels, but the shapes should be like this. We applied the model overall and at the data-centre level; a sketch of the computation follows below.


[graph: watch/alert/panic curves - hours without a block vs. market share]
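A minimal sketch, in Python, of how such curves can be computed. The trigger probabilities of 5% (watch), 1% (alert), and 0.1% (panic) are hypothetical examples; the post does not state the exact levels CoinTerra used:

Code:
# Watch/alert/panic curves: hours without a block before each level
# triggers, as a function of market share.  Trigger probabilities are
# hypothetical examples, not CoinTerra's actual settings.
import math

BLOCKS_PER_HOUR = 6  # Bitcoin network average

def hours_without_block(p_trigger, market_share):
    # P(no block in t hours | share) = exp(-6 * t * share);
    # solving exp(-6 * t * share) = p_trigger for t:
    return -math.log(p_trigger) / (BLOCKS_PER_HOUR * market_share)

LEVELS = {"watch": 0.05, "alert": 0.01, "panic": 0.001}  # hypothetical

for share in (0.01, 0.0268, 0.05, 0.10):
    row = ", ".join("%s: %5.1f h" % (name, hours_without_block(p, share))
                    for name, p in LEVELS.items())
    print("share %5.2f%% -> %s" % (share * 100, row))

# At a 2.68% share the panic curve sits at about 43 hours, so Slush's
# 48-hour gap lands well inside the panic range.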
legendary
Activity: 1750
Merit: 1007
August 26, 2015, 04:36:51 PM
#3
1/2500 = 0.04% is far from impossible. Not even really that unlikely.

Not only is it not unlikely, it's extremely likely and SHOULD have happened multiple times.

It's not a 0.04% chance of it happening ever in the lifetime of the universe.  It's a 0.04% chance *on any block* that the pool solves.  Slush probably has around 25,000 blocks under its belt in the history of the pool.  That means it could have happened 10 times on Slush and be *exactly* within expectations.  And with such an unlikely event and a sample size of only ~10x the expected rate of occurrence, even if the pool has had 12 rounds at that 99.96%+ CDF, it would not be a statistically significant deviation from expectation, and no cause for alarm.
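A quick check of that claim (a sketch; the ~24,500 rounds and 1/2500 per-round probability are the figures from this thread):

Code:
# How surprising would 12 occurrences be, given ~24,500 rounds with a
# 1/2500 chance each?  The count of occurrences is ~Poisson(lambda).
import math

rounds = 24500
p = 1.0 / 2500
lam = rounds * p  # expected occurrences, ~9.8

def poisson_sf(k, lam):
    # P(X >= k) for X ~ Poisson(lam): complement of the lower tail
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

print("expected occurrences: %.1f" % lam)           # 9.8
print("P(12 or more): %.2f" % poisson_sf(12, lam))  # ~0.28 -- unremarkable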


(Note:  25k is probably a conservative estimate used for illustration purposes, based on how many blocks BTC Guild solved in its lifetime.  BTC Guild's lifetime is shorter than Slush's, but BTC Guild was larger than Slush for the majority of its existence, so they're probably similar in terms of total blocks solved.)



EDIT:  Actual number:  24,580, based on the Slush website block ID.  I know there have been a few IDs that were skipped due to a block being put into the database multiple times, so let's say 24,500.  That changes pretty much nothing stated above.
member
Activity: 285
Merit: 10
August 26, 2015, 10:16:45 AM
#2
1/2500 = 0.04% is far from impossible. Not even really that unlikely.
hero member
Activity: 836
Merit: 1021
bits of proof
August 25, 2015, 12:21:37 PM
#1
We know that the expected production of a mining pool is proportional to its market share (x%): the network finds 6 blocks per hour on average, so the pool expects 24*6*x% blocks per day.
 
The actual production, however, follows the Poisson distribution, for reasons given in the first paragraph of https://en.wikipedia.org/wiki/Poisson_distribution
 
"... the Poisson distribution ... expresses the probability of a given number of  events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event."

The probability of not finding a block within a time span in which one would expect n blocks is simply e^(-n).
Remark: this is the Poisson probability of zero events (k = 0) with lambda = n, which equals the CDF at 0.

This means one can assign a probability to an observed production outcome and quantify how likely it is.
A probability measure is more informative than the "luck" figure used on many sites, and you only need a pocket calculator for the check.
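For illustration, a minimal sketch of the check in Python (the only inputs are the time span and the pool's market share; the 24*6 factor is the network's ~144 blocks per day):

Code:
# Probability that a pool with a given market share finds zero blocks
# over `days` by bad luck alone: exp(-lambda), lambda = 24*6*days*share.
import math

BLOCKS_PER_DAY = 24 * 6  # ~144 blocks/day network-wide

def prob_zero_blocks(days, market_share):
    expected = BLOCKS_PER_DAY * days * market_share
    return math.exp(-expected)

# Example: a 5% pool going one full day without a block
print("%.4f%%" % (prob_zero_blocks(1, 0.05) * 100))  # ~0.0747%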

Historical Example:

Slush did not mine a single block for two consecutive days, between 19 and 21 Jun 2015, while reporting 9.516 PH/s of miners at the pool.
see https://mining.bitcoin.cz/stats/blocks

The difficulty implied a network total of 355.711 PH/s in the same period; see https://bitcoinwisdom.com/bitcoin/difficulty
This translates to a historical market share of 2.68%.

With that market share one would have expected 2*24*6*0.0268 (nearly 8) blocks in two days.

How probable is it that having no blocks in that time was just bad luck?

exp(-2*24*6*0.0268) = 0.04%

For me, that falls into the practically impossible bucket.
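A sketch reproducing the arithmetic above (numbers as given in this post):

Code:
# Reproduce the Slush example: two days, 2.68% market share.
import math

share = 9.516 / 355.711        # pool PH/s over network PH/s, ~2.68%
expected = 2 * 24 * 6 * share  # ~7.7 blocks expected over two days
p_zero = math.exp(-expected)

print("market share: %.2f%%" % (share * 100))               # 2.68%
print("P(zero blocks in 2 days): %.3f%%" % (p_zero * 100))  # ~0.045%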

