The '% of expected' round 'lengths' published on p2pool.info are uniformly distributed, with a very regular mean of about 160% (the p-value for the linear model is less than 10^-12). If '% of expected' works the way I think it does (normalises round length to D), then I'd expect it to be geometrically distributed with an average of 100%.
I think I'm not getting something about the way the '%luck' is calculated?
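To make concrete what I'm expecting, here is a quick simulation (a minimal sketch, assuming each difficulty-1 share has an independent 1/D chance of being a block, so the normalised round length should be roughly exponential with mean 100% and median around 69%):

```python
# Not p2pool code -- just a simulation of what '% of expected' should look like
# if it really is round length normalised to the difficulty D.
import random

D = 3_000_000      # made-up difficulty, only used to scale the simulation
ROUNDS = 10_000    # number of simulated rounds

pct_of_expected = []
for _ in range(ROUNDS):
    # Shares needed to find a block: geometric with success probability 1/D,
    # approximated here by an exponential with mean D.
    shares = random.expovariate(1.0 / D)
    pct_of_expected.append(shares / D)

pct_of_expected.sort()
mean = sum(pct_of_expected) / ROUNDS
median = pct_of_expected[ROUNDS // 2]
print(f"mean   % of expected: {mean * 100:.1f}%")    # ~100%
print(f"median % of expected: {median * 100:.1f}%")  # ~69% (ln 2)
```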
Raw data is here: http://p2pool.info/blocks
For each block, you can see:
- The actual number of "difficulty 1" shares submitted before the block was found. Note that since we don't actually know how many "difficulty 1" shares were submitted, this is an estimate of how many should have been submitted, based on the average hashrate at the time and the duration of the round. So if the hashrate published by http://localhost:9332/rate is wrong, this number of shares will also be wrong.
- The estimated number of "difficulty 1" shares theoretically needed based on the bitcoin difficulty at the time
% expected for a single block is: actual shares / expected shares
% luck over a 30-day window is: (sum of expected shares for all blocks found within 30 days) / (sum of actual shares for those same blocks)
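In code the two formulas look like this (a rough sketch; the "ExpectedShares" field name and the sample numbers are made up for illustration, while "ActualShares" matches the field shown in the raw data):

```python
# Rough sketch of the two formulas above. The "ExpectedShares" field name and
# the sample numbers are illustrative only.
def pct_expected(block):
    """% expected for a single block: actual shares / expected shares."""
    return block["ActualShares"] / block["ExpectedShares"]

def pct_luck(blocks_in_window):
    """% luck over a window: sum of expected shares / sum of actual shares."""
    total_expected = sum(b["ExpectedShares"] for b in blocks_in_window)
    total_actual = sum(b["ActualShares"] for b in blocks_in_window)
    return total_expected / total_actual

blocks = [
    {"ActualShares": 545905, "ExpectedShares": 400000},  # made-up ExpectedShares
    {"ActualShares": 250000, "ExpectedShares": 400000},
]
print(f"% expected, first block: {pct_expected(blocks[0]) * 100:.0f}%")
print(f"% luck over the window:  {pct_luck(blocks) * 100:.0f}%")
```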
Thanks for all that. If there is an error, my guess is it's in the "average hashrate". It might be better to simply count the total shares (and difficulty) received by your node.
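Something like this is what I mean by counting directly (a minimal sketch; the share difficulties are made up, and a real node would accumulate them as shares arrive):

```python
# Minimal sketch of counting work directly instead of estimating it from
# avg hashrate * duration. The share-chain difficulties below are made up.
shares_seen_this_round = [1660, 1660, 1702, 1702, 1750]  # difficulty of each share received

# A share of difficulty d represents, on average, d difficulty-1 shares of work,
# so summing the difficulties gives the "actual shares" figure directly.
actual_d1_shares = sum(shares_seen_this_round)
print(f"difficulty-1 shares actually received this round: {actual_d1_shares}")
```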
I will take a closer look at how the avg hashrate is calculated. An error there would mean you are starting with "dirty data".
Note: I am not saying there IS an error, just that given the 15% divergence we should take a closer look. Your computation "downstream" of the avg hashrate looks valid.
On edit:
A clarification
"ActualShares":545905
then getting avg hashrate over the block?
then duration * avg hashrate = # of hashes?
then # of hashes / 2^32 = # of shares?
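If that's the calculation, the arithmetic would be roughly this (a sketch with a made-up hashrate and round duration; the only fixed constant is 2^32 hashes per difficulty-1 share on average):

```python
# Sketch of the estimate being asked about above. The hashrate and duration
# are made up; in practice the hashrate would come from something like
# http://localhost:9332/rate averaged over the round.
avg_hashrate = 1.0e12        # hashes per second (made up)
round_duration = 2_000.0     # seconds from the previous block to this one (made up)

total_hashes = avg_hashrate * round_duration
actual_shares = total_hashes / 2**32   # one difficulty-1 share per 2^32 hashes on average

print(f"estimated 'ActualShares': {actual_shares:,.0f}")
```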