Author

Topic: [4+ EH] Slush Pool (slushpool.com); Overt AsicBoost; World First Mining Pool - page 685. (Read 4382714 times)

hero member
Activity: 886
Merit: 1013
18347    2013-06-02 19:34:47    0:47:47    7511826    318    0.00096651    239329    25.17718286

18346    2013-06-02 18:47:00    0:51:59    8189628    329    0.00052418    239319    25.04053933

NOT complaining, just wondering if anyone else is seeing a lesser reward on this round as it doesn't seem to be getting corrected?



It's alright for me.

I had a bit less than usual on 18343   2013-06-02 16:33:13   0:00:23   50946   150   0.07358613

otherwise good times Smiley

member
Activity: 61
Merit: 10
18347    2013-06-02 19:34:47    0:47:47    7511826    318    0.00096651    239329    25.17718286

18346    2013-06-02 18:47:00    0:51:59    8189628    329    0.00052418    239319    25.04053933

NOT complaining, just wondering if anyone else is seeing a lesser reward on this round as it doesn't seem to be getting corrected?

newbie
Activity: 56
Merit: 0
no registration on block 239281

member
Activity: 83
Merit: 10

Can you explain this? PM me if you don't want to post publicly.

(...)

Great! Have you been able to calculate 'c' yet? Having this will allow you to back calculate shares which will make figuring out the renormalisation easier.

(...)

I thought there was one somewhere. Maybe not. I'll try to find it, if it exists.

(...)

Since 'c' can't be changed without completely changing the 'hoppability' of the pool, you are in effect saying that given this restriction proper renormalisation or rescaling can't occur? Hmmm. Have to think about that.

You've obviously done a lot of work - you should post your results somewhere on the forum, as a work in progress. I'm keen to see what you have.

I will send you a PM. I tried to look for how exactly the rescaling works, but have not found detailed docs. I would really appreciate a link in case you have it handy.
Regarding reverse engineering the current value of C: I have read your blog post on the topic (http://organofcorti.blogspot.hu/2012/09/43-slushs-score-method-and-miner.html) - kudos to you, I enjoyed the read.

My idea was changing C on each rescaling, then resetting it to the original value when a new round starts. This would indeed affect the hop point. Honestly, I have not dived deeply into the hopping aspect of changing C intra-round yet; my first point of interest was checking how variance correlates with the time elapsed since the last renormalisation, with and without changing C.
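
A minimal sketch of that kind of check, assuming the collected block stats can be reduced to one record per block holding the time between the last renormalisation and the round end, plus the deviation of the actual reward from the expected one (the field names are placeholders, not the pool's actual JSON keys):

Code:
# Hypothetical block records, e.g.
# {"secs_since_renorm": 120, "reward_actual": 0.0312, "reward_expected": 0.0405}
from statistics import correlation  # Python 3.10+

def renorm_effect(blocks):
    """Correlation between the time since the last renormalisation at round end
    and the deviation of the actual reward from the expected reward."""
    xs = [b["secs_since_renorm"] for b in blocks]
    ys = [b["reward_actual"] - b["reward_expected"] for b in blocks]
    return correlation(xs, ys)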

I would avoid going public prematurely; the community might be a bit harsh, and I do not want to start a flame war before I can properly back up my points. I'll PM you - you are far more experienced with this type of statistical analysis than I am, so any ideas/comments are welcome.

Cheers,
   T
full member
Activity: 237
Merit: 100
Compare this block info:
18339    2013-06-02 14:00:37    2:02:30    19604110    328    0.00000000    239277    25.12461000    95 confirmations left
18331    2013-06-02 04:22:48    2:08:01    19975253    314    0.00046448    239200    25.06660003    18 confirmations left

On 18339 I stopped mining halfway through, and my previous shares' value is 0?!
On 18331 I mined the whole round, but with only one worker.

Is that fair?
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Thanks for the response, you are absolutely right. I do pull the statistics via the JSON API every 10 minutes and analyse & plot it automatically every 6 hours. (I do not want to pull more frequently for practical reasons.)

At this point in time, based on stats from the last 2 months, what I see confirmed is the "magical fix": a manual recalculation via PPS, which is easy to spot. Slush has been honest about it when I contacted him (he is very open and honest, and I do understand him not wanting to actively post here recently).

Can you explain this? PM me if you don't want to post publicly.

I do not think that, based on my current amount of data, I could back my assertion beyond a doubt; this is why I was very explicitly calling it a guess. I could collect data and perform "black box analysis" for any amount of time and still not reach 100%, as it is theoretically impossible to reach 100% certainty via passive black-box analysis, but the confidence level grows with the amount of data collected.

Great! Have you been able to calculate 'c' yet? Having this will allow you to back calculate shares which will make figuring out the renormalisation easier.
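
For illustration only, one way 'c' could be back-calculated from two observed per-share score increments, assuming the commonly cited form of slush's score (a share submitted t seconds into the round is worth exp(t/C)); a sketch, not the pool's code:

Code:
import math

def estimate_C(t1, s1, t2, s2):
    """Back-calculate C from two shares seen at round times t1 < t2 (seconds)
    whose individual score increments were s1 and s2.
    Assumes score(t) = exp(t / C), so C = (t2 - t1) / ln(s2 / s1)."""
    return (t2 - t1) / math.log(s2 / s1)

# Example: two shares 300 s apart whose score increments differ by a factor of e
print(estimate_C(600, math.e ** 2, 900, math.e ** 3))  # -> 300.0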

Checking the code (i.e. white-box analysis) would take much less effort and allow for a statement beyond doubt. And honestly, I have never even seen a precise description of what exactly is done on a renormalisation; only the fact that a renormalisation is periodically performed is mentioned. In contrast, Meni Rosenfeld is very explicit about how rescaling should work when using DGM (https://bitcointalksearch.org/topic/double-geometric-method-hopping-proof-low-variance-reward-system-39497).

I thought there was one somewhere. Maybe not. I'll try to find it, if it exists.

Regarding changes to C: what I mean is that on a rescaling/renormalisation, when the score is changed but C is not, you very aggressively change the weight of all the work performed before the renormalisation versus the weight of the new per-share increment. However, if the score is divided by X and C is multiplied by log(X), then the value of previous scores relative to the score increment for a new share is kept the same (maintaining the exponential semantics used by slush), and you end up with the same exponential curve, just rescaled (zoomed out).

Since 'c' can't be changed without completely changing the 'hoppability' of the pool, you are in effect saying that given this restriction proper renormalisation or rescaling can't occur? Hmmm. Have to think about that.

You've obviously done a lot of work - you should post your results somewhere on the forum, as a work in progress. I'm keen to see what you have.
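
The point of contention above can be checked with a few lines of arithmetic. A minimal sketch, assuming the commonly cited score form (a share at t seconds into the round scores exp(t/C)): it shows one way all existing scores can be scaled down by a common factor without touching 'c' and without changing the ratio of an old share's weight to a new share's weight. The pool's actual implementation may differ.

Code:
import math

C = 300.0                        # assumed score constant
T = 3600.0                       # time of the renormalisation, seconds into the round
t_old, t_new = 3500.0, 3700.0    # a share just before and a share just after the rescale
X = math.exp(T / C)              # scale-down factor applied to every accumulated score

ratio_plain = math.exp(t_old / C) / math.exp(t_new / C)

# Rescale: divide every accumulated score by X and measure new shares from T.
old_rescaled = math.exp(t_old / C) / X           # == exp((t_old - T) / C)
new_rescaled = math.exp((t_new - T) / C)
ratio_rescaled = old_rescaled / new_rescaled

assert math.isclose(ratio_plain, ratio_rescaled)  # relative weights unchanged
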
KNK
hero member
Activity: 692
Merit: 502
Proper normalisation is time- (CPU-) consuming and very difficult to do on several nodes simultaneously; that's why I also think it is a 'reset' instead of a normalisation.
My guess is that:
On reset, a signal is sent to all nodes (but some do not see the message, and that's where things go wrong)
Each node recalculates per-miner scores based on the miner's last share
The central node / database sums all the scores sent from the nodes
Now if one node does not reset its data, all miners are affected except those on that node.
In such cases the total score remains high after the reset, and this can probably be used as a trigger (more of a flag, actually) for recalculation if the round is not long enough to cause another reset, which would clear the 'recalculation needed' flag if successful.
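
A sketch of the kind of check just described, purely hypothetical (the function, its arguments and the tolerance are assumptions, not anything known about slush's code):

Code:
def needs_recalculation(total_before, total_after, reset_factor, tolerance=0.05):
    """Flag a round for recalculation if the summed score did not drop by
    roughly the expected factor after a reset, suggesting that at least one
    node ignored the reset message."""
    expected = total_before / reset_factor
    return total_after > expected * (1.0 + tolerance)
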
member
Activity: 83
Merit: 10

OrganOfCorti,

I have read many of your writings and comments here and I am convinced you know very well what you are talking about. However, with all respect, I am not sure you are 100% right in this case. Yes, what you write is how it should work, but I wonder if you have seen and analysed the actual code performing the renormalisation. What we know from experience is that a round that ended very soon after a renormalisation yields rewards that are off by orders of magnitude, affecting some miners positively and others negatively. It seems that the closer to a renormalisation the round ends, the larger the deviation from the expected average reward.
No, no-one I know has audited slush's code. So any claims that the renormalisation is broken need to be proven - anecdotal evidence is insufficient. You'll need to prove your assertion beyond a doubt, not just show examples of when you think it misfires. You could be correct - why not actually do the groundwork and prove it?

My guess (which is only a guess based on observations) is that the constant C in the formula is not changed after the renormalisation to match the scale-down factor applied to existing scores.

If it were correctly changed, then a share submitted right after a renormalisation would have the same impact/importance as a share submitted right before the renormalisation. This does not seem to be the case.

'c' should never be changed. If it were, the score method wouldn't work.

In really extreme cases, where the deviation is significant (e.g. only a few satoshis for continuous miners, even at around 1 GH/s), Slush recalculates the blocks manually with PPS. You can see that after such a "magical fix", in some cases your score is equal to the number of shares you submitted. (I have been collecting and storing block stats for two months and am not just making this up.)

Please let me know if you think I made a logical mistake in my conclusions; I am always open to learning. Periodically revisiting and challenging previous theories and statements is also part of learning, so no offence is intended whatsoever.

Cheers,
   T

You think something's wrong - fair enough. But now you need to prove it. You'll need lots of data, and you'll have to scrape slush's site every few minutes. But it's doable.

Good luck!



Thanks for the response, you are absolutely right. I do pull the statistics via the JSON API every 10 minutes and analyse & plot it automatically every 6 hours. (I do not want to pull more frequently for practical reasons.)
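
A minimal polling loop in the spirit of what is described here: fetch a JSON stats endpoint every 10 minutes and append the result to a local log for later analysis. The URL is a placeholder, not the actual slush API.

Code:
import json
import time
import urllib.request

STATS_URL = "https://example-pool.invalid/stats/json/YOUR_TOKEN"  # placeholder

def poll_once(log_path="pool_stats.jsonl"):
    """Fetch the stats JSON once and append it, timestamped, to a log file."""
    with urllib.request.urlopen(STATS_URL, timeout=30) as resp:
        stats = json.load(resp)
    with open(log_path, "a") as f:
        f.write(json.dumps({"ts": time.time(), "stats": stats}) + "\n")

if __name__ == "__main__":
    while True:
        try:
            poll_once()
        except Exception as exc:
            print("poll failed:", exc)
        time.sleep(600)  # 10 minutes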

At this point in time, based on stats from the last 2 months, what I see confirmed is the "magical fix": a manual recalculation via PPS, which is easy to spot. Slush has been honest about it when I contacted him (he is very open and honest, and I do understand him not wanting to actively post here recently).

I do not think that, based on my current amount of data, I could back my assertion beyond a doubt; this is why I was very explicitly calling it a guess. I could collect data and perform "black box analysis" for any amount of time and still not reach 100%, as it is theoretically impossible to reach 100% certainty via passive black-box analysis, but the confidence level grows with the amount of data collected.

Checking the code (i.e. white-box analysis) would take much less effort and allow for a statement beyond doubt. And honestly, I have never even seen a precise description of what exactly is done on a renormalisation; only the fact that a renormalisation is periodically performed is mentioned. In contrast, Meni Rosenfeld is very explicit about how rescaling should work when using DGM (https://bitcointalksearch.org/topic/double-geometric-method-hopping-proof-low-variance-reward-system-39497).

Regarding changes to C: what I mean is that on a rescaling/renormalisation, when the score is changed but C is not, you very aggressively change the weight of all the work performed before the renormalisation versus the weight of the new per-share increment. However, if the score is divided by X and C is multiplied by log(X), then the value of previous scores relative to the score increment for a new share is kept the same (maintaining the exponential semantics used by slush), and you end up with the same exponential curve, just rescaled (zoomed out).

Cheers,
   T
donator
Activity: 2058
Merit: 1007
Poor impulse control.
You'll need lots of data, and you'll have to scrape slush's site every few minutes. But it's doable.

Good luck!

You just don't read, do you?
I have been collecting and storing block stats for two months and am not just making this up.

You are so sure of what you think that everyone else is just wrong and your theory is better than others' observations based on facts.

TiborB hasn't produced a proof, just anecdotal evidence. He needs to show exactly what should have happened if a renormalisation rather than a "reset" occurred for his dataset.

That's why I didn't want to argue with you a few pages back, and I won't even now.

Good-oh!
KNK
hero member
Activity: 692
Merit: 502
You'll need lots of data, and you'll have to scrape slush's site every few minutes. But it's doable.

Good luck!

You just don't read, do you?
I have been collecting and storing block stats for two months and am not just making this up.

You are so sure of what you think that everyone else is just wrong and your theory is better than others' observations based on facts.
That's why I didn't want to argue with you a few pages back, and I won't even now.
donator
Activity: 2058
Merit: 1007
Poor impulse control.

OrganOfCorti,

I have read many of your writings and comments here and I am convinced you know very well what you are talking about. However, with all respect, I am not sure you are 100% right in this case. Yes, what you write is how it should work, but I wonder if you have seen and analysed the actual code performing the renormalisation. What we know from experience is that a round that ended very soon after a renormalisation yields rewards that are off by orders of magnitude, affecting some miners positively and others negatively. It seems that the closer to a renormalisation the round ends, the larger the deviation from the expected average reward.
No, no-one I know has audited slush's code. So any claims that the renormalisation is broken need to be proven - anecdotal evidence is insufficient. You'll need to prove your assertion beyond a doubt, not just show examples of when you think it misfires. You could be correct - why not actually do the groundwork and prove it?

My guess (which is only a guess based on observations) is that the constant C in the formula is not changed after the renormalisation to match the scale-down factor applied to existing scores.

If it were correctly changed, then a share submitted right after a renormalisation would have the same impact/importance as a share submitted right before the renormalisation. This does not seem to be the case.

'c' should never be changed. If it were, the score method wouldn't work.

In really extreme cases, where the deviation is significant (e.g. only a few satoshis for continuous miners, even at around 1 GH/s), Slush recalculates the blocks manually with PPS. You can see that after such a "magical fix", in some cases your score is equal to the number of shares you submitted. (I have been collecting and storing block stats for two months and am not just making this up.)

Please let me know if you think I made a logical mistake in my conclusions; I am always open to learning. Periodically revisiting and challenging previous theories and statements is also part of learning, so no offence is intended whatsoever.

Cheers,
   T

You think something's wrong - fair enough. But now you need to prove it. You'll need lots of data, and you'll have to scrape slush's site every few minutes. But it's doable.

Good luck!

newbie
Activity: 37
Merit: 0
Honestly. The reset thingy is just another bug caused by arithmetic overflow. This pool is so full of bugs I'm not sure it can be called a pool any longer. It's something between Satoshi Dice, a daycare center and a mining pool.

Is it a bingo hall?
vs3
hero member
Activity: 622
Merit: 500
Now what!!!???
18329    2013-06-02 01:07:40    0:20:23    3210872    334    0.00273237    239177    25.14480000    invalid

This was already a known fact:

239177 orphaned.

Grr, etc. Oh well, at least it isn't because of some flawed logic in an equation.
member
Activity: 83
Merit: 10
Exactly, and you should reset the score for all miners at a specific time, because the total score is the sum of all the miners' scores.

EDIT: I guess you have missed my previous post to you - https://bitcointalksearch.org/topic/m.2330839

I see, but you should understand that when you reset all scores, we lose information.
That means that when all scores are reset at 1:00:00, everything starts anew, and whatever I contributed before that would be lost, because my score would then be 0.
The users' scores should NOT be reset!

Perhaps you missed this:

The "reset" just normalises all the scores. No information is lost. If no "reset" happened, you'd receive the same score and the same reward.

Or maybe you didn't understand?
Then I think I don't understand.
To me, "reset" means resetting scores to zero.

OK, then reset your idea about "reset": it just normalises all the scores. No information is lost. If no "reset" happened, you'd receive the same score and the same reward.

Let's hear no more about it, eh?




OrganOfCorti,

I have read many of your writings and comments here and I am convinced you know very well what you are talking about. However, with all respect, I am not sure you are 100% right in this case. Yes, what you write is how it should work, but I wonder if you have seen and analysed the actual code performing the renormalisation. What we know from experience is that a round that ended very soon after a renormalisation yields rewards that are off by orders of magnitude, affecting some miners positively and others negatively. It seems that the closer to a renormalisation the round ends, the larger the deviation from the expected average reward.

My guess (which is only a guess based on observations) is that the constant C in the formula is not changed after the renormalisation to match the scale-down factor applied to existing scores. If it were correctly changed, then a share submitted right after a renormalisation would have the same impact/importance as a share submitted right before the renormalisation. This does not seem to be the case.

In really extreme cases, where the deviation is significant (e.g. only a few satoshis for continuous miners, even at around 1 GH/s), Slush recalculates the blocks manually with PPS. You can see that after such a "magical fix", in some cases your score is equal to the number of shares you submitted. (I have been collecting and storing block stats for two months and am not just making this up.)

Please let me know if you think I made a logical mistake in my conclusions; I am always open to learning. Periodically revisiting and challenging previous theories and statements is also part of learning, so no offence is intended whatsoever.

Cheers,
   T




donator
Activity: 2058
Merit: 1007
Poor impulse control.
To be quite honest - I'm not sure why there is any need for a "reset" in the first place. I mean - why set it to zero? Why not just set it to half of what it is - that way the proportional part for everyone would stay the same (i.e. each miner's contribution % would not change)?


P.S. Someone suggested earlier that it is a double integer - that's probably wrong, as it seems to be a floating-point type (most likely whatever the default Python/PHP one is).


The "reset" just normalises all the scores. No information is lost. If no "reset" happened, you'd receive the same score and the same reward.


THERE IS NO "RESET TO ZERO".
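
A small numeric illustration of why such a normalisation loses no information, using made-up scores: payouts are proportions, and dividing every miner's score by the same factor leaves every proportion unchanged.

Code:
scores = {"alice": 1.2e12, "bob": 3.6e12, "carol": 7.2e12}
before = {m: s / sum(scores.values()) for m, s in scores.items()}

X = 1e6  # any common normalisation factor
rescaled = {m: s / X for m, s in scores.items()}
after = {m: s / sum(rescaled.values()) for m, s in rescaled.items()}

assert all(abs(before[m] - after[m]) < 1e-12 for m in scores)  # same payout shares
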
sr. member
Activity: 337
Merit: 250
Now what!!!???
18329    2013-06-02 01:07:40    0:20:23    3210872    334    0.00273237    239177    25.14480000    invalid
vs3
hero member
Activity: 622
Merit: 500
To be quite honest - I'm not sure why there is any need for a "reset" in the first place. I mean - why set it to zero? Why not just set it to half of what it is - that way the proportional part for everyone would stay the same (i.e. each miner's contribution % would not change)?


P.S. Someone suggested earlier that it is a double integer - that's probably wrong, as it seems to be a floating-point type (most likely whatever the default Python/PHP one is).
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Exactly, and you should reset the score for all miners at a specific time, because the total score is the sum of all the miners' scores.

EDIT: I guess you have missed my previous post to you - https://bitcointalksearch.org/topic/m.2330839

I see, but you should understand that when you reset all scores, we lose information.
That means that when all scores are reset at 1:00:00, everything starts anew, and whatever I contributed before that would be lost, because my score would then be 0.
The users' scores should NOT be reset!

Perhaps you missed this:

The "reset" just normalises all the scores. No information is lost. If no "reset" happened, you'd receive the same score and the same reward.

Or maybe you didn't understand?
Then I think I don't understand.
To me, "reset" means resetting scores to zero.

OK, then reset your idea about "reset": it just normalises all the scores. No information is lost. If no "reset" happened, you'd receive the same score and the same reward.

Let's hear no more about it, eh?


newbie
Activity: 34
Merit: 0
Slush, any news on blocks
18309 and 18318 regarding the miscalculations?
Thanks