
Topic: Analysis of Bitcoin Pooled Mining Reward Systems - page 3.

donator
Activity: 2058
Merit: 1054
Chapter 6 done.
donator
Activity: 2058
Merit: 1054
Btw, have you looked at coinotron's score system?
Code:
# share_time: time from round start to this share; round_duration: the round's length
score <- score + exp(C * share_time / round_duration)
Why do people keep making up these things? There are reward systems which actually work, so why not use one of those instead? From their site - "anti-cheating score system punishing pool-hopping cheaters" - the fact that they think pool-hoppers should be "punished" (rather than simply using a fair, hopping-invariant method), or that their method does this with any effectiveness, demonstrates they don't know what they're doing.

It may be "resistant" to changes in hashrate and difficulty, but it's not immune. It has a different variance/hopping profile from Slush's, with at least one significant disadvantage - the profitability has no lower bound for arbitrarily long rounds.

which means the score has a maximum of exp(C) and the final pool score will be (total submitted shares)/(C*exp(C)) if you assume a constant hashrate for the pool.
Probably should be (total submitted shares*exp(C))/C: with a constant hashrate the total score is about N times the integral of exp(C*x) for x from 0 to 1, which is (exp(C)-1)/C, roughly exp(C)/C for any reasonably large C.
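A quick numerical check of that approximation (N and C are arbitrary illustrative values):
Code:
# N shares at a constant rate over one round: share i arrives at fraction i/N.
N <- 100000   # arbitrary number of shares in the round
C <- 5        # arbitrary score constant
sum(exp(C * (1:N) / N))   # exact final pool score: ~2.95e6
N * exp(C) / C            # approximation:          ~2.97e6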
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Nice work on your latest edit. Still reading it and working through the math bit by bit (heh) and especially loving the appendix on hashrate increase for slush-scored pools.

Btw, have you looked at coinotron's score system?
Code:
# share_time: time from round start to this share; round_duration: the round's length
score <- score + exp(C * share_time / round_duration)
which means the score has a maximum of exp(C) and the final pool score will be (total submitted shares)/(C*exp(C)) if you assume a constant hashrate for the pool.

It's interesting - it moves the point at which expected share efficiency reaches 1.0 out to about 0.73*D, but shows a much lower overall increase in expected share value over that range, never getting above about 1.25 at maximum. The expected round efficiency at that point is also quite low. Also, as far as I can tell it's resistant to changes in hashrate and D, unlike Slush's score. A nice find, that one (plus the function was easier to integrate than Slush's).

donator
Activity: 2058
Merit: 1054
For example, over a period of n rounds the buffer will change in an amount on the order of sqrt(n) times the block reward. The probability that it will take at least n rounds to recover from a negative buffer of m times the block reward is roughly (m/sqrt(n))*sqrt(2/Pi). (From which it follows that the expected time to recovery is infinite.)
I'm not sure I follow this - if m is large and n is small, you get a large probability of the negative buffer recovering quickly. Can you provide an example for me?
The approximation only holds for sufficiently large n and m. So you can't use it directly to find the probability of quick recovery. But you can do it in reverse. For example, m=10, n=1000 gives a probability of 25.2% that it will take at least 1000 rounds to recover from -500 BTC, which means probability of 74.8% to recover within 1000 rounds. While m=20, n=1000 gives a probability of 50.4% for at least 1000 rounds, meaning only 49.6% probability of less than 1000 rounds.
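The formula wrapped as a small R function to reproduce those numbers:
Code:
# P(recovery from a buffer of -m block rewards takes at least n rounds),
# valid only for sufficiently large m and n.
recovery_prob <- function(m, n) (m / sqrt(n)) * sqrt(2 / pi)
recovery_prob(10, 1000)  # 0.252 -> 74.8% chance of recovering within 1000 rounds
recovery_prob(20, 1000)  # 0.504 -> 49.6% chance of recovering within 1000 rounds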
donator
Activity: 2058
Merit: 1007
Poor impulse control.
For example, over a period of n rounds the buffer will change in an amount on the order of sqrt(n) times the block reward. The probability that it will take at least n rounds to recover from a negative buffer of m times the block reward is roughly (m/sqrt(n))*sqrt(2/Pi). (From which it follows that the expected time to recovery is infinite.)

I'm not sure I follow this - if m is large and n is small, you get a large probability of the negative buffer recovering quickly. Can you provide an example for me?
donator
Activity: 2058
Merit: 1054
Great work Meni. As readable and informative as ever. And also nice to know that the crazy huge negative buffers I seemed to get in simulation aren't a bug but a feature. Over hundreds of thousands of simulated rounds the buffers go up and down like a bride's nightie, but the time taken to get out of a negative buffer hole can be hundreds of rounds.
Yes, Brownian motion is pretty crazy but quite a lot can be said about its dynamics. For example, over a period of n rounds the buffer will change in an amount on the order of sqrt(n) times the block reward. The probability that it will take at least n rounds to recover from a negative buffer of m times the block reward is roughly (m/sqrt(n))*sqrt(2/Pi). (From which it follows that the expected time to recovery is infinite.)

Starting with a very large positive buffer (say 1000 times the bitcoin reward for a block) can make it much less likely that an SMPPS pool will go into a negative buffer in the short to medium term, but the more rounds I simulate the greater the changes in buffer can be. Even if the pool knows it might go back into positive at some later point, can it survive until then?
The pool has no problem surviving as long as its buffer is high - in this case the distinction between it and PPS is blurred. If the buffer is currently positive but on a more earthly level, people are signing up for a pool which is by design doomed to failure (even if it will take a while for the failure to actually take place).
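A toy simulation of those buffer dynamics (assuming a unit block reward and exponentially distributed round costs in block-reward units, which makes the buffer a zero-mean random walk):
Code:
set.seed(1)
n_rounds <- 1e5
round_cost <- rexp(n_rounds)      # PPS cost of each round's shares, mean 1
buffer <- cumsum(1 - round_cost)  # buffer after each round: earn 1, pay the cost
min(buffer)                       # excursions on the order of -sqrt(n_rounds)
mean(buffer < 0)                  # fraction of rounds spent with a negative buffer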
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Great work Meni. As readable and informative as ever. And also nice to know that the crazy huge negative buffers I seemed to get in simulation aren't a bug but a feature. Over hundreds of thousands of simulated rounds the buffers go up and down like a bride's nightie, but the time taken to get out of a negative buffer hole can be hundreds of rounds.

Starting with a very large positive buffer (say 1000 times the bitcoin reward for a block) can make it much less likely that an SMPPS pool will go into a negative buffer in the short to medium term, but the more rounds I simulate the greater the changes in buffer can be. Even if the pool knows it might go back into positive at some later point, can it survive until then?

Anyway, nice to know the reasons behind the observations. Thanks!
donator
Activity: 2058
Merit: 1054
Chapter 4 (*MPPS) is done.
donator
Activity: 2058
Merit: 1054
Chapter 3 is complete.
donator
Activity: 2058
Merit: 1054
The geometric method is basically like slush's method but

1. With share-based decay rather than time-based.
2. With operator score to always maintain a steady state.

And a moment of satori for free. I followed your explanation on the thread but I don't think I really understood it until just then.
Yeah, I tried, perhaps unsuccessfully, to get across the idea that the 1/(r-1) operator score in the method represents infinitely many shares submitted by the operator before the round starts. So at any point a miner who wants to submit a share sees behind him an infinite sequence of shares, so the competition with existing shares is always the same. That is, in the beginning of the round, the past, present and future shares look like this:
..., r^-5, r^-4, r^-3, r^-2, r^-1, (r^0), r^1, r^2, r^3, r^4, r^5, ...
10 shares into the round, it looks like this:
..., r^5, r^6, r^7, r^8, r^9, (r^10), r^11, r^12, r^13, r^14, r^15, ...
This is exactly like the previous case, but with everything scaled by a factor of r^10 which doesn't matter. Thus the statistical properties of the payout for a submitted share are always the same.
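A quick check of the operator score in R (r = 1.1 is an arbitrary example):
Code:
# The operator's score is the sum of the infinite series r^-1 + r^-2 + ...
r <- 1.1
sum(r^-(1:200))  # ~10: the series, truncated at 200 terms
1 / (r - 1)      # 10: the closed form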
donator
Activity: 2058
Merit: 1007
Poor impulse control.
The geometric method is basically like slush's method but

1. With share-based decay rather than time-based.
2. With operator score to always maintain a steady state.

And a moment of satori for free. I followed your explanation on the thread but I don't think I really understood it until just then.

Quote
And there's no point in a lookup table, the calculation isn't expensive

You're right of course - I was looking at it from a simulation pov, where a table is needed if you want to simulate 10^7 rounds' worth of data in a reasonable time. But in real time, I can see load wouldn't be a significant problem.

Quote
you only need to make it robust against overflows.

I'm changing to using the R package Brobdingnag, which does some nifty log conversions behind the scenes so you can continue as if not using logs, specifically for very large (and I suppose very small) numbers. I think it should be a bit more cycle-sparing (for me) than going for multiple precision, and less hacky than renormalising every n simulated shares.
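For comparison, a plain-R sketch of the same idea without any package - keep the scores in log space and combine them with log-sum-exp, so exp(n/c) never overflows (the constants are illustrative):
Code:
log_sum_exp <- function(v) {        # log(sum(exp(v))) without overflow
  m <- max(v)
  m + log(sum(exp(v - m)))
}
c_const <- 1000
log_scores <- (1:1e6) / c_const     # log of exp(n/c) is just n/c
log_sum_exp(log_scores)             # ~1006.9; sum(exp(log_scores)) would be Inf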
donator
Activity: 2058
Merit: 1054
Yes, I meant during a round and you answered my next question too.

Even if a score = score + exp(n/constant) scoring rule would still be hoppable, I'd have thought it more stable. A score look-up table for the nth share could then be used instead of a recalculation. I think.
The geometric method is basically like slush's method but

1. With share-based decay rather than time-based.
2. With operator score to always maintain a steady state.

Each of these improvements can be used regardless of the other, and what you're suggesting is basically doing the first without the second. This is an improvement over the current slush method, but the second improvement is more significant, so if you're thinking about these things you may as well just go geometric. (Or PPLNS if you want to decrease variance and are ok with crossing round boundaries and increased maturity time).

And there's no point in a lookup table, the calculation isn't expensive, you only need to make it robust against overflows.
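For concreteness, here's a minimal R sketch of the geometric idea (an illustration, not the exact implementation; reward = 50 and r = 1.001 are arbitrary, and a real pool would compute the scores in log space to avoid the overflows just mentioned):
Code:
geometric_payouts <- function(N, r, reward = 50) {
  scores   <- r^(1:N)      # share k of the round gets score r^k
  op_score <- 1 / (r - 1)  # the operator's "infinite past shares"
  reward * scores / (sum(scores) + op_score)
}
p <- geometric_payouts(N = 1000, r = 1.001)
tail(p, 3)  # later shares in the round are worth more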
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Yes, I meant during a round and you answered my next question too.

Even if a score = score + exp(n/constant) scoring rule would still be hoppable, I'd have thought it more stable. A score look-up table for the nth share could then be used instead of a recalculation. I think.
donator
Activity: 2058
Merit: 1054
I find the apparently anti-intellectual stance the tax system implies to be a bit odd for a pool op - you can't just decide, "We decided on 50% as the supposed benefit from pool hopping would be only an additional 30% income", based on incorrect facts and a lack of knowledge. More recently the pool ops have shown a lack of knowledge about even the basic facts of mining.
They should lie down before they hurt themselves. Designing reward systems requires a clue, of which they have none. Not to mention their arrogant, obnoxious attitude when discussing these matters.

On an unrelated but still topical subject, do you know if anyone ever proposed a dynamic c for Slush scored pools? Both BTCMine and Slush's pool have had about 25% reduced hashrates recently and I wonder if fulltime miners will get tired of increased variance before they get around to changing c.

Since at share n, exp(duration/c) == exp(n/(av shares per second for the round * c)), wouldn't it be better to keep (av shares per second)*c at a constant value? Or would a dynamic c be too hard to implement? Just an interesting thought.
Are you talking mid-round or between rounds? Between rounds, that's what I tried to suggest here. The real quantity controlling the tradeoff between variance and hoppability is c*hashrate, so to maintain the same tradeoff c should be chosen inversely to the hashrate. I was told that in BTCMine, "of course decay adjusted to lover[sic] value, when pool hashrate grow.", but they didn't specify if this was done automatically.

If c is dynamically changed during the round, this reduces to just doing the right thing and basing decay on number of shares rather than amount of time.
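As a sketch, choosing c between rounds is a one-liner (the target of c = 300 at 1820 Ghps is an arbitrary example, not a recommendation):
Code:
# Keep c * hashrate constant so the variance/hoppability tradeoff is unchanged.
c_for_hashrate <- function(hashrate_ghps, target = 300 * 1820) target / hashrate_ghps
c_for_hashrate(1820)  # 300
c_for_hashrate(1365)  # 400: a 25% drop in hashrate calls for a ~33% larger c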
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Yes, and there was a proposal on the bitHopper forum about enabling 'trickle mining' to do exactly that.

pool.itzod.ru had a similar tax system, which apparently got quite complicated in order to close loopholes like that one. They are reportedly PPLNS now.

I find the apparently anti-intellectual stance the tax system implies to be a bit odd for a pool op - you can't just decide, "We decided on 50% as the supposed benefit from pool hopping would be only an additional 30% income", based on incorrect facts and a lack of knowledge. More recently the pool ops have shown a lack of knowledge about even the basic facts of mining.

On an unrelated but still topical subject, do you know if anyone ever proposed a dynamic c for Slush scored pools? Both BTCMine and Slush's pool have had about 25% reduced hashrates recently and I wonder if fulltime miners will get tired of increased variance before they get around to changing c.

Since at share n, exp(duration/c) == exp(n/(av shares per second for the round * c)), wouldn't it be better to keep (av shares per second)*c at a constant value? Or would a dynamic c be too hard to implement? Just an interesting thought.
donator
Activity: 2058
Merit: 1054
They define the system as follows:

Quote
If a user participates in less than 50% of the round, their shares will be reduced by 50%, regardless of donation. 50% of the penalty fee will be directed toward the donations account and will be applied to server costs and future monthly contests. The other 50% of the penalty will be removed from the total shares for the round, which will in-hand cause the value of all remaining shares in the round to increase.
How do they define "participate"? Assuming that, as written here, they take the time between first and last shares, this is completely idiotic. You can still hop at 43% as usual, and just submit a share once in a while to maintain a >50% span. If you've mined for an hour, you need to submit a single share one hour later, then 2 hours, then 4, until the round ends. (You always submit a share a few minutes before twice the time of your last share.)
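The schedule as a toy R function (the 5-minute margin and the 20-hour round are made up for illustration):
Code:
# Times (in hours) to submit one share so the first-to-last-share span stays >50%.
doubling_schedule <- function(round_hours, mined_hours = 1, margin = 1/12) {
  t <- mined_hours
  times <- c()
  while (t < round_hours) {
    t <- 2 * t
    times <- c(times, min(t - margin, round_hours))
  }
  times
}
doubling_schedule(20)  # 1.92 3.92 7.92 15.92 20.00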
donator
Activity: 2058
Merit: 1007
Poor impulse control.
They define the system as follows:

Quote
If a user participates in less than 50% of the round, their shares will be reduced by 50%, regardless of donation. 50% of the penalty fee will be directed toward the donations account and will be applied to server costs and future monthly contests. The other 50% of the penalty will be removed from the total shares for the round, which will in-hand cause the value of all remaining shares in the round to increase.

And yes, it's an ugly system, and it still fails. If they wanted something still proportional with fairly low variance they could have used Slush's score with a large 'c'. Instead the system is designed to seem fair to full-time miners while still leaving quite a hole for hoppers: ~2.0 efficiency per hopped round, and a hop point of 0.23*difficulty.

I think it might become popular because of its seeming fairness. I'm hoping it won't though. I got a post out on it rather quickly and I'm hoping a few miners will understand the pointlessness of a 50-50 tax.

donator
Activity: 2058
Merit: 1054
Another thing I noticed was that although too low a 'c' increases variance greatly, the expected payout for submitting a share at any time approaches 1.0 as c decreases. My explanation was that as c decreases for a given hashrate, reward and variance approach those of solo mining. Correct?
Exactly. Basically when c=0 each share is infinitely more valuable than the last, so only the winning share is rewarded, which is essentially solo.
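This is easy to see numerically: normalize Slush-style weights exp(t/c) for, say, ten shares (illustrative numbers) and shrink c:
Code:
w <- function(c, t = 1:10) { s <- exp(t / c); s / sum(s) }
round(w(c = 5),   3)  # weight spread over the most recent shares
round(w(c = 0.2), 3)  # ~0.993 on the last share: essentially solo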

Also, bitcoinpool (and I think polmine, not sure) recently decided to do a 'tax' on pool hoppers: still scoring proportionally, then taxing hoppers - by 50% in bitcoinpool's case. I'd like to see an analysis of this if you have time - and whether you think it will become a common idea amongst pools. Simulations just show a reduced hop point and reduced efficiency for the round compared to proportional, but still much better than a slush-scored pool with any c value under 800 (for 1820 Ghps).
I hope this ugly, unfair patching won't become popular. I might write about it at some point. How do they determine who is a hopper?
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Meni, nice work on Slush-style pools. I've almost finished a blog series looking at Slush and score from a simulation pov, and I got qualitatively the same results as you. Always nice to see someone explain clearly what a simulation shows. Another thing I noticed was that although too low a 'c' increases variance greatly, the expected payout for submitting a share at any time approaches 1.0 as c decreases. My explanation was that as c decreases for a given hashrate, reward and variance approach those of solo mining. Correct?

Also, bitcoinpool (and I think polmine, not sure) recently decided to do a 'tax' on pool hoppers: still scoring proportionally, then taxing hoppers - by 50% in bitcoinpool's case. I'd like to see an analysis of this if you have time - and whether you think it will become a common idea amongst pools. Simulations just show a reduced hop point and reduced efficiency for the round compared to proportional, but still much better than a slush-scored pool with any c value under 800 (for 1820 Ghps).

Thanks again for the hard work you put into this paper.
donator
Activity: 2058
Merit: 1007
Poor impulse control.
BTCguild with their 1-hour delay on stats makes the simplest PP system practically hopper-proof.
I doubt that; hoppers are just too lazy to hop it properly.

It's true. You can still hop it in exactly the same manner as before, it's just less effective. To make proportional hopper-proof you need a delay that is longer than a round could ever conceivably be.

Well, one could delay the stats by 3 hours, which on BTCguild would cover the majority of rounds (at this difficulty); but since very many rounds are in the area of an hour, the current delay already keeps most hoppers away.
Maybe if/when there are no more other good PP pools the hoppers will work harder and try to hop there as well.
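As a rough sanity check, assuming exponentially distributed round lengths with a mean of one hour (the mean is purely illustrative):
Code:
pexp(1, rate = 1)  # 0.63: a 1-hour delay fully covers ~63% of rounds
pexp(3, rate = 1)  # 0.95: a 3-hour delay fully covers ~95% of rounds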

bitHopper allows hopping using LP. It has been successful for me. Almost on par with normal prop pools, and better than using btcguild with the delay.