
Topic: Double geometric method: Hopping-proof, low-variance reward system - page 7. (Read 75585 times)

vip
Activity: 980
Merit: 1001
Ozcoin Pooled Mining now offers DGM  Grin

Thanks Meni  Cheesy
donator
Activity: 2058
Merit: 1054
Perhaps you can just take the payout from the formula today, divide it by 50 and multiply by the actual income.
It needs to be the potential income when the share was submitted, not the received income when a block is found.
I'm looking for a way to determine the estimated block reward B when a share (which does not solve the block) is submitted. Is that even possible?

A possible solution I have in mind: we increase the user's score for every share submitted on block x as soon as block x is solved and we know exactly how much B was.

Is that the way it should be done?
That's exactly the thing I said shouldn't be done. The correct way is much simpler: use the block reward of the share itself. That is, the share is a hash of a block header you hand out for the getwork; that header commits to a merkle root of a transaction list with a given total transaction fee, which together with the generation amount is the total block reward. That's the value of B which should be used for this share.
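A minimal sketch of the bookkeeping this implies (all names here are invented for illustration): record the total reward of each work template when it is handed out, and score a returned share with that recorded value rather than with the reward of whichever block ends the round.

```python
# Hypothetical sketch (names invented): score each share with the block
# reward of the work template it came from, not with the reward of the
# block that eventually ends the round.

templates = {}  # work id -> total reward (generation + fees) of that template

def hand_out_work(work_id, generation, fees):
    """Record the total reward embedded in the template we hand out."""
    templates[work_id] = generation + fees

def score_share(work_id, p):
    """Score a returned share as p * B, with B taken from its own template."""
    B = templates[work_id]
    return p * B

hand_out_work("w1", generation=50.0, fees=0.35)
hand_out_work("w2", generation=50.0, fees=1.10)
p = 1e-6  # 1/difficulty
print(score_share("w1", p), score_share("w2", p))
```

The point is only that B varies per share: two shares submitted in the same round can carry different scores if their templates carried different fees.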
full member
Activity: 142
Merit: 100
Perhaps you can just take the payout from the formula today, divide it by 50 and multiply by the actual income.
It needs to be the potential income when the share was submitted, not the received income when a block is found.

I'm looking for a way to determine the estimated block reward B when a share (which does not solve the block) is submitted. Is that even possible?

A possible solution I have in mind: we increase the user's score for every share submitted on block x as soon as block x is solved and we know exactly how much B was.

Is that the way it should be done?
legendary
Activity: 1260
Merit: 1000
Quote
2. Proofs of work must be handled sequentially. This could impede performance. It could also make it difficult to have multiple backend servers spread around the world for redundancy.

This was my conclusion as well when coding the backends for EMC.  However, there are ways to account for that by storing shares and calculating them according to time submitted.  Unless your hashrate is in the multiple TH/s range, accommodating multiple back ends is not too difficult.
donator
Activity: 2058
Merit: 1054
This is a very interesting method. But there are two aspects I am unsure about:

1. The reward is assumed constant. After some time fewer and fewer coins will be minted and transaction fees will be a larger part of the income. Eventually the only income is transaction fees. At some point variable income will be a must.
Most of the analysis assumes a constant block reward, but the method doesn't require it. When you calculate the score for a share, you use the block reward of that share (viewed as a potential block). This guarantees a fair expected return, but adds some variance for the operator.

Perhaps you can just take the payout from the formula today, divide it by 50 and multiply by the actual income.
It needs to be the potential income when the share was submitted, not the received income when a block is found.

2. Proofs of work must be handled sequentially. This could impede performance. It could also make it difficult to have multiple backend servers spread around the world for redundancy.
I don't know much about the technicalities of this, but from an algorithmic standpoint, having shares go slightly out of sync only causes a negligible error in the payouts.
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
This is a very interesting method. But there are two aspects I am unsure about:

1. The reward is assumed constant. After some time fewer and fewer coins will be minted and transaction fees will be a larger part of the income. Eventually the only income is transaction fees. At some point variable income will be a must.

Perhaps you can just take the payout from the formula today, divide it by 50 and multiply by the actual income.

2. Proofs of work must be handled sequentially. This could impede performance. It could also make it difficult to have multiple backend servers spread around the world for redundancy.
donator
Activity: 2058
Merit: 1054
Are your geo method and this double geo method essentially PPLNS, but with N being the number of shares in the last x hours?
Absolutely not.

Time is never a factor in a hopping-proof method.

To clarify the picture, all these methods fall into a spectrum with two axes:

1. Decay function: 0-1 (step function), exponential, linear etc.
2. Block finding effect: No effect, resetting the round completely, or something in between.

PPLNS is 0-1 with no effect for block finding. Geom is exponential with complete round reset. Double geom is exponential with a partial effect of block finding, the magnitude of which is controlled with a parameter.

Can you please graph the decay rate?
It's exponential decay; you can see a graph of what that looks like here. The exact decay rate depends on the parameters. With double geom it's more complicated, because there is also decay when blocks are found, but that too is exponential in nature.
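As an illustration of the two decay shapes on the first axis above (the normalization and numbers are hypothetical), here is the relative weight a share carries as a function of how many newer shares have been submitted since it:

```python
import math

# Illustrative decay shapes (normalization hypothetical): the relative
# weight of a share as a function of its age, measured in shares
# submitted after it.

def step_decay(age, N):
    """0-1 cutoff (PPLNS-style): full weight for the last N shares, then zero."""
    return 1.0 if age < N else 0.0

def exponential_decay(age, D):
    """Geometric-style decay: weight falls by a factor of e every D shares."""
    return math.exp(-age / D)

D = 100_000
for age in (0, D, 2 * D, 3 * D):
    print(age, step_decay(age, 2 * D), round(exponential_decay(age, D), 4))
```

The second axis (what happens when a block is found) is orthogonal: no effect, a full reset of all weights, or the partial reset that double geometric applies.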
full member
Activity: 182
Merit: 100
Are your geo method and this double geo method essentially PPLNS, but with N being the number of shares in the last x hours?

Can you please graph the decay rate?
donator
Activity: 2058
Merit: 1054
How does this deal with variable or unknown block rewards? Already now block rewards are rarely 50 BTC, and the endgame scenario is in any case random block rewards (which should just be profitable).
It doesn't. For now I recommend (for all reward systems) that the operator keeps transaction fees, and take this into account in his business plan when he decides what pool fees to take. I'll investigate this more thoroughly as it becomes a bigger problem.
On second thought, it is trivial to deal with variable block rewards - at least, in a framework such as this which allows operator variance, and as far as hopping-proofness and expectation is concerned. Analysis of things like variance is harder though. I've changed the wording to be more friendly to incorporating this - basically the score given to a share should be based on the block reward at the time of submitting it.


In other news, I've thought of a new framework which includes as special cases double geometric, 0-1 and linear PPLNS, and their extension to operator variance. Some parts of this I've known for months, others for weeks and a key part I've just thought about. I think in particular the extension of linear PPLNS will give the ultimate variance tradeoff (though it's not pure score-based). I'm so excited... And yet I really need to work on Bitcoil now. So much to do, so little time...
donator
Activity: 2058
Merit: 1007
Poor impulse control.
As far as efficiency goes, it's the variation in efficiency I want to simulate, which I think would really be the standout reason to use this method over other hopping-proof methods.
By this, do you mean simply the variance of the reward? In this case you need parameters that are realistic with respect to the risk operators are willing to take, but for an operator who is already seriously considering offering PPS, I think these parameters are entirely viable (perhaps with a slightly higher f if he wants to gain on average).

It won't really be the statistical parameter 'variance' I'm recording in the simulation, just the variation in reward for this system compared to baseline PPS, run for a number of rounds equivalent to days or weeks. But yes, it would record something similar to variance, so I think the parameters you suggest will be useful. The plots generated would look similar to the graph below.


donator
Activity: 2058
Merit: 1054
As far as efficiency goes, it's the variation in efficiency I want to simulate, which I think would really be the standout reason to use this method over other hopping-proof methods.
By this, do you mean simply the variance of the reward? In this case you need parameters that are realistic with respect to the risk operators are willing to take, but for an operator who is already seriously considering offering PPS, I think these parameters are entirely viable (perhaps with a slightly higher f if he wants to gain on average).
donator
Activity: 2058
Merit: 1007
Poor impulse control.
In order to simulate efficiency and the variability of efficiency for this scoring method, what f, c and o values would you suggest I use?

Edit: For use in the simulator I define 'efficiency' to be (reward for a hopping miner)/(reward for a fee-free pps miner).
Use f = -1, c = 0.5, o = 0.5. With these parameters this method is the most different from other methods, so if there are any problems they should show up there. Also, with this the average fee is 0, so efficiency will be exactly 1.

Remember that to simulate/implement this, you need to use either a logarithmic scale or periodic rescaling (once in a while, which could be every share if you're only keeping track of a few workers, divide all scores by s and set s to 1).

Thanks Meni, and thanks for the reminder about log scale.

As far as efficiency goes, it's the variation in efficiency I want to simulate, which I think would really be the standout reason to use this method over other hopping-proof methods.

donator
Activity: 2058
Merit: 1054
In order to simulate efficiency and the variability of efficiency for this scoring method, what f, c and o values would you suggest I use?

Edit: For use in the simulator I define 'efficiency' to be (reward for a hopping miner)/(reward for a fee-free pps miner).
Use f = -1, c = 0.5, o = 0.5. With these parameters this method is the most different from other methods, so if there are any problems they should show up there. Also, with this the average fee is 0, so efficiency will be exactly 1.

Remember that to simulate/implement this, you need to use either a logarithmic scale or periodic rescaling (once in a while, which could be every share if you're only keeping track of a few workers, divide all scores by s and set s to 1).
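The rescaling advice above can be sketched as follows. The score formula used here is a bare placeholder (it is not the actual DGM formula); the point is only the renormalization bookkeeping: because payouts depend only on score ratios, dividing every score by s and resetting s to 1 changes nothing observable while keeping the numbers inside floating-point range.

```python
# Sketch of the periodic-rescaling bookkeeping only. The per-share score
# below (s * pB) is a placeholder, not the real scoring rule; the growth
# factor r and the threshold are illustrative.

scores = {}       # worker -> accumulated score
s = 1.0           # running multiplier; grows by a factor r per share
r = 1.01          # illustrative growth per share
RESCALE_AT = 1e3  # rescale long before floating-point range is exhausted

def on_share(worker, pB=1.0):
    """Credit one share, then rescale if the multiplier has grown too large."""
    global s
    scores[worker] = scores.get(worker, 0.0) + s * pB
    s *= r
    if s > RESCALE_AT:
        for w in scores:    # dividing every score by s leaves all score
            scores[w] /= s  # ratios, and hence all payouts, unchanged
        s = 1.0

for i in range(1000):
    on_share("alice" if i % 2 else "bob")
```

Since alice and bob alternate, alice's score stays exactly r times bob's, before and after any rescaling, which is the invariance the trick relies on.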
donator
Activity: 2058
Merit: 1007
Poor impulse control.
In order to simulate efficiency and the variability of efficiency for this scoring method, what f, c and o values would you suggest I use?

Edit: For use in the simulator I define 'efficiency' to be (reward for a hopping miner)/(reward for a fee-free pps miner).
donator
Activity: 2058
Merit: 1054
How does this deal with variable or unknown block rewards? Already now block rewards are rarely 50 BTC, and the endgame scenario is in any case random block rewards (which should just be profitable).
It doesn't. For now I recommend (for all reward systems) that the operator keeps transaction fees, and take this into account in his business plan when he decides what pool fees to take. I'll investigate this more thoroughly as it becomes a bigger problem.

EDIT: As discussed below, this turns out not to be a serious problem. Though variability in block rewards does increase operator risk.

How is maturity time affected?
You should look not at the time until all the reward is obtained, but rather at the average time at which it is received. So if your reward is 2 BTC, and you receive 1 BTC a million shares from now and 1 BTC two million shares from now, I count it as a maturity time of 1.5M shares. If you consider the time until you are fully repaid important, for reasons of making a clean break from the pool, I have a different suggestion to address this, which I will post about later.

When o=1, you can choose r to have whatever tradeoff you want between maturity time and variance. It turns out you get the same tradeoff as with 0-1 cutoff. A good choice is r=1+p, resulting in decay by a factor of e every D (difficulty) shares, with maturity time of 1 (expressed in multiples of D) and variance of (pB)^2/2. This is the same as with PPLNS when N = 2D. Then you can decrease o (and change r accordingly), keeping maturity time constant while decreasing variance of participants (at the cost of increased operator variance).

How exact and how "stable" are payout estimates displayed on a pool website (if I stop mining, how fast do shares decay)?
The decay rate, and the accuracy of the estimates (which is basically the inverse variance), are tunable, and as discussed above, start as good as PPLNS and can then be improved.

How far can the pool in theory go into minus to "absorb some variance"?
The operator can only lose if f is negative, and in this case, can lose up to (-f)B per block found. Compare with PPS, which is equivalent to f -> -Inf and the operator's loss per round is unbounded.

Oh, and it might be great to have a google spreadsheet with example calculations + chart porn! Wink
I'll think about it.
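The decay-rate claim above (with r = 1+p, scores decay by a factor of e every D shares) can be checked numerically; the value of D below is illustrative.

```python
import math

D = 1_000_000   # illustrative difficulty
p = 1 / D
r = 1 + p

# A share's weight relative to the newest share shrinks by a factor r per
# subsequent share, so after D further shares it has shrunk by r**(-D),
# which for r = 1 + 1/D is very close to 1/e.
decay_over_D = r ** (-D)
print(decay_over_D, 1 / math.e)
```

This is the standard limit (1 + 1/D)^D → e, so the approximation tightens as D grows.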
legendary
Activity: 2618
Merit: 1007
How does this deal with variable or unknown block rewards? Already now block rewards are rarely 50 BTC and the endgame scenario is anyways random block rewards (which should just be profitable).

Some comparisons between Geometric, PPLNS and PPS:

Share decay (If I stop mining, how will my expected balance decay):
Geometric: Exponential
PPLNS: Linear
PPS: None

Maturity time (How long after I submit a share will I be able to claim the full reward for this share):
Geometric: after a block was found + calculations for the payout have finished (+120 confirmations eventually)
PPLNS: N more shares have been submitted after my share (+eventually some confirmations)
PPS: Instant (or until a block has been confirmed, though a PPS pool should already start with a buffer!)

Payout estimates:
Geometric: Only estimates, will only be correct if the next submitted share solves the block - otherwise decay
PPLNS: Can only give "baseline" + probabilities for additional earnings (first baseline = 0 BTC/share, after a block was found [blockreward/N] BTC/share etc.) - gets more exact over time the more shares are being submitted after my share
PPS: exact pre-known price per share

Pool can go in minus if there are some long rounds:
Geometric: As far as I understood it - yes
PPLNS: Never
PPS: Biggest threat to true PPS actually

Your system wants to be somewhere in between Geometric and PPLNS.
How is maturity time affected? How exact and how "stable" are payout estimates displayed on a pool website (if I stop mining, how fast do shares decay)? How far can the pool in theory go into minus to "absorb some variance"?

Oh, and it might be great to have a google spreadsheet with example calculations + chart porn! Wink
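The PPLNS "baseline" figure above can be illustrated with a toy calculation (all numbers hypothetical): each block found pays B/N to each of the last N shares, and a share keeps collecting that amount per block until N newer shares have pushed it out of the window.

```python
# Toy PPLNS baseline (numbers hypothetical): each block found pays B/N to
# each of the last N shares.

B = 50.0        # block reward, illustrative
N = 2_000_000   # PPLNS window size in shares

baseline_per_block = B / N
blocks_while_in_window = 2  # hypothetical: blocks found before the share expires
expected_total = baseline_per_block * blocks_while_in_window
print(baseline_per_block, expected_total)
```

This is why the estimate starts at 0 BTC/share and only firms up as more shares and blocks arrive after yours.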
donator
Activity: 2058
Merit: 1054
How about just breaking the block down into 10-minute intervals, dividing the block total (e.g. 50) by the number of intervals, with that fraction divided up over the shares submitted during that interval.
(or even use the network block times themselves)

Anyone want to do the math? Smiley

Does it solve the supposed 43.5% issue mathematically?
Then it's more profitable to mine at the beginning of an interval.
Actually it will create interesting dynamics depending on the number of hoppers. But it's still more profitable to mine early in the round.

don't break rounds, merge 3 rounds and pay 150btc
It's more profitable to mine at the beginning of the 3-round batch, or after 1 or 2 blocks have been found if there are relatively few shares.


Anyway, this is off-topic, hopping-proof methods have been known for months, no need to reinvent the square wheel. This post is about a breakthrough in low-variance reward systems.
hero member
Activity: 698
Merit: 500
How about just breaking the block down into 10-minute intervals, dividing the block total (e.g. 50) by the number of intervals, with that fraction divided up over the shares submitted during that interval.
(or even use the network block times themselves)

Anyone want to do the math? Smiley

Does it solve the supposed 43.5% issue mathematically?

don't break rounds, merge 3 rounds and pay 150btc
sr. member
Activity: 404
Merit: 250
How about just breaking the block down into 10-minute intervals, dividing the block total (e.g. 50) by the number of intervals, with that fraction divided up over the shares submitted during that interval.
(or even use the network block times themselves)

Anyone want to do the math? Smiley

Does it solve the supposed 43.5% issue mathematically?

I don't think so. It is an interesting proposition though and it made me think some.

The issue is still that, all things being equal, it's always better to mine at a pool with fewer intervals: your shares will be worth more there than at a pool with more intervals but a lower hash rate (where you would get a higher per-interval payout).

Not to mention that a 10-minute interval at a pool like deepbit is basically an entire round, while at a small pool a 10-minute interval contains very few shares.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
How about just breaking the block down into 10-minute intervals, dividing the block total (e.g. 50) by the number of intervals, with that fraction divided up over the shares submitted during that interval.
(or even use the network block times themselves)

Anyone want to do the math? Smiley

Does it solve the supposed 43.5% issue mathematically?