
Topic: Geometric method: New cheat-proof mining pool scoring method - page 2.

sr. member
Activity: 406
Merit: 250
I could really use some help getting this set up on my open-source pool frontend.

It's based on MySQL, PHP, and pushpool.

Any help would be greatly appreciated, and it would also give the community an open-source solution for their own implementations.
donator
Activity: 2058
Merit: 1054
OK, for round 5 that may be my posting error: 5 is the current round, and I posted the totalscore earlier in the day and the share count later.
Here it is from a single atomic select:
totalscore: 997.23786939
shares: 431965
OK, that's better. There's still an error in the 3rd decimal place, which I don't think should happen.


As for round 2, can the difficulty change account for that? r gets adjusted, and subsequent scores along with it. The r value reported above for round 2 is the one at the end of the round, but it was not in effect for the whole duration.
Yeah, that could be it. If you happen to know at which share the difficulty changed, I'll be able to verify it (assuming everything worked as specified for that round).
full member
Activity: 140
Merit: 100
Hi again,

I hope I have the payment formulas of your method right. I am wondering how long a round can last before the operator loses money, given an expected fee of 0 at c=0.001.
After 4 rounds, I have lost on all of them, even those of relatively short duration. Since I have a miner as well, it doesn't really matter that much; I just wanted to double-check that I am scoring correctly.
Looks OK, but it would help if you also posted the number of shares in each round.

To profit in a round you need it to have fewer than 3,000 shares (which happens with probability about 1/140). So that's fairly rare, but in the cases where you do profit, you'll make on average about 1/7 of the block reward, so it evens out. The variance is still not too bad, because even when you lose, you don't lose much.

Thanks. As long as it looks OK, I am content with the variance.

For completeness though:
select round.id,count(*) from round inner join share on round.id=round_id
where score is not null
group by round.id
order by round.id;
id|count
1|83410
2|275961
3|161084
4|281610
5|412581
I'm finding some inconsistencies between the reported ltotalscore and what it should be based on the number of shares. For rounds 3 and 4 the error is only in the 2nd decimal place, so it's tolerable. But for rounds 2 and 5, the total score is as if the rounds had lasted 307990 and 360808 shares, respectively.
OK, for round 5 that may be my posting error: 5 is the current round, and I posted the totalscore earlier in the day and the share count later.
Here it is from a single atomic select:
totalscore: 997.23786939
shares: 431965

As for round 2, can the difficulty change account for that? r gets adjusted, and subsequent scores along with it. The r value reported above for round 2 is the one at the end of the round, but it was not in effect for the whole duration.
donator
Activity: 2058
Merit: 1054
Hi again,

I hope I have the payment formulas of your method right. I am wondering how long a round can last before the operator loses money, given an expected fee of 0 at c=0.001.
After 4 rounds, I have lost on all of them, even those of relatively short duration. Since I have a miner as well, it doesn't really matter that much; I just wanted to double-check that I am scoring correctly.
Looks OK, but it would help if you also posted the number of shares in each round.

To profit in a round you need it to have fewer than 3,000 shares (which happens with probability about 1/140). So that's fairly rare, but in the cases where you do profit, you'll make on average about 1/7 of the block reward, so it evens out. The variance is still not too bad, because even when you lose, you don't lose much.

Thanks. As long as it looks OK, I am content with the variance.

For completeness though:
select round.id,count(*) from round inner join share on round.id=round_id
where score is not null
group by round.id
order by round.id;
id|count
1|83410
2|275961
3|161084
4|281610
5|412581
I'm finding some inconsistencies between the reported ltotalscore and what it should be based on the number of shares. For rounds 3 and 4 the error is only in the 2nd decimal place, so it's tolerable. But for rounds 2 and 5, the total score is as if the rounds had lasted 307990 and 360808 shares, respectively.
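(For reference, the consistency check works as follows, assuming constant difficulty through the round: the total score of N shares plus the operator's score is the geometric series 1 + r + ... + r^(N-1) + 1/(r-1) = r^N/(r-1), so in log scale ltotalscore should equal N*ln(r) + los, where los = -ln(r-1). Inverting, N = (ltotalscore - los)/ln(r), which is how the implied share counts above are obtained.)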
full member
Activity: 140
Merit: 100
Hi again,

I hope I have the payment formulas of your method right. I am wondering how long a round can last before the operator loses money, given an expected fee of 0 at c=0.001.
After 4 rounds, I have lost on all of them, even those of relatively short duration. Since I have a miner as well, it doesn't really matter that much; I just wanted to double-check that I am scoring correctly.
Looks OK, but it would help if you also posted the number of shares in each round.

To profit in a round you need it to have fewer than 3,000 shares (which happens with probability about 1/140). So that's fairly rare, but in the cases where you do profit, you'll make on average about 1/7 of the block reward, so it evens out. The variance is still not too bad, because even when you lose, you don't lose much.

Thanks. As long as it looks OK, I am content with the variance.

For completeness though:
select round.id,count(*) from round inner join share on round.id=round_id
where score is not null
group by round.id
order by round.id;
id|count
1|83410
2|275961
3|161084
4|281610
5|412581
donator
Activity: 2058
Merit: 1054
Hi again,

I hope I have the payment formulas of your method right. I am wondering how long a round can last before the operator loses money, given an expected fee of 0 at c=0.001.
After 4 rounds, I have lost on all of them, even those of relatively short duration. Since I have a miner as well, it doesn't really matter that much; I just wanted to double-check that I am scoring correctly.
Looks OK, but it would help if you also posted the number of shares in each round.

To profit in a round you need it to have fewer than 3,000 shares (which happens with probability about 1/140). So that's fairly rare, but in the cases where you do profit, you'll make on average about 1/7 of the block reward, so it evens out. The variance is still not too bad, because even when you lose, you don't lose much.
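A sketch of where the 3,000-share figure comes from (the round length of roughly 430,000 expected shares, i.e. 1/p, is taken from elsewhere in this exchange; the rest follows from the method's definitions): with f = -c/(1-c), the operator's income in a round of N shares is the fixed fee f*b plus the variable part (1-f)*b*r^(-N), since his score is 1/(r-1) out of a total of r^N/(r-1). This is positive exactly when r^(-N) > -f/(1-f) = c, i.e. when N < ln(1/c)/ln(r). With c = 0.001 and ln(r) ≈ 999*p ≈ 999/430000 ≈ 0.0023, that gives N < ln(1000)/0.0023 ≈ 3000; and a round ends within its first 3,000 shares with probability 1 - (1-p)^3000 ≈ 3000/430000 ≈ 1/140.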
full member
Activity: 140
Merit: 100
Hi again,

I hope I have the payment formulas of your method right. I am wondering how long a round can last before the operator loses money, given an expected fee of 0 at c=0.001.
After 4 rounds, I have lost on all of them, even those of relatively short duration. Since I have a miner as well, it doesn't really matter that much; I just wanted to double-check that I am scoring correctly.

Here is the output from the production pool; note that round 5 is the current one.
All scores (os, totalscore) are stored in log scale (los = os, ltotalscore = totalscore); I just didn't rename the fields.

select round.id,max(time)-min(time) as duration,
f,c,b,os,totalscore,
cast(f*b as numeric(22,8)) as fixed_fee,
cast((1-f)*b*exp_or_zero(os-totalscore) as numeric(22,8)) as var_fee
from share inner join round on round.id = round_id
group by round.id,f,c,b,os,totalscore
order by round.id;

id|duration|f|c|b|os|totalscore|fixed_fee|var_fee
1|1 day 04:37:44.14842|-0.001001001001001|0.001|50.00050000|5.4987402081296|440.583124446139|-0.05005055|0.00000000
2|2 days 21:09:25.495721|-0.001001001001001|0.001|50.16350000|6.07607688989912|712.769182213441|-0.05021371|0.00000000
3|23:37:05.5112|-0.001001001001001|0.001|50.081|6.07607688989912|375.691503438233|-0.05013113|0.00000000
4|1 day 22:36:46.451189|-0.001001001001001|0.001|50.04050000|6.07607688989912|652.242820446803|-0.05009059|0.00000000
5|2 days 00:42:35.360026|-0.001001001001001|0.001|50|6.07607688989912|833.963234742525|-0.05005005|0.00000000

The exp_or_zero function is a wrapper around exp() which returns 0 in case of underflow. It underflows only in the current round, which has run a bit long.
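For anyone else implementing this, here is a minimal sketch of such a wrapper in PL/pgSQL. The actual function wasn't posted, and this version assumes PostgreSQL, where exp() on double precision raises a value-out-of-range error (SQLSTATE 22003) on underflow:

Code:
create or replace function exp_or_zero(x double precision)
returns double precision as $$
begin
    return exp(x);
exception when numeric_value_out_of_range then
    -- exp(x) underflowed, i.e. x is so negative that the share's
    -- contribution is negligible; count it as zero
    return 0;
end;
$$ language plpgsql;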
full member
Activity: 140
Merit: 100
The calculations are correct. 0.001 (actually it's closer to 0.002) is the expectation, while 1.73 is what it would be if you're very lucky and the round ends right now (remember, on average 430,000 more shares will be submitted before the round ends).
Come to think of it, unless you make c higher (decreasing variance for participants), I don't know if the expectation for already-submitted shares is such a useful measure.
Can you tell me the hashrate of this worker and how long it had mined before this evaluation?
That particular worker was my own, and it mined at around 330 MH/s that round. The round lasted 23 hours and 37 minutes, and I would estimate the pool's average at around 7-8 GH/s. So yes, definitely a short round. I have no long rounds to analyze yet, though.
donator
Activity: 2058
Merit: 1054
Quote
In fact it's arguably even simpler. For this you don't need to keep ltotalscore. To find the expected payout of a worker for the current round, do

select (1-rd.f)*(1-rd.c)*p*rd.b*sum(exp(lscore-lastlscore))

p should be calculated based on the current difficulty.
Hmm, unless I am doing something wrong, this is producing very small expected values per worker, much lower than the current balance calculation: < 0.001 BTC.
Is it possible you've used lastscore instead of lastlscore?

If that's not it, can you post the values of

rd.f
rd.c
p
rd.b
lastlscore
sum(exp(lscore-lastlscore))
(1-rd.f)*(1-rd.c)*p*rd.b*sum(exp(lscore-lastlscore))
f = -0.001001...
c = 0.001
lastlscore = 369.6131
sum(exp(lscore-lastlscore)) = 15.0887
(1-f)*(1-c)*p*b*sum
= 0.001

sum(exp(lscore-ltotalscore))
0.0345
(1-f)*b*0.0345
= 1.73

The calculations are correct. 0.001 (actually it's closer to 0.002) is the expectation, while 1.73 is what it would be if you're very lucky and the round ends right now (remember, on average 430,000 more shares will be submitted before the round ends).
Come to think of it, unless you make c higher (decreasing variance for participants), I don't know if the expectation for already-submitted shares is such a useful measure.
Can you tell me the hashrate of this worker and how long it had mined before this evaluation?
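As a sanity check on these numbers, taking p ≈ 1/430000 (per the 430,000 expected remaining shares): (1-f)*(1-c)*p*b*sum = 1.001 * 0.999 * (1/430000) * 50 * 15.0887 ≈ 0.0018, matching the "closer to 0.002" figure, while (1-f)*b*0.0345 = 1.001 * 50 * 0.0345 ≈ 1.73.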
full member
Activity: 140
Merit: 100
Quote
In fact it's arguably even simpler. For this you don't need to keep ltotalscore. To find the expected payout of a worker for the current round, do

select (1-rd.f)*(1-rd.c)*p*rd.b*sum(exp(lscore-lastlscore))

p should be calculated based on the current difficulty.
Hmm, unless I am doing something wrong, this is producing very small expected values per worker, much lower than the current balance calculation: < 0.001 BTC.
Is it possible you've used lastscore instead of lastlscore?

If that's not it, can you post the values of

rd.f
rd.c
p
rd.b
lastlscore
sum(exp(lscore-lastlscore))
(1-rd.f)*(1-rd.c)*p*rd.b*sum(exp(lscore-lastlscore))
f = -0.001001...
c = 0.001
lastlscore = 369.6131
sum(exp(lscore-lastlscore)) = 15.0887
(1-f)*(1-c)*p*b*sum
= 0.001

sum(exp(lscore-ltotalscore))
0.0345
(1-f)*b*0.0345
= 1.73
donator
Activity: 2058
Merit: 1054
Quote
In fact it's arguably even simpler. For this you don't need to keep ltotalscore. To find the expected payout of a worker for the current round, do

select (1-rd.f)*(1-rd.c)*p*rd.b*sum(exp(lscore-lastlscore))

p should be calculated based on the current difficulty.
Hmm, unless I am doing something wrong, this is producing very small expected values per worker, much lower than the current balance calculation: < 0.001 BTC.
Is it possible you've used lastscore instead of lastlscore?

If that's not it, can you post the values of

rd.f
rd.c
p
rd.b
lastlscore
sum(exp(lscore-lastlscore))
(1-rd.f)*(1-rd.c)*p*rd.b*sum(exp(lscore-lastlscore))
full member
Activity: 140
Merit: 100
Good. To convert to log scale:
1. Instead of storing r and lastscore, store lr = log(r) and lastlscore = the lscore of the last submitted share.
2. On difficulty change, update round set lastlscore=lastlscore+log(p2/p1).
Got it, thanks.

Quote
In fact it's arguably even simpler. For this you don't need to keep ltotalscore. To find the expected payout of a worker for the current round, do

select (1-rd.f)*(1-rd.c)*p*rd.b*sum(exp(lscore-lastlscore))

p should be calculated based on the current difficulty.
Hmm, unless I am doing something wrong, this is producing very small expected values per worker, much lower than the current balance calculation: < 0.001 BTC.
donator
Activity: 2058
Merit: 1054
To understand the algorithm better, I tried to translate the SQL code from this thread into a more general pseudocode format. I decided to post it here in case it helps anyone else understand how the scoring algorithm works.

Note that it does not include the difficulty adjustment calculation as recently discussed.
Note that to help people understand how the scoring algorithm works, you would need to post it in linear scale.

Since you wrote it in log scale, it really serves the purpose of showing how to implement the method in a numerically robust way.
newbie
Activity: 15
Merit: 0
To understand the algorithm better, I tried to translate the SQL code from this thread into a more general pseudocode format. I decided to post it here in case it helps anyone else understand how the scoring algorithm works.

Note that it does not include the difficulty adjustment calculation as recently discussed.

Code:
let f :=            // fee parameter (value elided in the original post)
let c :=            // score parameter; higher c lowers participants' variance (value elided)
let b := 50         // total block payout

add_share:
    let d := most recent difficulty
    let p := 1.0 / d

    if first round:
        let lr := log(1.0 - p + p / c)          // lr = log(r), where r = 1 - p + p/c
        let ls := 0                              // first share gets score 1, i.e. lscore 0
        create round:
            round.los := log(1 / (exp(lr) - 1)) // operator's score 1/(r-1), in log scale
            round.f := f
            round.c := c
            round.b := b
    else with latest round:
        let lr := log(1.0 - p + p / round.c)
        if first share:
            let ls := 0
        else with latest share:
            let ls := share.lscore + lr          // multiply the previous score by r, in log scale

    create share:
        share.lscore := ls


calculate_payout:
    let round := most recent round

    with latest share:
        let max := share.lscore                  // largest lscore, used to shift exponents
    with all shares:
        let totscore := sum(exp(share.lscore - max)) + exp(round.los - max)

    for share in all shares grouped by worker:
        let credit := (1 - round.f) * round.b * sum(exp(share.lscore - max)) / totscore
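The max subtraction in calculate_payout is the standard log-sum-exp shift: subtracting the largest lscore before exponentiating leaves the credit ratio unchanged (the factor exp(-max) cancels between numerator and denominator) while keeping every exponent at or below zero, so exp() cannot overflow; the only terms that underflow to zero are shares whose scores are negligible anyway.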
donator
Activity: 2058
Merit: 1054
Ouch, you've done this wrong. In my specification I meant that the scores of existing shares remain the same, while the variable s that specifies the score of new shares decreases (is multiplied by p2/p1). Equivalently, since it's all relative, you could fix s and multiply all existing scores by p1/p2. But what you've done has two problems:
1. You've multiplied existing scores by p2/p1 instead of p1/p2.
2. Since you don't keep s as a separate variable but rather read it from the shares table, you have scaled both the existing shares and the new shares, so the difficulty fix will actually have no effect whatsoever.
Basically, what you want is for the score of the first share submitted after the difficulty change to be multiplied by p2/p1. Then proceed as normal; each share will reference the last one, and it will be OK. I don't know what the DB-friendly way to do that is.

Once you show me your solution, I'll adapt it to log scale.
OK, I've refactored the PL/pgSQL functions. I'm now storing r and lastscore in the round table, so a difficulty change can just do update round set lastscore=lastscore*(p2/p1), and the next insert should do the right thing.
Good. To convert to log scale:
1. Instead of storing r and lastscore, store lr = log(r) and lastlscore = the lscore of the last submitted share.
2. On difficulty change, update round set lastlscore=lastlscore+log(p2/p1).
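Concretely, the difficulty-change handler can be as small as this (a sketch against the round table described above; selecting the latest round by max(id) is an assumption, and p1/p2 would be bound as parameters):

Code:
-- p1 = 1/old_difficulty, p2 = 1/new_difficulty
update round
set lastlscore = lastlscore + ln(p2 / p1)
where id = (select max(id) from round);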

I assume you want to find the balance mid-round? The above will give you what the reward would be if the round ended now. I think it would be more useful to know the expected reward. This requires a small tweak; let me know if you are interested in it.
Sure, that would probably be a more useful measure.
In fact it's arguably even simpler. For this you don't need to keep ltotalscore. To find the expected payout of a worker for the current round, do

select (1-rd.f)*(1-rd.c)*p*rd.b*sum(exp(lscore-lastlscore))

p should be calculated based on the current difficulty.
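Fleshed out against the schema used earlier in the thread, the query might look like the sketch below. Only the select expression is from the post; the from/group by clauses, the worker column on the share table, and the :p placeholder (1/current_difficulty) are assumptions:

Code:
select worker,
       (1-rd.f)*(1-rd.c)*:p*rd.b*sum(exp(lscore-lastlscore)) as expected_payout
from share inner join round rd on rd.id = round_id
where rd.id = (select max(id) from round)  -- current round only
group by worker, rd.f, rd.c, rd.b;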
donator
Activity: 2058
Merit: 1054
Don't want to butt in here, guys, but how does this bit work?

Quote
update share set score=ln(exp((max-score)*(p2/p1)))?
This bit doesn't work (note that martok only asked if it would work). I will soon supply the correct formula.

Isn't that like doing sin(asin(f(x))) or cos(acos(x))?
It is.

... or is it to round off the numerical approximation, or some such?
It can have a numerical effect, but only for the worse: it can introduce a small error, and if the exp over- or underflows, you have a problem.
legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
Don't want to butt in here, guys, but how does this bit work?

Quote
update share set score=ln(exp((max-score)*(p2/p1)))?

Isn't that like doing sin(asin(f(x))) or cos(acos(x)) ... or is it to round off the numerical approximation, or some such?
full member
Activity: 140
Merit: 100
I assume you want to find the balance mid-round? The above will give you what the reward would be if the round ended now. I think it would be more useful to know the expected reward. This requires a small tweak; let me know if you are interested in it.

Sure, that would probably be a more useful measure.
full member
Activity: 140
Merit: 100
Ouch, you've done this wrong. In my specification I meant that the scores of existing shares remain the same, while the variable s that specifies the score of new shares decreases (is multiplied by p2/p1). Equivalently, since it's all relative, you could fix s and multiply all existing scores by p1/p2. But what you've done has two problems:
1. You've multiplied existing scores by p2/p1 instead of p1/p2.
2. Since you don't keep s as a separate variable but rather read it from the shares table, you have scaled both the existing shares and the new shares, so the difficulty fix will actually have no effect whatsoever.
Basically, what you want is for the score of the first share submitted after the difficulty change to be multiplied by p2/p1. Then proceed as normal; each share will reference the last one, and it will be OK. I don't know what the DB-friendly way to do that is.

Once you show me your solution, I'll adapt it to log scale.
OK, I've refactored the PL/pgSQL functions. I'm now storing r and lastscore in the round table, so a difficulty change can just do update round set lastscore=lastscore*(p2/p1), and the next insert should do the right thing.
donator
Activity: 2058
Merit: 1054
I have added a correctness proof outline to the OP.