
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 340. (Read 2591964 times)

member
Activity: 85
Merit: 10
Time for a BLOCK PARTY!!!!
legendary
Activity: 1540
Merit: 1001
Does p2pool force clean jobs when a share is found (which should be every 30 seconds, given the share time)?  So wouldn't not forcing that just result in lower node efficiency through more DOAs on the sharechain?  This would be why eligius doesn't need to force a restart every time it sends new work, only every time a bitcoin block is found (7 to 10 minutes).  And if this is truly where the error is, I could be wrong, but wouldn't that mean that you are hashing at the full hash rate, and it's just getting reported wrong?

I think your logic is correct.  A clean-jobs notification has to be sent when a new share is on the chain.

If the S2 was truly hashing at the full rate, at least one place would show it.  But every place shows it low: p2pool, the S2 web UI, and the S2 LCD.
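For anyone unfamiliar with the mechanism being discussed: the "clean jobs" flag rides along on the Stratum mining.notify message. Below is a minimal sketch of such a message (all field values are placeholders of my own, not p2pool's actual output); the last parameter is the flag that forces miners to abandon in-flight work.

```python
import json

# Sketch of a Stratum mining.notify message. The 9th parameter,
# "clean_jobs", tells miners to drop all in-flight work; p2pool sets it
# to true roughly every 30 seconds when a new share extends the chain.
# Every field value here is a placeholder for illustration.
notify = {
    "id": None,
    "method": "mining.notify",
    "params": [
        "job_1",      # job id (placeholder)
        "00" * 32,    # previous block hash, hex (placeholder)
        "",           # coinbase part 1 (placeholder)
        "",           # coinbase part 2 (placeholder)
        [],           # merkle branch hashes
        "20000000",   # block version
        "1d00ffff",   # nbits (encoded difficulty target)
        "53e8b1c2",   # ntime (placeholder)
        True,         # clean_jobs: discard all older jobs immediately
    ],
}

line = json.dumps(notify) + "\n"  # Stratum frames are newline-delimited JSON
```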

M
legendary
Activity: 1540
Merit: 1001
This may seem like a simplistic approach, but if you're getting a whole lot of work submitted after the pool says "drop everything", couldn't you mitigate that by setting the queue depth to 0 in cgminer on the S2?

Good idea.  I haven't tried that yet.  I think that's reachable via the API...

setconfig|queue,0
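For reference, that command can be sent over cgminer's TCP API (default port 4028). A rough sketch, assuming the S2's API is enabled and reachable; the function names and the example host address are mine, not cgminer's:

```python
import json
import socket

# Build the JSON form of a cgminer API command such as "setconfig|queue,0".
def build_payload(command, parameter=None):
    payload = {"command": command}
    if parameter is not None:
        payload["parameter"] = parameter
    return json.dumps(payload)

# Send a command to cgminer's API socket and return the parsed reply.
# Assumes the API is enabled with write access for this client.
def cgminer_command(host, command, parameter=None, port=4028, timeout=5):
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(build_payload(command, parameter).encode())
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    # cgminer's JSON replies are often NUL-terminated
    return json.loads(reply.rstrip(b"\x00").decode())

# e.g. cgminer_command("192.168.1.50", "setconfig", "queue,0")
```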

M
legendary
Activity: 1540
Merit: 1001
At the moment my conclusion is a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  p2pool then rejects all the old work coming back, leading to a large amount of DOA work.

My recommendations for p2pool to become more S2 friendly:

1 - Sending new work every 30 seconds is fine.  It's apparently not necessary to force the miner to restart ("clean jobs" = true).  p2pool also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".

If I understood python I'd take a stab at 2 through 4.  1 might be a big deal.

Here are 2 patches for p2pool that address point 2 and 3 from your list.

First, a simple patch that allows a pseudo-difficulty higher than 1000.
https://github.com/jaketri/p2pool/commit/05b630f2c8f93b78093043b28c0c543fafa0a856

And another patch that adds a "--min-difficulty" parameter to p2pool. For my setup I use 256 as the starting pseudo-difficulty.

https://github.com/jaketri/p2pool/commit/5f02f893490f2b9bfa48926184c4b1329c4d1554
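For context on what these knobs control: pseudo-share difficulty maps to a hash target by the standard pool arithmetic below (a sketch of the convention, not p2pool's exact internals). A higher pseudo-difficulty means a smaller target, so the miner reports proportionally fewer, larger pseudo-shares and the node has far less share traffic to process.

```python
# Standard difficulty/target arithmetic that settings like --min-difficulty
# or a "+256" address suffix feed into. DIFF1_TARGET is the conventional
# difficulty-1 pool target.
DIFF1_TARGET = 0xFFFF * 2**208

def difficulty_to_target(difficulty):
    # Higher difficulty -> smaller target -> fewer submitted pseudo-shares.
    return DIFF1_TARGET // int(difficulty)

def expected_shares_per_hour(difficulty, hashrate_hs):
    # A difficulty-1 share takes ~2**32 hashes on average.
    return hashrate_hs * 3600 / (difficulty * 2**32)
```

So a miner started at pseudo-difficulty 256 instead of 1 submits roughly 1/256th as many pseudo-shares for the same hashrate, which is the point of the patch.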

Thanks, that's good.  Now why isn't it in the main code?

M
sr. member
Activity: 543
Merit: 250
It's been an age since the original account ToRiKaN was banned
The only way is to change the pool code and hide the fee.
The P2Pool idea is to run your own node and use it Smiley
Use others only as backup.


Not everyone can or wants to run their own node.  It would be cool though if they could use someone else's, but I was wondering if there is some way to prove that it is fair.

Neil

I don't know if there is any way of proving any pool is really fair.  To that end, what would stop a pool op from saying 0% fee, or a 1% fee, and then taking all the profits?  In the end it's all about what you feel comfortable with.  If you want to see the stated node fee (barring the operator changing the code behind p2pool), you can go to the node's /fee page, where it is published.  For example, the node I have set up, and will be returning to mining on once I get my two addresses into eligius's payout queue, is http://mine.njbtcmine.net:9332/fee.

FYI, that node is located in Newark, NJ if anyone is looking for a node to mine on.

Edit: Someone correct me if I'm wrong, but the fee to a P2Pool node is taken in shares too, isn't it?  So you'd notice your estimated payout adversely affected if someone did edit the code to hide the fee and take 100%, or something stupid like that.
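That intuition can be sanity-checked with a toy payout model (a simplification of my own, not p2pool's actual payout code): the node fee scales down the miner's fraction of the block reward, so a secretly raised fee shows up directly as a lower estimated payout.

```python
# Toy model: a node fee reduces the miner's payout proportionally, so a
# hidden 100% fee would be visible as an estimated payout near zero.
def expected_payout(my_shares, total_shares, block_reward_btc, fee_percent):
    miner_fraction = my_shares / total_shares
    return block_reward_btc * miner_fraction * (1 - fee_percent / 100)
```

With 10% of the shares on a 25 BTC block, a 1% fee pays ~2.475 BTC while a hidden 100% fee pays 0, which any miner watching their estimate would notice.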
legendary
Activity: 896
Merit: 1000
The only way is to change the pool code and hide the fee.
The P2Pool idea is to run your own node and use it Smiley
Use others only as backup.


Not everyone can or wants to run their own node.  It would be cool though if they could use someone else's, but I was wondering if there is some way to prove that it is fair.

Neil
member
Activity: 67
Merit: 10
Where can I learn pool tricks? Very essential.
legendary
Activity: 1361
Merit: 1003
Don't panic! Organize!
The only way is to change the pool code and hide the fee.
The P2Pool idea is to run your own node and use it Smiley
Use others only as backup.
legendary
Activity: 896
Merit: 1000
I have a question, and it may have already been answered and my google-fu is off.

I can think of a few ways a dishonest p2pool operator could 'cheat' users of the pool (charging fees while saying you're not, etc.).  Has anyone thought of or implemented ways to detect cheating?

Nel
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
This may seem like a simplistic approach, but if you're getting a whole lot of work submitted after the pool says "drop everything", couldn't you mitigate that by setting the queue depth to 0 in cgminer on the S2?
sr. member
Activity: 543
Merit: 250
It's been an age since the original account ToRiKaN was banned
At the moment my conclusion is a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  p2pool then rejects all the old work coming back, leading to a large amount of DOA work.

My recommendations for p2pool to become more S2 friendly:

1 - Sending new work every 30 seconds is fine.  It's apparently not necessary to force the miner to restart ("clean jobs" = true).  p2pool also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".


If I understood python I'd take a stab at 2 through 4.  1 might be a big deal.

M

Does p2pool force clean jobs when a share is found (which should be every 30 seconds, given the share time)?  So wouldn't not forcing that just result in lower node efficiency through more DOAs on the sharechain?  This would be why eligius doesn't need to force a restart every time it sends new work, only every time a bitcoin block is found (7 to 10 minutes).  And if this is truly where the error is, I could be wrong, but wouldn't that mean that you are hashing at the full hash rate, and it's just getting reported wrong?
full member
Activity: 154
Merit: 100
At the moment my conclusion is a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  p2pool then rejects all the old work coming back, leading to a large amount of DOA work.

My recommendations for p2pool to become more S2 friendly:

1 - Sending new work every 30 seconds is fine.  It's apparently not necessary to force the miner to restart ("clean jobs" = true).  p2pool also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".

If I understood python I'd take a stab at 2 through 4.  1 might be a big deal.

Here are 2 patches for p2pool that address point 2 and 3 from your list.

First, a simple patch that allows a pseudo-difficulty higher than 1000.
https://github.com/jaketri/p2pool/commit/05b630f2c8f93b78093043b28c0c543fafa0a856

And another patch that adds a "--min-difficulty" parameter to p2pool. For my setup I use 256 as the starting pseudo-difficulty.

https://github.com/jaketri/p2pool/commit/5f02f893490f2b9bfa48926184c4b1329c4d1554
legendary
Activity: 1540
Merit: 1001
I also noticed p2pool would complain a little about shares being submitted over difficulty every time it changed the pseudo-share size.  It even did this when I had the pseudo-share size forced to the highest value (1000) by appending +1000 to my address!  It also happened when I overrode the share size on the proxy side to something larger than p2pool wanted.  I didn't check the math on the shares, so it could be an Ant problem, or it could be a p2pool problem.
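For anyone wanting to check the math themselves, the share check is simple enough to reproduce: a pool double-SHA256es the 80-byte block header and compares the result against the target for the job's pseudo-share difficulty. A sketch of that check (my own illustration, not p2pool's code) shows why a miner and pool that disagree on the current target, e.g. right after a pseudo-share size change, will produce exactly these complaints:

```python
import hashlib

# Pool-side share check: hash the 80-byte header twice with SHA-256 and
# compare against the job's target (interpreted little-endian, as Bitcoin
# headers are hashed). If the miner was still using the old, easier
# target when the pool switched, submissions land above the new target
# and get flagged as over-difficulty.
def share_meets_target(header_80_bytes, target):
    digest = hashlib.sha256(hashlib.sha256(header_80_bytes).digest()).digest()
    return int.from_bytes(digest, "little") <= target
```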

M
legendary
Activity: 1540
Merit: 1001
I'd suggest that Bitmain or Kano could probably fix the Ants. Bitmain because they should, Kano because he's awesome Wink

From your findings it sounds like fixing the Ants so they respond to work restarts properly would fix the problem:

Quote
I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.

The more frequent work restarts are a direct result of p2pool's higher share rate...

The problem is the Ants are hardware.  There's a finite amount of stuff that can be done to them through software.  If it's a hardware issue, then I'd be surprised if the S3s work with p2pool.  If it's software, I don't understand why Bitmain hasn't fixed it.  Surely they could do what I did to see what's going on.

Quote
I found a hack that might get my proxy working with S2s and p2pool. I always set the jobid in the submitted work to the current jobid.
Quote
I'm not sure of the impact this may have; I would think it would break the share header?

Doing that fixed the absurd amount of rejects I was getting.  However, I still had the decreased hashrate problem.  Because of the current true share size, my 2.2 TH/s would take an average of 2 hours to find a share.  I didn't let it chug long enough to see if I regularly found shares, or if I got a larger than average amount of dead shares.  I'm not going to throw away a day of mining to experiment with it. Sad
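The jobid-rewrite hack amounts to a one-line transform on the Stratum submit message. A sketch of what a proxy would do (illustrative only; the actual proxy code isn't shown in this thread), which also makes the risk visible: the nonce was found against the OLD job's data, so relabeling it may not survive validation.

```python
import json

# Intercept a Stratum mining.submit line from the miner and overwrite its
# (possibly stale) job id with the pool's current one, so the pool stops
# rejecting work for unknown/expired jobs. The share itself is unchanged,
# which is why this can "break the share header" on strict validation.
def rewrite_submit(line, current_job_id):
    msg = json.loads(line)
    if msg.get("method") == "mining.submit":
        # params: [worker_name, job_id, extranonce2, ntime, nonce]
        msg["params"][1] = current_job_id
    return json.dumps(msg)
```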

M
legendary
Activity: 1258
Merit: 1027
At the moment my conclusion is a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  p2pool then rejects all the old work coming back, leading to a large amount of DOA work.

My recommendations for p2pool to become more S2 friendly:

1 - Sending new work every 30 seconds is fine.  It's apparently not necessary to force the miner to restart ("clean jobs" = true).  p2pool also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".


If I understood python I'd take a stab at 2 through 4.  1 might be a big deal.

M

Dude, GREAT WORK!

I'd suggest that Bitmain or Kano could probably fix the Ants. Bitmain because they should, Kano because he's awesome Wink

From your findings it sounds like fixing the Ants so they respond to work restarts properly would fix the problem:

Quote
I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.

The more frequent work restarts are a direct result of p2pool's higher share rate...

Quote
I found a hack that might get my proxy working with S2s and p2pool. I always set the jobid in the submitted work to the current jobid.

I'm not sure of the impact this may have; I would think it would break the share header?

From the wiki:
Quote
P2Pool shares form a "sharechain" with each share referencing the previous share's hash. Each share contains a standard Bitcoin block header, some P2Pool-specific data that is used to compute the generation transaction (total subsidy, payout script of this share, a nonce, the previous share's hash, and the current target for shares), and a Merkle branch linking that generation transaction to the block header's Merkle hash.

but if it works it sounds like a good short term solution Smiley
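The wiki's point can be seen in a toy model of the sharechain (a simplification I wrote for illustration, not p2pool's actual data structures): each share's hash commits to both the previous share's hash and the share's own job data, so relabeling the job after a nonce is found changes the bytes being hashed and the original proof of work no longer matches.

```python
import hashlib
from dataclasses import dataclass

# Toy sharechain: a share's hash covers the previous share's hash plus the
# share's own job data, mirroring the wiki description. Swapping the job
# data under an already-found share produces a different hash, which is
# the likely sense in which the jobid hack could "break the share header".
@dataclass(frozen=True)
class Share:
    prev_share_hash: bytes
    job_data: bytes

    def share_hash(self):
        return hashlib.sha256(self.prev_share_hash + self.job_data).digest()

genesis = Share(b"\x00" * 32, b"job_a")
child = Share(genesis.share_hash(), b"job_b")
tampered = Share(genesis.share_hash(), b"job_c")  # same link, different job
```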
legendary
Activity: 1258
Merit: 1027
I have 4 S1s with kano's update applied; what other ASICs would be recommended for p2pool?

I'd suggest this list is incomplete at best, but it's a start: https://en.bitcoin.it/wiki/P2Pool#Interoperability_table

SP10 needs to be added.
full member
Activity: 161
Merit: 100
digging in the bits... now ant powered!
I have 4 S1s with kano's update applied; what other ASICs would be recommended for p2pool?
legendary
Activity: 1540
Merit: 1001
At the moment my conclusion is a proxy isn't going to help get S2s working with p2pool.

I believe the problem is the constant "drop EVERYTHING you are doing and restart" messages that p2pool sends every 30 seconds.  p2pool then rejects all the old work coming back, leading to a large amount of DOA work.

My recommendations for p2pool to become more S2 friendly:

1 - Sending new work every 30 seconds is fine.  It's apparently not necessary to force the miner to restart ("clean jobs" = true).  p2pool also needs to gracefully accept work from prior job IDs for a period of time, like other pools do.

2 - Start at a higher pseudo difficulty than 1.

3 - Allow a higher pseudo difficulty than 1000.

4 - Send the actual errors back for rejected work, instead of just "false".


If I understood python I'd take a stab at 2 through 4.  1 might be a big deal.

M
full member
Activity: 932
Merit: 100
arcs-chain.com
P2pool's expected share time should be higher than 30s.
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....
Very interesting, mdude - maybe drop kano a line? He'd know. It's all a bit above me, I'm afraid...