
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 668. (Read 2591920 times)

legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Anyhoooo...

Removing the header S-O-L-V-E-D it!  Man, I'm sooo happy! Cheesy

I made your change to my side, installed python 2.7.3 64-bit (windows) and all the associated apps needed for p2pool and fired it up.

The dupe message is definitely gone.

Now, however, cgminer on my main miner (4x7870 = 2.6 GH/s) is complaining non-stop about p2pool not providing work fast enough.  I think if my phoenix miner (4x5870 = 1.6 GH/s) had a better UI, it would be complaining too.

I'm not sure this is an improvement. Sad

M
However, what this is saying is that your p2pool can't handle a local miner without roll-n-time.
So I'd guess your p2pool setup is very slow or has poor network connectivity to your miner, since it really should be a local miner talking to a local p2pool, and that REALLY should be fast?!?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I would think you're better off making p2pool not roll ntime and leave it to the mining software.
legendary
Activity: 1540
Merit: 1001
Anyhoooo...

Removing the header S-O-L-V-E-D it!  Man, I'm sooo happy! Cheesy

I made your change to my side, installed python 2.7.3 64-bit (windows) and all the associated apps needed for p2pool and fired it up.

The dupe message is definitely gone.

Now, however, cgminer on my main miner (4x7870 = 2.6 GH/s) is complaining non-stop about p2pool not providing work fast enough.  I think if my phoenix miner (4x5870 = 1.6 GH/s) had a better UI, it would be complaining too.

I'm not sure this is an improvement. Sad

M
sr. member
Activity: 337
Merit: 252
legendary
Activity: 1361
Merit: 1003
Don't panic! Organize!
PM forrestv about that!
sr. member
Activity: 337
Merit: 252
Do you mind sharing the details of changes?  I see the same issue with my setup from time to time.
Thanks.
In the p2pool directory, look for the file p2pool/bitcoin/worker_interface.py. Open it in an editor and delete the line containing "X-Roll-NTime".
Code:
     @defer.inlineCallbacks
     def _getwork(self, request, data, long_poll):
         request.setHeader('X-Long-Polling', '/long-polling')
-        request.setHeader('X-Roll-NTime', 'expire=10')
         request.setHeader('X-Is-P2Pool', 'true')

done!
full member
Activity: 155
Merit: 100
Since p2pool generates its own work locally, I see no point even asking the mining software to roll the time if it's going to roll it itself.
So if I hack the p2pool code, and remove the X-Roll-NTime header altogether as a short term fix, that would make cgminer stop rolling and my problem would be gone? Super easy  Grin I will try it right away.

Thank you!
What is the difference between using long-polling, and not using long-polling, and since it is a 10-second interval anyways, is there any real need for long-polling to begin with?

(requesting a little enlightenment about it)

-- Smoov

Long polling means that the pool holds off on replying until it actually has new work, and that is good even for short period lengths, I guess. But this issue isn't about long-polling, I think, but about whether the server or the client should increment the timestamp when the client runs out of nonces.

Anyhoooo...

Removing the header S-O-L-V-E-D it!  Man, I'm sooo happy! Cheesy

Do you mind sharing the details of changes?  I see the same issue with my setup from time to time.
Thanks.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Since p2pool generates its own work locally, I see no point even asking the mining software to roll the time if it's going to roll it itself.
So if I hack the p2pool code, and remove the X-Roll-NTime header altogether as a short term fix, that would make cgminer stop rolling and my problem would be gone? Super easy  Grin I will try it right away.

Thank you!
What is the difference between using long-polling, and not using long-polling, and since it is a 10-second interval anyways, is there any real need for long-polling to begin with?

(requesting a little enlightenment about it)

-- Smoov

The LP is not just a notification, it also provides new work that is valid.
So the miner will get the new work immediately rather than having the extra delay of requesting it.
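The polling-versus-long-polling difference can be sketched with a toy work source. This is illustrative only; the class and method names are invented, not p2pool's actual API:

```python
import threading

class WorkSource:
    """Toy stand-in for a pool's getwork endpoint (names are invented)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._work = {"template": 0}

    def getwork(self):
        # Plain polling: answers immediately with whatever work is current,
        # even if the caller has already seen it.
        with self._cond:
            return dict(self._work)

    def longpoll(self, last_template):
        # Long polling: the reply is held back until the work actually
        # changes, and the reply *is* the new, valid work -- no extra
        # request round-trip after the notification.
        with self._cond:
            while self._work["template"] == last_template:
                self._cond.wait()
            return dict(self._work)

    def new_block(self):
        # A new block arrives: update the work and release long-pollers.
        with self._cond:
            self._work["template"] += 1
            self._cond.notify_all()

src = WorkSource()
first = src.getwork()
threading.Timer(0.05, src.new_block).start()   # simulate a new block shortly
fresh = src.longpoll(first["template"])        # blocks, then returns new work
```

The point of the sketch: `longpoll` returns the new work itself, so the miner starts hashing on it immediately instead of spending another round-trip asking for it.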
sr. member
Activity: 337
Merit: 252
Since p2pool generates its own work locally, I see no point even asking the mining software to roll the time if it's going to roll it itself.
So if I hack the p2pool code, and remove the X-Roll-NTime header altogether as a short term fix, that would make cgminer stop rolling and my problem would be gone? Super easy  Grin I will try it right away.

Thank you!
What is the difference between using long-polling, and not using long-polling, and since it is a 10-second interval anyways, is there any real need for long-polling to begin with?

(requesting a little enlightenment about it)

-- Smoov

Long polling means that the pool holds off on replying until it actually has new work, and that is good even for short period lengths, I guess. But this issue isn't about long-polling, I think, but about whether the server or the client should increment the timestamp when the client runs out of nonces.

Anyhoooo...

Removing the header S-O-L-V-E-D it!  Man, I'm sooo happy! Cheesy
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
Heh, DiabloMiner fixes this by just watching for the X-Is-P2Pool header. Wink
hero member
Activity: 504
Merit: 500
Scattering my bits around the net since 1980
Since p2pool generates its own work locally, I see no point even asking the mining software to roll the time if it's going to roll it itself.
So if I hack the p2pool code, and remove the X-Roll-NTime header altogether as a short term fix, that would make cgminer stop rolling and my problem would be gone? Super easy  Grin I will try it right away.

Thank you!
What is the difference between using long-polling, and not using long-polling, and since it is a 10-second interval anyways, is there any real need for long-polling to begin with?

(requesting a little enlightenment about it)

-- Smoov
sr. member
Activity: 337
Merit: 252
Since p2pool generates its own work locally, I see no point even asking the mining software to roll the time if it's going to roll it itself.
So if I hack the p2pool code, and remove the X-Roll-NTime header altogether as a short term fix, that would make cgminer stop rolling and my problem would be gone? Super easy  Grin I will try it right away.

Thank you!
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Actually this is a major problem. If p2pool is rolling ntime and telling the mining software it can roll ntime, there is nothing to guarantee they won't collide. The expire=10 seconds tells the mining software how long it can roll work for, not how far into the future it can roll time. Potentially the software can roll time up to 2 hours into the future within that 10 seconds, so there can be no guarantee the work source and mining software won't clash.

If you're asking the mining software to roll time, you shouldn't be rolling it within p2pool, or vice versa. Since p2pool generates its own work locally, I see no point even asking the mining software to roll the time if it's going to roll it itself. Since rolling time requires a lot less CPU than generating fresh work, it really is a matter of where you want the time to be rolled: the source of the work (in this case p2pool) or the mining software. It should not be done at both ends.
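The collision being described can be shown with toy numbers (not p2pool's or cgminer's actual code): if the pool bumps ntime on every getwork it hands out while each miner also rolls ntime forward over the work it received, the rolled ranges of consecutive getworks overlap, so the same (ntime, nonce) header can be searched twice:

```python
# Toy model of double-ended ntime rolling. The 10-second roll window
# mirrors the "expire=10" header from the thread; the timestamp itself
# is an arbitrary example value.
base_ntime = 1341000000                              # example timestamp
handouts = [base_ntime + i for i in range(3)]        # pool rolls +1s per getwork
rolled = [set(range(t, t + 10)) for t in handouts]   # miner rolls 10s per work

# Consecutive getworks end up covering 9 of the same ntime values,
# which is exactly the kind of clash that produces duplicate shares.
overlap = rolled[0] & rolled[1]
```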
sr. member
Activity: 337
Merit: 252
So I have been having problems, and now at least I think I know what's going on.

The problem is this:

p2pool increments the getwork timestamp at every request and assumes that the miner will respect X-Roll-NTime which is set to "expire=10". cgminer (and apparently phoenix) doesn't respect that. Sometimes there is a hash collision from two different getwork requests, both rolled past 10 seconds. (see the cgminer thread)

(The check forrestv added yesterday, warning about this, is broken since it only catches 1/12 of the shares rolled past 10 seconds, but that's not very important.)

Now, my question is what do I do about it? One reason I get more of these than other people, I suspect, is because the miner produces many shares per minute.

- Would the best short term solution be to increase the local difficulty?
- ckolivas suggested setting --scan-time in cgminer to 10 but that doesn't make much of a difference.
- hack p2pool to increase the 12 second increment to something higher? That seems a bit risky...

other suggestions?
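On the local-difficulty idea: raising the pseudo-share difficulty cuts the share rate proportionally, so a fast rig submits far less often. A back-of-envelope sketch (the function name is made up; only the 2.6 GH/s figure comes from the thread, and a difficulty-1 share takes about 2**32 hashes on average):

```python
# Rough expected pseudo-share rate for a given hashrate and difficulty.
def shares_per_minute(hashrate_hs, difficulty):
    return hashrate_hs / (difficulty * 2**32) * 60

rate_d1 = shares_per_minute(2.6e9, 1)   # roughly 36 shares a minute
rate_d8 = shares_per_minute(2.6e9, 8)   # roughly 4.5 shares a minute
```

At ~36 submissions a minute there are simply many more chances for two rolled work units to land on the same header than on a slower rig, which fits the "I get more of these than other people" observation.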
hero member
Activity: 826
Merit: 500
I love the latest from cgminer 2.5.0 change log.

----
 I've also created my own workaround for the biggest problem with existing bitforce devices - they can now abort work as soon as a longpoll hits, which means literally half as much work on average wasted across a longpoll than previously, and a much lower reject rate. Note these devices are still inefficient across a longpoll since they don't even have the support the minirig devices have - and they never will, according to BFL. This means you should never mine with them on p2pool.
----

P2pool- The Anti BFL pool
 Grin
legendary
Activity: 1361
Merit: 1003
Don't panic! Organize!
Looks like you have updated to the fresh version that checks and warns about double-sending Smiley
sr. member
Activity: 337
Merit: 252
Quote
2012-07-09 19:31:07.554921 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:13.877677 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:16.723297 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:22.509470 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
2012-07-09 19:31:30.192033 > Miner digger @ 192.168.1.102 rolled timestamp improperly! This may be a bug in the miner that is causing you to lose work!
Angry
Now what?
sr. member
Activity: 337
Merit: 252

I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares of the total, and the actual good shares submitted are only 91%.


What if.. those are the missing 10% we see globally on the p2pool stats?

Ente

If the pool used 100% of shares to estimate the global pool hashrate instead of 91%, then yes. But I think p2pool only counts valid shares? If not, you might notice 9% less payout, but reporting would be accurate.

It counts valid shares and dead-on-arrivals, but not duplicates, which are just dropped. So yes, you are right about that.
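The 7-8% and 91% figures quoted in this exchange follow directly from the posted counters; a quick check:

```python
# Checking the percentages against the counters posted in the thread.
accepted, duplicates, rejected = 29732, 2330, 426
total = accepted + duplicates + rejected      # 32488 shares seen in all
dup_pct = 100 * duplicates / total            # about 7.2% duplicates
good_pct = 100 * accepted / total             # about 91.5% good shares
```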
donator
Activity: 2058
Merit: 1007
Poor impulse control.

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

M

I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares of the total, and the actual good shares submitted are only 91%.


What if.. those are the missing 10% we see globally on the p2pool stats?

Ente

If the pool used 100% of shares to estimate the global pool hashrate instead of 91%, then yes. But I think p2pool only counts valid shares? If not, you might notice 9% less payout, but reporting would be accurate.
legendary
Activity: 2126
Merit: 1001

2012-07-07 09:29:47.581000 > Worker q6600 @ 127.0.0.1 submitted share more than once!
2012-07-07 09:29:47.897000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:49.355000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:50.915000 > Worker miner1 @ 192.168.0.110 submitted share more than once!
2012-07-07 09:29:51.948000 > Worker miner1 @ 192.168.0.110 submitted share more than once!

M

I have this problem too and I've been doing some counting. The numbers as of now: accepted: 29732, duplicates: 2330, rejected: 426. That is 7-8% duplicate shares of the total, and the actual good shares submitted are only 91%.


What if.. those are the missing 10% we see globally on the p2pool stats?

Ente