
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 719. (Read 2591919 times)

legendary
Activity: 1379
Merit: 1003
nec sine labore
You can use two threads per GPU, though, so that when a long poll comes in, one thread can start fetching new data while the other is waiting for the GPU to finish.
Are you sure? The way I understand cgminer's threads, they should all try to keep working in parallel (for n threads, each thread should be using 1/n of the processing power), and fetching work is done asynchronously so that it is ready as soon as a GPU thread is available. So for a given intensity, the more threads you have, the more time you should spend working on a workbase invalidated by a long poll. This is how I understood ckolivas's advice in cgminer's README to use only one thread.

gyverlb,

the one-thread-per-GPU advice was a workaround for old versions of cgminer.

As I understand it, while a GPU is processing a batch, the thread that submitted it is blocked waiting for the answer, so with a single thread cgminer cannot fetch new work before the GPU completes its batch.

Using two threads makes it possible for the second thread to start fetching new work while the first one is still waiting for the GPU to finish its work.

I'm using two threads without problems (my stales are around 1-2% lower than p2pool's).

spiccioli.
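
A minimal sketch of the flags in question, assuming cgminer 2.3.x and a p2pool node on its default worker port 9332 (the URL and address are placeholders; check your version's README, since defaults have changed between releases):

Code:
# two GPU threads per card, intensity 8 -- the settings discussed above; tune for your hardware
./cgminer -o http://your-p2pool-node:9332 -u yourBTCaddress -p x --gpu-threads 2 --intensity 8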

hero member
Activity: 896
Merit: 1000
You can use two threads per GPU, though, so that when a long poll comes in, one thread can start fetching new data while the other is waiting for the GPU to finish.
Are you sure? The way I understand cgminer's threads, they should all try to keep working in parallel (for n threads, each thread should be using 1/n of the processing power), and fetching work is done asynchronously so that it is ready as soon as a GPU thread is available. So for a given intensity, the more threads you have, the more time you should spend working on a workbase invalidated by a long poll. This is how I understood ckolivas's advice in cgminer's README to use only one thread.
legendary
Activity: 1379
Merit: 1003
nec sine labore
If you use a good miner program and configure it correctly, you will not get a crappy 9% reject rate.
I'm not sure how. I have a ~9% reject rate with five cgminer 2.3.1 instances connected to a p2pool node with 5 to 30 ms latency. cgminer is set to use only one thread and intensity 8, which on my hardware (300+ MH/s per GPU) adds between 0 and 3 ms of latency when cgminer must wait for a GPU thread to return.

If there's a way to get better results, I'd like to know it. Currently I think the large majority of orphan/dead blocks I see are caused by overall P2Pool network latency, not by my configuration, but I'd be glad to be proven wrong.

gyverlb,

same here, at times I'm a little better than the pool, at times a little worse.

You can use two threads per GPU, though, so that when a long poll comes in, one thread can start fetching new data while the other is waiting for the GPU to finish.

spiccioli
hero member
Activity: 896
Merit: 1000
If you use a good miner program and configure it correctly, you will not get a crappy 9% reject rate.
I'm not sure how. I have a ~9% reject rate with five cgminer 2.3.1 instances connected to a p2pool node with 5 to 30 ms latency. cgminer is set to use only one thread and intensity 8, which on my hardware (300+ MH/s per GPU) adds between 0 and 3 ms of latency when cgminer must wait for a GPU thread to return.

If there's a way to get better results, I'd like to know it. Currently I think the large majority of orphan/dead blocks I see are caused by overall P2Pool network latency, not by my configuration, but I'd be glad to be proven wrong.
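
A back-of-envelope check on how much of that 9% raw latency could explain, against p2pool's ~10-second share interval (a sketch only, assuming share arrivals are Poisson and using the latencies from the post above):

Code:
import math

def expected_stale_fraction(latency_s, share_interval_s=10.0):
    # Fraction of work expected to go stale if each new share
    # invalidates in-flight work for latency_s seconds.
    return 1 - math.exp(-latency_s / share_interval_s)

# 30 ms node latency + 3 ms GPU wait: ~0.3% stale from local latency alone,
# nowhere near 9% -- consistent with network-wide latency dominating
print(expected_stale_fraction(0.033))
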
legendary
Activity: 1064
Merit: 1000
There are a few solutions: compute main-P2Pool's generation transaction instead of redundantly storing nearly the same thing over and over. Alternatively, change the merged mining spec to not require storing the entire parent gentx.

I don't like the first because it would be very complex and tie the MM-P2Pool to the main-P2Pool. The second is obviously impractical in the short term.

Anyone else have ideas?

Yeah, even if space/bandwidth wasn't an issue I don't like complicating the sharechain w/ merged mining data.  Most of the alt chains are nearly worthless, and I wonder about the load if it became popular to merge-mine a dozen or more alt-chains on p2pool.

Local generation may be rough on low-end nodes, so anything which makes p2pool less viable isn't worth the cost IMHO.

Would it be possible to have a separate merged mining sharechain and a different p2pool instance?  Still, I am not clear on what level of communication or interaction would be necessary between instances, or even if it is possible.

Given the nearly worthless nature of alt-coins I don't see it as a useful venture.  There is so much that can be done to improve p2pool (in terms of GUI frontends, monitoring/reporting, updated docs, custom distros, simplification, etc.) that I would hate to see any skill, resources, or time devoted to worthless alt-chains.

2x
sr. member
Activity: 447
Merit: 250
p2pool randomly freezes up (freezing my Mac for ~10 seconds) every half an hour or so. Any idea what's causing this? Should I use a different version of Python?

Code:
2012-04-11 07:47:16.501037 > Watchdog timer went off at:
2012-04-11 07:47:16.501107 >   File "run_p2pool.py", line 5, in
2012-04-11 07:47:16.501141 >     main.run()
2012-04-11 07:47:16.501172 >   File "/Users/christian/p2pool/p2pool/main.py", line 1005, in run
2012-04-11 07:47:16.501203 >     reactor.run()
2012-04-11 07:47:16.501234 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/base.py", line 1169, in run
2012-04-11 07:47:16.501267 >     self.mainLoop()
2012-04-11 07:47:16.501297 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/base.py", line 1178, in mainLoop
2012-04-11 07:47:16.501331 >     self.runUntilCurrent()
2012-04-11 07:47:16.501361 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/base.py", line 800, in runUntilCurrent
2012-04-11 07:47:16.501394 >     call.func(*call.args, **call.kw)
2012-04-11 07:47:16.501424 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/defer.py", line 368, in callback
2012-04-11 07:47:16.501456 >     self._startRunCallbacks(result)
2012-04-11 07:47:16.501487 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/defer.py", line 464, in _startRunCallbacks
2012-04-11 07:47:16.501520 >     self._runCallbacks()
2012-04-11 07:47:16.501550 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/defer.py", line 551, in _runCallbacks
2012-04-11 07:47:16.501583 >     current.result = callback(current.result, *args, **kw)
2012-04-11 07:47:16.501614 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/defer.py", line 1101, in gotResult
2012-04-11 07:47:16.501647 >     _inlineCallbacks(r, g, deferred)
2012-04-11 07:47:16.501677 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.0.0-py2.7-macosx-10.5-intel.egg/twisted/internet/defer.py", line 1045, in _inlineCallbacks
2012-04-11 07:47:16.501710 >     result = g.send(result)
2012-04-11 07:47:16.501740 >   File "/Users/christian/p2pool/p2pool/main.py", line 799, in status_thread
2012-04-11 07:47:16.501770 >     print this_str
2012-04-11 07:47:16.501799 >   File "/Users/christian/p2pool/p2pool/util/logging.py", line 81, in write
2012-04-11 07:47:16.501830 >     self.inner_file.write(data)
2012-04-11 07:47:16.501860 >   File "/Users/christian/p2pool/p2pool/util/logging.py", line 69, in write
2012-04-11 07:47:16.501891 >     self.inner_file.write('%s %s\n' % (datetime.datetime.now(), line))
2012-04-11 07:47:16.501921 >   File "/Users/christian/p2pool/p2pool/util/logging.py", line 55, in write
2012-04-11 07:47:16.501951 >     output.write(data)
2012-04-11 07:47:16.501981 >   File "/Users/christian/p2pool/p2pool/util/logging.py", line 46, in write
2012-04-11 07:47:16.502011 >     self.inner_file.write(data)
2012-04-11 07:47:16.502041 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py", line 691, in write
2012-04-11 07:47:16.502073 >     return self.writer.write(data)
2012-04-11 07:47:16.502103 >   File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/codecs.py", line 352, in write
2012-04-11 07:47:16.502134 >     self.stream.write(data)
2012-04-11 07:47:16.558924 >   File "/Users/christian/p2pool/p2pool/main.py", line 702, in
2012-04-11 07:47:16.559463 >     sys.stderr.write, 'Watchdog timer went off at:\n' + ''.join(traceback.format_stack())
donator
Activity: 1218
Merit: 1079
Gerald Davis
There are a few solutions: compute main-P2Pool's generation transaction instead of redundantly storing nearly the same thing over and over. Alternatively, change the merged mining spec to not require storing the entire parent gentx.

I don't like the first because it would be very complex and tie the MM-P2Pool to the main-P2Pool. The second is obviously impractical in the short term.

Anyone else have ideas?

Yeah, even if space/bandwidth wasn't an issue I don't like complicating the sharechain w/ merged mining data.  Most of the alt chains are nearly worthless, and I wonder about the load if it became popular to merge-mine a dozen or more alt-chains on p2pool.

Local generation may be rough on low-end nodes, so anything which makes p2pool less viable isn't worth the cost IMHO.

Would it be possible to have a separate merged mining sharechain and a different p2pool instance?  Still, I am not clear on what level of communication or interaction would be necessary between instances, or even if it is possible.

Given the nearly worthless nature of alt-coins I don't see it as a useful venture.  There is so much that can be done to improve p2pool (in terms of GUI frontends, monitoring/reporting, updated docs, custom distros, simplification, etc.) that I would hate to see any skill, resources, or time devoted to worthless alt-chains.
donator
Activity: 1218
Merit: 1079
Gerald Davis
Except ... it isn't correct.

There IS wasted hashing power.

The problem is not P2Pool specifically, but rather that people believe it is OK to get a crappy reject rate (9%) because someone here said that rate was OK while they themselves were getting a much lower rate.

If you use a good miner program and configure it correctly, you will not get a crappy 9% reject rate.

The cause is actually that miners are not configured by default to handle the ridiculously high share rate (a share every 10 seconds).
So P2Pool is the cause, but the solution is simply to configure your miner to handle that issue.

Aside: if you have one or more BFL FPGA Singles, you cannot mine on P2Pool without wasting a large % of your hash rate.

Orphans aren't wasted hashing power for the p2pool "pool", which is what was being discussed.  The node will broadcast any blocks it finds to all p2pool peers and all Bitcoin peers.  Thus even a miner with an 80% orphan rate isn't wasting his hashing power from the point of view being discussed, which is avg # of shares per block (or pool luck).

I think it was made pretty clear that one's PERSONAL compensation depends on one's relative orphan rate.

Miner has a 5% orphan rate, p2pool has a 10% orphan rate.  Miner is compensated ~5% over "fair value".
Miner has a 10% orphan rate, p2pool has a 10% orphan rate.  Miner is compensated at "fair value".
Miner has a 15% orphan rate, p2pool has a 10% orphan rate.  Miner is compensated ~5% under "fair value".
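
A minimal sketch of that relationship, assuming PPLNS payout proportional to the shares that survive into the sharechain (numbers from the three cases above):

Code:
def relative_compensation(miner_orphan_rate, pool_orphan_rate):
    # Payout relative to "fair value": the fraction of your shares that
    # survive orphaning vs. the fraction of the pool's shares that survive.
    return (1 - miner_orphan_rate) / (1 - pool_orphan_rate)

print(relative_compensation(0.05, 0.10))  # ~1.06 -> ~5% over fair value
print(relative_compensation(0.10, 0.10))  # 1.00  -> fair value
print(relative_compensation(0.15, 0.10))  # ~0.94 -> ~5% under fair value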

Quote
Aside: if you have one or more BFL FPGA Singles, you cannot mine on P2Pool without wasting a large % of your hash rate.
Even there the hashing power isn't WASTED.  Blocks will still be found at the same rate regardless of orphan rate, but the miner's compensation will be lower (due to the miner having a higher orphan rate relative to the pool).


Still, theoretically I do think it is possible to make a "merged" sharechain.  Bitcoin must have a single block at each height.  This is an absolute requirement because blocks aren't just compensation, they include tx, and there must be a single consensus on which tx are included in a block (or set of blocks).

With p2pool it may be possible to include "late shares" in the chain to reduce the orphan rate.  Honestly I'm not sure if it is worth it because, as discussed, if one's orphan rate is ~= the pool's orphan rate the absolute values don't really matter.

Miner 0% orphan, pool 0% orphan is the same as miner 10% orphan, pool 10% orphan.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
Except ... it isn't correct.

There IS wasted hashing power.

The problem is not P2Pool specifically, but rather that people believe it is OK to get a crappy reject rate (9%) because someone here said that rate was OK while they themselves were getting a much lower rate.

If you use a good miner program and configure it correctly, you will not get a crappy 9% reject rate.

The cause is actually that miners are not configured by default to handle the ridiculously high share rate (a share every 10 seconds).
So P2Pool is the cause, but the solution is simply to configure your miner to handle that issue.

Aside: if you have one or more BFL FPGA Singles, you cannot mine on P2Pool without wasting a large % of your hash rate.

Except raw reject rate means nothing; the delta from the average reject rate is what you need to pay attention to.

Also, BFL's firmware is broken: they won't return shares until the unit has done 2^32 hashes, and any attempt to force it to update on a long poll dumps valid shares. BFL needs to fix their shit before they sell any more FPGAs.
hero member
Activity: 516
Merit: 643
Any plans to implement merged mining on P2Pool?

First, P2Pool has long supported solo merged mining. However, as for pooled merged mining...

Merged mining, as it exists, cannot be implemented efficiently, because every share would need to include the full generation transaction from the parent chain. P2Pool's generation transaction is pretty large, and so this would increase the size of P2Pool shares by more than an order of magnitude (along with P2Pool's network usage).
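
Rough numbers on why, as a sketch (the 34-byte figure is the standard P2PKH output size; the payout counts are illustrative guesses, not measured from the real sharechain):

Code:
# Approximate serialized size of a generation transaction paying n miners
# to P2PKH addresses.
TX_OVERHEAD  = 4 + 1 + 41 + 1 + 4  # version + input count + coinbase input + output count + locktime
P2PKH_OUTPUT = 8 + 1 + 25          # value + script length + scriptPubKey

def gentx_size(n_payouts):
    return TX_OVERHEAD + n_payouts * P2PKH_OUTPUT

print(gentx_size(2))    # a typical centralized pool's gentx: ~120 bytes
print(gentx_size(100))  # a gentx paying ~100 p2pool miners: ~3.5 kB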

There are a few solutions: compute main-P2Pool's generation transaction instead of redundantly storing nearly the same thing over and over. Alternatively, change the merged mining spec to not require storing the entire parent gentx.

I don't like the first because it would be very complex and tie the MM-P2Pool to the main-P2Pool. The second is obviously impractical in the short term.

Anyone else have ideas?
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Except ... it isn't correct.

There IS wasted hashing power.

The problem is not P2Pool specifically, but rather that people believe it is OK to get a crappy reject rate (9%) because someone here said that rate was OK while they themselves were getting a much lower rate.

If you use a good miner program and configure it correctly, you will not get a crappy 9% reject rate.

The cause is actually that miners are not configured by default to handle the ridiculously high share rate (a share every 10 seconds).
So P2Pool is the cause, but the solution is simply to configure your miner to handle that issue.

Aside: if you have one or more BFL FPGA Singles, you cannot mine on P2Pool without wasting a large % of your hash rate.
legendary
Activity: 1379
Merit: 1003
nec sine labore
There is no such thing as "wasting hashing power".

...

D&T

this should go, IMHO, onto the first page and/or the p2pool wiki.

spiccioli
hero member
Activity: 506
Merit: 500
Any plans to implement merged mining on P2Pool?
legendary
Activity: 1379
Merit: 1003
nec sine labore
DeathAndTaxes,

very clear and very much appreciated!

Thanks.

spiccioli.

ps. this is where I heard a click inside my brain ;)

Quote
If we awarded compensation based on the actual value of nonces found, it would be ... solo mining.  Pool mining simply finds a mechanism to FAIRLY distribute that 50 BTC to reduce variance.  We track FAILED WORTHLESS WORK because it can't be cheated.  Nothing more, nothing less.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
There is no such thing as "wasting hashing power".
You either find a block or the share is worthless.
If some of the worthless shares get orphaned, how much have you lost?

p2pool doesn't show an orphan rate ON BLOCKS any higher than the main network's (~1%).

p2pool could function with a 99.999% SHARE orphan rate, but the variance and unfairness of that would cause PR (not technical) problems.

The 10-second LP interval is a compromise between share orphans and share difficulty.

The thing that seems hard for people to understand is that shares have ABSOLUTELY NO VALUE.  They aren't progress towards a block; they are completely worthless.  We simply use them because they are a cheat-proof mechanism to fairly split rewards.  Nothing more, nothing less.  So if 10% of worthless shares are lost, how much value/blocks/work is lost? ... Nothing.  10% of 0 is still 0. :)

Only DOA shares affect the rate at which blocks are found.  DOA shares are bad/stale/invalid before they even get broadcast to the p2pool network.  Thus even if they were blocks they would be worthless, because the Bitcoin network (not just p2pool) would reject them even if they met the target requirements for a block.  I would assume that any hashing graph is using post-DOA hashrates.  Still, DOA looks to be only about 2% of the network.

All of this so goddamned much.
donator
Activity: 1218
Merit: 1079
Gerald Davis
There is no such thing as "wasting hashing power".
You either find a block or you don't.  If you don't find a block the nonce is worthless.  I don't mean worth little, I mean absolutely worthless.
Nonces which don't solve a block are like losing lottery tickets.  Failed attempts aren't progress towards a block.  A large number of failed tries doesn't increase the chance of finding a block in the future.  The only share worth anything is the one that solves a block, and at current difficulty that occurs once every 2^32 * 1,626,856.73 = 6,987,296,450,627,500 nonces.

1 nonce = 50 BTC
6,987,296,450,627,499 nonces = 0 BTC
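
The arithmetic, for anyone who wants to check it (2^32 is the expected number of hashes per difficulty-1 share; the difficulty is the one quoted above):

Code:
DIFFICULTY = 1626856.73  # network difficulty quoted above

print(int(DIFFICULTY * 2**32))  # ~6,987,296,450,627,500 -- the figure above

# the same yardstick for a p2pool share at difficulty ~600:
print(600 * 2**32)  # 2,576,980,377,600 -- the ~2.58 trillion mentioned below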

If we awarded compensation based on the actual value of nonces found, it would be ... solo mining.  One would either get 50 BTC or 0 BTC for every nonce hashed.  Pool mining is a method to FAIRLY and EQUITABLY distribute that same 50 BTC to reduce variance.  We track FAILED WORTHLESS WORK because it can't be cheated.  Nothing more, nothing less.

Pools are more like block insurance companies.  In keeping with the "no/limited trusted 3rd party" mantra of Bitcoin, we use cryptography to ensure work can't be faked.  A pool could use a custom miner which records nonces (like a nonce odometer) and, every time a block is found, collects nonce recordings from all miners and splits the block by how many nonces each miner attempted.  The obvious problem is that a hacked miner would allow a miner to cheat.  Since hashes of lower difficulty can't be faked, they provide good "proof" of the approximate # of hashes attempted (subject to variance).

The 10% of work which was attempted (but failed) and then orphaned needs to be included in the stats because it was valid attempted work.  The fact that the technicalities of p2pool orphaned it from the sharechain (compensated work) doesn't change that.

It is just layers of abstraction.
Actual Work: # of nonces accurately and promptly hashed (it takes on average ~7 quadrillion hashes to find one which meets the current difficulty target)
Proxy for Work: nonces meeting share difficulty (1 for most pools, ~600 for p2pool); thus one p2pool share (diff ~600) is a PROXY for ~2.58 trillion attempted nonces.
Compensated Work: shares included in the sharechain (excludes orphaned shares, which are a technical limitation of p2pool's short LP time)

One could say a solo miner orphans (discards) 100% of the failed work.  You wouldn't say a solo miner finds 1 block per share, right?  The attempted work, even if discarded, needs to be recorded.

If your orphan rate is ~= the pool's orphan rate, then your % of compensated work (shares in the sharechain) ~= your % of proxy work (shares) ~= your % of actual work (nonces hashed).


Orphaned blocks
p2pool doesn't show an orphan rate ON BLOCKS any higher than the main network's (~1%).  If p2pool had 10% of its BLOCKS orphaned, that would obviously cause a reduction in revenue and a corresponding increase in the # of shares per block.  The orphan rate on p2pool blocks is a better indication of "lost work".  An abnormally high number would indicate a problem/delay w/ the network broadcasting its blocks.  Keep in mind that even here relative values are what matter.  The true figure should be ~1% higher due to orphaned blocks (yes, orphaned blocks represent work).  If there were no orphaned blocks, miners collectively wouldn't earn any more.

Dead on arrival (bad/stale/invalid shares killed by local node)
DOA shares are bad/stale/invalid before they even get broadcast to the p2pool network and can never become a block, even if they meet the difficulty target, as they would be rejected by the Bitcoin network.  They are "wasted hashing power".  DOAs, unlike orphans, do affect the # of shares per block.  If 2% of shares are bad/stale/invalid (not just orphaned) then 2% of your blocks will be bad/stale/invalid.  This affects all pools, not just p2pool.  I would assume that any hashing graph is using post-DOA hashrates.  Still, DOA is a much smaller % than orphans, so the difference isn't very large.

On edit: modified for clarification (of course that made it even longer ... grr).
On edit 2:  crap crap crap.  tried to simplify and turned it into a novel ("as the nonce hashes").  Sorry for wall of text.  I don't have the heart to rip it up now.
legendary
Activity: 1379
Merit: 1003
nec sine labore
Well, 6 blocks in the last 24 hours is a good start.  But we dug ourselves a deep hole over the last 7 days.

D&T,

given this from p2pool wiki

Code:
Because the sharechain is 60 times faster than the Bitcoin chain, stales are common and expected. However, because the payout is PPLNS, only your stale rate relative to other nodes is relevant; the absolute rate is not.

and given 90-day luck around 90% and a p2pool-wide stale rate around 10%, can it be that the stale rate is not relevant for individual miners but becomes relevant for p2pool as a whole?

Or to put it another way: does p2pool waste around 10% of its hashing power, so that all graphs which use p2pool's hashing power without subtracting the stale rate report higher-than-correct expected blocks?
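
If D&T's explanation above holds, only DOA work should be subtracted from the hashrate; orphaned shares only change how the payout is split. A sketch of the correction, with an assumed pool hashrate and DOA fraction (both made up for illustration):

Code:
def expected_blocks_per_day(hashrate_hps, difficulty, doa_fraction):
    # Orphaned shares only reshuffle the payout; DOA work never reaches
    # the Bitcoin network, so only it reduces the block-finding rate.
    effective = hashrate_hps * (1 - doa_fraction)
    return effective * 86400 / (difficulty * 2**32)

# assumed: 0.5 TH/s pool hashrate, 2% DOA, difficulty 1,626,856.73
print(expected_blocks_per_day(0.5e12, 1626856.73, 0.02))  # ~6 blocks/day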

spiccioli.
hero member
Activity: 896
Merit: 1000
just got port forwarding set up, so connections for both bitcoin and p2pool are on the rise

Incoming connections on p2pool too I guess?
Do you have a static ip address?

Ente

only bitcoin at the moment

strangely not yet on p2pool, but it's only been about 6 hours. I think someone mentioned on this forum that it starts happening around 8 hours? so we'll see


static-ish. the router will keep the ip as long as it has power - it will get a new ip if down for more than the couple of minutes it takes to reboot


thinking of setting this up on a lightly used personal server in the datacenter with my business stuff, which would have a static/dedicated ip available
legendary
Activity: 2126
Merit: 1001
just got port forwarding set up, so connections for both bitcoin and p2pool are on the rise

Incoming connections on p2pool too I guess?
Do you have a static ip address?

Ente
donator
Activity: 1218
Merit: 1079
Gerald Davis
Well, 6 blocks in the last 24 hours is a good start.  But we dug ourselves a deep hole over the last 7 days.