
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 116. (Read 2591920 times)

sr. member
Activity: 266
Merit: 250
Was this "tremor in the force" when the changeover happened?:

 

Orphan rate has been rising steadily ever since - now at 25%?

Edit: 4 of my last 5 shares are orphaned also  Sad
full member
Activity: 157
Merit: 103

version 14?
This is not running now; it's just a saved screenshot.

PS: sorry for the picture size, uploaded from a tablet
full member
Activity: 238
Merit: 100
Local rate: 1.49TH/s (7.8% DOA) Expected time to share: 53.7 minutes
Shares: 2 total (0 orphaned, 0 dead) Efficiency: 120.0%
Actually, 53 minutes and 2 'live' shares are not enough for statistics.
For the best p2pool performance it is much better to run your own local node near your miners; here is an example:



This was a private node with a 100% fee, for testing purposes.  Fewer miners per node also works much better.


The same node with remote miners with good connectivity and ping below 40 ms still gives you quite a lot of DOA hashrate, about ~10% or more, and more DOA shares as well. But I have never tried p2pool with ckproxy; I think this is some kind of design issue.
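To put a number on why 2 shares are not enough for statistics: a quick confidence-interval sketch (using the standard Wilson score interval; share counts taken from the figures quoted in this thread) shows how wide the uncertainty is at small sample sizes.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (max(0.0, centre - half), min(1.0, centre + half))

# 0 dead out of 2 shares: the interval spans most of the range
print(wilson_ci(0, 2))    # -> about (0.0, 0.66)
# 8 stale out of 76 shares: much tighter
print(wilson_ci(8, 76))   # -> about (0.054, 0.194)
```

With only 2 shares the true stale rate could plausibly be anywhere from 0% to ~66%, which is why the 0-orphan, 0-dead line says almost nothing yet.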

version 14?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Local rate: 1.49TH/s (7.8% DOA) Expected time to share: 53.7 minutes
Shares: 2 total (0 orphaned, 0 dead) Efficiency: 120.0%
Actually, 53 minutes and 2 'live' shares are not enough for statistics.
For the best p2pool performance it is much better to run your own local node near your miners; here is an example:
We're aware of that. This is an experiment on improving node performance for multiple miners.
donator
Activity: 4760
Merit: 4323
Leading Crypto Sports Betting & Casino Platform
I've been working with some experimental ckproxy code designed to consolidate multiple user logins into different upstream connections.

This makes me happy.
full member
Activity: 157
Merit: 103
Local rate: 1.49TH/s (7.8% DOA) Expected time to share: 53.7 minutes
Shares: 2 total (0 orphaned, 0 dead) Efficiency: 120.0%
Actually, 53 minutes and 2 'live' shares are not enough for statistics.
For the best p2pool performance it is much better to run your own local node near your miners; here is an example:



This was a private node with a 100% fee, for testing purposes.  Fewer miners per node also works much better.


The same node with remote miners with good connectivity and ping below 40 ms still gives you quite a lot of DOA hashrate, about ~10% or more, and more DOA shares as well. But I have never tried p2pool with ckproxy; I think this is some kind of design issue.
full member
Activity: 238
Merit: 100
Here's what I used:

Code:
#p2pool uses twisted, and twisted uses zope.interface, and in order to install either one you need setuptools, so let's start with that:

wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo pypy
sudo rm setuptools-18.3.2.zip

#Then zope.interface:

wget https://pypi.python.org/packages/source/z/zope.interface/zope.interface-4.1.3.tar.gz#md5=9ae3d24c0c7415deb249dd1a132f0f79
tar zxf zope.interface-4.1.3.tar.gz
cd zope.interface-4.1.3/
sudo pypy setup.py install
cd ..
sudo rm -r zope.interface-4.1.3*

#Then Twisted:

wget https://pypi.python.org/packages/source/T/Twisted/Twisted-15.4.0.tar.bz2
tar jxf Twisted-15.4.0.tar.bz2
cd Twisted-15.4.0
sudo pypy setup.py install
cd ..
sudo rm -r Twisted-15.4.0*

Hope you got loads of RAM, cos it sucks the living daylights out of it  Smiley

Thanks for that, firing it up now. It's an 8GB VPS.

much better now

Local rate: 1.49TH/s (7.8% DOA) Expected time to share: 53.7 minutes
Shares: 2 total (0 orphaned, 0 dead) Efficiency: 120.0%
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Here's what I used:

Code:
#p2pool uses twisted, and twisted uses zope.interface, and in order to install either one you need setuptools, so let's start with that:

wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo pypy
sudo rm setuptools-18.3.2.zip

#Then zope.interface:

wget https://pypi.python.org/packages/source/z/zope.interface/zope.interface-4.1.3.tar.gz#md5=9ae3d24c0c7415deb249dd1a132f0f79
tar zxf zope.interface-4.1.3.tar.gz
cd zope.interface-4.1.3/
sudo pypy setup.py install
cd ..
sudo rm -r zope.interface-4.1.3*

#Then Twisted:

wget https://pypi.python.org/packages/source/T/Twisted/Twisted-15.4.0.tar.bz2
tar jxf Twisted-15.4.0.tar.bz2
cd Twisted-15.4.0
sudo pypy setup.py install
cd ..
sudo rm -r Twisted-15.4.0*

Hope you got loads of RAM, cos it sucks the living daylights out of it  Smiley

Thanks for that, firing it up now. It's an 8GB VPS.
sr. member
Activity: 266
Merit: 250

EDIT: What package does pypy use for twisted? I get:

Code:
from twisted.internet import defer, reactor, protocol, tcp
ImportError: No module named twisted


Here's what I used:

Code:
#p2pool uses twisted, and twisted uses zope.interface, and in order to install either one you need setuptools, so let's start with that:

wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo pypy
sudo rm setuptools-18.3.2.zip

#Then zope.interface:

wget https://pypi.python.org/packages/source/z/zope.interface/zope.interface-4.1.3.tar.gz#md5=9ae3d24c0c7415deb249dd1a132f0f79
tar zxf zope.interface-4.1.3.tar.gz
cd zope.interface-4.1.3/
sudo pypy setup.py install
cd ..
sudo rm -r zope.interface-4.1.3*

#Then Twisted:

wget https://pypi.python.org/packages/source/T/Twisted/Twisted-15.4.0.tar.bz2
tar jxf Twisted-15.4.0.tar.bz2
cd Twisted-15.4.0
sudo pypy setup.py install
cd ..
sudo rm -r Twisted-15.4.0*

Hope you got loads of RAM, cos it sucks the living daylights out of it  Smiley
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I do not know how much of the DOA the proxy is responsible for, but you are probably running your p2pool node with Python, while I am running mine with PyPy. That can make a difference.
I'll try changing it to pypy then. Restarting shortly.

EDIT: What package does pypy use for twisted? I get:

Code:
from twisted.internet import defer, reactor, protocol, tcp
ImportError: No module named twisted
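A quick way to diagnose that ImportError is to check which modules the interpreter you launch p2pool with can actually see. This is a sketch, not p2pool code; save it and run it with the same pypy binary you start p2pool with.

```python
def has_module(name):
    """Return True if `name` imports cleanly in this interpreter."""
    try:
        __import__(name)
        return True
    except ImportError:
        return False

# Run with the same interpreter you use for p2pool, e.g.:
#   pypy check_modules.py
for mod in ("zope.interface", "twisted", "twisted.internet"):
    print(mod, "ok" if has_module(mod) else "MISSING")
```

If twisted shows as MISSING here but imports fine under plain python, the packages were installed into CPython's site-packages rather than pypy's, which is exactly what `sudo pypy setup.py install` in the instructions above is meant to avoid.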
full member
Activity: 238
Merit: 100
case DOA

my node:
Local rate: 1.52TH/s (1.1% DOA) Expected time to share: 1.4 hours
Shares: 76 total (6 orphaned, 2 dead) Efficiency: 107.4%

de.ckpool.org node:
Local rate: 1.55TH/s (21% DOA) Expected time to share: 1.5 hours
Shares: 8 total (1 orphaned, 2 dead) Efficiency: 77.60%

the same SP20 was running on both

ping de.ckpool.org
PING de.ckpool.org (84.200.2.30) 56(84) bytes of data.
64 bytes from 84.200.2.30: icmp_seq=1 ttl=53 time=33.3 ms

Yeah, that looks pretty sad, doesn't it? Not sure why it's quite so bad, but anyway, let's call this experiment off for the time being, shall we?

Thanks very much for testing; I may come back with more later.

I do not know how much of the DOA the proxy is responsible for, but you are probably running your p2pool node with Python, while I am running mine with PyPy. That can make a difference.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
case DOA

my node:
Local rate: 1.52TH/s (1.1% DOA) Expected time to share: 1.4 hours
Shares: 76 total (6 orphaned, 2 dead) Efficiency: 107.4%

de.ckpool.org node:
Local rate: 1.55TH/s (21% DOA) Expected time to share: 1.5 hours
Shares: 8 total (1 orphaned, 2 dead) Efficiency: 77.60%

the same SP20 was running on both

ping de.ckpool.org
PING de.ckpool.org (84.200.2.30) 56(84) bytes of data.
64 bytes from 84.200.2.30: icmp_seq=1 ttl=53 time=33.3 ms

Yeah, that looks pretty sad, doesn't it? Not sure why it's quite so bad, but anyway, let's call this experiment off for the time being, shall we?

Thanks very much for testing; I may come back with more later.
full member
Activity: 238
Merit: 100
case DOA

my node:
Local rate: 1.52TH/s (1.1% DOA) Expected time to share: 1.4 hours
Shares: 76 total (6 orphaned, 2 dead) Efficiency: 107.4%

de.ckpool.org node:
Local rate: 1.55TH/s (21% DOA) Expected time to share: 1.5 hours
Shares: 8 total (1 orphaned, 2 dead) Efficiency: 77.60%

the same SP20 was running on both

ping de.ckpool.org
PING de.ckpool.org (84.200.2.30) 56(84) bytes of data.
64 bytes from 84.200.2.30: icmp_seq=1 ttl=53 time=33.3 ms
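As a rough sketch of how the quoted Efficiency figure relates to those share counts: p2pool compares your live-share fraction against the pool-wide expectation. The pool-wide stale rate used below is an assumed, back-solved illustrative value, not something reported in this thread.

```python
def efficiency(total, orphaned, dead, pool_stale_rate):
    """Live-share fraction relative to the pool-wide expectation (sketch).

    pool_stale_rate is an assumed input here, not a value from the thread.
    """
    live_fraction = (total - orphaned - dead) / total
    return live_fraction / (1.0 - pool_stale_rate)

# "Shares: 76 total (6 orphaned, 2 dead)"; an assumed ~16.7% pool-wide
# stale rate reproduces the quoted 107.4%:
print("{:.1%}".format(efficiency(76, 6, 2, 0.167)))  # -> 107.4%
```

Efficiency above 100% just means the node is losing fewer shares to orphans and DOA than the pool average, so it earns slightly more than a proportional payout.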
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I *believe* (and may be incorrect) that p2pool will wait for a valid share (pool-wide) before sending new work, at 30 seconds on average. I recall discussing this with someone a couple of years ago. As P2Pool does not subscribe to bitcoind's blocknotify, it is not aware of the block change...

If I am right and this is the case, setting up a blocknotify flag in bitcoin.conf and integrating it into the code should not be too hard, but I suspect there is a reason Forrest set it up this way? (Or maybe -blocknotify did not exist then?)
Yeah, sorry, this was my mistake in misreading the debug output I had, so it was a false positive.

Investigated further and added debugging to see when GBT (getblocktemplate) was called, since calls to CNB (CreateNewBlock) are cached, and it is indeed calling GBT immediately after a block change. So this was all my fault for not noticing; sorry about the noise.
legendary
Activity: 1258
Merit: 1027
Edit:  Ouch - that DOA rate........
Almost 3 am, but...

What's normal/good?

Mine sits at about 3% - but it's a local private node.

Yeah, don't know what to make of it. The overnight run saw 9 shares come in total without any orphans or dead, but of course you need about 100 shares to get a fair idea of the real efficiency. I can understand if you move your miner away, and I appreciate the testing you did initially to confirm it works; thanks.

It's been a while since I've played with p2pool though, and something I spot in the logs is really bugging me. I was wondering if anyone could answer, since my Python knowledge is almost non-existent. On block changes, I'm NOT seeing p2pool immediately ask for a new block template when looking at my bitcoind logs, which have extra debugging enabled. Take for example this last block:

The block came in here:
Code:
2015-12-12 21:11:05.242988 UpdateTip: new best

And p2pool asked for the first newblock template here:
Code:
2015-12-12 21:11:23.387677 CreateNewBlock

Note the timestamps: there are 18 seconds between the block change and the first block template request. On my own pool software, I get a new template within milliseconds of the block update.

So here's the question. Is p2pool by design not updating immediately on block changes and hashing on stale work? Am I missing something?

edit: see Forrest's answer above...

I *believe* (and may be incorrect) that p2pool will wait for a valid share (pool-wide) before sending new work, at 30 seconds on average. I recall discussing this with someone a couple of years ago. As P2Pool does not subscribe to bitcoind's blocknotify, it is not aware of the block change...

If I am right and this is the case, setting up a blocknotify flag in bitcoin.conf and integrating it into the code should not be too hard, but I suspect there is a reason Forrest set it up this way? (Or maybe -blocknotify did not exist then?)
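For reference, bitcoind's blocknotify option does exist as a bitcoin.conf setting: it runs an arbitrary command on every new best block, substituting %s with the block hash. The script path below is hypothetical, and p2pool has no built-in listener for it; wiring that up is the integration work being discussed.

```
# bitcoin.conf sketch -- /usr/local/bin/p2pool-blocknotify.sh is a
# hypothetical script; bitcoind replaces %s with the new block's hash.
blocknotify=/usr/local/bin/p2pool-blocknotify.sh %s
```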
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
hero member
Activity: 516
Merit: 643
Hmm, can you apply this[1] patch to P2Pool and post the relevant log sections of both bitcoind and P2Pool?

1: http://im.forre.st/pb/80707844.txt
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
P2Pool should immediately call getblocktemplate after it knows that there is a new block. It is notified of a new block via its P2P connection to bitcoind, through which bitcoind sends an inv message when it has a new block. The code that does this is the work_poller in node.py. The reason you saw an 18-second delay is something within that loop. Maybe bitcoind was lagging and took a long time to announce the block after it printed "new best"? Or maybe P2Pool was lagging and took a long time to respond. Is the delay always long, or just for this one block?
Thanks for that. No, this bitcoind is very rapid with its responses; it's set up the same as the one I use for my own pool. This happened routinely, not just for the one block. Here are the 3 before it:

Code:
2015-12-12 21:51:12.963648 UpdateTip:
2015-12-12 21:51:32.679893 CreateNewBlock

2015-12-12 21:44:06.200074 UpdateTip:
2015-12-12 21:44:26.030397 CreateNewBlock

2015-12-12 21:42:07.479248 UpdateTip:
2015-12-12 21:42:26.094565 CreateNewBlock
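The gaps in the three log pairs above can be computed directly (timestamps copied from the logs quoted here):

```python
from datetime import datetime

# UpdateTip / CreateNewBlock timestamp pairs from the bitcoind logs above.
pairs = [
    ("2015-12-12 21:51:12.963648", "2015-12-12 21:51:32.679893"),
    ("2015-12-12 21:44:06.200074", "2015-12-12 21:44:26.030397"),
    ("2015-12-12 21:42:07.479248", "2015-12-12 21:42:26.094565"),
]
FMT = "%Y-%m-%d %H:%M:%S.%f"

def delay_seconds(tip, cnb):
    """Seconds between a block arriving and the first template request."""
    return (datetime.strptime(cnb, FMT) - datetime.strptime(tip, FMT)).total_seconds()

for tip, cnb in pairs:
    print("{:.1f}s".format(delay_seconds(tip, cnb)))  # -> 19.7s, 19.8s, 18.6s
```

So the delay was consistently in the 18-20 second range, not a one-off, which is what prompted the question in the first place.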
I'm running current git master head.

That said, P2Pool will generate its own work if it knows that there is a new block (via its P2Pool peers) and bitcoind hasn't given it new work yet. These blocks will be empty, though. It's just a stopgap solution for bitcoind sometimes taking a long time to catch up.

EDIT:

Perhaps that actually explains these recent blocks, then, which are empty:
https://www.blocktrail.com/BTC/block/00000000000000000842de6ff4793f59ab08139a253f7e5622926f9d470c1ae9
https://www.blocktrail.com/BTC/block/000000000000000010e4a73a4abf8bd304809c021d636b1f39013ddf6e437d3c
hero member
Activity: 516
Merit: 643
P2Pool should immediately call getblocktemplate after it knows that there is a new block. It is notified of a new block via its P2P connection to bitcoind, through which bitcoind sends an inv message when it has a new block. The code that does this is the work_poller in node.py. The reason you saw an 18-second delay is something within that loop. Maybe bitcoind was lagging and took a long time to announce the block after it printed "new best"? Or maybe P2Pool was lagging and took a long time to respond. Is the delay always long, or just for this one block?

That said, P2Pool will generate its own work if it knows that there is a new block (via its P2Pool peers) and bitcoind hasn't given it new work yet. These blocks will be empty, though. It's just a stopgap solution for bitcoind sometimes taking a long time to catch up.

In other news, a lot of users (or one big user) upgraded to v15 in the last day, and we'll probably be fine when BIP65 takes effect.
sr. member
Activity: 266
Merit: 250
Well, I've never really investigated the work model in p2pool; perhaps it's building on the work in its own chain somehow and not dependent on bitcoind?

That's pretty much how I understand it to work, but I'm sure there's a little more to it than that.