
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 33. (Read 2591916 times)

full member
Activity: 175
Merit: 100
I tried to run the new 1mb_segwit version, but I get "Coin daemon too old! Upgrade!" I am running Bitcoin Core 0.14.2. Does this have something to do with -rpcserialversion, which I should not have to set, as the default should be 1?

The lowmem version works fine.
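For reference, the setting in question can be pinned explicitly in bitcoin.conf (a minimal fragment; as far as I know, 1 is already the default in Core 0.14+, so setting it should be redundant but makes the intent visible):

```ini
# bitcoin.conf -- hedged example; in Bitcoin Core 0.14+ the default is
# already rpcserialversion=1 (SegWit-style RPC serialization), so this
# line should be a no-op on a stock install.
rpcserialversion=1
```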
hero member
Activity: 818
Merit: 1006
I think I've solved the performance issue with 1mb_segwit. It looks like the PR that veqtrus put on my GitHub to correct what he perceived as a couple of omissions on my part in the old (2014?) jtoomim_performance code had an error. Specifically, it called VariableDict.transitioned.happened() on the known_txs variable whenever a transaction was added to or deleted from known_txs. The purpose of the jtoomim_performance code was to only call transitioned.happened() when a full assignment was done on the known_txs var, since transitioned.happened() calls subsequent code that reviews the full contents of known_txs -- O(n) versus 'mempool' size -- instead of just the added or removed items -- O(1) versus 'mempool' size. Even worse, the place where veqtrus added the call to transitioned.happened() causes this O(n) operation to happen once for every transaction that's added and once for every transaction that's removed, instead of once per batch of transactions added and removed as in the p2pool master code, so in many circumstances that PR would perform around 10x worse than the original p2pool master code.
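The cost difference can be sketched as follows (a minimal illustration, not p2pool's actual VariableDict implementation; the class and method names only mimic the ones discussed above):

```python
# Hedged sketch: firing an observer callback once per inserted item does
# O(n) work n times (O(n^2) per batch), while firing it once per batch
# keeps the full rescan to a single O(n) pass.

class VariableDict:
    """Minimal stand-in for an observed dict; not p2pool's real class."""
    def __init__(self):
        self.value = {}
        self.observers = []

    def _notify(self):
        # Each observer rescans the whole dict: O(len(self.value)).
        for cb in self.observers:
            cb(dict(self.value))

    def add_batch_bad(self, items):
        # The anti-pattern described above: notify after EVERY insert.
        for k, v in items.items():
            self.value[k] = v
            self._notify()          # O(n) work, n times -> O(n^2)

    def add_batch_good(self, items):
        # One notification per batch of inserts, as in p2pool master.
        self.value.update(items)
        self._notify()              # O(n) work, once

d = VariableDict()
scans = []
d.observers.append(lambda snapshot: scans.append(len(snapshot)))
d.add_batch_good({i: None for i in range(100)})
assert scans == [100]             # one full scan for the whole batch
```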

This bug should not affect veqtrus's base segwit PR on the p2pool/p2pool github, but it likely will affect the branch veqtrus has been working on in his own github repo that merges some performance fixes (plus this performance regression masquerading as a code refactor) into his segwit branch.

I reverted that code from 1mb_segwit and pushed the changes to github, and am now running it on ml.toom.im:9332, and it seems to be performing as well as the lowmem branch. I will likely switch all of my nodes over to 1mb_segwit before the eclipse.
newbie
Activity: 43
Merit: 0
Okay, thanks for your explanation!

So I will change my p2pool node to the jtoomim branch before 22nd Aug.
But I would like to wait for your test ;-)

Hope you post it here when it's finished. And which is the right GitHub address... (I don't like GitHub :-p)
member
Activity: 107
Merit: 10
The issue is that p2pool will refuse to mine if a softfork is detected which is not in the p2pool/network/(coinname).py:SOFTFORKS_REQUIRED list. It may be best to remove that check entirely, as it seems to be explicitly anti-forwards-compatible, which sounds like it may be a bad idea. Still thinking it over.
GBT indicates which forks require changes to mining software by prepending a "!" to their names, so p2pool should stop mining when such a fork is detected. Currently, Core doesn't treat segwit as required, since a miner can mine old-style blocks, so if "Fail providing work on unknown rule activation" were to be backported, it would still work.
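The "!" convention described above comes from BIP 9's getblocktemplate changes. A pool's safety check could look something like this (a sketch; the SUPPORTED_RULES set is illustrative, not p2pool's actual list):

```python
# Hedged sketch: BIP 9 getblocktemplate responses carry a "rules" array.
# Rule names prefixed with "!" mean the template MUST NOT be used by
# software that doesn't understand the rule, so a pool should stop
# mining when it sees an unknown "!" rule.

SUPPORTED_RULES = {'csv', 'segwit'}   # illustrative, not p2pool's list

def check_template_rules(template):
    """Return the required-but-unknown rule names; empty means safe to mine."""
    unknown_required = []
    for rule in template.get('rules', []):
        if rule.startswith('!') and rule[1:] not in SUPPORTED_RULES:
            unknown_required.append(rule[1:])
    return unknown_required

# A template requiring only known rules is fine:
assert check_template_rules({'rules': ['csv', '!segwit']}) == []
# An unknown required rule should stop mining:
assert check_template_rules({'rules': ['!newfork']}) == ['newfork']
```

Optional (non-"!") rules are ignored here on purpose: by BIP 9 they only inform mining software and never invalidate the template.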
member
Activity: 107
Merit: 10
Edit: Profiling shows that the performance issue seems to be in p2p.py:update_remote_view_of_my_known_txs(). This sounds like the performance issue that I addressed a couple years ago in the jtoomim_performance branch. My guess is that veqtrus's code updated the known_txs var somewhere using assignment (which is O(n^2) versus transactions per second) instead of inserts and deletions (which are O(n)).
My code doesn't modify that variable, only reads it.
hero member
Activity: 818
Merit: 1006
Xantus:

The old p2pool code (https://github.com/p2pool/p2pool master) does not support SegWit. There are two branches of code that do support SegWit. One is veqtrus's SegWit PR, and the other is my 1mb_segwit code, which is derived from veqtrus's code.

The main difference between veqtrus's version and my version is that my version is based on the jtoomimnet hardfork, which I made a few months ago to solve a problem with p2pool being unable to create 999 kB blocks. The jtoomimnet code handles higher transaction volumes than the mainnet p2pool code, but it also contains several performance optimizations, so overall resource requirements are about the same for the two networks. The jtoomimnet code also contains some changes to the protocol that reduce orphan rates, which makes jtoomimnet more fair to large and small miners. The creation of jtoomimnet required changing consensus rules of p2pool, so it had to be a hardfork. I wrote the code in such a way to leave the original p2pool functioning normally, so only those who wanted to use my upgrades needed to switch their code. Thus, we currently have two networks.

I want to do some more testing and optimization on the 1mb_segwit branch first, but I intend to switch jtoomimnet over to 1mb_segwit before SegWit activates.
hero member
Activity: 818
Merit: 1006
We lost power at our datacenter this morning. When our p2pool servers automatically restarted after we regained power, they ran the branch of p2pool I happened to have checked out on my nodes, which was 1mb_segwit. This was not intentional, as I meant to run lowmem. We mined a bunch of v33 shares, and you will start to see a message saying:

Code:
Warning: A MAJORITY OF SHARES CONTAIN A VOTE FOR AN UNSUPPORTED SHARE IMPLEMENTATION! (v33 with 71% support) An upgrade is likely necessary. Check http://p2pool.forre.st/ for more information.

if you're on jtoomimnet. Please disregard this message for now. We will be switching to 1mb_segwit eventually in order to support SegWit/SegWit2x, but not quite yet, as there are still some performance bugs at least in the 1mb_segwit branch.
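That warning is a majority vote among recent shares. The logic behind it can be sketched like this (illustrative only, not p2pool's actual implementation; the version numbers are hypothetical):

```python
# Hedged sketch: each share carries a "desired version" vote; the warning
# fires when a majority of recent shares vote for a share version this
# node doesn't support (e.g. v33 while the node only knows v32).

from collections import Counter

SUPPORTED_VERSIONS = {32}   # hypothetical: this node knows v32, not v33

def upgrade_warning(desired_versions, threshold=0.5):
    """Return (version, fraction) if an unsupported version has majority
    support among recent shares, else None."""
    if not desired_versions:
        return None
    counts = Counter(desired_versions)
    version, n = counts.most_common(1)[0]
    frac = n / len(desired_versions)
    if version not in SUPPORTED_VERSIONS and frac > threshold:
        return version, frac
    return None

votes = [33] * 71 + [32] * 29        # matches the "71% support" message
assert upgrade_warning(votes) == (33, 0.71)
```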
newbie
Activity: 43
Merit: 0
Sorry to keep bugging you people, but I do not understand what you are doing here.

In the next few days SegWit will be activated on the Bitcoin network, around 22nd Aug I think. Do we have a problem with that on an old p2pool node?


And if I look at the p2pool shares of my own small p2pool node, the blocks being mined have version 536870914. That is the same as the post before from "sawa".
Mining on jtoomim ... right? - signaling SegWit all the time ...

What are you talking about with a 1MB SegWit fork? - does that mean the SegWit activation on 22nd Aug?

Would be nice if someone could explain these things to me
legendary
Activity: 1308
Merit: 1011
I've switched over my nodes to jtoomimnet.
Now both http://crypto.office-on-the.net:9334 and http://crypto.office-on-the.net:9332 work on the same network.

Those who were mining on port 9334 can connect their ASICs to port 9332 if you do not want to pay me a 1% fee - there's a 0% fee there.
It is better to switch to http://crypto.mine.nu:9334 or http://crypto.mine.nu:9332 - this is a faster channel to the same server that crypto.office-on-the.net is on.
hero member
Activity: 818
Merit: 1006
has switched over to jtoomimnet.



Code:
git clone https://github.com/jtoomim/p2pool.git
cd p2pool
git checkout lowmem

Edit: Nope, we just got a short-term nicehash renter or something.
legendary
Activity: 1512
Merit: 1012




Interesting, the regular 1.6 PH/s ...
hero member
Activity: 818
Merit: 1006
I'm not seeing higher memory consumption on 1mb_segwit, but I am seeing what appears to be excessive CPU usage and longer GBT times plus very high DOA rates (~5%). I just restarted the server with profiling enabled and will look into it further.

veqtrus used some dynamic share format code that would either create a share with witness data or without depending on the runtime value of the VERSION attribute instead of putting the segwit code in a different class, and that might be messing up pypy's JIT compilation somehow. Or it could be something else. We'll find out soon enough.

Edit: Profiling shows that the performance issue seems to be in p2p.py:update_remote_view_of_my_known_txs(). This sounds like the performance issue that I addressed a couple years ago in the jtoomim_performance branch. My guess is that veqtrus's code updated the known_txs var somewhere using assignment (which is O(n^2) versus transactions per second) instead of inserts and deletions (which are O(n)).
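The suspected failure mode can be sketched as follows (a minimal illustration under my own assumptions, not veqtrus's actual code): when an observed variable is replaced wholesale, the code updating remote peers' view has to diff the old and new snapshots, which is O(n) per update, whereas explicit inserts and deletions hand it the delta directly.

```python
# Hedged sketch: the diff work forced on an observer by a full
# reassignment of a large dict, versus knowing the delta up front.

def diff_snapshots(old, new):
    """Work a remote-view updater must do after a full reassignment."""
    added   = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    return added, removed

known_txs = {'tx%d' % i: None for i in range(1000)}

# Full assignment: the observer must scan ~1000 entries to find the
# single new transaction.
new_view = dict(known_txs)
new_view['tx_new'] = None
added, removed = diff_snapshots(known_txs, new_view)
assert added == {'tx_new': None} and removed == {}

# Incremental update: the delta is known without any scan.
delta_added, delta_removed = {'tx_new': None}, {}
```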
sr. member
Activity: 351
Merit: 410
That is possible, but I would deem that worthy of a warning at most. I suspect that that misconfiguration is not common enough to merit the programming time.

Perhaps an addition to P2Pool's setup instructions, then, alongside the instructions for setting up bitcoind's RPC server.

Update: I restarted my node two hours ago. Memory consumption 15 minutes after startup was recorded as 1.84 GB. Memory consumption then stabilized and remains at 1.85 GB as of 20 minutes ago (i.e., just over two hours after the restart).

Update 2: Interestingly, memory consumption is now flatlining at 1.84 GB, four hours after the restart.
hero member
Activity: 818
Merit: 1006
That is possible, but I would deem that worthy of a warning at most. I suspect that that misconfiguration is not common enough to merit the programming time.
sr. member
Activity: 351
Merit: 410
frodocooper, I'm now running a 1mb_segwit node and a lowmem node in parallel on the same machine. I'll keep an eye on memory consumption. If I can replicate it, I may look into the issue further and try to solve it if I can do so quickly.

Thanks, jtoomim.

Update: I restarted my node two hours ago. Memory consumption 15 minutes after startup was recorded as 1.84 GB. Memory consumption then stabilized and remains at 1.85 GB as of 20 minutes ago (i.e., just over two hours after the restart).

If I remove p2pool's fast block prop code, I might add a check at startup (for the bitcoin and bitcoincash networks only) to make sure that bitcoind has some sort of fast block prop set up, whether it's xthin, CB, Falcon, or FIBRE, and make p2pool not mine at all unless one is present or unless a command-line override flag is set. This would be to prevent lazy or incompetent p2pool sysadmins from driving up p2pool's orphan rate.

Would it be possible to also include checks to make sure that each P2Pool node's bitcoind has its blockmaxsize and blockmaxweight parameters set to the network's maximum limits (i.e., 1 MB and 4 MB respectively for Bitcoin and whatever limits Bitcoin Cash is using), or at least as close to the limits as possible? There may yet be a significant number of P2Pool nodes that continue to run with Bitcoin Core's default blockmaxsize and blockmaxweight configurations of 750 kB and 3 MB respectively.
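A startup check along those lines could be as simple as parsing bitcoin.conf (a hypothetical sketch, not an existing p2pool feature; the pre-0.15 Core defaults of 750 kB / 3 MB are taken from the post above):

```python
# Hedged sketch of the suggested check: warn if blockmaxsize /
# blockmaxweight in bitcoin.conf are below the network maximums,
# since Core's old defaults (750 kB / 3 MB) would cap p2pool's blocks.

MAX_SIZE, MAX_WEIGHT = 1000000, 4000000
DEFAULTS = {'blockmaxsize': 750000, 'blockmaxweight': 3000000}

def parse_conf(text):
    """Extract the two settings from bitcoin.conf text, keeping defaults."""
    settings = dict(DEFAULTS)
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # drop comments
        if '=' in line:
            key, _, val = line.partition('=')
            if key.strip() in settings:
                settings[key.strip()] = int(val.strip())
    return settings

def block_limit_warnings(conf_text):
    s = parse_conf(conf_text)
    warnings = []
    if s['blockmaxsize'] < MAX_SIZE:
        warnings.append('blockmaxsize=%d < %d' % (s['blockmaxsize'], MAX_SIZE))
    if s['blockmaxweight'] < MAX_WEIGHT:
        warnings.append('blockmaxweight=%d < %d' % (s['blockmaxweight'], MAX_WEIGHT))
    return warnings

# A conf with no overrides inherits Core's smaller defaults:
assert len(block_limit_warnings('server=1\n')) == 2
assert block_limit_warnings('blockmaxsize=1000000\nblockmaxweight=4000000\n') == []
```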
hero member
Activity: 818
Merit: 1006
I pushed a small change to 1mb_segwit that adds compatibility for pre-fork segwit2x mining with e.g. btc1. The 1mb_hardforked and lowmem branches do not need this change, as it's only the 1mb_segwit branch (and veqtrus's segwit PR) that have this issue.

The issue is that p2pool will refuse to mine if a softfork is detected which is not in the p2pool/network/(coinname).py:SOFTFORKS_REQUIRED list. It may be best to remove that check entirely, as it seems to be explicitly anti-forwards-compatible, which sounds like it may be a bad idea. Still thinking it over.

frodocooper, I'm now running a 1mb_segwit node and a lowmem node in parallel on the same machine. I'll keep an eye on memory consumption. If I can replicate it, I may look into the issue further and try to solve it if I can do so quickly.

My medium-term goal for fixing the p2pool CPU/RAM performance issues is to remove all transaction references (except the merkle root hash and maybe the coinbase transaction) from the consensus protocol and p2p layer. This will reduce RAM, CPU, and network usage by around 90% or more, but it will mean that p2pool's fast block propagation algorithm will no longer work. Given that p2pool's fast block propagation is no longer state-of-the-art -- it has about 10x higher transmission time than the state of the art (Bitcoin FIBRE), is slightly slower than the near-universal Compact Blocks and xthin protocols, and is about 100x more CPU intensive due to being written in Python -- I do not think that the current p2pool fast block propagation tech is worth keeping in the codebase. If I get bored, I might try to replace the fast block propagation code with something else that runs outside of the p2pool consensus layer, maybe using bloom filters and/or IBLTs, but I think the current version is causing more harm than good.

If I remove p2pool's fast block prop code, I might add a check at startup (for the bitcoin and bitcoincash networks only) to make sure that bitcoind has some sort of fast block prop set up, whether it's xthin, CB, Falcon, or FIBRE, and make p2pool not mine at all unless one is present or unless a command-line override flag is set. This would be to prevent lazy or incompetent p2pool sysadmins from driving up p2pool's orphan rate.
sr. member
Activity: 351
Merit: 410
On a side note, my 1mb_segwit node seems to be using approximately twice as much memory as my old lowmem node did.

Memory consumption on my 1mb_segwit node was recorded as 1.38 GB approximately one and a half hours after the node was started up for the first time, roughly five days ago. The latest record (just over one and a half hours ago) marks memory consumption as 2.18 GB. Memory consumption seems to grow at an average rate of roughly 200 MB per day.

Memory consumption on my old lowmem node never exceeded 1.5 GB, even after more than seven days of continuous running.

This is on an Amazon EC2 c4.large instance, which has 3.75 GB of total available memory. Bitcoin Core 0.14.2 is configured to use 700 MB of memory for its UTXO cache (it used to be 1 GB when I was running lowmem) and the default 300 MB for its mempool. This leaves less than 2.75 GB for my 1mb_segwit node to use, and it is already fast approaching that mark.

The blockmaxsize and blockmaxweight parameters have been set to 1000000 and 4000000 respectively.

Also, whereas efficiency on my old lowmem node would hover around the 90% mark, efficiency on my 1mb_segwit node hovers around the 80% mark, with a noticeably higher mean DOA rate (10%-20% on lowmem and as high as 30% on 1mb_segwit).

GetBlockTemplate latency is also noticeably higher on 1mb_segwit. Mean GBTL would settle at just north of 0.4 s on lowmem, while mean GBTL hovers at around 0.55 s on 1mb_segwit.
hero member
Activity: 818
Merit: 1006
We just added another 0.4 PH/s to jtoomimnet. That makes about 1.8 PH/s total.
hero member
Activity: 818
Merit: 1006
sawa, can you try commenting out lines 350 through 357 in p2pool/work.py on your emerald test server and see if that helps? It's possible that target = min(target, ...) line is doing something strange on alts. The code you should comment out to test is this:

Code:
            else:
                # If we don't yet have an estimated node hashrate, then we still need to not undershoot the difficulty.
                # Otherwise, we might get 1 PH/s of hashrate on difficulty settings appropriate for 1 GH/s.
                # 1/100th the difficulty of a full share should be a reasonable upper bound. That way, if
                # one node has the whole p2pool hashrate, it will still only need to process one pseudoshare
                # every 0.3 seconds.
                target = min(target, 1000 * bitcoin_data.average_attempts_to_target((bitcoin_data.target_to_average_attempts(
                    self.node.bitcoind_work.value['bits'].target)*self.node.net.SPREAD)*self.node.net.PARENT.DUST_THRESHOLD/block_subsidy))

It's probably something else, though. I'll probably not have time to look deeply into this until the weekend.
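To make the undershoot scenario from the code comment concrete, here is a small worked example (illustrative numbers only; the helper functions are mine, not p2pool's):

```python
# Hedged worked example of the problem the min(target, ...) clamp guards
# against: pseudoshare difficulty tuned for 1 GH/s, but the node actually
# receives 1 PH/s -- pseudoshares then arrive a million times too often.

def difficulty_for(hashrate_hs, seconds_per_share):
    """Difficulty giving one pseudoshare per seconds_per_share on average."""
    return hashrate_hs * seconds_per_share / 2**32

def pseudoshare_rate(hashrate_hs, difficulty):
    """Expected pseudoshares per second at the given difficulty."""
    return hashrate_hs / (difficulty * 2**32)

d = difficulty_for(1e9, 30)                    # tuned for a 1 GH/s miner
assert abs(pseudoshare_rate(1e9, d) - 1/30) < 1e-9

# The same difficulty under 1 PH/s floods the node with ~33,000
# pseudoshares per second, versus the ~3/s the clamp's comment targets.
assert round(pseudoshare_rate(1e15, d)) == 33333
```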
hero member
Activity: 818
Merit: 1006
Is the Windows version of the new fork going to be released anytime soon?
You mean binaries? I am a bit too busy to build binaries for non-release beta versions. You can still run from source on Windows, though.

Quote
If I am running the old 16.0 version of p2pool, am I still going to be part of any blocks that are discovered by the new fork? If so, why then is the calculation of the rewards different on each separate network?
No.

Quote
It would appear beneficial to the cause of p2pool miners if we combine the hashing power between both networks just so we can actually get closer to finding a block
Agreed. Before we can merge the two pools, we need to get the 1mb_segwit branch tested on testnet and the bugs that prevent it from working with altcoins fixed. Can you help with that?