
Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 56. (Read 2591916 times)

newbie
Activity: 66
Merit: 0
good analysis?
this is open data.
sr. member
Activity: 373
Merit: 250
Any ideas if its rentals that have taken the hash rate up?
Just thinking if rentals, when those rentals end or how long they can keep it up.
It looked to me like it was rentals. The current burst of hashrate started on Mar 29th right after I mined the first block in like 3 weeks. I know there was at least one entity who set up an automatic nicehash p2pool mining scheme with automatic reinvestment of p2pool rewards, and the timing suggests to me that this was something similar.

Personally, I'm hoping that my new fork can attract permanent contributors. I think I've been able to fix some of the many bugs and issues with p2pool, and there are lots more optimizations that I've thought of but haven't had time to implement yet. I think we can get p2pool to scale up quite a bit and still give its users better revenue than they would get with most traditional pools. I may have to write some share propagation code specifically with the GFW in mind, though (e.g. UDP, weak shares, bloom filters to replace remember_tx and forget_tx, maybe IBLTs, and share-headers-first mining). Even without those things, revenue will probably be better in China than the main Chinese pools (which all have large fees). But we'll see.

My company will also be adding quite a bit of hashrate over the next few months. I'm expecting to add 390 TH/s today alone, for example (30 S9s are on their way). We'll probably add a petahash or so by July. All of that hashrate will be going onto the new fork, of course.

Edit: Yes, it's Nicehash. The p2pool user 1Hinenj9woDM8wLDffov4PUwFqqtRg5FRu sends regular payments of 2 to 10 btc to 3FNmBkdxorsjf7PFBf4d5JEYBybMicbDin about two times a week, adding funds from 19XLjEswgycrtbX1TcCJ8grw8zZYvbNaG6 as necessary. The 3FN address belongs to Nicehash, and makes regular payouts to Nicehash customers (examples 1 and 2). The 19XL address first sent money to Nicehash (2 BTC via 3FN) on November 6th, 2016. 19XL appears to be a user of Poloniex, as he frequently gets some money from Poloniex's 2nd-biggest wallet 17A16QmavnUfCW11DAApiJxp7ARnxN5pGX (about 0.2 btc every 2 days).

The p2pool user 1NVoAeLBm3QLRCcezvwUzKYHDLU96dji8x is directly a customer of nicehash, and sells about 18 kSol/s of Equihash hashpower there (about 0.1 btc/day). We can't see via Nicehash's public UI what he buys, but 1NV has sent regular payments of 1.2 to 1.5 btc to 32DdAy7oDoovdxt6egJ2Zoip1sQf25pZs3, which is another Nicehash deposit address.

1Hinen and 1NV have made a ton of money from the recent luck, and have about 30 btc and 6 btc respectively saved up, but they're spending it at a rate of about 20 btc per week. If I'm right and these deposits are being made by a bot, then he can keep it up for about 1.5 weeks with 0% mining luck. If luck remains close to 100% (it has been over 200% recently), then he should be able to hold it up for quite a while, although I presume that Nicehash's fees will run down his balance eventually. That could be a month or two away, though.
Good analysis! I also see strong evidence that somebody is pushing p2pool with Nicehash.
I really appreciate it. So we have a really strong p2pool and get frequent blocks.

But one thing is not so clever, and this goes out especially from Fred to 1NPQQFTE2Loa6YyFiEFiNB2U6Pf58DCk3e aka 1MeD8aT5GAtZVsU5WqpRx8jyXcsYbG437z (the person will know): you are overdoing it way too much! (This is also bad for your own profit.)
full member
Activity: 196
Merit: 100
Any ideas if its rentals that have taken the hash rate up?
Just thinking if rentals, when those rentals end or how long they can keep it up.
It looked to me like it was rentals. The current burst of hashrate started on Mar 29th right after I mined the first block in like 3 weeks. I know there was at least one entity who set up an automatic nicehash p2pool mining scheme with automatic reinvestment of p2pool rewards, and the timing suggests to me that this was something similar.

Personally, I'm hoping that my new fork can attract permanent contributors. I think I've been able to fix some of the many bugs and issues with p2pool, and there are lots more optimizations that I've thought of but haven't had time to implement yet. I think we can get p2pool to scale up quite a bit and still give its users better revenue than they would get with most traditional pools. I may have to write some share propagation code specifically with the GFW in mind, though (e.g. UDP, weak shares, bloom filters to replace remember_tx and forget_tx, maybe IBLTs, and share-headers-first mining). Even without those things, revenue will probably be better in China than the main Chinese pools (which all have large fees). But we'll see.

Thanks for the reply. Thinking this could be publicp2poolnode who was experimenting with rentals some time ago.

I've been really busy at work for a few weeks and was out in Spain for a week, so I'm just catching up.
I've barely been able to keep an eye on my node and miners with limited internet.
I've seen posts about your fork but haven't got to grips with it yet.

Anyway, I've taken a gamble today and thrown 0.6 PH of rentals onto my node.
The node is handling it really well, which is a good test of my node's stability.
If the run keeps up, I'm hoping to make some kind of gain myself.
Given the current hash rate, something should happen now that variance is a bit lower.
It'd be a winner if we drop a double block again like yesterday, but I don't want anything too soon. Need those shares up in the chain a bit first.
newbie
Activity: 66
Merit: 0
hero member
Activity: 818
Merit: 1006
Any ideas if its rentals that have taken the hash rate up?
Just thinking if rentals, when those rentals end or how long they can keep it up.
It looked to me like it was rentals. The current burst of hashrate started on Mar 29th right after I mined the first block in like 3 weeks. I know there was at least one entity who set up an automatic nicehash p2pool mining scheme with automatic reinvestment of p2pool rewards, and the timing suggests to me that this was something similar.

Personally, I'm hoping that my new fork can attract permanent contributors. I think I've been able to fix some of the many bugs and issues with p2pool, and there are lots more optimizations that I've thought of but haven't had time to implement yet. I think we can get p2pool to scale up quite a bit and still give its users better revenue than they would get with most traditional pools. I may have to write some share propagation code specifically with the GFW in mind, though (e.g. UDP, weak shares, bloom filters to replace remember_tx and forget_tx, maybe IBLTs, and share-headers-first mining). Even without those things, revenue will probably be better in China than the main Chinese pools (which all have large fees). But we'll see.

My company will also be adding quite a bit of hashrate over the next few months. I'm expecting to add 390 TH/s today alone, for example (30 S9s are on their way). We'll probably add a petahash or so by July. All of that hashrate will be going onto the new fork, of course.

Edit: Yes, it's Nicehash. The p2pool user 1Hinenj9woDM8wLDffov4PUwFqqtRg5FRu sends regular payments of 2 to 10 btc to 3FNmBkdxorsjf7PFBf4d5JEYBybMicbDin about two times a week, adding funds from 19XLjEswgycrtbX1TcCJ8grw8zZYvbNaG6 as necessary. The 3FN address belongs to Nicehash, and makes regular payouts to Nicehash customers (examples 1 and 2). The 19XL address first sent money to Nicehash (2 BTC via 3FN) on November 6th, 2016. 19XL appears to be a user of Poloniex, as he frequently gets some money from Poloniex's 2nd-biggest wallet 17A16QmavnUfCW11DAApiJxp7ARnxN5pGX (about 0.2 btc every 2 days).

The p2pool user 1NVoAeLBm3QLRCcezvwUzKYHDLU96dji8x is directly a customer of nicehash, and sells about 18 kSol/s of Equihash hashpower there (about 0.1 btc/day). We can't see via Nicehash's public UI what he buys, but 1NV has sent regular payments of 1.2 to 1.5 btc to 32DdAy7oDoovdxt6egJ2Zoip1sQf25pZs3, which is another Nicehash deposit address.

1Hinen and 1NV have made a ton of money from the recent luck, and have about 30 btc and 6 btc respectively saved up, but they're spending it at a rate of about 20 btc per week. If I'm right and these deposits are being made by a bot, then he can keep it up for about 1.5 weeks with 0% mining luck. If luck remains close to 100% (it has been over 200% recently), then he should be able to hold it up for quite a while, although I presume that Nicehash's fees will run down his balance eventually. That could be a month or two away, though.
full member
Activity: 196
Merit: 100
Have not seen a block party like this in a few years :)

Things are going great!
This kind of hash power should hopefully attract some other permanent miners to join up as well.

Any ideas if its rentals that have taken the hash rate up?
Just thinking if rentals, when those rentals end or how long they can keep it up.
hero member
Activity: 818
Merit: 1006
I wanted to try to connect to your node http://ml.toom.im:9334/, but unfortunately for some reason it did not work. I will wait for the completed code.
leri4 and I talked a little by PM. His issue appears to be DNS related. If you have trouble connecting, try using the IP address, 208.84.223.121.
hero member
Activity: 818
Merit: 1006
It's probably not a trivial "Let's get rid of the clipping!" thing
I treated the problem as a trivial "Let's get rid of the clipping!" thing, and it's working much better now. Share rates equilibrated at around 30 seconds per share after an hour or so, versus the several days at around 2 shares per hour that I was dealing with before. That will make testing and development much easier. Soon, I'll add rejection of shares with timestamps in the future to protect against difficulty manipulation DoS attempts.

I think I've found a good way to solve the replay protection thing and another issue at the same time. I'll check to make sure it will work before mentioning it. It's a little complicated of a solution, but it should be worth it for the side benefits: if I'm right, it will more fairly reward people for transaction fees while also making sure that my chain can't accidentally kill the old chain.
legendary
Activity: 1512
Merit: 1012
Have not seen a block party like this in a few years :)

And this is good, we attract more miners, too ...

hero member
Activity: 818
Merit: 1006
I ran into another bug. This one could be a fatal problem to p2pool if the network hashrate ever falls abruptly and severely (e.g. more than 10x), as it did when I switched over to my new fork. The problem stems from this code:

Code:
timestamp=math.clip(desired_timestamp, (
    (previous_share.timestamp + net.SHARE_PERIOD) - (net.SHARE_PERIOD - 1), # = previous_share.timestamp + 1
    (previous_share.timestamp + net.SHARE_PERIOD) + (net.SHARE_PERIOD - 1),
)) if previous_share is not None else desired_timestamp,

Basically, the share timestamp is allowed to increase by no more than 61 seconds per share. If the hashrate suddenly falls by more than 2x, this means that the average share will take more than 61 seconds of real time and will have its timestamps clipped. Ultimately, this can result in anomalous minimum share difficulty calculations.

What happens is that you get a time backlog, and all shares have timestamps 61 seconds after the previous one regardless of how long they actually took. This means that the next share will have lower difficulty than the previous one no matter what, even if it actually took 1 second. As each share has lower difficulty, but still apparently takes 61 seconds, the estimated pool hashrate drops even further, which causes the difficulty to drop exponentially, until p2pool can no longer process shares as fast as they're submitted. Then everything goes to hell and you start getting 100 DOAs for every valid share. In my case, it seems to have resulted in share difficulties around 600 before things crashed, which meant several shares were being found every second. For comparison, the current difficulty on the legacy chain (the one you guys are all using) is around 4.8 million.

Since we currently have a single miner with about 3/4 of the network hashrate, if that miner chose to leave p2pool all at once, that might be enough to trigger this bug, although probably not as severely as I did.

I'll have to think a bit about how to fix this. It's probably not a trivial "Let's get rid of the clipping!" thing, since the clipping should be there to protect against malicious miners manipulating the timestamps and consequently the share difficulty. I think using something like Bitcoin's rule of not accepting shares more than 2 hours in the future could be good, but that will require that everybody have reasonably accurate clocks. Which might be worth requiring anyway.

Edit: Yeah, I think the right thing to do is probably to simply remove the clipping and also to reject any shares that are timestamped more than maybe 300 seconds in the future. Anyone who has an incorrect clock setting will either see a lot of error messages from rejecting others' shares plus a high share orphan rate if their clock is slow, or simply a high orphan rate if their clock is fast. As mining is basically a timestamping operation, there's a strong case to be made that keeping accurate clocks is a miner's duty.
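A minimal sketch of that proposed rule, with an assumed 300-second window and an illustrative function name (not the fork's actual code):

```python
import time

# Assumed constant: the ~300 s future window suggested above.
MAX_FUTURE_SKEW = 300

def share_timestamp_ok(share_timestamp, now=None):
    """Return True if the share's timestamp is not too far in the future.

    Past timestamps are allowed here; miners with slow clocks instead pay
    for it through rejected/orphaned shares, as described above.
    """
    if now is None:
        now = time.time()
    return share_timestamp <= now + MAX_FUTURE_SKEW

# A share stamped 10 minutes ahead of local time would be rejected;
# one stamped 60 seconds ahead would be accepted.
assert not share_timestamp_ok(1000 + 600, now=1000)
assert share_timestamp_ok(1000 + 60, now=1000)
```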

The death spiral in graphs:



Note how the network traffic increases when the estimated hashrate hits zero, and how peers become unable to maintain a connection.

In order to recover from this bug, I had to restore a backup of the share chain that I made yesterday, so I lost about a day's worth of hashing at around 700 TH/s. :(
hero member
Activity: 496
Merit: 500
You bet
I have never encountered any blocks in person, but this certainly increases my confidence in p2pool :D
legendary
Activity: 1258
Merit: 1027
Have not seen a block party like this in a few years :)
legendary
Activity: 1308
Merit: 1011
I tracked down and (in my working branch) fixed a bug that's worth mentioning. When assigning new stratum jobs to miners, p2pool will guess what difficulty to use (for pseudoshares) based on the node's hashrate. If the node just started, it doesn't know what hashrate to use, and often ends up assigning insanely low difficulty, usually around 4 GH per share. If a substantial amount of miners connect to the node quickly after starting up, the node can get flooded with thousands of shares per second, which will either saturate the CPU or (if you've got less than about 100 kB/s of bandwidth) the network. This can be avoided by making p2pool use the default share difficulty (which is based on the share chain) divided by e.g. 1000 until the node has an estimate for its own hashrate.
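The described fallback can be sketched like this. The function name, the divisor, and the hashes-per-difficulty constant are illustrative assumptions, not p2pool's real identifiers:

```python
# Assumed safety factor: pseudoshares ~1000x easier than real shares.
STARTUP_DIVISOR = 1000

def pseudoshare_difficulty(local_hash_rate, chain_share_difficulty,
                           target_seconds=3.0, hashes_per_difficulty=2**32):
    """Pick a pseudoshare difficulty for new stratum jobs (sketch)."""
    if local_hash_rate is not None and local_hash_rate > 0:
        # Normal case: aim for roughly one pseudoshare per target_seconds
        # at the node's estimated hashrate.
        return local_hash_rate * target_seconds / hashes_per_difficulty
    # Startup case: no hashrate estimate yet. Fall back to the share
    # chain's difficulty divided by a safety factor, rather than an
    # insanely low constant that can flood the node with pseudoshares.
    return chain_share_difficulty / STARTUP_DIVISOR
```

The key design point is that the share chain's difficulty is always available and already tracks the pool's scale, so the fallback can never be absurdly low the way a fixed ~4 GH guess can.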

When this bug occurs, it looks like this:

Code:
2017-04-14 21:30:07.152590 > Traceback (most recent call last):
2017-04-14 21:30:07.152619 >   File "p2pool/util/variable.py", line 74, in set
2017-04-14 21:30:07.152647 >     self.changed.happened(value)
2017-04-14 21:30:07.152664 >   File "p2pool/util/variable.py", line 42, in happened
2017-04-14 21:30:07.152681 >     func(*event)
2017-04-14 21:30:07.152698 >   File "p2pool/work.py", line 130, in
2017-04-14 21:30:07.152715 >     self.node.best_share_var.changed.watch(lambda _: self.new_work_event.happened())
2017-04-14 21:30:07.152732 >   File "p2pool/util/variable.py", line 42, in happened
2017-04-14 21:30:07.152759 >     func(*event)
2017-04-14 21:30:07.152785 > --- ---
2017-04-14 21:30:07.152801 >   File "p2pool/bitcoin/stratum.py", line 38, in _send_work
2017-04-14 21:30:07.152817 >     x, got_response = self.wb.get_work(*self.wb.preprocess_request('' if self.username is None else self.username))
2017-04-14 21:30:07.152835 >   File "p2pool/work.py", line 212, in preprocess_request
2017-04-14 21:30:07.152852 >     raise jsonrpc.Error_for_code(-12345)(u'lost contact with bitcoind')
2017-04-14 21:30:07.152868 > p2pool.util.jsonrpc.NarrowError: -12345 lost contact with bitcoind
...

A similar problem occurs not only when local_hash_rate is None.
You get the same consequences if you mine on an altcoin p2pool for a long time with a low hashrate and then add a lot of power.

You'll see 100% CPU usage and ~100 kB/s of traffic to your miners.
As a result, the miner who joined with the big hashrate sees a huge DOA rate and disconnects.
http://crypto.office-on-the.net:12347/static/
legendary
Activity: 1512
Merit: 1012
Good month at 10 PH/s ... :)



newbie
Activity: 6
Merit: 0
I wanted to try to connect to your node http://ml.toom.im:9334/, but unfortunately for some reason it did not work.
If you want to view the stats, use http://. If you want to actually mine to it, you need to use stratum+tcp://ml.toom.im:9334. Does that help?

Unfortunately not. If I connect the miner directly, there is no connection, but if I connect through the ckproxy, the miner connects and after a minute goes into a reboot, and so on constantly. I see it in the screenshot:
http://uploads.ru/w3xzr.jpg

hero member
Activity: 818
Merit: 1006
I wanted to try to connect to your node http://ml.toom.im:9334/, but unfortunately for some reason it did not work.
If you want to view the stats, use http://. If you want to actually mine to it, you need to use stratum+tcp://ml.toom.im:9334. Does that help?
newbie
Activity: 6
Merit: 0
If you want to watch the mining on my new fork, the nodes running the new fork are these:

http://ml.toom.im:9332/
http://ml.toom.im:9334/
http://ml.toom.im:9336/

First of all, thank you very much for your work. I wanted to try to connect to your node http://ml.toom.im:9334/, but unfortunately for some reason it did not work. I will wait for the completed code.
hero member
Activity: 818
Merit: 1006
tl;dr: My fork is working, and you can review my code now, but it's better not to run it until I'm done making changes.


All of my nodes have been migrated to the 1mb_hardforked branch, and things appear to be working properly. The two nodes that I had mining over the last half day now have 22 and 41 shares mined for :9332 and :9336, respectively, with 0 orphans between them. (Granted, that's on localhost, and the share interval is currently much longer than normal due to the reduced network hashrate on my fork, but it's still promising.)

The current code has one known deficiency: shares on my branch are still considered valid by nodes not on my branch. If my branch overtakes the legacy branch in terms of total work performed, and if these shares are somehow propagated to nodes on the old branch (e.g. via a modified client, or via saved shares in the p2pool/data/bitcoin directory that are loaded on startup after switching code versions), then they will wipe out the old branch's work. I haven't yet added protection against this scenario, but I will soon. Until I add this replay protection, I would prefer that nobody else runs my code so that it's easier for me to make those modifications.

However, if you guys want to start reviewing my almost-complete code and commenting on the commits, you can do so here:

https://github.com/jtoomim/p2pool/commits/1mb_hardforked

If you want to watch the mining on my new fork, the nodes running the new fork are these:

http://ml.toom.im:9332/
http://ml.toom.im:9334/
http://ml.toom.im:9336/

If you wish to mine on the new fork and don't want to wait for me to finalize the code, you may hash to stratum+tcp://ml.toom.im:9332 or :9334. (I prefer to keep :9336 for use on our LAN only, and may close that port in the future.) Keep in mind that any shares mined on this new fork will not be recognized by any other p2pool nodes yet, and will only have a value when blocks get mined on my fork. That said, I personally do not intend to switch my nodes back to the old branch, so it's likely that blocks will get mined.
hero member
Activity: 818
Merit: 1006
Progress update:
I'm getting the following error some of the time on my new fork when my mining node attempts to broadcast a just-mined share to its peers:
Code:
2017-04-14>     assert tx_hash in known_txs, 'tried to broadcast share without knowing all its new transactions'
...

I found the bug. It's a bug that was present before my edits, but rarely triggered without my changes.

P2pool keeps in memory a dictionary of known transactions, and evicts transactions from that dict when they're believed to no longer be needed. The transactions for a share must be in the dict in order for p2pool to be able to broadcast the share. P2pool will keep transactions in that dict if the transaction is present in the block template most recently received from bitcoind. However, the stratum jobs that miners are working on may be significantly older than the most recent block template. When a new block template arrives, p2pool will evict any transactions that are not in the new template within 10 seconds. These evictions will usually be the low-fee transactions at the end of the block, and the old version of p2pool only occasionally assigns those transactions to miners. Looking at the log files from one of my non-upgraded nodes, this happened 4 times out of its last 15 shares, causing a 26% increase in orphan rate over that interval. However, my fork assigns those transactions to miners pretty much every time, so this bug had a much larger impact on my node, causing about ~80% of my shares to fail to propagate.
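The shape of the fix described above can be sketched as follows. All names here (`evict_known_txs`, the `.txids` attribute on jobs) are hypothetical, not p2pool's actual identifiers: the point is that eviction must consider outstanding stratum jobs, not just the newest block template.

```python
# Sketch of template-aware eviction that also protects transactions still
# referenced by stratum jobs miners are working on. Names are illustrative.

def evict_known_txs(known_txs, new_template_txids, outstanding_jobs):
    """Prune known_txs (dict: txid -> raw tx) in place.

    new_template_txids: txids in the most recent block template.
    outstanding_jobs: iterable of job objects, each with a .txids set.
    """
    still_needed = set(new_template_txids)
    for job in outstanding_jobs:
        # Jobs handed to miners may predate the newest template; their
        # transactions must survive until those jobs are retired, or a
        # just-mined share cannot be broadcast to peers.
        still_needed.update(job.txids)
    for txid in list(known_txs):
        if txid not in still_needed:
            del known_txs[txid]
```

Evicting only what neither the current template nor any in-flight job references would close the window in which a low-fee transaction disappears 10 seconds after a new template arrives while a miner is still working on a job that includes it.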

I have a fix in my code now and am currently testing it on a single node. If it works, I'll start cleaning up my code and testing it with multiple nodes mining simultaneously. If that goes well, I'll publish my code and encourage other people to test it or switch permanently.
hero member
Activity: 818
Merit: 1006
Kano, p2pool code needs to support altcoins as well. There's no hardcoded value that will work for all alts and also Bitcoin.