Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 22. (Read 2591916 times)

hero member
Activity: 818
Merit: 1006
htle0006, we've had influxes of 15 PH/s from Nicehash renters in the past, and that worked okay. Mining from Nicehash is not recommended with p2pool, since the high latency added by Nicehash will increase DOA rates for whoever is doing the Nicehash mining. I know of some changes I can make to the code to reduce this problem, but I haven't had the time to implement them yet.

If someone were to jump in with 150 PH/s of miners that they physically control, I think it would work fine. There is currently a mechanism in place that increases the difficulty of shares submitted by a single node, aiming to limit any one node to about 1/30th of the pool's total shares. While the code does not fully achieve that goal, it does dramatically increase the share difficulty for large miners. I haven't done the math for 150 PH/s specifically, but my intuition says that the total variance experienced by small miners would be much less with 150 PH/s total hashrate than with 1.5 PH/s total hashrate. Let me know if you want me to try to do the math to model it. It would probably take an hour or two to do that.

Right now, about 95% of the total reward variance comes from the infrequency with which p2pool finds a block. If we got to 150 PH/s, we'd find multiple blocks in a day, and that variance would go way down, but the variance due to the difficulty of finding a share would go up a bit. My intuition says that with 150 PH/s, variance for small miners would be down to about 20% of what it is currently.
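As a rough illustration of the kind of model I have in mind, here's a quick Monte Carlo sketch. The network hashrate, share interval, PPLNS window size, block subsidy, and the 10 TH/s example miner are placeholder assumptions rather than numbers from this thread, and it ignores the per-node difficulty bump described above:

Code:
# Rough Monte Carlo sketch of small-miner daily revenue variance vs. pool size.
# Placeholder assumptions: ~10 EH/s network hashrate, 30 s share interval,
# 8640-share PPLNS window, 12.5 BTC subsidy, a 10 TH/s miner.
import numpy as np

NETWORK_HASH = 10e18      # H/s, assumed
BLOCKS_PER_DAY = 144
PPLNS_WINDOW = 8640       # shares in the payout window (~3 days at 30 s/share)
REWARD = 12.5             # BTC per block, fees ignored

def daily_revenue(pool_hash, miner_hash, days=50000, seed=0):
    rng = np.random.default_rng(seed)
    exp_blocks = BLOCKS_PER_DAY * pool_hash / NETWORK_HASH   # pool blocks per day
    exp_window = miner_hash / pool_hash * PPLNS_WINDOW       # miner's shares in window
    revenue = np.zeros(days)
    for d, n_blocks in enumerate(rng.poisson(exp_blocks, size=days)):
        if n_blocks:
            # Each block pays the miner their fraction of the share chain window.
            fractions = rng.poisson(exp_window, size=n_blocks) / PPLNS_WINDOW
            revenue[d] = REWARD * fractions.sum()
    return revenue

for pool in (1.5e15, 150e15):
    r = daily_revenue(pool, miner_hash=10e12)
    print("pool %5.1f PH/s: mean %.6f BTC/day, relative std %.2f"
          % (pool / 1e15, r.mean(), r.std() / r.mean()))

With these made-up inputs, the relative standard deviation of daily revenue at 150 PH/s comes out at roughly one fifth of the 1.5 PH/s case, which is consistent with the intuition above.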

In terms of performance scalability, the code should be able to handle 150 PH/s just fine, especially if it's only a small number of mining addresses. The major scalability problems in p2pool are currently mostly with transaction volume and node count. A minor scalability problem is the number of unique mining addresses using a single node. The number of actual pieces of mining hardware (e.g. S9s) on a single node is basically irrelevant, as handing out copies of a single cached stratum job to separate machines is effectively free. As share difficulty scales with hashrate, the number of shares found would also not be an issue.
sr. member
Activity: 351
Merit: 410
Anyway, Blue Bear, I made some changes a week ago that improve the CPU usage of jtoomimnet. It's possible that you might be able to use it on CPython now without pegging your CPU at 100%. I would be interested to know whether that's the case. If it's not, then I have some other changes in mind that could help some more. If you're not too busy, could you give it a try and let me know how it goes?

https://github.com/jtoomim/p2pool/tree/1mb_segwit

I'm not Blue Bear, but for what it's worth, I'd like to say that commit f223000 managed to lower 1mb_segwit's memory requirements to the point where I'm now able to run my 1mb_segwit node with PyPy on a system with only 3.75 GB of RAM, and still have enough memory left to comfortably allocate 700 MB to Bitcoin Core's UTXO cache. In the last five days of continuous running, my 1mb_segwit node's memory consumption never exceeded 1.50 GB, mostly settling at the 1.45 GB mark. That's equivalent to my old lowmem node being run with CPython.

And what a difference PyPy makes. There has been a remarkable increase in performance across the board, most noticeable in the much lower orphan rate (3.5% on PyPy compared to roughly 7% on CPython) and in the roughly 0.2 s reduction in average GetBlockTemplate latencies. The only limiting factor in my P2Pool setup now is my AvalonMiner 741s. If it weren't for their sluggishness in keeping up with P2Pool's frequent work restarts, I'd most probably be enjoying low single-digit DOA percentages, instead of the 10%-15% I'm currently stuck with (although, to be fair, that is still a reasonably healthy 5% decrease compared to running 1mb_segwit with CPython).
hero member
Activity: 818
Merit: 1006
Nobody is arguing for a centralized pool.

I want p2pool to be profitable enough that people will want to mine on it. I believe that if p2pool is more profitable than most pools and also easy to use, then we will get more people using it and will see an increase in node counts and miner counts. To that end, I have been working hard to improve expected revenue and fairness, and to reduce HW requirements by improving CPU, RAM, and (soonish) network efficiency.

Veqtrus wants p2pool to sacrifice profitability in order to reduce worst-case bandwidth requirements. He thinks that p2pool will work better if p2pool can be used (albeit with lower revenue) during adversarial conditions even on very slow internet connections, such as 1st gen ADSL.

During typical usage, my code uses about 18% more bandwidth than p2pool master (33.4 kB/s vs 28.3 kB/s, sum of UL and DL, averaged over the last week, from http://crypto.mine.nu:9332/static/classic/graphs.html?Week and http://crypto.mine.nu:9330/static/classic/graphs.html?Week), and gets about 10% to 20% more revenue. I think that's a pretty awesome tradeoff.

Anyway, Blue Bear, I made some changes a week ago that improve the CPU usage of jtoomimnet. It's possible that you might be able to use it on CPython now without pegging your CPU at 100%. I would be interested to know whether that's the case. If it's not, then I have some other changes in mind that could help some more. If you're not too busy, could you give it a try and let me know how it goes?

https://github.com/jtoomim/p2pool/tree/1mb_segwit
newbie
Activity: 31
Merit: 0
As I understand it, what makes p2pool DoS resistant is the fact that there are many nodes running. To be effective, a DoS attack would have to overwhelm most of the nodes simultaneously. An attacker would have to determine the most effective nodes to attack to disrupt the network. If the network is adaptive enough, it will reach out to link with other nodes that are not under attack, maintaining the integrity of the network. A centralized pool, which only has one or two access points, would be easily overcome in a DoS attack because the attack can be concentrated on those limited access points. The more spread out things are, the harder they are to disrupt. So arguing that consolidation is the best option is a faulty tactic, as demonstrated by the Japanese at Pearl Harbor. If you both want to consolidate under your banners, you're both whacked. The more nodes running the better.

I think you are mostly arguing semantics.

Wake up and smell the coffee. If people here wanted to be in a centralized pool, they would already be in one.

BB
newbie
Activity: 31
Merit: 0
at least you two are being polite ...
hero member
Activity: 818
Merit: 1006
By the way, I metered one of my bitcoind processes. With 8 peers, I'm getting around 2.2 kB/s up, 3.4 kB/s down. It might be possible to keep a Bitcoin full node synced with just a 56k modem. If anyone else is curious: sudo iftop -B -f "port 8333". That assumes all of your peers are on that port for either the source or the destination, as mine are. You can verify that manually with bitcoin-cli getpeerinfo. This is with btc1 1.14.5.
The network is actually surprisingly quiet at the moment with mempools actually draining close to empty. This is very different to what happens during spam transaction floods. Best to plan for the worst case scenario.
The numbers I collected appear to correspond to about 3 tps. The biggest spike I see on https://blockchain.info/charts/transactions-per-second?timespan=1year is 22 tps. Scaling linearly, that would predict 16 kB/s up, 25 kB/s down during the peak of a spam attack for bitcoind. Given that watching a 240p video on YouTube uses around 87.5 kB/s (or 250 kB/s for 480p), I really don't think it's significant. If your internet connection is fast enough to watch movies on YouTube, then you can run p2pool and bitcoind under adversarial conditions.
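Spelling out that extrapolation, using only the numbers quoted above:

Code:
# Linear extrapolation of bitcoind bandwidth from the ~3 tps baseline measured
# above (2.2 kB/s up, 3.4 kB/s down with 8 peers) to the 22 tps historical peak.
baseline_tps = 3.0
up_kBps, down_kBps = 2.2, 3.4
peak_tps = 22.0

scale = peak_tps / baseline_tps
print("predicted upload during a spam peak:   %.1f kB/s" % (up_kBps * scale))
print("predicted download during a spam peak: %.1f kB/s" % (down_kBps * scale))
# Roughly 16 kB/s up and 25 kB/s down -- well under the ~87.5 kB/s of a 240p stream.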
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
By the way, I metered one of my bitcoind processes. With 8 peers, I'm getting around 2.2 kB/s up, 3.4 kB/s down. It might be possible to keep a Bitcoin full node synced with just a 56k modem. If anyone else is curious: sudo iftop -B -f "port 8333". That assumes all of your peers are on that port for either the source or the destination, as mine are. You can verify that manually with bitcoin-cli getpeerinfo. This is with btc1 1.14.5.
The network is actually surprisingly quiet at the moment with mempools actually draining close to empty. This is very different to what happens during spam transaction floods. Best to plan for the worst case scenario.
hero member
Activity: 818
Merit: 1006
I'm just trying to figure out what the min HW spec you envision is. You haven't said it yourself, so I'm reduced to guessing.

I've been clear about the HW min spec that I envision. My assumption is that almost nobody is going to try to run p2pool with < 1 Mbit/s of available bandwidth. My assumption is that anybody who tries to run p2pool will have at least 4 Mbit/s of downstream bandwidth and 1 Mbit/s of upstream bandwidth, and that it is okay to use up to 0.5 Mbit/s sustained in each direction during normal operation. My guess is that most users will have about 10x that amount of bandwidth available.

My conclusion is that it's not a good idea to cripple p2pool's revenue capability for everyone just in case someone wants to use p2pool without meeting that reasonable HW minimum.

By the way, I metered one of my bitcoind processes. With 8 peers, I'm getting around 2.2 kB/s up, 3.4 kB/s down. It might be possible to keep a Bitcoin full node synced with just a 56k modem. If anyone else is curious: sudo iftop -B -f "port 8333". That assumes all of your peers are on that port for either the source or the destination, as mine are. You can verify that manually with bitcoin-cli getpeerinfo. This is with btc1 1.14.5.

Quote
let alone using the internet for other purposes.
I don't think anybody should be trying to use the internet nowadays with ISDN or <1Mbps ADSL in the first place. 4 Mbps is about what I consider to be the min HW spec for browsing the internet. Since jtoomimnet p2pool would only use 4 Mbit/s during adversarial conditions for a second or two at a time, I don't think it would significantly hamper the user experience for browsing the web.
member
Activity: 107
Merit: 10
You are assuming that a user's connection will be dedicated to p2pool, where at least bitcoind is required for p2pool to even function, let alone using the internet for other purposes.
hero member
Activity: 818
Merit: 1006
veqtrus, is it your opinion that people should be able to mine on p2pool via ISDN?
No, 4 Mbps upload bandwidth is nowhere close to ISDN.
ISDN is 16 kB/s symmetric. A 50-100 kB share under adversarial conditions would take around 3-7 seconds to transfer over ISDN. Your advocacy of a 100 kB share limit seems to me like you're trying to protect people mining with ISDN, or maybe 1st-gen ADSL. You know, the speeds we had in 1999.

By the way, the relevant metric for adversarial conditions is download bandwidth, not upload.

You don't have to upload other people's shares in a timely fashion. You only need to upload your own shares quickly. Adversarial conditions don't apply to the shares you upload yourself. The existing code will send the transactions in a getblocktemplate request to peers as soon as you start mining on them. The peer will usually have received all of those transactions before you find the share, so you usually only have to upload the hashes for those transactions. For a share that adds 100 kB of transactions, that's expected to be around 8 kB of hashes. Again, ISDN spec.

However, an adversarial entity could modify their p2pool code to NOT send transactions out in advance. Doing so would substantially increase their orphan rate, of course, but let's say they want to do it anyway for selfish mining purposes. In this case, they could make a share with 100 kB of completely new transactions, and they would make you download that 100+8 kB ASAP so you could switch to mining on top of it. Until you get their share from them, your mining would be competing with theirs, and you would possibly end up orphaning their share (or they yours). That might take 650 ms if you have 1.5 Mbps ADSL, or 6.75 seconds if you have ISDN (suboptimal, but potentially tolerable for short periods).
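For anyone who wants to check those figures, here's the back-of-the-envelope arithmetic. The ~400-byte average transaction size is my assumption (only the 100 kB and ~8 kB totals are stated above), and the transfer times are raw, ignoring protocol overhead, so the ADSL figure comes out a touch under the 650 ms quoted:

Code:
# Back-of-the-envelope numbers for pulling another node's share.
# Assumption (mine): ~400 bytes per transaction and 32-byte txids.
TXID_BYTES = 32
AVG_TX_BYTES = 400

def hash_list_kB(tx_kB):
    # Size of the txid list for a share whose transactions the peer already has.
    return tx_kB * 1000.0 / AVG_TX_BYTES * TXID_BYTES / 1000.0

def transfer_seconds(kB, link_kbit_per_s):
    # Raw transfer time, ignoring protocol overhead and latency.
    return kB * 8.0 / link_kbit_per_s

print("hash list for 100 kB of known txs: ~%.0f kB" % hash_list_kB(100))  # ~8 kB
worst_case_kB = 100 + 8  # adversarial share: full transactions plus the hash list
for name, kbps in (("ISDN, 128 kbit/s", 128.0), ("1.5 Mbit/s ADSL", 1500.0)):
    print("%s: %.2f s" % (name, transfer_seconds(worst_case_kB, kbps)))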
member
Activity: 107
Merit: 10
With the way you're stubbornly blocking reasonable solutions from being implemented in mainnet P2Pool — solutions that arguably go a long way in solving many of P2Pool's teething problems, with benefits that far outweigh the risks, and where even kano (who has more to gain from mainnet P2Pool remaining in its current artificially anemic state, since he can all the more persuade current and potential mainnet P2Pool miners to mine with his pool instead) agrees that it would be for P2Pool's greater good — mainnet P2Pool has become your private pool, veqtrus.
I'm not in control of the main p2pool repo. ForrestV can merge what he wants. It is not my job to tell him to merge stuff I don't agree with.
member
Activity: 107
Merit: 10
veqtrus, is it your opinion that people should be able to mine on p2pool via ISDN?
No, 4 Mbps upload bandwidth is nowhere close to ISDN.
sr. member
Activity: 351
Merit: 410
I consider 4 Mbps to be a reasonable minimum HW spec for p2pool
Yes, this is indeed a reasonable spec for a private pool.

With the way you're stubbornly blocking reasonable solutions from being implemented in mainnet P2Pool — solutions that arguably go a long way in solving many of P2Pool's teething problems, with benefits that far outweigh the risks, and where even kano (who has more to gain from mainnet P2Pool remaining in its current artificially anemic state, since he can all the more persuade current and potential mainnet P2Pool miners to mine with his pool instead) agrees that it would be for P2Pool's greater good — mainnet P2Pool has become your private pool, veqtrus.
newbie
Activity: 10
Merit: 0
In C there is only this folder

https://en.bitcoin.it/wiki/Data_directory

bitcoin.conf must be in the blockchain data directory, where you also have the blocks directory (the local blockchain).

The blockchain directory has wallet.dat, adresses.dat, banlist.dat, etc. ... and bitcoin.conf, which you must create (it is not provided by the Bitcoin Core setup).
I created the file in the main folder; I will check whether these files are in a subfolder. Now I have to use another computer for 2 days or maybe for months Cry, so I have to redownload the blockchain and everything on the computer that I actually use. When I finish I will try again Smiley
hero member
Activity: 818
Merit: 1006
veqtrus, is it your opinion that people should be able to mine on p2pool via ISDN?

Even if that makes p2pool economically unviable for everybody who has something better than ISDN (who can get better profits on a private pool) as well as everyone who *does* have ISDN (who can get better profits with less bandwidth on a private pool)?
member
Activity: 107
Merit: 10
I consider 4 Mbps to be a reasonable minimum HW spec for p2pool
Yes, this is indeed a reasonable spec for a private pool.
hero member
Activity: 818
Merit: 1006
It seems you're using the word "share" to mean "portion", but in mining "share" means something different. A share is a data object used to prove that a miner has done some mining work. Mining for shares is the same as mining for blocks with the exception that blocks have a more difficult threshold for the hash. With p2pool right now, it is 196,426 times as difficult to find a Bitcoin block as it is to find a p2pool share.

Some of your hashpower isn't counted as part of the net (or "good") hashpower. Some of everyone else's hashpower isn't counted as part of the good hashpower either. Your percentage equals your good hashpower divided by the pool's total good hashpower.

If a share has been found by another p2pool miner and received by your p2pool node, then any hashes that your hardware does on the old share are considered DOA (dead on arrival) hashes. If you find a share at around the same time as another miner finds a share (both using the same parent share), then only one of those two shares can be included in the share chain, and the other becomes an orphan share. The share chain is what is used to calculate your portion of the block reward. If there are 8640 shares in the share chain, all with the same difficulty, and you have 86 shares in the share chain at the moment a block is found, you would get 1% of the block reward.

On the other hand, the hashpower that is used for making blocks is not simply the good hashpower, but all hashpower. If a person finds a DOA share that also meets the Bitcoin block threshold, it still is a valid block even though it isn't a good share, and everyone in p2pool will get paid as a result of it. Same thing for orphan shares -- if a share meets the block difficulty threshold, it will still give payouts even if it doesn't make it into the share chain.
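To put that payout example in code form (a minimal sketch; the 12.5 BTC subsidy is just illustrative and transaction fees are ignored):

Code:
# Minimal sketch of the share-chain payout split described above,
# assuming every share in the chain has equal difficulty.
CHAIN_LENGTH = 8640
BLOCK_REWARD = 12.5   # BTC, illustrative; fees ignored

def payout(my_shares, chain_length=CHAIN_LENGTH, reward=BLOCK_REWARD):
    # Your payout from one block is your fraction of the share chain window.
    return reward * my_shares / chain_length

print(payout(86))     # ~0.124 BTC, i.e. about 1% of the block reward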
newbie
Activity: 34
Merit: 0
I want to know how I calculate my share of the pool. From what I understand, if I am contributing 15 GH/s and the pool has, for example, 60 GH/s of hashing power, I am paid based on the percentage of my computing power relative to the total pool. However, I see both a pool hashing power and a "net" hashing power, and the net hashing power is bigger than the pool hashing power. Do I divide my hashing power by the pool's hashing power or by the net hashing power?
hero member
Activity: 818
Merit: 1006
Both bitcoind and p2pool read the same configuration file. If you change the ~/.bitcoin/bitcoin.conf file and restart p2pool, then p2pool will be using the new credentials but bitcoind will still be using the old ones. Try restarting bitcoind.
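If you want a quick way to check which credentials the running bitcoind actually accepts, something like this works (the host, port, and rpcuser/rpcpassword are placeholders for your own setup; a 401 response means bitcoind is still running with different credentials than the ones you're sending):

Code:
# Quick check of which RPC credentials the running bitcoind actually accepts.
# Host, port, and the rpcuser/rpcpassword below are placeholders.
import requests

resp = requests.post(
    "http://127.0.0.1:8332",
    auth=("meme", "meme"),   # rpcuser / rpcpassword from your bitcoin.conf
    json={"jsonrpc": "1.0", "id": "check", "method": "getblockcount", "params": []},
)
print(resp.status_code)      # 200 if the credentials match, 401 if they don't
if resp.ok:
    print(resp.json())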
newbie
Activity: 1
Merit: 0
Could I have some help? Blockchain is sync'd, bitcoind is turned on, and this error occurs. (Ports are already open.) https://uwu.s-ul.eu/BOcxTgk8.png

Edit: Now I'm getting 401 Unauthorized. Tried to edit conf file. Here are contents:

Code:
server=1
rpcuser=meme
rpcpassword=meme
rpcallowip=192.168.*