Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 84. (Read 2591919 times)

newbie
Activity: 5
Merit: 0
In Russia we have built three nodes with an overall capacity of 400 TH/s. By the end of June we plan to raise that to 500-700 TH/s.
hero member
Activity: 818
Merit: 1006
I have switched my nodes over to p2pool v16. The current hashrate is approximately 97% v16 over the last hour, and about 76% v16 over the last day. This means that anyone still on v15 will soon have their shares ignored by the rest of the network.
newbie
Activity: 5
Merit: 0
You mean big enough like you currently are? Smiley
Unfortunately, yes.

On that note, I've been meaning to ask other p2poolers what I should do about that, if anything. My nodes currently comprise around 46% of the p2pool hashrate. This means I could perform selfish mining attacks if I wanted to, and if I boosted my hashrate a bit, I could 51% attack the p2pool network and get 100% of the shares. Mostly, this happened because p2pool shrank from 2 PH/s to 0.8 PH/s, not because we grew much. One way of dealing with this issue is that I could take some of my hashrate off of p2pool (and maybe solo mine instead), but that would make p2pool blocks even rarer. (I think my nodes have found the last 4 p2pool blocks.) I could also maybe spread some of my hashrate among other nodes, although this would reduce our revenue. Or we could get more people to mine on p2pool, somehow.

Hello! My node accounts for about 10% of the capacity of the whole p2pool network, and I'm not going to leave it! Everything I acquire, I will keep adding to p2pool.
Since I now have first-hand experience comparing the Kano pool and p2pool, I can safely say: those who believe the fairy tale about the "bad pool" can keep on believing it. Many people simply don't know how to count their money, and I don't understand how they can fail to see this.
Here is a link (Google Translate should help): https://forum.bits.media/index.php?/topic/253-p2pool-detcentralizovannyi-pul/?p=441341

The operators of other pools are very good at pulling the wool over your eyes!

P.S. My English is at Google Translate level, so I apologize.
hero member
Activity: 818
Merit: 1006
Bitcoin Classic 1.1.0 is out. It supports CSV and in my limited testing works fine with p2pool v16.
hero member
Activity: 818
Merit: 1006
Is anyone else having trouble with peer acquisition? vps.forre.st seems to be down.
legendary
Activity: 1308
Merit: 1011
You mean big enough like you currently are? Smiley
Unfortunately, yes.

On that note, I've been meaning to ask other p2poolers what I should do about that, if anything. My nodes currently comprise around 46% of the p2pool hashrate. This means I could perform selfish mining attacks if I wanted to, and if I boosted my hashrate a bit, I could 51% attack the p2pool network and get 100% of the shares. Mostly, this happened because p2pool shrank from 2 PH/s to 0.8 PH/s, not because we grew much. One way of dealing with this issue is that I could take some of my hashrate off of p2pool (and maybe solo mine instead), but that would make p2pool blocks even rarer. (I think my nodes have found the last 4 p2pool blocks.) I could also maybe spread some of my hashrate among other nodes, although this would reduce our revenue. Or we could get more people to mine on p2pool, somehow.
I think you don't have to do anything: neither move to other nodes nor pull your hashrate off p2pool. Everyone who wanted to leave has already gone to ckpool, so a further reduction in hashrate is unlikely. Those who remain will never take their hashpower off p2pool and, most likely, will attract other people to join.
hero member
Activity: 818
Merit: 1006
Unfortunately, Bitcoin Classic has not been updated to support CSV and BIP9 yet.
BitcoinXT release F has been tagged. It has BIP9, BIP68, BIP112, and BIP113 support, plus Xthin blocks. As such, it should work fine even after the CSV fork is activated, and should be compatible with p2pool version 16.

https://www.reddit.com/r/bitcoinxt/comments/4p7pl3/bitcoin_xt_release_f_has_been_tagged/

Keep in mind that this is based on the 0.11 branch, so performance may be sub-par.
hero member
Activity: 818
Merit: 1006
You mean big enough like you currently are? Smiley
Unfortunately, yes.

On that note, I've been meaning to ask other p2poolers what I should do about that, if anything. My nodes currently comprise around 46% of the p2pool hashrate. This means I could perform selfish mining attacks if I wanted to, and if I boosted my hashrate a bit, I could 51% attack the p2pool network and get 100% of the shares. Mostly, this happened because p2pool shrank from 2 PH/s to 0.8 PH/s, not because we grew much. One way of dealing with this issue is that I could take some of my hashrate off of p2pool (and maybe solo mine instead), but that would make p2pool blocks even rarer. (I think my nodes have found the last 4 p2pool blocks.) I could also maybe spread some of my hashrate among other nodes, although this would reduce our revenue. Or we could get more people to mine on p2pool, somehow.
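As a quick sanity check on the last-4-blocks remark: with roughly 46% of the pool's hashrate, a run of four consecutive blocks is not particularly unlikely. A minimal sketch (the 46% figure and run length come from the post above; the arithmetic is just independent Bernoulli trials):

```python
# Probability that a miner with fraction p of the pool's hashrate
# finds k consecutive p2pool blocks: each block is an independent
# Bernoulli trial, so the probability is simply p**k.
p = 0.46   # fraction of pool hashrate (from the post above)
k = 4      # consecutive blocks

print(f"P(finding the next {k} blocks) = {p**k:.1%}")   # prints 4.5%
```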
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
...
If your payout system does indeed pay a 'relatively' constant N diff shares per block, then it at least doesn't have the obvious Prop issue.
My simulation for the naive PPLNS-on-DAG system gave about 1% typical variation in payouts when the number of shares per level of the DAG varied uniformly between 1 and 10. Note that the variation is not predictable: you can't guess how many shares per level will be mined in the future unless you're actively manipulating it (and large enough to do so). For the PPLKD system, your reward will be split over a constant amount of other people's work, so the variation due to changes in M should be basically 0%.
You mean big enough like you currently are? Smiley
hero member
Activity: 818
Merit: 1006
If the "expected reward" per 1 diff share falls over time since the last block, then it is hoppable.
No, we're talking about expected reward given to you by someone else's share. This is a p2pool conversation, not a traditional pool conversation. You have a probability of other people rewarding you in the shares they find after you find your own share. If there are more peer shares that are supposed to reward you, then the size of the reward that each share would give you (should that share also be a block) will be less. The point I have been trying to make is that the expected reward you get for mining a 1-diff share is not dependent on the number of peer shares that your reward is distributed over.

If your payout system does indeed pay a 'relatively' constant N diff shares per block, then it at least doesn't have the obvious Prop issue.
My simulation for the naive PPLNS-on-DAG system gave about 1% typical variation in payouts when the number of shares per level of the DAG varied uniformly between 1 and 10. Note that the variation is not predictable: you can't guess how many shares per level will be mined in the future unless you're actively manipulating it (and large enough to do so). For the PPLKD system, your reward will be split over a constant amount of other people's work, so the variation due to changes in M should be basically 0%.
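A simulation of the kind described can be sketched as below; this is a minimal Monte Carlo version assuming the block reward is split equally over all shares in the last `n_levels` levels of the DAG (the window size and trial count are illustrative, so the figure will not exactly reproduce the ~1% quoted above, but it shrinks roughly as 1/sqrt(n_levels) as the window grows):

```python
import random
import statistics

def payout_variation(n_levels=30, trials=10_000, reward=1.0):
    """Estimate the relative variation in a single share's payout when
    each block's reward is split equally over all shares in the last
    n_levels levels of the DAG, with the number of shares per level
    drawn uniformly from 1..10 (as in the simulation described above)."""
    payouts = []
    for _ in range(trials):
        # total shares in the window: sum of per-level share counts
        m = sum(random.randint(1, 10) for _ in range(n_levels))
        payouts.append(reward / m)   # equal split over m shares
    return statistics.pstdev(payouts) / statistics.mean(payouts)

print(f"relative payout variation: {payout_variation():.1%}")
```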
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
If the "expected reward" per 1 diff share falls over time since the last block, then it is hoppable.
If you understand why Prop is hoppable then the above statement should be obvious.

PPLNS has a 'relatively' constant N diff shares, thus the "expected value" of any 1 diff share at any particular time, after the previous block, does not change.
(this of course isn't true over a diff change, but that's out of the context of the issue being brought up)

I can't make head nor tail of your description, but the above is what's relevant, and I imagine is what smooth is saying.
If your payout system does indeed pay a 'relatively' constant N diff shares per block, then it at least doesn't have the obvious Prop issue.
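The Prop hopping issue alluded to here can be made concrete with a small simulation (illustrative parameters, not p2pool code): under proportional payout, the expected reward of the k-th share in a round falls as k grows, because a late share is guaranteed to be divided over a round that is already large, while under PPLNS with a constant N the expectation does not depend on when the share is submitted.

```python
import random

def prop_expected_reward(k, p=0.001, trials=20_000, reward=1.0):
    """Expected proportional payout of the k-th share in a round, where
    each share solves the block with probability p (so the number of
    additional shares after the k-th is geometric) and the reward is
    split equally over all n shares of the round."""
    total = 0.0
    for _ in range(trials):
        n = k                       # k shares already exist, including yours
        while random.random() > p:  # keep mining until the block is found
            n += 1
        total += reward / n         # proportional split over the whole round
    return total / trials

# Early shares are worth more than late ones under Prop -- that's the hop.
early = prop_expected_reward(k=1)
late = prop_expected_reward(k=2000)
print(early > late)   # True: mining early in a round pays better
```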
hero member
Activity: 818
Merit: 1006
I already explained that the expected value of the next hash attempt is lower if M is higher and if payouts are divided equally* between all accepted shares a depth N (M, a variable).
As far as I can tell from what you've written, you're objecting that the amount of payout for you per peer's share decreases with the number of shares that go into the last N levels of the DAG. This is true, and is not a problem. What I'm not sure about is whether you're claiming that the sum of the expected payouts for you over all of the peers' shares in the last N levels will be lower if that number of shares decreases. If that's not your claim, then the only other claim that I think you might be making is that the sum of expected payouts is perturbed not by the static level of M/N, but by the rate of change of M/N. Is your objection one of those two? If so, which?

This should be obvious given that the block reward is (approximately, ignoring tx fees) a constant and M is not, but apparently it is not obvious.
The size of each block reward is a constant, but the expected number of block rewards per N levels is not. The number of block rewards per N levels is proportional to M.
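The two effects cancel exactly, which is the crux of the argument; a toy calculation makes it explicit (the block reward and per-share block probability are made-up illustrative numbers):

```python
import math

def expected_payout_per_share(m, block_reward=25.0, p_block=1e-4):
    """Expected payout accruing to one of your shares over a window of m
    peer shares: each peer share is a block with probability p_block, and
    each block splits its reward equally over the m shares in the window.
    The m cancels, so the result does not depend on m."""
    reward_per_block = block_reward / m   # smaller when m is larger...
    expected_blocks = p_block * m         # ...but more chances to be paid
    return reward_per_block * expected_blocks

assert math.isclose(expected_payout_per_share(100),
                    expected_payout_per_share(1000))
```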
legendary
Activity: 2968
Merit: 1198
Either way, I'm pretty sure you are incorrect
If you can specify how you think the system is flawed, I'm all ears. However, simply stating that you think it's flawed without going into any detail or even demonstrating that you understand the system is not very helpful.

I already explained that the expected value of the next hash attempt is lower if M is higher and if payouts are divided equally* between all accepted shares a depth N (M, a variable). This should be obvious given that the block reward is (approximately, ignoring tx fees) a constant and M is not, but apparently it is not obvious.

If I have misstated something in the above paragraph please let me know because it is still possible I misunderstand your proposal.

* assuming constant difficulty.
legendary
Activity: 1308
Merit: 1011
Thanks Forrest!
In version 16.0, issue #253 (affecting operation under PyPy) is gone. This version works correctly and is the fastest.
full member
Activity: 255
Merit: 102
uBlock.it Admin
Unfortunately, Bitcoin Classic has not been updated to support CSV and BIP9 yet. Consequently, it will be inadvisable to mine with it after the CSV fork activates. If you try, you will probably have any blocks that you mine get orphaned. This will make other p2pool users unhappy, as you will basically be freeloading off the others in the pool. So please don't.

You can still vote for the Classic BIP109 2 MB hardfork without running Classic. You can do so quite easily in p2pool by editing the source code to bitwise OR the block version with 0x30000000. I will be doing this myself soon. Something like this should work:

    version=max(self.current_work.value['version'] | 0x30000000, 0x30000000),

If the BIP109 vote exceeds about 10-15% of the network hashrate, or if a large miner (e.g. Bitfury or Bitmain) asks me to, I will take a week and merge in CSV, BIP9, and whatever else needs merging into Classic, if Gavin and Zander haven't beaten me to it. In the meantime, I will keep voting for BIP109 but using Core, similar to what f2pool does.

Makes sense; hopefully Classic will be updated soon. I wasn't looking forward to re-building bitcoind, but it looks like it's necessary.
Thanks!
legendary
Activity: 1258
Merit: 1027
P2Pool release 16.0 - commit hash: d3bbd6df33ccedfc15cf941e7ebc6baf49567f97

HARDFORK - Upgrade URGENTLY required in the next few days.

Windows binary: http://u.forre.st/u/wanckfqm/p2pool_win32_16.0.zip
Windows binary signature: http://u.forre.st/u/wqjnuihh/p2pool_win32_16.0.zip.sig
Source zipball: https://github.com/forrestv/p2pool/zipball/16.0
Source tarball: https://github.com/forrestv/p2pool/tarball/16.0

Changes:
* CSV compatibility
* Requires Bitcoin >=0.12.1

Several BIPs will take effect in the next few days, and in order for P2Pool to continue working without producing invalid blocks, everyone needs to upgrade. Once 50% of our hashrate has upgraded, P2Pool instances will start displaying a warning saying that an upgrade is required. Reaching that point as quickly as possible is very important. Then, at 95%, users that have not upgraded will be excluded. If non-upgraded users aren't excluded before the Bitcoin changes take effect, P2Pool users will be subject to paying other users for invalid work - effectively a withholding attack.

So, please upgrade to 16.0 now and also tell everyone else to.

Thanks Forrest!

CoinCadence is now running P2Pool 16
hero member
Activity: 818
Merit: 1006
Unfortunately, Bitcoin Classic has not been updated to support CSV and BIP9 yet. Consequently, it will be inadvisable to mine with it after the CSV fork activates. If you try, you will probably have any blocks that you mine get orphaned. This will make other p2pool users unhappy, as you will basically be freeloading off the others in the pool. So please don't.

You can still vote for the Classic BIP109 2 MB hardfork without running Classic. You can do so quite easily in p2pool by editing the source code to bitwise OR the block version with 0x30000000. I will be doing this myself soon. Something like this should work:

    version=max(self.current_work.value['version'] | 0x30000000, 0x30000000),

If the BIP109 vote exceeds about 10-15% of the network hashrate, or if a large miner (e.g. Bitfury or Bitmain) asks me to, I will take a week and merge in CSV, BIP9, and whatever else needs merging into Classic, if Gavin and Zander haven't beaten me to it. In the meantime, I will keep voting for BIP109 but using Core, similar to what f2pool does.
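For anyone puzzling over the bitmask in the snippet above: the constant combines the BIP9 version-bits base (0x20000000) with what is, as far as I understand, the Classic BIP109 signaling bit (0x10000000, bit 28). The decomposition below is my reading, not p2pool source:

```python
BASE_VERSION = 0x20000000  # BIP9 "version bits" base marker
BIP109_BIT   = 0x10000000  # bit 28: the Classic 2 MB (BIP109) signal
MASK = BASE_VERSION | BIP109_BIT   # == 0x30000000, the constant used above

def vote_bip109(version):
    """OR the BIP109 signaling bits into a block version,
    as the snippet above does."""
    return max(version | MASK, MASK)

print(hex(vote_bip109(0x20000000)))   # prints 0x30000000
```

Note that for any non-negative version, `version | MASK` is already >= MASK, so the outer `max()` is redundant; it only guards against a negative or malformed version value.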
member
Activity: 107
Merit: 10
Classic isn't supported because it doesn't support the CSV softfork yet. Since CSV will almost certainly reach 95% hashpower signaling before the next difficulty retarget, it will be enforced from the start of the retarget period after the next one (in about two weeks). Unless Classic adopts CSV, miners using it may have their blocks rejected.

Note that simply changing the share version to get them accepted by the P2Pool network is an attack after CSV is enforced.
full member
Activity: 255
Merit: 102
uBlock.it Admin
I'm running Classic 0.12.1 and I get
"Coin daemon too old! Upgrade!"
unless I replace helper.py with the one from version 15.
Was this tested with Classic?
I don't believe Classic is signaling CSV or BIP9; they just removed the upgrade warning from the code in 0.12.1.
hero member
Activity: 516
Merit: 643
I suppose I'll just dirty up the code a bit...

It should run fine with Bitcoin Classic...

EDIT: I don't really know. I assumed Classic would signal support for the upcoming softforks, but I'm not sure.