Author

Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 200. (Read 2591920 times)

hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
pour more hash in & be luckier.

No it won't. It'll make it worse because of the scalability problem. More hash power makes the problem worse, not better  Sad

No bounty = no development = no fix.

I actually pointed a few S3's at my node because the rise in BTC made it worth a gamble, but if the hash rate gets too high I'll pull them off again - it's a crazy situation  Tongue
legendary
Activity: 1500
Merit: 1002
Mine Mine Mine
pour more hash in & be luckier.
legendary
Activity: 3164
Merit: 2258
I fix broken miners. And make holes in teeth :-)
Luck is very high this week ...  Cheesy
http://minefast.coincadence.com/p2pool-stats.php

It is, finding a block 1.5 hours after the previous when expected time was over a day is a nice boost Smiley

2015-03-10 09:11:03    346974 2,930.23%   
2015-03-10 07:40:31    346962 179.39%   
Yup. Normal variance. Luck is luck.

But it is nice. :-) Edit: We found another one? Oh yeah, this is good times. Remember it when we go a few days with no blocks, the manna will return.
legendary
Activity: 1258
Merit: 1027
Luck is very high this week ...  Cheesy
http://minefast.coincadence.com/p2pool-stats.php

It is, finding a block 1.5 hours after the previous when expected time was over a day is a nice boost Smiley

2015-03-10 09:11:03    346974 2,930.23%   
2015-03-10 07:40:31    346962 179.39%   
legendary
Activity: 1512
Merit: 1012
hero member
Activity: 532
Merit: 500
TaaS is a closed-end fund designated to blockchain
Just my mind dump:

In my mind, p2pool needs to see a block every two to three days to stay alive...  Beyond three days, work is completely discarded.

I'm a noob, but correct me if I'm wrong.   There's a finite number of blocks discovered, and if P2Pool doesn't have the hash they will be solved by other pools... so "luck" is disproportionately lost as the hash goes down. 

Slush is running at 10-11PH, and yesterday with over 200% luck it found 8 blocks.  100% means 4 blocks a day at 10 PH/s.

so if we're running at 2.5PH/s at 100% luck.. 1 per day might be expected but may not be realistic, as larger pools could steal them away before they are solved, no?

Watching rented hash or home grown hash fall off the share list after 3 days is a deal killer.   P2Pool will not continue to retain hash if multiple 4 & 5 day intervals occur on a regular basis.

Seems like P2Pool needs 2-3PH/s minimum  to be viable with the current software state and difficulty.

When I log in and see 1-1.5PH/s I think the 7 day delays are going to continue popping up.

Can someone tell me if my thinking is screwed up?

Well, ANY pool needs to have at least 1% of the total hashrate for the luck factor to even out, which today is about 3.2 PH.

Having said that, P2Pool has the potential to become quite a big mining pool.

It will probably require some more active development.

Regards
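The expected-blocks arithmetic in the question above can be checked with a quick back-of-the-envelope calculation. This is only a sketch; the ~47G difficulty figure is an assumption roughly matching the March 2015 network:

```python
# Back-of-the-envelope expected blocks per day for a pool.
# 47e9 is an assumed, roughly March-2015 network difficulty.
def expected_blocks_per_day(pool_hashrate, difficulty):
    # A block takes on average difficulty * 2**32 hashes to find.
    hashes_per_block = difficulty * 2**32
    return pool_hashrate * 86400 / hashes_per_block

print(expected_blocks_per_day(10e15, 47e9))   # Slush at 10 PH/s -> ~4.3/day
print(expected_blocks_per_day(2.5e15, 47e9))  # p2pool at 2.5 PH/s -> ~1.1/day
```

Note that larger pools can't "steal" blocks from smaller ones: each hash attempt is independent, so a pool's expected block count scales linearly with its hashrate regardless of who else is mining. The pain is variance around that expectation, not lost luck.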
newbie
Activity: 58
Merit: 0
Just translating p2pool as it exists into C, C++, C#, Objective C, Swift, Java, Perl, Lisp, whatever, is not going to suddenly bring the mining masses here.  I know a number of us have listed out the problems previously, and a number of us have had discussions on potential solutions to those problems; however, nobody has been able to successfully crack the nut.  Maybe the problem is that we're all trying to solve p2pool's problems while keeping ourselves in the context of the existing structure.  Let's try another approach.  Forget p2pool even exists.

We as a community want a new pool to be created.  Here is a list of high-level bullet points we would want to have in our new pool:

  • Decentralized
  • Easy to use
  • Performant
  • Scalable
  • Inclusive

Please feel free to add to these points and/or provide more details/solutions.

This is exactly what I'm suggesting JB - a complete re-write. But in order to achieve this, I think we should do the donation bounty option from within the current p2pool structure, as it's (I think) an excellent way of raising awareness in the community as well as raising funds, while the amount of work involved in making the changes is minimal & simple.

Nothing happening with this then I presume?  Sad

Stagnation is a killer.

I've been continuing to noodle on it.

Was at the MIT BTC expo this past weekend and was able to bring up the issues with some pretty smart folks.

The consensus seemed to lean toward addressing the payment threshold and scheme as the key issue, the rest can be addressed by a rewrite in a language like C++.

The challenge is to come up with a method to accumulate payouts for smaller miners within the pool, in a decentralized and trust-less way, so that they can be paid out at some threshold above what is considered dust.

The often proposed solution was to centralize the dust payments and pay out from a trusted party. This is essentially what Nasty Pool has done, and it's a great service for smaller miners; however, the fact that it requires trust would take away a lot of what P2Pool has to offer, and I don't consider it a long-term solution.

I'm in touch with a couple of the folks I met, including a dev who has the chops to pull it off (with an attractive bounty), however he is just as stuck as we are until we can come up with a solution for accumulating smaller payouts in a trust-less way...


Damn! I wanted to go to that conference and completely forgot when the day came.  Was hoping to meet some like minds and discuss p2pool... Shame I missed out.

Anyway, I have some programming experience and would be interested in assisting with development and fixes where I can.  Though I am just as stuck on actually coming up with a solution to these issues.  Here are some crazy ideas...

What if the share-chain played a less important role in deciding proof of work?  Currently something like 10% of the shares are discarded anyway, and sometimes those discarded shares are even valid bitcoin blocks!  While we would need the sharechain to keep track of the historical proof of work for the miners on the pool, we should find a way to make the pool's sidechain play a less critical role in the race to decide valid work.  Right now it functions as a microcosm of the greater bitcoin mining space: the share chain acts like the main chain, except valid work on the sharechain mostly counts only toward the sharechain.  That should change.  The sharechain should be something on the side and not the main focus; it should be just for maintaining consensus, not deciding miners' work.

Instead of what we do now, what if there was a window in which valid work/shares could be accepted? This would allow more than one tip of the chain to exist at a time, but as long as the gap between the heights of the multiple chains isn't too large, we can keep consensus. There could be a rule so that if the desync between the chains gets too large, shares are rejected until total consensus is restored.  I am not sure how to actually implement that, and will have to look into the practicality of it...  This could help with a higher-hashrate miner saturating the share-finding rate and wasting their work on just the sidechain.

This will make the issue of cognitive distortion potentially worse (or better), but reduce the amount of dead work from race conditions. There would have to be a system in place where, instead of the tip of the chain being the confirmed head, the actual tip would be unknown and the actual head would be several shares behind the tip(s). So after (e.g.) 3 shares the head would be decided, but with multiple potential tips... The tips would exist concurrently; the head would lag behind and assure total consensus.  So you would submit a share, and it wouldn't be considered completely valid until several following [miner] shares confirm it. The work you submitted would be counted from the tips found and then validated at the head.

Then on top of that (if the PoW difficulty is still way too high)... Instead of each payout being decided at the time a block is found from accrued valid shares, for smaller miners, would it be possible to store what would be considered dust within the pool itself? If we set it up so each node keeps track of the smaller payouts in a multisig wallet that is handled by the pool and acts as an escrow, this could allow a smaller miner to submit work that would normally be too minimal, and have it accrue toward a payout that would be more than a dust transaction.
In addition to storing just the dust payouts, since a payout isn't totally decided at the time of the last share, the main payouts could be stored within this multi-node/multisig wallet in case there isn't clear consensus [within the pool] following the solving of a bitcoin block.

I don't even know if this is all possible, just some thoughts... please pull them apart.
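The "lagging head" idea above can be pictured as: a share only becomes final once some number of later shares have built on top of it. A toy sketch of that rule (the `Share` class, `confirmed_head`, and the depth of 3 are all hypothetical illustrations, not p2pool code):

```python
# Toy sketch of delayed finality: a share is only "confirmed" once
# CONFIRM_DEPTH later shares have been built on top of it.
CONFIRM_DEPTH = 3  # hypothetical; tune for the desired consensus lag

class Share:
    def __init__(self, ident, parent):
        self.ident = ident
        self.parent = parent  # previous Share in the chain, or None

def confirmed_head(tip):
    """Walk back CONFIRM_DEPTH shares from a tip to find the agreed head."""
    head = tip
    for _ in range(CONFIRM_DEPTH):
        if head is None:
            return None  # chain too short to have a confirmed head yet
        head = head.parent
    return head

# Build a 5-share chain: s0 <- s1 <- s2 <- s3 <- s4
chain = [Share(0, None)]
for i in range(1, 5):
    chain.append(Share(i, chain[-1]))

print(confirmed_head(chain[-1]).ident)  # tip is s4 -> confirmed head is s1
```

Multiple competing tips could then coexist as long as they share the same confirmed head; only the head, several shares back, would need pool-wide consensus.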
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
Let's say we abandon the idea of a share chain altogether.  How can we decentralize and make trust-less the work a miner has done and the payments the miner is owed upon block find for that work?

Interesting thought, were you thinking something like a weighted share pool where shares are accepted by the pool by consensus and then expire based on weight?

How could you trust that each node accepts shares it should without a chain?
Exactly my line of thinking, and you've raised exactly the question that has stumped me... how to ensure that each/every node knows of the work of miners on other nodes.  Using PoW blockchain technology like the share chain is the obvious answer, and it's what forrestv did in his work.  Unfortunately, the solution just isn't scalable using standard PoW blockchain tech.  Maybe some other kind of proof?  For example, PoS coins certainly don't rely upon massive hash rates to support them or their blockchains.  I'm not sure a PoS approach would work, it was just an example... but could we devise some kind of proofing algorithm that would allow work to be broadcast and shared across nodes?
legendary
Activity: 1258
Merit: 1027
Let's say we abandon the idea of a share chain altogether.  How can we decentralize and make trust-less the work a miner has done and the payments the miner is owed upon block find for that work?

Interesting thought, were you thinking something like a weighted share pool where shares are accepted by the pool by consensus and then expire based on weight?

How could you trust that each node accepts shares it should without a chain?
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
I don't know that concentrating on how to accumulate and distribute the smaller "dust" payouts is the most critical item to address.  I agree it's an issue that would arise; however, it's only going to rear its head once the variance problem is solved.

Right now, the minimum share value in p2pool is higher than the dust threshold.  There's even code in p2pool as it is currently written to ensure you get more than a dust payout:

Code:
if expected_payout_per_block < self.node.net.PARENT.DUST_THRESHOLD:
    desired_share_target = min(desired_share_target,
        bitcoin_data.average_attempts_to_target((bitcoin_data.target_to_average_attempts(self.node.bitcoind_work.value['bits'].target)*self.node.net.SPREAD)*self.node.net.PARENT.DUST_THRESHOLD/block_subsidy)
                    )

It's certainly an interesting paradox.  Effectively we are saying the share difficulty must be lowered so that smaller miners do not feel the effects of p2pool's inherent variance.  However, the code also has to say that if you're going to receive a dust payout, increase your share difficulty so you don't - which means you're right back at the high share difficulty causing the variance for the small miners.

Agreed, but say in a re-write you could increase efficiency 10x, and 10x the # of shares in the chain (that may be optimistic), now you have a chain that can support many more smaller miners, but need a method to address the dust payouts...

Edit: The only way I know to fix the variance issue is to grow global p2pool hashrate.
Increasing global pool hash rate serves to help mitigate variance at a pool level, not a miner level.  Yes, we'd expect the pool to find blocks more frequently, but now we're faced with higher and higher share difficulty because the hash rate has increased.  As you stated, even if we could become 10x more efficient - which would basically mean going from a 30 second share time to a 3 second share time - this is only a band-aid.  And, at 3 seconds a share, you're going to start running into a ton of orphans and rejects as latency between miners/nodes/network plays a more significant role.

Let's say we abandon the idea of a share chain altogether.  How can we decentralize and make trust-less the work a miner has done and the payments the miner is owed upon block find for that work?
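The orphan concern at 3-second shares can be roughly quantified: if share arrivals are Poisson and a new share takes L seconds to propagate, the chance that a competing share is found inside that window is about 1 - exp(-L/T). A sketch under that simplified model (the 1-second latency figure is illustrative):

```python
import math

def collision_rate(latency_s, share_interval_s):
    # Probability another share is found within the propagation window,
    # assuming Poisson share arrivals (a rough model, ignoring topology).
    return 1 - math.exp(-latency_s / share_interval_s)

print(round(collision_rate(1.0, 30.0), 3))  # 30 s shares, 1 s latency -> ~0.033
print(round(collision_rate(1.0, 3.0), 3))   # 3 s shares, same latency -> ~0.283
```

In other words, cutting the share interval 10x at fixed latency pushes the potential orphan/stale rate from a few percent toward a quarter of all shares, which is why the 10x-efficiency route alone looks like a band-aid.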
legendary
Activity: 1258
Merit: 1027
I don't know that concentrating on how to accumulate and distribute the smaller "dust" payouts is the most critical item to address.  I agree it's an issue that would arise; however, it's only going to rear its head once the variance problem is solved.

Right now, the minimum share value in p2pool is higher than the dust threshold.  There's even code in p2pool as it is currently written to ensure you get more than a dust payout:

Code:
if expected_payout_per_block < self.node.net.PARENT.DUST_THRESHOLD:
    desired_share_target = min(desired_share_target,
        bitcoin_data.average_attempts_to_target((bitcoin_data.target_to_average_attempts(self.node.bitcoind_work.value['bits'].target)*self.node.net.SPREAD)*self.node.net.PARENT.DUST_THRESHOLD/block_subsidy)
                    )

It's certainly an interesting paradox.  Effectively we are saying the share difficulty must be lowered so that smaller miners do not feel the effects of p2pool's inherent variance.  However, the code also has to say that if you're going to receive a dust payout, increase your share difficulty so you don't - which means you're right back at the high share difficulty causing the variance for the small miners.

Agreed, but say in a re-write you could increase efficiency 10x, and 10x the # of shares in the chain (that may be optimistic), now you have a chain that can support many more smaller miners, but need a method to address the dust payouts...

Edit: The only way I know to fix the variance issue is to grow global p2pool hashrate.
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
I don't know that concentrating on how to accumulate and distribute the smaller "dust" payouts is the most critical item to address.  I agree it's an issue that would arise; however, it's only going to rear its head once the variance problem is solved.

Right now, the minimum share value in p2pool is higher than the dust threshold.  There's even code in p2pool as it is currently written to ensure you get more than a dust payout:

Code:
if expected_payout_per_block < self.node.net.PARENT.DUST_THRESHOLD:
    desired_share_target = min(desired_share_target,
        bitcoin_data.average_attempts_to_target((bitcoin_data.target_to_average_attempts(self.node.bitcoind_work.value['bits'].target)*self.node.net.SPREAD)*self.node.net.PARENT.DUST_THRESHOLD/block_subsidy)
                    )

It's certainly an interesting paradox.  Effectively we are saying the share difficulty must be lowered so that smaller miners do not feel the effects of p2pool's inherent variance.  However, the code also has to say that if you're going to receive a dust payout, increase your share difficulty so you don't - which means you're right back at the high share difficulty causing the variance for the small miners.
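To make the paradox concrete: the quoted guard effectively puts a floor on share difficulty proportional to block difficulty, since attempts(share) must be at least attempts(block) × SPREAD × dust / subsidy. A rough re-derivation with illustrative numbers (the difficulty, SPREAD, dust threshold, and subsidy values are assumptions for 2015-era Bitcoin, not pulled from the code):

```python
# Rough floor on share difficulty implied by the dust guard quoted above:
#   share_difficulty >= block_difficulty * SPREAD * dust / subsidy
block_difficulty = 47e9   # assumed, roughly March-2015 network difficulty
SPREAD = 3                # assumed p2pool payout spread constant
dust_threshold = 0.001    # BTC, assumed dust floor
subsidy = 25.0            # BTC block subsidy in 2015

min_share_difficulty = block_difficulty * SPREAD * dust_threshold / subsidy
print(f"{min_share_difficulty:.3g}")  # on the order of a few million
```

The key takeaway is the proportionality: as network difficulty climbs, this floor climbs with it, so small miners' share difficulty (and hence their variance) rises no matter how efficient the share chain itself becomes.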
legendary
Activity: 1258
Merit: 1027
Just translating p2pool as it exists into C, C++, C#, Objective C, Swift, Java, Perl, Lisp, whatever, is not going to suddenly bring the mining masses here.  I know a number of us have listed out the problems previously, and a number of us have had discussions on potential solutions to those problems; however, nobody has been able to successfully crack the nut.  Maybe the problem is that we're all trying to solve p2pool's problems while keeping ourselves in the context of the existing structure.  Let's try another approach.  Forget p2pool even exists.

We as a community want a new pool to be created.  Here is a list of high-level bullet points we would want to have in our new pool:

  • Decentralized
  • Easy to use
  • Performant
  • Scalable
  • Inclusive

Please feel free to add to these points and/or provide more details/solutions.

This is exactly what I'm suggesting JB - a complete re-write. But in order to achieve this, I think we should do the donation bounty option from within the current p2pool structure, as it's (I think) an excellent way of raising awareness in the community as well as raising funds, while the amount of work involved in making the changes is minimal & simple.

Nothing happening with this then I presume?  Sad

Stagnation is a killer.

I've been continuing to noodle on it.

Was at the MIT BTC expo this past weekend and was able to bring up the issues with some pretty smart folks.

The consensus seemed to lean toward addressing the payment threshold and scheme as the key issue, the rest can be addressed by a rewrite in a language like C++.

The challenge is to come up with a method to accumulate payouts for smaller miners within the pool, in a decentralized and trust-less way, so that they can be paid out at some threshold above what is considered dust.

The often proposed solution was to centralize the dust payments and pay out from a trusted party. This is essentially what Nasty Pool has done, and it's a great service for smaller miners; however, the fact that it requires trust would take away a lot of what P2Pool has to offer, and I don't consider it a long-term solution.

I'm in touch with a couple of the folks I met, including a dev who has the chops to pull it off (with an attractive bounty), however he is just as stuck as we are until we can come up with a solution for accumulating smaller payouts in a trust-less way...



legendary
Activity: 1232
Merit: 1000
2 in a row, good luck streak is back ?

if you think it's coming back then do join my pool *click on sig below*

Looks like 3 blocks ....wish that had happened when I had rented rigs a couple weeks ago.
hero member
Activity: 924
Merit: 1000
Watch out for the "Neg-Rep-Dogie-Police".....
Just translating p2pool as it exists into C, C++, C#, Objective C, Swift, Java, Perl, Lisp, whatever, is not going to suddenly bring the mining masses here.  I know a number of us have listed out the problems previously, and a number of us have had discussions on potential solutions to those problems; however, nobody has been able to successfully crack the nut.  Maybe the problem is that we're all trying to solve p2pool's problems while keeping ourselves in the context of the existing structure.  Let's try another approach.  Forget p2pool even exists.

We as a community want a new pool to be created.  Here is a list of high-level bullet points we would want to have in our new pool:

  • Decentralized
  • Easy to use
  • Performant
  • Scalable
  • Inclusive

Please feel free to add to these points and/or provide more details/solutions.

This is exactly what I'm suggesting JB - a complete re-write. But in order to achieve this, I think we should do the donation bounty option from within the current p2pool structure, as it's (I think) an excellent way of raising awareness in the community as well as raising funds, while the amount of work involved in making the changes is minimal & simple.

Nothing happening with this then I presume?  Sad

Stagnation is a killer.
legendary
Activity: 1500
Merit: 1002
Mine Mine Mine
2 in a row, good luck streak is back ?

if you think it's coming back then do join my pool *click on sig below*
hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C

It's called Cognitive Distortion


 Cheesy Cheesy Cheesy  Great description  Wink
member
Activity: 76
Merit: 10
Just my mind dump:

In my mind, p2pool needs to see a block every two to three days to stay alive...  Beyond three days, work is completely discarded.

I'm a noob, but correct me if I'm wrong.   There's a finite number of blocks discovered, and if P2Pool doesn't have the hash they will be solved by other pools... so "luck" is disproportionately lost as the hash goes down. 

Slush is running at 10-11PH, and yesterday with over 200% luck it found 8 blocks.  100% means 4 blocks a day at 10 PH/s.

so if we're running at 2.5PH/s at 100% luck.. 1 per day might be expected but may not be realistic, as larger pools could steal them away before they are solved, no?

Watching rented hash or home grown hash fall off the share list after 3 days is a deal killer.   P2Pool will not continue to retain hash if multiple 4 & 5 day intervals occur on a regular basis.

Seems like P2Pool needs 2-3PH/s minimum  to be viable with the current software state and difficulty.

When I log in and see 1-1.5PH/s I think the 7 day delays are going to continue popping up.

Can someone tell me if my thinking is screwed up?

legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
This demonstrates the scalability problem perfectly: hash rate drops & blocks are found, hash rate rises & blocks go scarce again. The higher the network hash rate gets, the more apparent the problem becomes. This is one of the reasons why p2pool will never grow beyond a certain size until the scalability issue is fixed. It's a crying shame & I still miss it, but as others have said, it's simply pointless throwing more hash at p2pool until a fix is found that cures the scalability & variance issues.

I hope it happens soon  Wink

Why does that happen? Why with higher hash rate are blocks more scarce? Is that actually what is happening or does it just seem that way?  How would we fix such an issue?
It's called Cognitive Distortion

Actual blocks are broadcast by everyone who finds them, even if p2pool rejects the corresponding share.
So from a solo-mining POV the blocks get sent out anyway.

The issues here are related to variance and performance for the share-chain and the bias it gives to larger miners and the problems with mining as a smaller miner.

Edit: though I will add ... defining the luck based on share-chain shares/hash rate makes it look better than it really is, since the pool by design loses almost 10% of them, whereas a typical pool loses 1/20 to 1/50 of that (mostly 0.2% to 0.5%).
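The variance Kano describes can be put in numbers: with Poisson block arrivals, the probability of going t days without a block is exp(-t/T), where T is the expected days per block. A quick sketch (the 1-block-per-day expectation is illustrative for p2pool at the time):

```python
import math

def prob_drought(days_without_block, expected_days_per_block):
    # Poisson arrivals: P(no block in time t) = exp(-t / T)
    return math.exp(-days_without_block / expected_days_per_block)

# At ~1 expected block/day, multi-day dry spells are routine, not anomalies:
print(round(prob_drought(3, 1.0), 4))  # a 3-day drought happens ~5% of the time
print(round(prob_drought(7, 1.0), 4))  # a 7-day drought is rare but possible
```

This is why a hash-rate drop followed by a cluster of blocks "feels" causal when it's just variance: droughts and streaks are both expected, and the streaks get remembered.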
newbie
Activity: 58
Merit: 0
This demonstrates the scalability problem perfectly, hash rate drops & blocks are found, hash rate rises & blocks go scarce again. The higher the network hash rate gets, the worse the problem becomes apparent. This is one of the reasons why p2pool will never grow beyond a certain size until the scalability issue is fixed. It's a crying shame & I still miss it, but as others have said, it's simply pointless throwing more hash at p2pool until a fix is found that cures the scalability & variance issues.

I hope it happens soon  Wink

Why does that happen? Why with higher hash rate are blocks more scarce? Is that actually what is happening or does it just seem that way?  How would we fix such an issue?