Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 276.

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Interesting. So, if I were using 10+ ants on my node - do you think I'd be better off using your ckproxy?

Great work again ck - nice one  Smiley
In terms of keeping your local p2pool client running with as little overhead as possible, combining miners through the proxy helps. In terms of (possibly, assuming my interpretation is right) helping minimise p2pool's variance for small miners to keep them on board, it would only come into effect if your hashrate is > 5% of the overall pool hashrate.
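To put the variance point in rough numbers, here is a minimal back-of-the-envelope sketch in Python. It assumes the ~30-second pool-wide share period discussed later in this thread and the ~1500 TH/s pool size from the thread title; the individual miner hashrates are hypothetical and purely illustrative.

Code:
# Rough illustration only, not p2pool code: at a ~30 s pool-wide share period,
# a miner's expected time between share-chain shares is that period divided by
# his fraction of the pool hashrate, which is where small-miner variance bites.
SHARE_PERIOD = 30.0        # seconds per share, pool-wide (p2pool's nominal target)
POOL_HASHRATE = 1500e12    # ~1500 TH/s, per the thread title

def expected_share_interval(miner_hashrate):
    """Expected seconds between share-chain shares for one miner."""
    return SHARE_PERIOD / (miner_hashrate / POOL_HASHRATE)

for th in (2, 75, 150):                      # 2 TH, 75 TH (~5%), 150 TH (~10%)
    minutes = expected_share_interval(th * 1e12) / 60
    print("%4d TH/s -> roughly one share every %6.1f minutes" % (th, minutes))

At a couple of terahash the wait between shares is measured in hours, which is where the small-miner variance complaints come from; a farm above the ~5% mark is the case where presenting it as one worker starts to matter for everyone else's share target.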
legendary
Activity: 1540
Merit: 1001
I thought this "high share difficulty for one miner" logic only applied to the one node the miner is on?  Alt share chain difficulty is still alt chain share difficulty.
Well you tell me since I'm new to the p2pool code. Is alt share chain difficulty based on trying to keep the number of shares contributed to altchain constant or is it based on overall hashrate of the pool? I'd have to dig into the code to figure it out and python makes me nauseous.

Local pseudo share difficulty is based upon hash rate on that node.

Alt share chain difficulty is based upon the entire pool hash rate. 

And of course, the pool hash rate isn't really known.  It's surmised based upon the number of shares found in the alt chain over a fixed period of time (I don't know what that value is).  The target is one share every 30 seconds pool wide.
Ah, but do you see how your answer means it's really the former (keeping the share count constant) and not the overall hashrate? This huge miner is contributing only one share every half hour to the entire p2pool chain, which is the same number of shares a miner 1/10th the size contributes.

So you mean the share difficulty for the worker is 10x the alt chain difficulty, i.e., right now about 86 million instead of 8.6 million?

If that's true, then yes, it should work as you said and not adversely affect the alt share difficulty.
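A quick arithmetic check of these figures. This is a sketch only: the 8.6M share difficulty and the 10x worker multiplier come from the posts above, while the 200 TH/s farm size is a hypothetical number chosen to line up with the "one share every half hour" remark quoted above.

Code:
# Illustrative arithmetic only. The 8.6M share difficulty and the 10x worker
# multiplier are from the discussion; the 200 TH/s farm size is hypothetical.
HASHES_PER_DIFF1 = 2 ** 32          # expected hashes per difficulty-1 share

chain_diff  = 8.6e6                 # current alt (share) chain difficulty
worker_diff = 10 * chain_diff       # ~86M target for the aggregated worker

big_miner   = 200e12                # hypothetical aggregated farm, H/s
small_miner = big_miner / 10        # a miner one tenth the size

def shares_per_hour(hashrate, difficulty):
    """Expected share-chain shares per hour at a given share difficulty."""
    return hashrate * 3600.0 / (difficulty * HASHES_PER_DIFF1)

print("200 TH farm @ 86M target : %.1f shares/hour" % shares_per_hour(big_miner, worker_diff))
print(" 20 TH miner @ 8.6M diff : %.1f shares/hour" % shares_per_hour(small_miner, chain_diff))
# Both work out to roughly two shares an hour: scaling the worker's share
# target by the same factor as its hashrate leaves its share *rate* unchanged,
# so the share chain sees it like a miner one tenth the size.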

M
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I thought this "high share difficulty for one miner" logic only applied to the one node the miner is on?  Alt share chain difficulty is still alt chain share difficulty.
Well you tell me since I'm new to the p2pool code. Is alt share chain difficulty based on trying to keep the number of shares contributed to altchain constant or is it based on overall hashrate of the pool? I'd have to dig into the code to figure it out and python makes me nauseous.

Local pseudo share difficulty is based upon hash rate on that node.

Alt share chain difficulty is based upon the entire pool hash rate. 

And of course, the pool hash rate isn't really known.  It's surmised based upon the number of shares found in the alt chain over a fixed period of time (I don't know what that value is).  The target is one share every 30 seconds pool wide.
Ah, but do you see how your answer means it's really the former (keeping the share count constant) and not the overall hashrate? This huge miner is contributing only one share every half hour to the entire p2pool chain, which is the same number of shares a miner 1/10th the size contributes.
hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
Well, if you believe the BitmainWarranty account, and can stomach the screenshot where they "tested" for only a minute, then the S4 works with p2pool: https://bitcointalksearch.org/topic/m.9075999.

This. It's hardly an example of how it performs with p2pool, is it? And who is this user BitmainWarranty? I'll wait until I see some solid, hard evidence before making my mind up, although I've seen enough complaints about the S4s to have pretty much come to a conclusion already...

Bitmain are going to point an S4 at my node for 10 minutes shortly apparently, so we'll see....... Wink
legendary
Activity: 1540
Merit: 1001
I thought this "high share difficulty for one miner" logic only applied to the one node the miner is on?  Alt share chain difficulty is still alt chain share difficulty.
Well you tell me since I'm new to the p2pool code. Is alt share chain difficulty based on trying to keep the number of shares contributed to altchain constant or is it based on overall hashrate of the pool? I'd have to dig into the code to figure it out and python makes me nauseous.

Local pseudo share difficulty is based upon hash rate on that node.

Alt share chain difficulty is based upon the entire pool hash rate. 

And of course, the pool hash rate isn't really known.  It's surmised based upon the number of shares found in the alt chain over a fixed period of time (I don't know what that value is).  The target is one share every 30 seconds pool wide.
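A minimal sketch of that estimate, in Python. This is not p2pool's actual retargeting code; the one-share-per-30-seconds target is from the post above and the 8.6 million share difficulty from the figures quoted earlier in the thread.

Code:
# Not p2pool's code, just the estimate described above turned into arithmetic:
# the pool hashrate is inferred from how many share-chain shares appeared over
# a window, given the share difficulty they were found at.
HASHES_PER_DIFF1 = 2 ** 32

def estimated_pool_hashrate(shares_found, share_difficulty, window_seconds):
    """Work represented by the shares, divided by the time they took (H/s)."""
    work = shares_found * share_difficulty * HASHES_PER_DIFF1
    return work / window_seconds

# If the chain is hitting its one-share-per-30-seconds target at a share
# difficulty of about 8.6 million, the implied pool hashrate is:
print("%.0f TH/s" % (estimated_pool_hashrate(1, 8.6e6, 30) / 1e12))   # ~1230 TH/s
# Retargeting then adjusts the share difficulty to hold that 30-second rate
# as the pool grows or shrinks.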

M
legendary
Activity: 916
Merit: 1003
I'd have to dig into the code to figure it out and python makes me nauseous.

Me too until recently, but Python is simply too big to ignore. I needed it on my resume to stay marketable, so I held my nose and dove in.
hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
...python makes me nauseous.

Re-write it in C then!!  Smiley Wink
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I thought this "high share difficulty for one miner" logic only applied to the one node the miner is on?  Alt share chain difficulty is still alt chain share difficulty.
Well you tell me since I'm new to the p2pool code. Is alt share chain difficulty based on trying to keep the number of shares contributed to altchain constant or is it based on overall hashrate of the pool? I'd have to dig into the code to figure it out and python makes me nauseous.
legendary
Activity: 1540
Merit: 1001
I've set up a private pool solution for a reasonably large miner using a combination of p2pool and ckpool technology. You should all see a decent increase in the overall pool size over the next 24-48 hours.
This hasher is now online. His hashrate should be obvious, right at the top of the list. Barring changes in plans, and provided the hardware continues to hash well, it should be remaining on this pool.

Now the interesting thing is that, because I've connected the hardware via ckproxy instead of as 100 separate connections directly to the p2pool client, p2pool sees it as one client, which means this miner's share target is more than 10 times larger than that of other miners. By doing this, even though I've dumped a large hashrate onto the pool, it won't substantially increase the share target for the smaller miners. Smaller miners can therefore benefit from the increased p2pool hashrate reducing their variance, without their own share target rising by as much, which would normally increase their variance by the same amount. If more large miners did something similar on p2pool, it might keep the smaller miners on board. The large miner benefits from his p2pool client scaling where it otherwise couldn't, and the smaller miners get to stay and benefit from his presence. While it's not a "fix" for the overall design, it might give p2pool some breathing space, allowing ever larger miners to join. That said, "small" these days is not really that small... Perhaps p2pool will actually end up being nothing but big miners (though that is what most of the network is now anyway), provided their hardware is compatible :p

I thought this "high share difficulty for one miner" logic only applied to the one node the miner is on?  Alt share chain difficulty is still alt chain share difficulty.

M
hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
I've set up a private pool solution for a reasonably large miner using a combination of p2pool and ckpool technology. You should all see a decent increase in the overall pool size over the next 24-48 hours.
This hasher is now online. His hashrate should be obvious, right at the top of the list. Barring changes in plans, and provided the hardware continues to hash well, it should be remaining on this pool.

Now the interesting thing is that, because I've connected the hardware via ckproxy instead of as 100 separate connections directly to the p2pool client, p2pool sees it as one client, which means this miner's share target is more than 10 times larger than that of other miners. By doing this, even though I've dumped a large hashrate onto the pool, it won't substantially increase the share target for the smaller miners. Smaller miners can therefore benefit from the increased p2pool hashrate reducing their variance, without their own share target rising by as much, which would normally increase their variance by the same amount. If more large miners did something similar on p2pool, it might keep the smaller miners on board. The large miner benefits from his p2pool client scaling where it otherwise couldn't, and the smaller miners get to stay and benefit from his presence. While it's not a "fix" for the overall design, it might give p2pool some breathing space, allowing ever larger miners to join. That said, "small" these days is not really that small... Perhaps p2pool will actually end up being nothing but big miners (though that is what most of the network is now anyway), provided their hardware is compatible :p

Interesting. So, if I were using 10+ ants on my node - do you think I'd be better off using your ckproxy?

Great work again ck - nice one  Smiley
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I've set up a private pool solution for a reasonably large miner using a combination of p2pool and ckpool technology. You should all see a decent increase in the overall pool size over the next 24-48 hours.
This hasher is now online. His hashrate should be obvious, right at the top of the list. Barring changes in plans, and provided the hardware continues to hash well, it should be remaining on this pool.

Now the interesting thing is that, because I've connected the hardware via ckproxy instead of as 100 separate connections directly to the p2pool client, p2pool sees it as one client, which means this miner's share target is more than 10 times larger than that of other miners. By doing this, even though I've dumped a large hashrate onto the pool, it won't substantially increase the share target for the smaller miners. Smaller miners can therefore benefit from the increased p2pool hashrate reducing their variance, without their own share target rising by as much, which would normally increase their variance by the same amount. If more large miners did something similar on p2pool, it might keep the smaller miners on board. The large miner benefits from his p2pool client scaling where it otherwise couldn't, and the smaller miners get to stay and benefit from his presence. While it's not a "fix" for the overall design, it might give p2pool some breathing space, allowing ever larger miners to join. That said, "small" these days is not really that small... Perhaps p2pool will actually end up being nothing but big miners (though that is what most of the network is now anyway), provided their hardware is compatible :p
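For readers unfamiliar with the setup being described, here is a conceptual sketch of the aggregation idea in Python. ckproxy itself is a C stratum proxy; the class, names and hashrates below are purely hypothetical and only illustrate why p2pool ends up seeing a single large worker.

Code:
# Conceptual sketch only -- not ckproxy's implementation. Many downstream
# miners connect to the proxy, but the pool upstream sees just one client
# whose hashrate (and therefore share target) is the sum of them all.
class AggregatingProxy:
    def __init__(self, upstream_username):
        self.upstream_username = upstream_username  # the one worker p2pool sees
        self.downstream = {}                        # miner id -> hashrate (H/s)

    def register(self, miner_id, hashrate):
        """A downstream unit (e.g. one Antminer) connects to the proxy."""
        self.downstream[miner_id] = hashrate

    def upstream_hashrate(self):
        """What the pool observes: a single client with the combined rate."""
        return sum(self.downstream.values())

proxy = AggregatingProxy("1ExamplePayoutAddress")   # hypothetical address
for i in range(100):                                # e.g. 100 x 2 TH/s units
    proxy.register("ant%03d" % i, 2e12)
print("p2pool sees 1 worker at %.0f TH/s" % (proxy.upstream_hashrate() / 1e12))
# Because the worker's share target scales with this combined hashrate, the
# farm gets one large target instead of 100 small ones, and the share chain's
# overall share rate (and hence everyone else's target) barely moves.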
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
Well, if you believe the BitmainWarranty account, and can stomach the screenshot where they "tested" for only a minute, then the S4 works with p2pool: https://bitcointalksearch.org/topic/m.9075999.

This. It's hardly an example of how it performs with p2pool, is it? And who is this user BitmainWarranty? I'll wait until I see some solid, hard evidence before making my mind up, although I've seen enough complaints about the S4s to have pretty much come to a conclusion already...

It's getting hard to find hardware that will work properly with p2pool nowadays, even 2nd hand gear, especially now that Bitmain are holding back on their next S3 batch release in an effort to sell more of their S4 train wrecks - I just hope they get the firmware fixed soon, but it ain't looking promising.

In a perfect world, someone will come along with a complete re-write of p2pool soon......... Tongue
Yeah, the S4 really is a train wreck.  PSUs burning out, boards with "x" all over, firmware that doesn't work, firmware updates that break and require you to remove the SD card.

As for hardware, Spondoolies gear still plays well with p2pool - every one of their SPx miners.  The S3s do just fine.  That's really about it at this point.
hero member
Activity: 686
Merit: 500
WANTED: Active dev to fix & re-write p2pool in C
Well, if you believe the BitmainWarranty account, and can stomach the screenshot where they "tested" for only a minute, then the S4 works with p2pool: https://bitcointalksearch.org/topic/m.9075999.

This. It's hardly an example of how it performs with p2pool, is it? And who is this user BitmainWarranty? I'll wait until I see some solid, hard evidence before making my mind up, although I've seen enough complaints about the S4s to have pretty much come to a conclusion already...

It's getting hard to find hardware that will work properly with p2pool nowadays, even 2nd hand gear, especially now that Bitmain are holding back on their next S3 batch release in an effort to sell more of their S4 train wrecks - I just hope they get the firmware fixed soon, but it ain't looking promising.

In a perfect world, someone will come along with a complete re-write of p2pool soon......... Tongue
zvs
legendary
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
Hello, why is there no support for scrypt Dogecoin?
It does; there's just a problem with the recent change to difficulties. Move them back and you should be fine.
legendary
Activity: 1270
Merit: 1000
Hello, why is there no support for scrypt Dogecoin?

Dogecoin is supported and there are many p2pools listed at http://p2pools.org/doge

You did not understand me; I mean the scrypt source:
  https://github.com/Rav3nPL/p2pool-rav

Truten, I do not believe this thread is the right place to discuss altcoin p2pools, as it is focused on Bitcoin.

Having said that, I believe Rav removed Dogecoin from his repository, since he believes running a dedicated Dogecoin pool is no longer a good idea now that Doge supports auxPoW.

Added to that, it appears that a number of nodes never updated their dogecoind, so the Dogecoin p2pool is finding blocks on both chains.

For the most part I would say the Dogecoin p2pool is dead.
newbie
Activity: 7
Merit: 0
Hello, why is there no support for scrypt Dogecoin?

Dogecoin is supported and there are many p2pools listed at http://p2pools.org/doge

You did not understand me; I mean the scrypt source:
  https://github.com/Rav3nPL/p2pool-rav
member
Activity: 78
Merit: 10
Hello, why is there no support for scrypt Dogecoin?

Dogecoin is supported and there are many p2pools listed at http://p2pools.org/doge


legendary
Activity: 1792
Merit: 1008
/dev/null
Hello, why is there no support for scrypt Dogecoin?
because you're a moron...

seriously, just take a p2pool with moronKoin in it...
newbie
Activity: 7
Merit: 0
Hello, why is there no support for scrypt Dogecoin?
legendary
Activity: 1344
Merit: 1024
Mine at Jonny's Pool
Well, if you believe the BitmainWarranty account, and can stomach the screenshot where they "tested" for only a minute, then the S4 works with p2pool: https://bitcointalksearch.org/topic/m.9075999.

By the way, Bitmain released a new firmware for the S4 in which they updated cgminer to 4.6.1: https://www.bitmaintech.com/support.htm?pid=00720140930114518599JXGHWWD80660.

EDIT: I do not own an S4, so I can't confirm Bitmain's statements.