Topic: [1500 TH] p2pool: Decentralized, DoS-resistant, Hop-Proof pool - page 630

hero member
Activity: 516
Merit: 643
Because of course no one uses a remote p2pool node for a failover backup. /sarcasm

Sounds like no real testing has been done to verify it will not be an issue.

I'm starting to get the feeling that you are sticking your head in the sand and hoping that everything will just work.

It's kind of hard to test with something that you're not even sure exists... :P

Anyway, with sane nTime rolling and adaptive pseudoshare targets, getwork can provide more than enough work to remote hosts with very low bandwidth. Everything for nTime rolling is already there (the X-Roll-NTime HTTP header), and adaptive targets are temporarily disabled, but I'll re-enable them for the next release.
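To make the nTime-rolling point concrete, here is a minimal sketch (not P2Pool's actual miner-side code) of what a getwork client does when the server advertises rolling via the X-Roll-NTime response header: it increments the header timestamp, so each allowed second yields a fresh 2^32-nonce search space from a single getwork response. It assumes the getwork "data" field has already been decoded to a raw 80-byte block header.

Code:
# Hypothetical miner-side nTime rolling; `header80` is a raw 80-byte block header.
import struct

def roll_ntime(header80, offset):
    """Return a copy of the header with nTime advanced by `offset` seconds."""
    # nTime is the little-endian uint32 at byte offset 68 of the 80-byte header.
    ntime, = struct.unpack('<I', header80[68:72])
    return header80[:68] + struct.pack('<I', (ntime + offset) & 0xffffffff) + header80[72:]

def work_units(header80, roll_seconds):
    """One header per allowed nTime value; each covers a full 2**32 nonce range."""
    for offset in range(roll_seconds):
        yield roll_ntime(header80, offset)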

How hard would it be to implement GBT for P2Pool? Seems like that would mitigate a few issues for remote miners anyway.

While I'm asking, is the P2Pool codebase set up for the block reward halving already? (I'd imagine so, but it gets messy otherwise.)

Edit: Pyramining is already set up how they want it and will probably continue the same way. GBT support for p2pool was what I meant to ask about.

I really don't see any issue with getwork. Why is GBT necessary?

P2Pool can handle reward halving.

EDIT: Getwork does have some problems with timestamp rolling. Depending on how ASIC miners handle it, getwork could still work, but I'll start working on GBT support to allow proper timestamp rolling.
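For reference, timestamp mutability is explicit in GBT: a BIP 22/23 block template carries a "mutable" list, and if "time" appears in it, the miner may roll the header timestamp within the template's time bounds without fetching new work. A rough sketch of checking this against a local bitcoind (endpoint and credentials are placeholders):

Code:
# Sketch: fetch a block template over JSON-RPC and check whether "time" is mutable.
import base64, json, urllib.request

RPC_URL = 'http://127.0.0.1:8332'                    # placeholder bitcoind endpoint
RPC_AUTH = base64.b64encode(b'user:pass').decode()   # placeholder credentials

def getblocktemplate():
    payload = json.dumps({'id': 0, 'method': 'getblocktemplate',
                          'params': [{'rules': ['segwit']}]}).encode()  # older nodes accept []
    req = urllib.request.Request(RPC_URL, payload,
                                 {'Authorization': 'Basic ' + RPC_AUTH,
                                  'Content-Type': 'application/json'})
    return json.load(urllib.request.urlopen(req))['result']

template = getblocktemplate()
print('time' in template.get('mutable', []))  # True => timestamp rolling permitted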
legendary
Activity: 1036
Merit: 1000
DARKNETMARKETS.COM
Anyone running Ztex USB-FPGA boards on p2pool for a long time? Can you please post your stats, efficiency, and stales (orphans/DOA)?
sr. member
Activity: 389
Merit: 250
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.

Really? P2Pool uses ~3 GB/month, normally. How much is too much for you?

With a normal pool, one getwork response/submit takes about 800 bytes of data, plus some HTTP overhead, so about 1 KB for every 4 GH (2^32 hashes). At 1 GH/s, over a month, that's 650 MB (1 * (24*60*60*30) * (1000/4) / 1e6). It's probably a bit higher than that due to miners requesting more work than they need. In addition, that number scales linearly with the amount of hashing power you have, so at 3 GH/s you're on par with P2Pool's bandwidth usage.

If I'd heard more complaints about bandwidth, I could have added an option to decrease the number of peers. Bandwidth usage is proportional to the number of peers you have, and 5 is probably enough, so you could halve the usage easily.
How hard would it be to implement GBT for pyramining? Seems like that would mitigate a few issues for remote miners anyway.

While I'm asking, is the P2Pool codebase set up for the block reward halving already? (I'd imagine so, but it gets messy otherwise)
Depending on the number of outstanding transactions and what transactions AREN'T being ignored, it is possible for GBT to use more bandwidth than GetWork ...
Whoops, I misspoke, corrected the original post. I meant to ask how hard it would be to set up support for GBT (or stratum, or anything else) in p2pool, as this would be one way to improve remote mining.
sr. member
Activity: 383
Merit: 250
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.

Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...

Because of course no one uses a remote p2pool node for a failover backup. /sarcasm

Sounds like no real testing has been done to verify it will not be an issue.

I'm starting to get the feeling that you are sticking your head in the sand and hoping that everything will just work.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.

Really? P2Pool uses ~3 GB/month, normally. How much is too much for you?

With a normal pool, one getwork response/submit takes about 800 bytes of data, plus some HTTP overhead, so about 1 KB for every 4 GH (2^32 hashes). At 1 GH/s, over a month, that's 650 MB (1 * (24*60*60*30) * (1000/4) / 1e6). It's probably a bit higher than that due to miners requesting more work than they need. In addition, that number scales linearly with the amount of hashing power you have, so at 3 GH/s you're on par with P2Pool's bandwidth usage.

If I'd heard more complaints about bandwidth, I could have added an option to decrease the number of peers. Bandwidth usage is proportional to the number of peers you have, and 5 is probably enough, so you could halve the usage easily.
How hard would it be to implement GBT for pyramining? Seems like that would mitigate a few issues for remote miners anyway.

While I'm asking, is the P2Pool codebase set up for the block reward halving already? (I'd imagine so, but it gets messy otherwise)
Depending on the number of outstanding transactions and what transactions AREN'T being ignored, it is possible for GBT to use more bandwidth than GetWork ...
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...
...With ASICs being 50x more efficient, the only people mining in the future, bar a very few exceptions, will all be running ASICs.

And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Even the people buying coffee warmers for $150 are serious miners?
sr. member
Activity: 389
Merit: 250
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.

Really? P2Pool uses ~3 GB/month, normally. How much is too much for you?

With a normal pool, one getwork response/submit takes about 800 bytes of data, plus some HTTP overhead, so about 1 KB for every 4 GH (2^32 hashes). At 1 GH/s, over a month, that's 650 MB (1 * (24*60*60*30) * (1000/4) / 1e6). It's probably a bit higher than that due to miners requesting more work than they need. In addition, that number scales linearly with the amount of hashing power you have, so at 3 GH/s you're on par with P2Pool's bandwidth usage.

If I'd heard more complaints about bandwidth, I could have added an option to decrease the number of peers. Bandwidth usage is proportional to the number of peers you have, and 5 is probably enough, so you could halve the usage easily.
How hard would it be to implement GBT for P2Pool? Seems like that would mitigate a few issues for remote miners anyway.

While I'm asking, is the P2Pool codebase set up for the block reward halving already? (I'd imagine so, but it gets messy otherwise.)

Edit: Pyramining is already set up how they want it and will probably continue the same way. GBT support for p2pool was what I meant to ask about.
donator
Activity: 2058
Merit: 1007
Poor impulse control.
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.

Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...

As Con writes, soon the only profitable miners will be ASIC-based. The proportion of the network hashrate represented by a single ASIC will be significantly lower than it would be now, since the overall network hashrate will have increased significantly. Will ASICs still be a problem for P2Pool in this case?
hero member
Activity: 504
Merit: 500
Scattering my bits around the net since 1980
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.
Well... a while ago, pyramining briefly switched over to my public node while they were working on some stuff.

I ended up having between 200-400 GH/s at the time, and everything seemed to go just fine with my bandwidth, so p2pmining should be OK with the added bandwidth of the ASIC traffic.

p2pmining could let you use higher-diff shares, or maybe they could set up a high-hashrate node too. They're pretty much doing their own sub-share-chain, so maybe they will do something to accommodate the ASICs.

Not sure if they would tho. It sounded like they came into existence mainly to cater to the smaller miner who needed lower-diff shares.

-- Smoov

ps: Krak is in one of the outlying cities/towns that don't have a lot of infrastructure to spread around. Rural-ish kind of area. That's where his main bandwidth bottleneck is. They get priced accordingly. Too much demand, not enough supply. So they get capped more. :(
hero member
Activity: 591
Merit: 500
high bandwidth? WTF?
~250 MB a day adds up quickly when you have a 150 GB cap. My DSL connection also freezes up a lot when I start a download, which is made much worse when I'm running p2pool. I limited my p2pool incoming connections to 5 and my Bitcoin-Qt connections are limited to 10, and it's still noticeable.
hero member
Activity: 516
Merit: 643
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.

Really? P2Pool uses ~3 GB/month, normally. How much is too much for you?

With a normal pool, one getwork response/submit takes about 800 bytes of data, plus some HTTP overhead, so about 1 KB for every 4 GH (2^32 hashes). At 1 GH/s, over a month, that's 650 MB (1 * (24*60*60*30) * (1000/4) / 1e6). It's probably a bit higher than that due to miners requesting more work than they need. In addition, that number scales linearly with the amount of hashing power you have, so at 3 GH/s you're on par with P2Pool's bandwidth usage.

If I'd heard more complaints about bandwidth, I could have added an option to decrease the number of peers. Bandwidth usage is proportional to the number of peers you have, and 5 is probably enough, so you could halve the usage easily.
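The arithmetic above is easy to check. A quick script with the same assumptions (roughly 1 KB of traffic per getwork, one full nonce range covered per request):

Code:
# Back-of-the-envelope check of the getwork bandwidth estimate above.
BYTES_PER_GETWORK = 1000           # ~800 B of JSON plus HTTP overhead
HASHES_PER_GETWORK = 2**32         # one full nonce range, ~4.3 GH
SECONDS_PER_MONTH = 24 * 60 * 60 * 30

def monthly_mb(ghps):
    """Approximate getwork traffic in MB/month for a hashrate given in GH/s."""
    getworks = ghps * 1e9 * SECONDS_PER_MONTH / HASHES_PER_GETWORK
    return getworks * BYTES_PER_GETWORK / 1e6

print(monthly_mb(1))  # ~603 MB/month (the post rounds 2**32 to 4 GH, giving ~650 MB)
print(monthly_mb(3))  # ~1.8 GB/month, before miners over-request work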
legendary
Activity: 1792
Merit: 1008
/dev/null
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.
high bandwidth? WTF?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...
...With ASICs being 50x more efficient, the only people mining in the future, bar a very few exceptions, will all be running ASICs.

And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Ok... so no one will be running p2pool via a remote node?
hero member
Activity: 591
Merit: 500
And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
Some people, such as myself, can't really afford the high bandwidth usage of running a node. That's the main reason I switched pools.
hero member
Activity: 516
Merit: 643
Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...
...With ASICs being 50x more efficient, the only people mining in the future, bar a very few exceptions, will all be running ASICs.

And anyone (everyone?) with an ASIC, having invested hundreds of dollars, will be a "serious miner" in my eyes...
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.

Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...
...With ASICs being 50x more efficient, the only people mining in the future, bar a very few exceptions, will all be running ASICs.
hero member
Activity: 516
Merit: 643
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.

Any serious P2Pool user, one serious enough to have ASICs, should definitely have their own P2Pool node...
hero member
Activity: 591
Merit: 500
I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
What about mining on a remote node? It seems like ASICs could kill P2P Mining.
hero member
Activity: 516
Merit: 643
What is the plan for ASICs? Will it require a switchover as well? If it requires a switchover, why not do it now instead of waiting until ASICs are already out and about?

I don't think ASICs will need any special support. P2Pool can provide getwork results fast enough for hundreds of GH/s (from a normal computer) and could be optimized for more. In addition, any timestamp rolling multiplies that.
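In getwork terms the claim is just rate arithmetic: each response covers a 2^32-hash nonce range, so the required request rate scales linearly with hashrate, and rolling nTime over N seconds divides it by N. A sketch:

Code:
# Rough getwork capacity arithmetic; hashrate in GH/s.
def getworks_per_second(ghps, ntime_roll_seconds=1):
    """Requests/second needed to keep a miner busy, given nTime rolling depth."""
    return ghps * 1e9 / 2**32 / ntime_roll_seconds

print(getworks_per_second(400))      # ~93 requests/s for 400 GH/s with no rolling
print(getworks_per_second(400, 60))  # ~1.6 requests/s with 60 s of nTime rolling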
sr. member
Activity: 383
Merit: 250
https://github.com/forrestv/p2pool/commit/38d9d89a88086abe219a270f43e940f074066cd7 :
Quote
decremented desired_version to 7 to prevent switchover for now

Does anyone have an explanation for this?
Yeah, I was surprised to see v7 appear out of thin air, lol. Why do we want to delay the switch to v8? Did forrestv find some bad bugs?

Yes, there's a bug that Smoovius reported on the Litecoin P2Pool, which would also apply to the Bitcoin one. P2Pool's memory usage can grow without bound after a fork like this if you maintain connections to people mining on both forks. P2Pool is unable to forget about the children of the root node that connects the two forks, due to that operation simply not being implemented. (I assumed that one fork would always lose eventually, resulting in a linear chain whose last node can be forgotten about.)

Anyway, it's fixed in git, and after some more testing, I'll release v9 and the switchover will be triggered when v9 becomes popular. Restarting P2Pool after the switch would have been enough to fix this, but many people don't have something auto-restarting P2Pool if it crashes or even check up on it often.

The switchover happened on the Litecoin P2Pool about a week ago, and went flawlessly other than this issue.
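As an illustration of the leak described above (a conceptual sketch, not P2Pool's actual forest.py): the share chain is a tree keyed by previous-share hashes, and after a fork the losing branch stays referenced from the common ancestor forever unless whole subtrees can be pruned explicitly, which is the operation that was missing.

Code:
# Conceptual share-tree sketch; `share.hash` and `share.previous_hash` are assumed fields.
class ShareTree:
    def __init__(self):
        self.shares = {}     # share_hash -> share object
        self.children = {}   # share_hash -> set of child share hashes

    def add(self, share):
        self.shares[share.hash] = share
        self.children.setdefault(share.previous_hash, set()).add(share.hash)

    def prune_subtree(self, share_hash):
        """Drop a share and all its descendants (the previously missing operation)."""
        for child in self.children.pop(share_hash, set()):
            self.prune_subtree(child)
        self.shares.pop(share_hash, None)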

Well thanks to Smoovius for finding it and to you for fixing it.

What is the plan for ASICs? Will it require a switchover as well? If it requires a switchover, why not do it now instead of waiting until ASICs are already out and about?