Surely it will be almost trivially simple to start a p2paltpool, a p2paltpool2 and so on from time to time as needed to keep each p2pool network within a reasonable share-difficulty range?
Right. I think a p2pool size of 200-300 GH/s is about right. On average that's 3-4 blocks found per day at the current Bitcoin difficulty level. That's smooth enough variance for someone with a single GPU. If the current p2pool starts reaching 400 GH/s, we should plan on splitting it in half.
I prefer a pool with a minimum of 600 GH/s at the current difficulty level... So, 600-800 GH/s would be a good point to split...
Sometimes with 200-300 GH/s we spend almost 24 hours without earning anything...
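For reference, here's a quick sketch of that variance math in Python (the difficulty figure is an illustrative placeholder, not a live value, and hashes-per-block uses the standard difficulty × 2^32 approximation):

```python
import math

DIFFICULTY = 1_500_000                 # assumed Bitcoin difficulty, illustrative only
HASHES_PER_BLOCK = DIFFICULTY * 2**32  # expected hashes to find one block

def blocks_per_day(hashrate_ghs):
    """Expected blocks per day for a pool of the given hashrate (GH/s)."""
    return hashrate_ghs * 1e9 * 86400 / HASHES_PER_BLOCK

def p_blockless_day(hashrate_ghs):
    """Poisson probability the pool finds zero blocks in 24 hours."""
    return math.exp(-blocks_per_day(hashrate_ghs))

for ghs in (200, 300, 600):
    print(f"{ghs} GH/s: {blocks_per_day(ghs):.1f} blocks/day, "
          f"P(blockless day) = {p_blockless_day(ghs):.1%}")
```

At the assumed difficulty, 200-300 GH/s works out to roughly 3-4 blocks a day, but 200 GH/s still has a several-percent chance of a blockless day, which is consistent with both posts above.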
This is why I think the optimal solution would be p2pool forming a backbone network to connect pools, large farms, and sub-p2pools.
Then any number of sub-p2pools could form. Some with lower difficulty (for smaller miners), some with higher difficulty/hash rate for larger miners.
We just need to implement it! LOL
I think it will happen organically. Trying to do it now would likely just cause frustration and confusion. As p2pool grows, share difficulty will rise. Eventually it will reach a point where a solution will have to be found because the difficulty is too high for small miners.
At that point the simplistic approach would be to make a second instance of p2pool. Maybe we will do that, but it limits us forever to 300-500 GH/s per instance, and we never reach the low variance that comes at 1 TH/s+.
Instead, once p2pool is larger, we "could" update p2pool to use a longer share time (say a 60-second target instead of 10 seconds), which would push the already-high share difficulty even higher. From p2pool's point of view, pools, mining farms, and sub-pools are simply shares being submitted, so nothing really needs to change except the longer share time.
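The share-time/difficulty relationship is just shares-per-second bookkeeping. A minimal sketch (the 1 TH/s pool size is hypothetical, and this ignores p2pool's actual retargeting code):

```python
def share_difficulty(pool_hashrate_ghs, share_interval_s):
    """Difficulty at which the whole pool finds ~1 share per interval."""
    return pool_hashrate_ghs * 1e9 * share_interval_s / 2**32

for interval_s in (10, 60):
    d = share_difficulty(1000, interval_s)  # hypothetical 1 TH/s pool
    print(f"{interval_s}s share target: difficulty ~{d:,.0f}")
```

Going from a 10-second to a 60-second target raises the share difficulty 6x at the same pool hashrate, which is exactly what prices small miners out of the backbone and motivates the sub-pools.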
Major hashing farms and conventional pools can continue to mine the very high-efficiency p2pool (which has now formed the decentralized backbone). I am trying to convince the operator of BitMinter to join p2pool. That would add ~120 GH/s of hashing power. If it can be done, I would imagine many other smaller pools would too, to reduce their variance.
As long as a protocol for the "sub-p2pool" is implemented, there will be no or minimal changes for miners. One or more "sub-p2pool" instances can be started for smaller miners who need the lower variance of lower-difficulty shares. Hopefully this weekend I can write a whitepaper which fleshes out the protocol concept to show that a migration from a single p2pool to a backbone & sub-pool network wouldn't be that hard. At this point it is more academic. There is no need for it now; what we need is more hashing power, not fragmentation. It is more something to keep in mind for "what happens if we get to 1 TH/s?".
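To make the sub-p2pool idea concrete, here is a purely hypothetical sketch (none of these names or constants exist in p2pool, and the target formula is simplified): a sub-pool credits every low-difficulty share locally and forwards the rare share that also meets the backbone's difficulty.

```python
SUB_POOL_DIFF = 500      # assumed low share difficulty for small miners
BACKBONE_DIFF = 14_000   # assumed high share difficulty on the backbone

def target(difficulty):
    """Simplified target: a hash 'meets' difficulty d if it is below this."""
    return (2**256 - 1) // difficulty

def handle_miner_share(share_hash, submit_to_backbone):
    """Credit a miner's share in the sub-pool; escalate it if it qualifies."""
    if share_hash > target(SUB_POOL_DIFF):
        return False                    # doesn't even meet sub-pool difficulty
    if share_hash <= target(BACKBONE_DIFF):
        submit_to_backbone(share_hash)  # rare share that also counts upstream
    return True                         # credited toward sub-pool payouts
```

From the backbone's point of view, the whole sub-pool then looks like one big miner submitting high-difficulty shares, which is why the migration would mostly be bookkeeping inside the sub-pool.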