My forked sharechain has switched to v32 shares. When I did this, I forgot to upgrade one of my nodes, which resulted in that node downloading shares from my upgraded nodes, rejecting them due to the unknown share version, and then downloading them all over again. Eventually it would disconnect from its peers due to the perceived misbehavior, but since I had the connections hardcoded with the -n command-line option, it immediately reconnected. It seems p2pool could use a temporary ban feature for misbehaving peers, as well as a blacklist for share hashes that are known to be invalid according to the node's ruleset. That should be done before p2pool gets too big, in order to prevent DoS attacks.
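Roughly what I have in mind is something like this (just a sketch to illustrate the idea; the class names and thresholds are made up, not actual p2pool code):

    import time

    BAN_DURATION = 3600         # seconds to ignore a misbehaving peer (placeholder value)
    MISBEHAVIOR_THRESHOLD = 10  # bad shares tolerated before we temp-ban (placeholder value)

    class PeerBanList(object):
        def __init__(self):
            self.scores = {}        # peer address -> misbehavior count
            self.banned_until = {}  # peer address -> unix time the ban expires

        def misbehaving(self, addr):
            # Called whenever a peer sends us something our ruleset rejects.
            self.scores[addr] = self.scores.get(addr, 0) + 1
            if self.scores[addr] >= MISBEHAVIOR_THRESHOLD:
                self.banned_until[addr] = time.time() + BAN_DURATION
                self.scores.pop(addr, None)

        def is_banned(self, addr):
            expiry = self.banned_until.get(addr)
            if expiry is None:
                return False
            if time.time() >= expiry:
                del self.banned_until[addr]  # ban expired, forget about it
                return False
            return True

    # Separately, remember share hashes we've already rejected under our
    # ruleset so we don't re-download and re-validate them over and over.
    known_invalid_share_hashes = set()

    def should_request_share(share_hash):
        return share_hash not in known_invalid_share_hashes

Of course, the temporary ban would also need to override reconnects to peers specified with -n for the duration of the ban, otherwise we'd just end up in the same loop I saw.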
I just bumped the protocol version up to 3200 and now require peers to have that protocol version. I think we're now ready to start testing with other people's nodes. I'd like to set up a chain of nodes connected over the internet at large (each with one incoming and one outgoing peer) so that I can get an idea of how long full shares will take to propagate over many hops. Any volunteers? A chain of 5 or so nodes should be sufficient. See the bottom of this post for instructions.
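To be clear, the topology I'm picturing is just a daisy chain: each volunteer points -n at the node of the person who posted before them, so a share found at one end has to cross every hop to reach the other end. Something like this (the addresses other than mine are placeholders, just to show the shape of it):

    # Hypothetical daisy-chain topology for the propagation test. Each node
    # makes one outgoing connection to the node listed before it.
    chain = [
        'ml.toom.im:9333',      # my node; the rest are placeholder addresses
        'node2.example:9333',
        'node3.example:9333',
        'node4.example:9333',
        'node5.example:9333',
    ]

    for prev, node in zip(chain, chain[1:]):
        print('%s runs: python run_p2pool.py -n %s' % (node, prev))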
After this test is finished, I can add some new DNS seed peers to the code to make it easier for people to join the fork. (Currently, you have to use the -n command-line option to manually specify a peer that you know is running my code.)
By the way, some rough performance numbers I've observed on my LAN using pypy and fast CPUs: it appears to take around 200 ms per hop to transmit and parse a 1 MB share, even when the share is being sent to localhost (i.e. with effectively infinite bandwidth). It then takes about 80-120 ms per worker (i.e. per mining address, not per mining rig) to generate and assign new work. After new work has been assigned, it seems to take Antminer S9s around 300 ms before they stop working on the old work and switch to the new work (or maybe that's just additional latency on the node side, where serialized processing of network input delays the processing of returned work). All told, this seems to add around 600 ms of latency.

Over the last week of operation, the forked pool is showing orphan and DOA rates of 0.59% and 1.2% respectively, or about 1.8% total, which is pretty much what we'd expect based on the observed latency (0.6 s / 30 s = 2%). This indicates that the changes I made to address the "Block-stale share punished!" and "Tried to broadcast a share without all of its transactions" issues appear to have worked in this code. The old fork would probably get a DOA+orphan rate above 10%, and certainly no less than 5%, even on a LAN. Most of that high orphan rate was purely random and shouldn't adversely affect revenue, but it could still create unnecessary orphan races, which benefit larger miners like me.
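For anyone who wants to check my arithmetic, the estimate is just latency divided by the share interval (a first-order approximation that ignores the details of orphan races; the numbers below are the rough LAN figures from above):

    # Rough stale-rate estimate: if it takes `total_latency` seconds for a new
    # share to reach a miner and for the miner to switch work, then roughly
    # total_latency / share_interval of the hashrate is spent on stale work.
    share_interval = 30.0  # seconds per share on the chain

    hop_s   = 0.200        # transmit + parse a ~1 MB share, one hop
    work_s  = 0.100        # generate and assign new work (80-120 ms observed)
    miner_s = 0.300        # time for an S9 to actually switch to the new work

    total_latency = hop_s + work_s + miner_s          # ~0.6 s
    expected_stale = total_latency / share_interval   # ~0.02

    print('total latency: %.1f s' % total_latency)
    print('expected DOA+orphan rate: %.0f%%' % (expected_stale * 100))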
It will be interesting to see how high the DOA+orphan rates end up being once this fork has more mining nodes. I'm guessing we'll end up a little under 10%, but maybe 5% is possible. We'll see.
Because the share chain holds about 2x as many transactions per share, memory consumption is about 2x higher. Expect about 1.5 GB with CPython and close to 4 GB with pypy. I'm sure this can be improved if it's a major problem for people, but it's not at the top of my priority list. (Do we even need all the transactions for old shares? Wouldn't the merkle root + coinbase be enough for shares that are too old to have their transactions reused?)
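To illustrate what I mean by that last parenthetical, something along these lines could work (a purely hypothetical sketch, not how the actual Share objects are laid out):

    # Hypothetical sketch: once a share is deep enough in the chain that new
    # shares can no longer reference its transactions, drop the full
    # transaction list and keep only what's needed to check its merkle root.
    class StoredShare(object):
        def __init__(self, merkle_root, coinbase, transactions):
            self.merkle_root = merkle_root
            self.coinbase = coinbase
            self.transactions = transactions  # full txs, only needed while recent

        def prune(self):
            # merkle_root + coinbase are kept so the share header can still be
            # validated; the raw transactions are freed.
            self.transactions = None

    def prune_old_shares(shares_by_height, tip_height, keep_recent=200):
        # keep_recent is a placeholder for however far back transaction
        # reuse can actually reach.
        for height, share in shares_by_height.items():
            if height < tip_height - keep_recent and share.transactions is not None:
                share.prune()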
If you want to contribute to the networked chain for share propagation testing, please clone https://github.com/jtoomim/p2pool/commits/1mb_hardforked, configure your node to point to the previous poster's node (e.g. python run_p2pool.py -n previous_node), and then post your node's IP, what it's connected to, what type and speed of CPU you're using, how fat your pipe is, and whether you're using pypy or CPython (regular python). Here's an example, to start us off:
Node IP: ml.toom.im (default ports)
Connected to: ml.toom.im:9334 and :9336 (or :9335 and :9337 for p2p ports)
To connect to me: run_p2pool.py -n ml.toom.im:9333
CPU: Core i7 4790k 4.0 GHz
Pipe: 100m/100m fiber
Using: pypy 2.4.0