Did Zed try doing just the opposite? Setting up his worker as BTCADDRESS/1 or something (that setting will force minimum difficulty shares, not actually diff 1)? Also, I know there was a p2pool fork that had difficulty determined by individual workers, not by the node itself. I forget who wrote it...
The whole point of the "/" parameter was to prevent exactly what's happening on your node... where one very large miner effectively shuts out small miners.
For example, looking at p2pool log data from when I last ran a node (yeah, I still have those old logs):
2015-12-01 08:07:17.602682 New work for worker! Difficulty: 500.000000 Share difficulty: 1403930.065645 Total block value: 25.556069 BTC including 3236 transactions
The "Difficulty" comes from using the "+" parameter. The "Share difficulty" comes from using the "/" parameter. In the case above, I used "BTCADDRESS+500". Since I didn't use the "/", that worker's difficulty was calculated to be the share chain difficulty.
To determine the share difficulty, the code checks to see if you've passed in a value using the "/". If not, it determines one based on your node's hash rate vs. the total p2pool hash rate and compares that to the minimum share difficulty needed to get a share on the chain. If the computed value is lower than the minimum, it uses the minimum; otherwise, it uses the computed value. However, if you do pass something in via "/", it uses that value (unless it's lower than the minimum share difficulty, in which case it uses the minimum).
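If it helps, here's roughly what that decision boils down to. This is a minimal sketch in Python, not the actual p2pool source; the function name and the SPREAD_FACTOR scaling constant are mine, and only the clamping behaviour is the point:

# Sketch of the share-difficulty selection described above. Not p2pool's real code:
# SPREAD_FACTOR is a made-up placeholder; the decision logic (derive-or-honour the
# "/" value, then clamp to the chain minimum) is what matters.

SPREAD_FACTOR = 20.0  # placeholder tuning constant, NOT p2pool's actual value

def pick_share_difficulty(requested, node_hashrate, pool_hashrate, min_share_diff):
    """Return the share difficulty a worker actually mines against.

    requested      -- value given via the "/" suffix, or None if it wasn't used
    node_hashrate  -- this node's hash rate (H/s)
    pool_hashrate  -- total p2pool hash rate (H/s)
    min_share_diff -- current minimum difficulty to land a share on the chain
    """
    if requested is not None:
        # "/" was supplied: honour it, but never drop below the chain minimum,
        # so BTCADDRESS/1 really means "give me the minimum", not diff 1.
        return max(requested, min_share_diff)

    # No "/": derive a difficulty from this node's slice of the total pool hash
    # rate, so a node with a big fraction of the pool targets harder shares and
    # can't flood the chain. Then clamp to the chain minimum.
    derived = min_share_diff * (node_hashrate / pool_hashrate) * SPREAD_FACTOR
    return max(derived, min_share_diff)

# A small node with no "/" override just lands on the chain minimum, which is
# what the log line above shows:
print(pick_share_difficulty(None, 5e12, 2e15, 1403930.065645))  # 1403930.065645
# And "/1" on the same node comes back as the minimum too, not as diff 1:
print(pick_share_difficulty(1.0, 5e12, 2e15, 1403930.065645))   # 1403930.065645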
By doing these calculations, p2pool as a whole tries to prevent a single actor from flooding the share chain. However, since miners can override it, I'm not sure how effective a strategy it really is. Hence the numerous debates we had in the past. On the one hand, p2pool tries to ensure no one actor can adversely dominate the chain. It does this by increasing the share difficulty on the node to compensate. This works great if every miner runs his/her own node (which is truly how p2pool was envisioned to be utilized). On the other hand, it fails when you have multiple differently sized miners on a single node. Now the poor small guy suffers some pretty nasty variance because the share difficulty is far larger than his hash rate would warrant.
The "official" solution was to offer the "/" parameter, so that all miners on a node could manually override the node's set difficulty. As I pointed out earlier, this means a non-scrupulous actor with a comparatively large hash can override to use a minimum share difficulty and flood the chain. Eventually, the entire p2pool network catches up, though and the overall minimum share difficulty is raised to match the new larger hash rate.
That last sentence is yet another area we've debated countless times, and exposes the largest flaw in the p2pool design: the more hash rate the pool gets, the more variance the miners suffer.
In a typical pool setup (like my pool), the more hash rate the pool gets, the less variance each miner sees. This makes miners happy because they get statistically closer to the expected daily payouts the online calculators show. In p2pool, the more hash rate, the higher the minimum share difficulty, and the fewer shares you'll have on the chain to be paid for.
In an extreme example, imagine the entire network was on p2pool. The current diff is 310,153,855,703.43, which translates into one block every 600 seconds. P2Pool strives to get one share every 30 seconds. Therefore, if every miner was on p2pool, the minimum share difficulty to get a share on the chain would be 15,507,692,785.1715. Imagine that. An S9 would be expected to take about 55 days just to get a single share on the chain. Talk about variance! Using that same example, let's assume every single miner was on my pool. A block would still be found every 600 seconds, and the miner with the S9 would make a consistent 0.01135 BTC a day. Yes, I purposefully ignored luck factors in the two examples.
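For anyone who wants to check my arithmetic, here's the back-of-the-envelope version in Python. The ~14 TH/s figure for an S9 and the 12.5 BTC block reward are my assumptions, and luck and fees are ignored just like above:

# Back-of-the-envelope check of the two examples above. Assumptions: an S9 does
# roughly 14 TH/s and the block reward is 12.5 BTC; luck and fees are ignored.

network_diff  = 310_153_855_703.43   # Bitcoin difficulty used in the example
block_period  = 600.0                # seconds per block
share_period  = 30.0                 # p2pool aims for one share every 30 seconds
s9_hashrate   = 14e12                # ~14 TH/s (assumed)
block_reward  = 12.5                 # BTC (assumed, fees ignored)

# If the whole network were on p2pool, the minimum share difficulty scales down
# from the block difficulty by share_period / block_period:
min_share_diff = network_diff * share_period / block_period
print(min_share_diff)                # ~15,507,692,785.17

# Expected time for one S9 to find a single share at that difficulty:
seconds_per_share = min_share_diff * 2**32 / s9_hashrate
print(seconds_per_share / 86400)     # ~55 days

# Same S9 if the whole network were on a conventional pool instead:
network_hashrate = network_diff * 2**32 / block_period
daily_btc = (s9_hashrate / network_hashrate) * (86400 / block_period) * block_reward
print(daily_btc)                     # ~0.0113 BTC/day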