…
At this point I'd say just find a way to put the forks on the market and let's arbitrage it out. I will concede if a fork cannot gain the market-cap advantage, and I suspect the small-blockers will do likewise if Core loses it. Money talks.
I had a strange idea recently: what if we don't even bother with BIP100, BIP101, etc., or with trying to reach "consensus" in some formal way? What if, instead, we just make it very easy for node operators to adjust their own block size limit? Imagine a drop-down menu where you can select "1 MB, 2 MB, 4 MB, 8 MB, … ." What would happen?
Personally, I'd just select some big block size limit, like 32 MB. That way, I'd be guaranteed to follow the longest proof-of-work chain regardless of what the effective block size limit becomes, and I'd expect many people to do the same. Eventually it becomes obvious that the economic majority supports a larger limit, and a brave miner publishes a block that is 1.1 MB in size. We all witness that the block was indeed included in the longest proof-of-work chain, and suddenly all miners are confident producing 1.1 MB blocks. The effective block size limit thus slowly creeps upward, as this process repeats while demand for block space grows.
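The dynamic above can be sketched as a toy model: each node picks its own limit, accepts only chains whose blocks all fit under that limit, and follows the chain with the most proof of work among those it accepts. Everything here (the `acceptable` and `best_chain` helpers, the dict-based blocks) is my own hypothetical illustration, not real Bitcoin code.

```python
MB = 1_000_000

def acceptable(chain, limit_bytes):
    """A node with the given limit accepts a chain only if every block fits under it."""
    return all(block["size"] <= limit_bytes for block in chain)

def best_chain(chains, limit_bytes):
    """Fork choice: most cumulative work among the chains this node accepts."""
    valid = [c for c in chains if acceptable(c, limit_bytes)]
    return max(valid, key=lambda c: sum(b["work"] for b in c), default=None)

# Two competing tips: one keeps every block at 1 MB, the other extends
# further but includes 1.1 MB blocks (the "brave miner" scenario).
small_chain = [{"size": 1 * MB, "work": 1}] * 4
big_chain = [{"size": 1 * MB, "work": 1}] * 3 + [
    {"size": int(1.1 * MB), "work": 1},
    {"size": int(1.1 * MB), "work": 1},
]

# A node that selected a 32 MB limit follows whichever chain has more work:
print(best_chain([small_chain, big_chain], 32 * MB) is big_chain)   # True
# A node still enforcing 1 MB rejects the heavier chain and stays behind:
print(best_chain([small_chain, big_chain], 1 * MB) is small_chain)  # True
```

The point the toy makes: nodes that pre-select a generous limit never risk forking themselves off the most-work chain, while nodes enforcing a tight limit only fall behind once the economic majority has moved.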
TL;DR: maybe we don't need a strict definition of the max block size limit.