Thank you for taking the time to make a detailed response; as usual there is a lot to consider.
I want to add a few points.
We've actually hit the soft limit before, consistently and for long periods, and did not see the negative effects described there (beyond confirmation times for lower-fee transactions going up, of course).
This is at odds with an earlier reply on this, which matched my recollection, especially regarding the 250KB soft-limit on March 6th, 2013:
Unfortunately over-eager increases of the soft-limit have denied us the opportunity to learn from experience under congestion and the motivation to create tools and optimize software to deal with congestion (fee-replacement, micropayment hubs, etc).
And this had no hope of being properly exercised as, IIRC, not all mining pools were on board: Eligius had a 500KB limit at the time, and a significant percentage of the hashing power.
Look at the huge abundance of space-wasting uncompressed keys (it requires ~one line of code to compress a bitcoin pubkey) on the network to get an idea of how little pressure there exists to optimize use of the blockchain public-good right now.
Regarding efficiencies, your comment at the same time about compressed pubkeys deserves attention. Are you saying that new blocks in the blockchain could easily be made smaller with this compression? It seems a valuable benefit at this time.
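For context on why the quote calls this ~one line of code: a compressed pubkey simply drops the y coordinate and keeps its parity, shrinking each key from 65 to 33 bytes. A minimal sketch (an illustrative helper, not a real wallet routine; it does not verify the point lies on the curve):

```python
def compress_pubkey(uncompressed: bytes) -> bytes:
    """Turn a 65-byte uncompressed SEC pubkey (0x04 || x || y) into the
    33-byte compressed form (parity prefix || x)."""
    assert len(uncompressed) == 65 and uncompressed[0] == 0x04
    x = uncompressed[1:33]
    y = int.from_bytes(uncompressed[33:65], "big")
    prefix = b"\x02" if y % 2 == 0 else b"\x03"  # 0x02 = even y, 0x03 = odd y
    return prefix + x
```

Since the y coordinate is recoverable from x and the parity bit on secp256k1, nothing is lost; every uncompressed key in an output script is roughly 32 wasted bytes.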
Fortunately, the fee market arbitrates access neutrally; but it does that at arbitrary scale. Mike completely disregards this because he believes transactions should be free (which should be ringing bells for anyone thinking X MB blocks won't be constantly X MB; especially with all the companies being created to do things like file storage in the Bitcoin blockchain).
Then perhaps the provision of free transactions should be reviewed in light of vanishing block space. A simple change might be doubling the necessary days-destroyed per BTC.
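To make the days-destroyed suggestion concrete: a sketch assuming the legacy coin-age "priority" rule used by older Bitcoin Core versions, where a transaction relays free if sum(input value × input age) / size clears a threshold (historically about 1 BTC aged one day in a 250-byte transaction). The doubled threshold is just my reading of the proposal above, not an existing parameter:

```python
COIN = 100_000_000  # satoshis per BTC

def tx_priority(inputs, tx_size_bytes):
    """Legacy priority: sum over inputs of (value in satoshis * age in
    blocks), divided by the transaction size in bytes."""
    return sum(value_sat * age_blocks for value_sat, age_blocks in inputs) / tx_size_bytes

# Historical free-relay threshold: 1 BTC aged ~1 day (144 blocks), 250-byte tx.
FREE_TX_THRESHOLD = COIN * 144 / 250

# The change suggested above: require roughly twice the days-destroyed.
STRICTER_THRESHOLD = 2 * FREE_TX_THRESHOLD
```

Under the stricter threshold, the same 1 BTC input would need to sit unspent for about two days (or 2 BTC for one day) before the transaction qualified as free.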
One of the mechanisms to make running against the fee more tolerable, which is simple to implement and easy for wallets to handle, is replace-by-fee (in the boring, greater-outputs mode; not talking about the scorched-earth stuff)-- but that's something that Mike has vigorously opposed for some reason.
If Jeff is OK with RBF in the boring mode then that would be a good improvement when block space is under pressure. But I know he is rightly exercised over RBF-SE, which is yet another ideological debate in itself.
The particular issue there is that the reject messages are fundamentally unhelpful for this (though they're a nice example of another railroaded feature, one that introduced a remotely exploitable vulnerability). The issue is that just because node X accepted your transaction doesn't tell you that node Y, N hops away, did or didn't; in particular it doesn't tell you whether even a single miner anywhere in the network rejected it-- what would you expect to avoid this? Every node flooding a message for every transaction it rejects to every other node (e.g. a rejection causing nodes^2 traffic)? Nodes do produce rejects today; but it isn't anyone's opinion that prevents a guarantee there, the nature of a distributed/decentralized system does. The whole notion of a reject being useful here is an artifact of erroneously trying to shoehorn a model from centralized client/server systems into Bitcoin, which is fundamentally unlike that.
OK. It makes sense that reject messages do not fit into decentralized systems in a meaningful way.
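Rough arithmetic for the nodes^2 objection in the quote above (the node count is an illustrative assumption, not a measurement):

```python
# If every node flooded a reject notice to every other node, one
# contentious transaction would cost on the order of N*(N-1) messages.
nodes = 6_000  # assumed size of the listening-node network
messages_per_reject = nodes * (nodes - 1)
print(messages_per_reject)  # 35994000 -- tens of millions of messages
```

Against a few thousand messages to simply relay the transaction in the first place, the asymmetry makes clear why network-wide reject propagation was never a viable design.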
The comments about the filling up memory stuff are grimly amusing to me in general for reasons I currently can't discuss in public (please feel free to ping in six months).
Sure thing. I would be keen to learn more about this at that time.
Overall, I think the article does a good job of suggesting that the goal of the recent blocksize proposal is a far more wide-spanning change than just incrementing the blocksize to make necessary room, and that it's also a move to replace the original long-term security model with an underspecified one which doesn't involve fees; a trajectory toward an unlimited blocksize that processes any transactions that come in, at any cost, even if that means centralizing the whole network onto a single mega-node in order to accept the scale. Or at least that appears to be the only move that has a clear answer to the case of 'there will be doom if the users make too many transactions' (the answer being that the datacenter adds more storage containers full of equipment).
Well, the goal of my interest in the block-size proposal is to avoid a sustained decay in confirmation times, which would otherwise mean a negative experience for a great many users, negative publicity, and a tarnished image of Bitcoin in the minds of future users, who may then decide not to try it out. All of these together would probably reduce full-node numbers even faster than larger blocks would.