If I were running the show I'd go straight to the Blockstream guys and try to form an alliance fostering sidechains, and hope they didn't slam the door in my face. I'd also abandon, as temporarily as possible, the notion that there is no 'impure' expedient means of dealing with eventualities, and I'd take a chance that openness and credibility would instill enough confidence to carry the effort through to a clearly dominant position. My ideal Bitcoin would anticipate days-long confirmations, which would be enough time to deploy almost any sort of patch needed.
MP's going to do what he's going to do, but I'd be highly inclined to have the pogos be free (in the software sense) and open. Part of it would be principle, but another part would be directed toward the effort of gathering confidence. Mostly it would be because it's awfully difficult to release bug-free code, particularly when it is complex (even if relatively well tested) and under attack by smart people. I might be inclined to install a kill fuse of some sort, which would be (to me) a legitimate demand from the guy paying the bills. Something like requiring the device to ping the mothership occasionally, and having it melt down if that fails (and the mothership would know if the device had been naughty.)
All this stuff shifts away from the 'purity' I value on philosophical grounds, but I'm more of an engineer than a philosopher. It would be a minor miracle if Bitcoin survives the years-long three-pronged Vessenes/Andresen/Hearn attack, so I'll bend my principles a bit to try to provoke that outcome. Especially if there were a clear path to such unsightly things being temporary.
I agree with some of this, and while I think 20MB is fine, I don't like the notion of automatically incorporating code to keep raising the limit within this hard fork. It isn't exactly clear whether that is what is intended or those tests are merely hypothetical tests for the future. If we have to keep increasing the block limit, fine, but I prefer the resistance of a hard fork each time, to encourage us to use other solutions first and foremost. My concerns have more to do with bandwidth, Tor, and network propagation than disk space, and there are legitimate concerns if we don't balance this right.
I am also open to the idea of scaling the adjustments slowly up to 20MB, or of taking a previous two-week average and multiplying it by 3 to set the block limit dynamically, as others have suggested, with an upper hard cap between 20 and 50MB while we see if sidechains can provide another solution.
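The dynamic-limit idea above could be sketched roughly like this: average the block sizes over the previous two weeks of blocks, multiply by 3, and clamp to a hard cap. This is only an illustration of the rule as described, not any real proposal's code; the window size, cap, and floor constants are all assumptions on my part.

```python
TWO_WEEKS_OF_BLOCKS = 2016          # ~2 weeks at one block per 10 minutes (assumed window)
HARD_CAP = 20 * 1_000_000           # assumed upper hard cap, e.g. 20MB
FLOOR = 1 * 1_000_000               # assumed floor: never drop below the original 1MB limit

def dynamic_block_limit(recent_block_sizes):
    """Return the next block size limit in bytes.

    recent_block_sizes: sizes (in bytes) of recent blocks; only the last
    ~2016 are used, approximating a two-week average.
    """
    window = recent_block_sizes[-TWO_WEEKS_OF_BLOCKS:]
    avg = sum(window) / len(window)
    # 3x the two-week average, clamped between the floor and the hard cap.
    return int(min(max(3 * avg, FLOOR), HARD_CAP))
```

For example, if blocks averaged 400KB over the window, the limit would float at 1.2MB; with 10MB average blocks the 3x rule would want 30MB, but the hard cap holds it at 20MB.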
Unfortunately, I don't believe there is time to properly test and incorporate Merkle tree pruning before we need to increase the block limit, but that should be a focus, as should invertible Bloom filters.