...
There are issues among such a group of people that we probably can't hope to understand: the politics, the interpersonal clashes, the miscommunications, the lingering grudges coloring everything. Again, I think your implication may be right; this may be more a social issue than a technical one.
Interesting. I decided to read all of Greg's recent posts and found one which makes his position much clearer.
There is a soft blocksize limit in addition to the hard one. Originally it wasn't easy to adjust. We ran into the soft limit, and were pushing into it for months at a time back at the end of 2012 and beginning of 2013. Transactions slowed down and there was some complaining, but the wheels did not fall off and Bitcoin's adoption grew substantially during that time. A lot of technical innovation happened then-- in particular replace-by-fee was invented and child-pays-for-parent was deployed. After the soft limits were increased, development on these improvements went fallow, sadly (e.g. CPFP was never merged, or matured to a merge-ready state, in Bitcoin Core).
The experience we have says there will not likely be a dire emergency. We also have reason to believe, from the prior accidental quasi-hardfork, that the mining portion of the network can be updated within a day or two during an actual emergency. A straightforward blocksize bump also has the benefit of being completely compatible with existing SPV clients (they can't see the blocksize). If there really were a dire situation where it was larger blocks or doom-- I'm confident that larger blocks could be deployed quickly without much trouble; and in that kind of situation, consensus would be easy. No matter how concerned people are about larger sizes, if the choice is really larger or a useless network, the former is preferable to everyone. There is also plenty of room for other creativity, as we saw before in 2013, should the need arise, but it can be hard to predict in advance.
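His parenthetical about SPV clients is easy to verify against the protocol: an SPV client only ever downloads the 80-byte block header (plus merkle branches for its own transactions), and nothing in the header encodes the block's size. Here is a minimal Python sketch of the header layout, with field offsets per the Bitcoin block header format (the helper names are my own, not anything from Bitcoin Core):

import hashlib
import struct

def parse_header(raw: bytes) -> dict:
    """Parse an 80-byte Bitcoin block header.

    Note what is absent: there is no block-size field anywhere,
    which is why a blocksize change is invisible to SPV clients.
    """
    assert len(raw) == 80
    version, = struct.unpack_from("<i", raw, 0)
    prev_hash = raw[4:36][::-1].hex()     # hash of the previous block (big-endian display)
    merkle_root = raw[36:68][::-1].hex()  # commits to the block's transactions
    timestamp, bits, nonce = struct.unpack_from("<III", raw, 68)
    return {
        "version": version,
        "prev_hash": prev_hash,
        "merkle_root": merkle_root,
        "time": timestamp,
        "bits": bits,
        "nonce": nonce,
    }

def header_hash(raw: bytes) -> str:
    """Double SHA-256 of the header: all an SPV client checks proof-of-work against."""
    return hashlib.sha256(hashlib.sha256(raw).digest()).digest()[::-1].hex()

Since proof-of-work and transaction inclusion are both checked against the header alone, a larger block changes nothing an SPV client ever parses.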
He doesn't really want to engage with the mechanics of a block size increase right now, because his opinion is that the limit can safely be maxed out. I wish he had said this a couple of years ago on BCT, because then the pros and cons of letting the limit be hit could have been hammered out before the question of how to change the limit needed addressing (which is where we are now). Maybe he didn't say this two years ago because it wasn't his position then?
Compromise is very difficult when one side does not recognize that an urgent problem, or even just that a problem, exists.
He is looking at it on a very technical level. The advantages of hitting the limit are technical: it speeds the development of some software components, forces the system to work under adversity, and so on. The downsides are non-technical: a PR disaster, a collapsing price, loss of VC enthusiasm, academics noting that a decentralized community cannot be trusted to manage a global currency. IMHO, these downsides far outweigh the technical, software benefits. It is simply playing with fire.
He also forgets that not all mining pools obeyed the soft limits in 2013. For example, when the soft limit was 250 KB, Eligius was mining 500 KB blocks; when the soft limit was 500 KB, Eligius mined 750 KB. There was always some spare capacity below the hard limit, which does not exist at 1 MB.
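The spare-capacity point can be made concrete with the figures just quoted, taking the 1 MB consensus cap as fixed throughout (a back-of-the-envelope sketch, not measured data):

HARD_LIMIT_KB = 1000  # the consensus cap, ~1 MB

# (prevailing soft limit, largest blocks actually being mined), in KB,
# per the 2013 episodes described above
episodes = [(250, 500), (500, 750), (1000, 1000)]

for soft, mined in episodes:
    headroom = HARD_LIMIT_KB - mined
    print(f"soft limit {soft} KB, blocks up to {mined} KB "
          f"-> {headroom} KB of slack before the hard cap")

This prints 500, 250, and 0 KB of slack respectively: once the soft limit reaches the hard limit, the cushion that absorbed 2013's overruns is gone.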
He repeatedly says that he is worried about centralization and the loss of nodes. Well, the fact is that the fastest way to kill off the most nodes at once is to deploy a hard-fork quickly. A hard-fork needs as much time as possible to be as smooth as possible, keeping the network intact. Nodes left on an old fork are not guaranteed to upgrade; some will just switch off.
Yes, again, you can see Greg as the nerd in the basement with his trains, optimizing all the gears and mechanisms but totally oblivious to what is going on in the kitchen.
Btw, I had trains in the basement. But I did come up from the dungeon eventually.