I think this shows great consideration and judgement. In particular, I note and emphasize the following:
100% of the last 1000 blocks is a straw-man; the actual criteria would probably be different ... since this change means the first valid greater-than-MAX_BLOCK_SIZE-block immediately kicks anybody running old software off the main block chain.
What I think is of greatest value in Gavin's quote is that it's inclusive of data from the field. It's not him unilaterally saying the new size will be X, deal with it. Instead he essentially says the new size can be X if Y and Z are also true. It appears he has a regard for the ability of the market to decide. Indeed, no change remains an option and is actually the default.
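To make the shape of such a trigger concrete, here is a minimal sketch in C++; the 1000-block window, the 75% threshold, and the signalling mechanism are placeholders of my own, not figures from the actual proposal:

#include <cstddef>
#include <vector>

// One entry per block, newest last; 'true' means that block's miner
// signalled support for a larger maximum block size (placeholder mechanism).
using BlockSignals = std::vector<bool>;

const std::size_t VOTE_WINDOW    = 1000; // look back over the last 1000 blocks
const std::size_t VOTE_THRESHOLD = 750;  // e.g. a 75% supermajority (placeholder)

bool LargerBlocksActivated(const BlockSignals& chain)
{
    if (chain.size() < VOTE_WINDOW)
        return false;                                   // default: no change
    std::size_t votes = 0;
    for (std::size_t i = chain.size() - VOTE_WINDOW; i < chain.size(); ++i)
        if (chain[i])
            ++votes;
    return votes >= VOTE_THRESHOLD;
}

Note the default remains no change until the threshold is met.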
Think through this voting proposal carefully: can you think of a way to game the vote?
I completely disagree. Think how easily this issue could have been solved if, in 2009, Satoshi had implemented a rule such as the one Jeff suggests here:
My off-the-cuff guess (may be wrong) for a solution was: if (todays_date > SOME_FUTURE_DATE) { MAX_BLOCK_SIZE *= 2, every 1 years } [Other devs comment: too fast!] That might be too fast, but the point is, not feedback based nor directly miner controlled.
I think the above could be a great solution (though I tend to agree it might be too fast). However, implementing it now will meet resistance from anyone who feels it overlooks their views. If Satoshi had implemented it then, it wouldn't be an issue now; we would simply be dealing with it and the market would be working around it. Now, however, there is a lot of money tied up in protocol changes and many more views about what should or shouldn't be done. That will only increase, which means the economic and financial damage possible from ungraceful changes increases as well.
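For concreteness, a hedged sketch of what such a predetermined schedule might look like; the start date, doubling interval, and base size are assumptions of mine for illustration, not parameters anyone has agreed to:

#include <cstdint>

const int64_t BASE_MAX_BLOCK_SIZE = 1000000;    // 1 MB starting point
const int64_t SCHEDULE_START_TIME = 1420070400; // placeholder: 2015-01-01 UTC
const int64_t SECONDS_PER_YEAR    = 31536000;   // 365 days
const int     DOUBLING_YEARS      = 2;          // placeholder; Jeff suggested 1, others said too fast

// Maximum block size in effect at a given block timestamp.
// Purely time-based: no feedback loop and no direct miner control.
int64_t GetMaxBlockSize(int64_t nBlockTime)
{
    if (nBlockTime <= SCHEDULE_START_TIME)
        return BASE_MAX_BLOCK_SIZE;
    int64_t doublings = (nBlockTime - SCHEDULE_START_TIME)
                        / (DOUBLING_YEARS * SECONDS_PER_YEAR);
    if (doublings > 30)                 // clamp so the shift cannot overflow
        doublings = 30;
    return BASE_MAX_BLOCK_SIZE << doublings;
}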
There was some heated IRC discussion on this point - gmaxwell made the excellent comment that any type of exponentially increasing blocksize function runs into the enormous risk that the constant is either too large, and the blocksize blows up, or too small, and you need the off-chain transactions it was supposed to avoid anyway. There is very little room for error, yet changing it later if you get it wrong is impossible due to centralization.
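gmaxwell's point is easy to see with a throwaway comparison of two constants over twenty years (simple arithmetic, not a projection):

#include <cstdint>
#include <cstdio>

int main()
{
    const int64_t baseMB = 1;                                // start at 1 MB
    for (int year = 0; year <= 20; year += 5) {
        int64_t doubleEveryYear     = baseMB << year;        // is this constant too large?
        int64_t doubleEveryTwoYears = baseMB << (year / 2);  // or is this one too small?
        std::printf("year %2d: %lld MB vs %lld MB\n",
                    year,
                    (long long)doubleEveryYear,
                    (long long)doubleEveryTwoYears);
    }
    return 0;
}

After twenty years the two schedules differ by a factor of about a thousand (roughly 1 TB blocks versus roughly 1 GB blocks), which is the sense in which there is very little room for error in picking the constant.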
I also note that early in Jeff's post he says he reversed his earlier stance; my point being that people are not infallible. I actually agree with his updated views, but what if they too are wrong? Who is to say? The same could apply to Gavin. That's why I think it's wise that he appears to include a response from the market in any change, with no change as the default.
Same here. I read Satoshi's predictions about achieving scalability through large blocks about two years ago and simply accepted it as inevitable and fine. It was only much, much later that I thought about the issue carefully and realized how dangerous the implications would be for Bitcoin's decentralization.
I am already concerned about the centralization seen in mining: only a handful of pools are mining most of the blocks, so decentralization is already being lost there. Work is needed in two areas before the argument for off-chain solutions becomes strong: first, blockchain pruning; second, initial propagation of headers (presumably with the associated UTXO data) so that hashing can begin immediately while the latest block is still propagating and its verification runs in parallel (see the sketch below). These would help greatly to preserve decentralization.
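As a sketch of the second item only (the structure and function names below are mine, not anything in the reference client): on hearing a new header, a miner starts hashing an empty block on top of it immediately, then switches to a full block, or reverts, once the parent finishes verifying.

#include <iostream>
#include <string>

// Hypothetical hooks for illustration; the reference client exposes no such API.
void StartMiningEmptyBlockOn(const std::string& tip)       { std::cout << "mine empty block on " << tip << "\n"; }
bool DownloadAndVerifyFullBlock(const std::string& tip)    { std::cout << "verify " << tip << "\n"; return true; }
void RestartMiningWithTransactions(const std::string& tip) { std::cout << "mine full block on " << tip << "\n"; }
void RevertToPreviousTip()                                 { std::cout << "revert to previous tip\n"; }

// Header arrives first; hashing begins at once while the full block is
// fetched and verified in parallel, so no hash power sits idle.
void OnNewHeader(const std::string& headerHash)
{
    StartMiningEmptyBlockOn(headerHash);
    if (DownloadAndVerifyFullBlock(headerHash))
        RestartMiningWithTransactions(headerHash);
    else
        RevertToPreviousTip();              // the header's block turned out to be invalid
}

int main()
{
    OnNewHeader("placeholder-block-hash");  // illustrative only
    return 0;
}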
(snip)
You mention your background in passing, so I will just mention mine. I spent many years at the heart of one of the largest TBTF banks, working on its equities proprietary trading system. For a while, 1% of the shares traded globally (by value) was our execution flow. On average, every three months we encountered limitations of one sort or another (software, hardware, network, satellite systems), yet every one of them was solved by scaling, rewriting or upgrading. We could not stand still: the never-ending arms race for market share meant that to accept a limitation was to throw in the towel.
It's good that you have had experience with such environments, but remember that centrally run systems, even if the architecture itself is distributed, are far, far easier to manage than truly decentralized systems. When your decentralized system must be resistant to attack, the problem is even worse.
As an example, consider your (and Mike's) proposal to propagate headers and/or transaction hash information, rather than full transactions. Can you think of a way to attack such a system? Can you think of a way that such a system could make network forking events more likely? I'll give you a hint for the latter: reorganizations.
It's my impression that the 2009 Satoshi implementation didn't have a block size limit - that it was a later addition to the reference client as a temporary anti-spam measure, which was left in until it became the norm.
Is this impression incorrect?
Bitcoin began with an effective 32MiB block size limit (blocks were capped indirectly by the maximum network message size, not by an explicit block size rule); Satoshi added the explicit 1MB limit later, in 2010.
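For reference, a self-contained restatement of the two limits involved; the constant values are the historical ones, but the surrounding code is my own paraphrase, not a quotation of the client source:

#include <cstddef>

// The 2009 client had no explicit block size rule; blocks were indirectly
// capped by the maximum size of a single network message.
const unsigned int MAX_SIZE = 0x02000000;      // 32 MiB message cap

// The explicit consensus limit Satoshi added in 2010.
const unsigned int MAX_BLOCK_SIZE = 1000000;   // 1 MB

// Sketch of the consensus check: a block is rejected if its serialized
// size exceeds MAX_BLOCK_SIZE.
bool CheckBlockSize(std::size_t nSerializedSize)
{
    return nSerializedSize <= MAX_BLOCK_SIZE;
}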