Some ideas to throw into the pile:
Idea 1: Quasi-unanimous forking.
If a block-size fork is attempted, it is critical to minimize disruption to the network. Setting it up well in advance based on block number is OK, but that lacks any kind of feedback mechanism. I think something like
if (block_number > 300000) AND (previous 100 blocks are all version > 2)
then { go ahead and raise MAX_BLOCK_SIZE }
Maybe 100 isn't enough, but if all of the blocks in a fairly long sequence have been published by miners who have upgraded, that's a good indication that a very large supermajority of the network has switched over. I remember reading about something like this in the qt-client documentation (the version 1 -> 2 transition?) but can't seem to find it.
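As a rough sketch of the rule above (the constants and function name here are my own placeholders, not anything from the actual client):

```python
ACTIVATION_HEIGHT = 300000  # earliest block number at which the fork may trigger
WINDOW = 100                # number of recent blocks that must signal upgrade
MIN_VERSION = 3             # "version > 2"

def should_raise_max_block_size(height, recent_versions):
    """recent_versions: block header versions of the most recent blocks.

    Returns True only past the activation height AND when every one of
    the last WINDOW blocks was published with the upgraded version."""
    if height <= ACTIVATION_HEIGHT:
        return False
    window = recent_versions[-WINDOW:]
    return len(window) == WINDOW and all(v >= MIN_VERSION for v in window)
```

A longer WINDOW makes the trigger more conservative, at the cost of waiting longer for the fork to activate.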
Alternatively, instead of relying only on block header versions, also look at the transaction data format version (the first 4 bytes of a tx message header). Looking at the protocol, every tx published in a block carries that version field, so we could even say "if no more than 1% of all transactions in the last 1000 blocks are still below version 2, it's OK to switch to version 3".
This has the disadvantage of possibly taking forever if there are even a few holdouts (da2ce7?), but my thinking is that agreement and avoiding a split blockchain are of primary importance, and a block size change should only happen if it's almost unanimous. Granted, "almost" is ambiguous: 95%? 99%? Something like that, though. That way, anyone who hasn't upgraded after a long time and has somehow ignored all the advisories would just see blocks stop coming in.
Idea 2: Measuring the "Unconfirmable Transaction Ratio"
I agree with gmaxwell that an unlimited max block size could, long term, mean disaster. While the 25BTC reward is coming in now, I think competition for block space will more securely incentivize mining once the block reward has diminished. So basically, blocks should be full. In a bitcoin network 10 years down the road, MAX_BLOCK_SIZE should be a limit that we're hitting on basically every block, so that fees actually mean something. Let's say there are 5MB of potential transactions that want to be published, and only 1MB can be due to the size limit. You could then say there's a 20% block inclusion rate, in that 20% of the outstanding unconfirmed transactions made it into the current block.
I realize this is a big oversimplification, and you would need to define more clearly what constitutes that 5MB "potential" pool. Basically you want a clean measure of how much WOULD be confirmed except can't be due to space constraints. Every miner would report a different ratio given their inclusion criteria. But this ratio seems like an important aspect of a healthy late-stage network (by "late-stage" I mean most of the coins have been mined). Some feedback mechanism that maintains this ratio would seem to alleviate worries about mining incentives.
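The arithmetic behind the ratio is simple; sketching it (with the caveat from above that "pending" is whatever definition of the potential pool you settle on):

```python
def block_inclusion_rate(block_bytes, pending_bytes):
    """Fraction of the outstanding 'potential' transaction pool
    (measured by size) that fit into the current block."""
    if pending_bytes == 0:
        return 1.0  # nothing waiting: everything that wanted in, got in
    return min(1.0, block_bytes / pending_bytes)

# 1MB block, 5MB of potential transactions -> 20% inclusion rate
```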
Which leads to:
Idea 3: Fee / reward ratio block sizing.
This may have been previously proposed as it is fairly simple. (Sorry if it has; I haven't seen it but there may be threads I haven't read.)
What if you said:
MAX_BLOCK_SIZE = 1MB + (total_block_fees / block_reward) * 1MB
so that the block size scales up with the ratio of fees to reward. Right now, if you wanted a 2MB block, that block would need 25BTC in total fees; a 10MB block would take 225BTC in fees.
In 4 years, when the reward is 12.5BTC, 237.5BTC in fees will allow for a 20MB block.
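Working the formula through in code (a direct transcription of the rule above, nothing more):

```python
BASE = 1_000_000  # the existing 1MB limit, in bytes

def max_block_size(total_fees_btc, block_reward_btc):
    """MAX_BLOCK_SIZE = 1MB + (total_block_fees / block_reward) * 1MB"""
    return BASE + int((total_fees_btc / block_reward_btc) * BASE)

# 25 BTC in fees at a 25 BTC reward    -> 2MB
# 225 BTC in fees at a 25 BTC reward   -> 10MB
# 237.5 BTC in fees at a 12.5 BTC reward -> 20MB
```

Note the division by block_reward_btc is what breaks once the reward reaches zero, which is the long-term problem mentioned below.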
It's nice and simple and seems to address many of the concerns raised here. It does not remove miners' freedom to decide on fees -- blocks under 1MB follow the same fee rules as today. Other nodes will recognize a multi-megabyte block as valid if its tx fees exceed the reward (indicative of a high unconfirmable transaction ratio).
The problem with this is that it doesn't work long term, because the reward goes to zero. So maybe put a "REAL" max size at 1GB or something, as ugly as that is. Short to medium term, though, it seems like it would work. You may get an exponentially growing max block size, but it grows really slowly (doubling every few years). One problem I can think of is an attacker including huge transaction fees just to bloat the block chain, but that would be a very expensive attack: even if the attacker controlled his own miners, there's a high risk he wouldn't mine his own high-fee transactions.
Please let me know what you think of these ideas, not because I think we need to implement them now, but because I think thorough discussion of the issue can be quite useful for the time when / if the block size changes.