Actually, forget my earlier "heartbeat" block size. I have a better idea.
...
All that needs to happen is to allow the 1MB cap to be replaced by a capping algorithm that simply keeps the limit just ahead of demand. ...
I think this is right. It's effectively not a cap at all, just like the U.S. debt ceiling. The problem with the debt ceiling is that people, at least until recently, weren't paying attention, but there is still a check in place: raising the ceiling requires a vote.
Increasing the block size could happen the same way, but instead of congressmen ignorant of economics and/or indifferent to their voters, miners have a financial incentive to vote responsibly.
I think a brilliant idea of Gavin's is this:
A hard fork won't happen unless the vast super-majority of miners support it.
E.g. from my "how to handle upgrades" gist
https://gist.github.com/gavinandresen/2355445
Example: increasing MAX_BLOCK_SIZE (a 'hard' blockchain split change)
Increasing the maximum block size beyond the current 1MB per block (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) is a likely future change to accommodate more transactions per block. A new maximum block size rule might be rolled out by:
New software creates blocks with a new block.version
Allow greater-than-MAX_BLOCK_SIZE blocks if their version is the new block.version or greater and 100% of the last 1000 blocks are new blocks. (51% of the last 100 blocks if on testnet)
100% of the last 1000 blocks is a straw-man; the actual criteria would probably be different (maybe something like block.timestamp is after 1-Jan-2015 and 99% of the last 2000 blocks are new-version), since this change means the first valid greater-than-MAX_BLOCK_SIZE-block immediately kicks anybody running old software off the main block chain.
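To make the quoted rollout rule concrete, here is a minimal C++ sketch of that supermajority check; the struct, field names, and parameter values are illustrative assumptions of mine, not Bitcoin's actual code.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative only: a stripped-down block record carrying just the
// field the rollout rule cares about.
struct BlockInfo {
    int32_t nVersion;  // block.version announced by the miner
};

// Return true if greater-than-MAX_BLOCK_SIZE blocks may be accepted:
// nRequired of the last nWindow blocks must carry the new version
// (e.g. 1000 of the last 1000 on mainnet, 51 of the last 100 on testnet).
bool BigBlocksAllowed(const std::vector<BlockInfo>& chain,
                      int32_t nNewVersion,
                      std::size_t nWindow,
                      std::size_t nRequired)
{
    if (chain.size() < nWindow)
        return false;  // not enough history to measure support yet

    std::size_t nUpgraded = 0;
    // Count new-version blocks among the most recent nWindow blocks.
    for (std::size_t i = chain.size() - nWindow; i < chain.size(); ++i) {
        if (chain[i].nVersion >= nNewVersion)
            ++nUpgraded;
    }
    return nUpgraded >= nRequired;
}
```

With nRequired equal to the full window, a single old-version block in the window keeps the change switched off, which is the "either we all move forward the same way or we don't change at all" behavior described below.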
Checking for version numbers IMO is how almost all network changes should be handled - if a certain percentage isn't compliant, no change happens. Doing this would have prevented the recent accidental hard fork. It's what I call an anti-fork ideology: either we all move forward the same way or we don't change at all. That's important given the economic aspects of Bitcoin.
So we use this same model to meter block size. One of the points in the debate is that future technological advances can accommodate larger blocks without harming decentralization, but that's unfortunately unknowable in advance. No problem: let the block size increase by polling miners to see what they can handle.
Think of a train many, many boxcars long. Maybe the biggest, most impressive boxcars are up front near the engine powering along, but way back are small-capacity cars barely staying connected. To ensure no cars are lost, even the smallest car has powerful brakes that can limit the speed of the entire train.
Gavin's earlier thoughts are close:
.. (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) ...
The problem here is that within a network of increasingly centralized mining capacity, the median size over most any number of blocks will always be too high to account for small-scale miners, allowing larger limits by default.
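For contrast, here is roughly what that median-based floating limit could look like (a sketch only; the two-times multiplier and the window are my assumptions). Because the median tracks whatever the bulk of hashing power is producing, a few large miners filling big blocks will pull the limit up no matter what smaller miners can actually handle.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of a median-based floating limit: some multiple of the median
// size of the last few hundred blocks. Multiplier and window choice are
// assumptions for illustration.
uint64_t MedianFloatLimit(std::vector<uint64_t> recentSizes,  // sizes in bytes
                          uint64_t multiplier = 2)
{
    if (recentSizes.empty())
        return 1000000;  // fall back to the current 1MB

    // Partially sort so the middle element lands at its sorted position.
    const auto mid = recentSizes.size() / 2;
    std::nth_element(recentSizes.begin(), recentSizes.begin() + mid,
                     recentSizes.end());
    return recentSizes[mid] * multiplier;
}
```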
Instead we make it more like that train. The network checks for the lowest block limit (maybe in 100MB increments) announced by at least, say, 10% of all blocks every thousand blocks (or whatever). It can't be the absolute lowest value found at any given time, since some people will simply not change out of neglect. However, I think 10% or so sends a clear signal that people are not ready to go higher. At the same time, all miners have a financial incentive to allow higher capacity as soon as possible due to the fees they can collect.
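A minimal sketch of that rule, under a couple of assumptions of mine: each block carries a hypothetical field announcing the largest block size its miner can handle, and "10% of blocks" is read as roughly the 10th percentile of those announcements over the window. The numbers (10%, ~1000-block window, 1MB fallback) are the straw-man figures above.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// announcedLimits: the largest block size (in bytes) that each of the
// last ~1000 blocks announced its miner could handle. The announcement
// field itself is hypothetical; it does not exist in today's blocks.
// Returns the limit that the slowest ~10% of recent blocks still said
// they can keep up with, so a small minority can hold the cap down.
uint64_t NextSizeLimit(std::vector<uint64_t> announcedLimits,
                       double thresholdPct = 10.0,
                       uint64_t fallbackLimit = 1000000 /* current 1MB */)
{
    if (announcedLimits.empty())
        return fallbackLimit;

    // Position of the threshold, e.g. the 100th-smallest value of 1000.
    std::size_t idx = static_cast<std::size_t>(
        announcedLimits.size() * thresholdPct / 100.0);
    if (idx > 0)
        --idx;  // convert a 1-based count into a 0-based index

    // Partially sort so the idx-th smallest element is in place.
    std::nth_element(announcedLimits.begin(),
                     announcedLimits.begin() + idx,
                     announcedLimits.end());
    return announcedLimits[idx];
}
```

Raising thresholdPct makes the brakes stronger (more of the slowest miners must be ready before the cap rises); lowering it lets the limit track the faster majority more closely.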
This method would keep the block size ideal for decentralization as long as mining itself remained well decentralized. So it's like the 51% attack rationale: centralized miners could only become a monopoly by controlling nearly 100% of all blocks found.