Ideally, it should be possible to compute block size limit updates based purely on information in the blockchain.
One way to measure block propagation time is the orphan rate. At the moment, each block only links to its previous block; however, you could add a second, optional link to an orphan.
I suggest the following protocol update.
In the coinbase transaction, add an optional "ORP" field, similar to the pay-to-script-hash convention.
/ORP<32-byte hash of orphan>/ means that there was an orphan with the given hash within the current difficulty period.
A bare /ORP/ would just be a show of support, and is optional.
Miners should reject blocks that provide a 32-byte hash if the hash doesn't target a first-order orphan within the last difficulty period (each orphan can only be targeted once).
Orphan headers must be stored by nodes, but the block data can be discarded. This information is required to verify that the orphan was real.
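As a minimal sketch of the verification step, here is how a node might check a coinbase's ORP field against its stored orphan headers. The hex encoding of the hash, the `validate_orp_field` helper, and the `orphan_headers` store are illustrative assumptions, not part of any existing implementation.

```python
import re

# Hypothetical in-memory store of orphan block headers seen during the
# current difficulty period, keyed by hex block hash. A hash is removed
# once it has been claimed, since each orphan can only be targeted once.
orphan_headers = {}

# Assumes the hash is encoded as 64 hex characters between "/ORP" and "/";
# the exact encoding isn't specified in the proposal.
ORP_PATTERN = re.compile(rb"/ORP([0-9a-fA-F]{64})?/")

def validate_orp_field(coinbase_script: bytes) -> bool:
    """Return True if the coinbase's ORP field, if present, is acceptable."""
    match = ORP_PATTERN.search(coinbase_script)
    if match is None:
        return True                      # no ORP field at all: nothing to check
    orphan_hash = match.group(1)
    if orphan_hash is None:
        return True                      # bare /ORP/: just a show of support
    orphan_hash = orphan_hash.decode().lower()
    if orphan_hash not in orphan_headers:
        return False                     # must target a known first-order orphan
    del orphan_headers[orphan_hash]      # each orphan can only be targeted once
    return True
```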
If 95% of the network adds the ORP field, then blocks larger than 1MB would be allowed. This could be a "point of no return" event.
However, since changing the maximum block size is a hard fork anyway, maybe a separate vote isn't required.
If the orphan rate was less than 7.5%, then MAX_BLOCK_SIZE would increase by 10%; if it was more than 15%, it would drop by 10%. The exact numbers are open to discussion. Updates would happen at the end of each difficulty period.
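A rough sketch of that adjustment rule, assuming for illustration that the orphan rate is measured as orphans claimed in the period divided by the total of orphans plus main-chain blocks:

```python
# The thresholds (7.5% / 15%) and the 10% step are the figures proposed above;
# everything else (the names, the 2016-block period in the denominator, and the
# orphan-rate definition) is an illustrative assumption.
def adjust_max_block_size(max_block_size: int,
                          orphans_in_period: int,
                          blocks_in_period: int = 2016) -> int:
    orphan_rate = orphans_in_period / (orphans_in_period + blocks_in_period)
    if orphan_rate < 0.075:
        return int(max_block_size * 1.10)   # low orphan rate: raise the limit
    if orphan_rate > 0.15:
        return int(max_block_size * 0.90)   # high orphan rate: lower the limit
    return max_block_size                    # otherwise leave it unchanged
```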
Most blocks would include at most the bare /ORP/ field, so it wouldn't use up much of the coinbase.
It would also be great if unbalanced Merkle trees were allowed, so that the coinbase doesn't need to be buried so deep in the Merkle tree.
If miners wanted to increase the block size, a mining cartel could try to enforce a rule against ORP links. If more than 50% of hashing power refused to include links to orphans, then all other miners might decide to drop them too.
However, users of the network would see that orphans were happening and not being included. Pools that agreed to include them could try to draw off hashing power.
The key point is that it puts a clear definition of network problems into the blockchain. High orphan rates are inherently linked to bandwidth being the limiting factor.
[Edit]
It might also be worth adding a rule that the median block size within the difficulty period must be greater than 50% of MAX_BLOCK_SIZE or no increase happens. This is to prevent the limit from growing during times when it wasn't being used anyway.
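Continuing the sketch above, that usage guard could gate the increase on the median block size of the period (again, the names and the exact comparison are assumptions):

```python
import statistics

def adjust_with_usage_guard(max_block_size: int,
                            orphans_in_period: int,
                            block_sizes: list[int],
                            blocks_in_period: int = 2016) -> int:
    proposed = adjust_max_block_size(max_block_size, orphans_in_period,
                                     blocks_in_period)
    # Only allow growth if the limit was actually being approached: the median
    # block size in the period must exceed half of the current limit.
    if proposed > max_block_size and statistics.median(block_sizes) <= max_block_size / 2:
        return max_block_size
    return proposed
```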
It seems airtight. I can't think of any way that an attacker could artificially affect the orphan rate without great cost, a cost that would be more than prohibitive. Do you know if the devs have anything like this on their radar?