Okay, I think this will be my final word on this (if anyone cares or is listening).
Meh.
Any set of algorithms will just be guesswork that could be wildly wrong. A small error in the exponent makes a big difference: if computers' ability to handle blocks doubles every 18 months instead of every 12, then after ten years the scheduled size increase is 1024x against an actual capacity increase of about 101x, leaving blocks roughly 10x too big to handle.
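To make that concrete, here's the arithmetic as a tiny Python sketch (just the back-of-envelope numbers from the sentence above, nothing more):

    years = 10
    scheduled_growth = 2 ** (years / 1.0)   # cap assumes doubling every 12 months -> 1024x
    actual_capacity  = 2 ** (years / 1.5)   # hardware actually doubles every 18 months -> ~101x
    print(scheduled_growth / actual_capacity)  # ~10x: blocks end up ~10x too big to handle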
And "median of blocks" basically amounts to "miner's choose"— if you're going to have miners choose, I'd rather have them insert a value in the coinbase and take the median of that... then at least they're not incentivized to bloat blocks just to up the cap, or forced to forgo fees to their competition just to speak their mind on making it lower.
But miners choose is ... not great, because miners' alignment with everyone else is imperfect, and right now the majority of the mining decision is basically controlled by just two people.
There are two major areas of concern from my perspective: the burden on unpaid full nodes making Bitcoin not practically decentralized if it grows faster than technology, and a race to the bottom on fees and thus PoW difficulty making Bitcoin insecure (or dependent on centralized measures like bank-signed blocks). (And then there are some secondary ones, like big pools using large blocks to force small miners out of business.)
If all of the cost of being a miner is in handling terabyte blocks and none is in the PoW, an attacker who can simply ignore block validation can easily reorg the chain. We need robust fees, not just to cover the non-adaptive "true" costs of blocks, but also to cover the PoW security, which can adapt as low as miners allow it.
The criteria you have only speak to the first of these, and only indirectly: we HOPE computing will follow some exponential curve, but we don't know the exponent. They don't speak to the second at all unless you assume some pretty ugly mining-cartel behavior that fixes the fee by having a majority orphan valid blocks from a minority (which, even if not too ugly for you on its face, would make people have to wait many blocks to be sure the consensus was settled).
If you were to use machine criteria, I'd add a third: the size shouldn't grow faster than (some factor of) the difficulty change. This directly measures the combination of computing efficiency and miner profitability. Right now the difficulty change looks silly, but once we're comfortably settled w/ asics on the latest silicon process it should actually reflect changes in computing... and it should at least prevent the worst of the "difficulty tends to 1" problem (though it might not keep up with computing advances and we could still lose security). Also, difficulty as a limit means that miners actually have to _spend_ something to increase it; the median-of-size approach means that miners have to spend something (lost fees) to _decrease_ it...
Since one of my concerns is that miners might not make enough fee income and security will drop, making it so miners have to give up more income to 'vote' for the size to go down... is very not good. Making it so that they have to burn more computing cycles to make it go up would, on the other hand, be good.
every 2016 blocks:
    new_max_size = min(
        old_size * 2^(time_difference / 1 year),                                          # growth capped at a doubling per year
        max(100KB, old_size/2, median(miner_coinbase_preferences[last 2016 blocks])),     # miner vote, floored at 100KB and half the old size
        max(1MB, 1MB * difficulty / 100 million),                                         # difficulty-scaled cap
        periodic_hard_limit  (e.g. 10MB)
    )
(The 100m number there is completely arbitrary; we'll have to see what things look like in 8 months, once the recent difficulty surge settles some, otherwise I could propose something more complicated than a constant there.)
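In case the pseudocode is unclear, here's a rough Python rendering of the same rule (the variable names and the placeholder constants are mine; it's a sketch under the assumptions above, not a spec):

    import statistics

    def next_max_size(old_size, elapsed_years, coinbase_prefs, difficulty,
                      periodic_hard_limit=10_000_000):
        # recomputed every 2016 blocks; elapsed_years is the retarget period in years
        growth_cap     = old_size * 2 ** elapsed_years                   # at most a doubling per year
        miner_vote     = max(100_000, old_size / 2,
                             statistics.median(coinbase_prefs))          # median coinbase vote, floored
        difficulty_cap = max(1_000_000, 1_000_000 * difficulty / 100e6)  # placeholder 100m constant
        return int(min(growth_cap, miner_vote, difficulty_cap, periodic_hard_limit))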
Mind you, I don't think this is good by itself. But the difficulty-based check improves it a lot. If you were to _then_ augment these three rules with a ConsentOfTheGoverned hard upper limit that could be reset periodically if people thought the rules were doing the right thing for the system and decentralization was being upheld... well, I'd run out of things to complain about, I think. Having those other limits might make agreeing on a particular hard limit easier; e.g. I might be more comfortable with a higher one knowing that there are those other limits keeping it in check. And an upper boundary gives you something to test software against.
I'm generally more of a fan of rule by consent. Public decisions suck and they're painful, but they actually measure what we need to measure, not some dumb proxies that might create bad outcomes or weird motivations. There is some value which is uncontroversial, and it should go to that level. Above that might be technically okay but controversial, and it shouldn't go there. If you say that a publicly set limit can't work, then you're basically saying you want the machine to set behavior against the will and without the consent of the users of the system, and I don't think that's right.