Maybe basing transaction fees on the number of locks a transaction uses is another angle for a solution? Loosely specifying "1MB" as the block limit abstracts a data size away from the physical configuration of how that data is stored and accessed, which is what actually defines the total transaction cost, including CPU cycles, disk reads/writes and storage.
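To make that concrete, here is a minimal sketch of what a resource-based minimum fee might look like, assuming a transaction's CPU, disk and storage costs could somehow be metered; every weight, field name and unit below is invented purely for illustration, not a proposal for actual values:

```cpp
// Hypothetical sketch (not Bitcoin Core code): price a transaction by the
// resources it consumes rather than by raw byte size alone.
#include <cstdint>

struct TxResourceUsage {
    uint64_t cpu_cycles;    // estimated script-validation cost
    uint64_t disk_reads;    // prior outputs fetched from the database
    uint64_t disk_writes;   // new/updated records written
    uint64_t bytes_stored;  // permanent on-chain footprint
};

// Invented per-unit prices in satoshis. In practice these would have to be
// consensus parameters or discovered by a fee market, not hard-coded.
constexpr uint64_t kPricePerMCycles = 1;   // per million CPU cycles
constexpr uint64_t kPricePerRead    = 10;
constexpr uint64_t kPricePerWrite   = 25;
constexpr uint64_t kPricePerByte    = 2;

uint64_t MinimumFee(const TxResourceUsage& u) {
    return u.cpu_cycles / 1'000'000 * kPricePerMCycles
         + u.disk_reads   * kPricePerRead
         + u.disk_writes  * kPricePerWrite
         + u.bytes_stored * kPricePerByte;
}
```

The point of the sketch is only that the fee becomes a weighted sum over several physical resources instead of a single bytes-times-rate product.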
Excellent reasoning... If bitcoin were about locking databases, that would be exactly the kind of quantification that should go into its specification.
Unfortunately, bitcoin is a little more about attaining, storing and transmitting a proven state of consensus than about controlling how many entities can access how many bits of it, and which bits, exactly ...
Quite. My suggestion was just an example, a hint towards some lateral thinking about the exact nature of the underlying problem we are facing here. The scarce resource could be memory locks (I hope not), physical RAM, memory accesses, CPU cycles, or total HD storage occupied network-wide.
Point being, we are searching for a prescription, as a network rule, for the underlying physical limitation that needs to be priced by the market of miners validating transactions and the nodes storing them, so that fees are paid and resources are allocated correctly, in such a way that the network scales as we already know it theoretically can.
If we are going to hard fork, let's make sure it is for a justifiable, quantifiable reason. Otherwise we could merely be embarking on a holy-grail pursuit of the 'best' DB upgrades to keep scaling up, endlessly.
Bitcoin implementations could be DB-agnostic if they used the right metric. Jeff Garzik has some good ideas, like a "transactions accessed" metric, as I'm sure others do too. Maybe some kind of scale-independent transactions-accessed block limit and fee-scheduling rule?
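One toy reading of such a metric, just to show it can be stated without reference to any particular database: cap a block by how many transaction records its transactions read and write, rather than by bytes. The constant and struct layout here are hypothetical, not Garzik's actual proposal:

```cpp
// Sketch of a "records accessed" block limit. Each input reads one prior
// transaction output; each output writes one new record. The limit value
// and the Tx fields are invented for illustration only.
#include <cstddef>
#include <vector>

struct Tx {
    std::size_t inputs;   // prior outputs this transaction spends (reads)
    std::size_t outputs;  // new outputs it creates (writes)
};

// Hypothetical consensus constant: max records a block may read + write.
constexpr std::size_t kMaxRecordsAccessed = 100'000;

std::size_t RecordsAccessed(const std::vector<Tx>& block) {
    std::size_t total = 0;
    for (const Tx& tx : block)
        total += tx.inputs + tx.outputs;
    return total;
}

bool WithinAccessLimit(const std::vector<Tx>& block) {
    return RecordsAccessed(block) <= kMaxRecordsAccessed;
}
```

A rule like this would count the same on any backend, LevelDB or otherwise, which is exactly the DB-agnostic property being asked for; whether the counts map cleanly onto real validation cost is of course the open question.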