If there were an easy way to have the block time automatically adjust itself to usage, such an algorithm would probably set it very high right now and then decrease it if and when usage increases.
This is an interesting idea I've pondered myself. The block time is really a target: the software looks at the current hash rate of the network and assigns a difficulty to the next problem such that it should be solved in approximately the desired block time, right?
So what would be wrong with looking at total attempted transactions instead, and adjusting the difficulty to target something between 2 and 10 minutes based on transaction volume? I think we can agree that 2 minutes is about as fast as blocks should be targeted, given current network technology. I think it's also pretty accepted that more than 10 minutes isn't necessary [and could be dangerous if the network experiences a sudden loss of mining power].

Say the network currently sees <1 transaction a second and as a result sets the block time to the maximum of 10 minutes. Each successive target block time is calculated from the transactions attempted during the current block. If the network sees a massive and sudden influx of transactions, it retargets to a new block time very quickly - the very next cycle. If it sees a slow increase in transaction volume, it gradually adjusts its difficulty multiplier, so instead of a 10-minute target we move down to 9, then 8... all the way down to the arbitrary 2-minute minimum. (With no minimum we would open the network up to attack by people flooding it with transactions to force the block time too low, but setting a minimum seems possible.)
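To make the idea concrete, here's a minimal sketch in Python. The linear mapping from transaction rate to target block time, the saturation rate of 50 tx/s, and the function names are all my own assumptions for illustration - nothing here comes from an actual client:

```python
MIN_TARGET_S = 2 * 60    # 2-minute floor, in seconds
MAX_TARGET_S = 10 * 60   # 10-minute ceiling, in seconds
SATURATION_TPS = 50.0    # assumed tx rate at which the 2-minute floor is hit

def target_blocktime(tx_per_second: float) -> int:
    """Target block time for the observed transaction rate.

    At or below 1 tx/s we allow the slowest blocks (10 min); the target
    then shrinks linearly and is clamped at the 2-minute minimum.
    """
    if tx_per_second <= 1.0:
        return MAX_TARGET_S
    frac = min((tx_per_second - 1.0) / (SATURATION_TPS - 1.0), 1.0)
    return round(MAX_TARGET_S - frac * (MAX_TARGET_S - MIN_TARGET_S))

def next_difficulty(current_difficulty: float,
                    actual_blocktime_s: float,
                    new_target_s: int) -> float:
    """Scale difficulty so the next block lands near the new target.

    The last block's solve time at the current difficulty is used as a
    rough estimate of network hash power, just as ordinary retargeting does.
    """
    return current_difficulty * new_target_s / actual_blocktime_s
```

A slow rise in volume walks the target down smoothly (25.5 tx/s maps to a 6-minute target here), while a sudden flood snaps it to the 2-minute floor on the very next cycle, with the minimum preventing spam from pushing it any lower.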
Thoughts on this? Has this idea already been explored elsewhere?