Bitcoin does not have anywhere near the problem with miners coming and going that is seen with the alternates.
Scenario 1: Something really bad happens and the Bitcoin exchange rate quickly drops to a tenth of its previous value, at a time when mining is already close to breakeven. Mining is no longer profitable for most miners, so they quit. Hashrate drops to a tenth of its previous value, blocks are found every 100 minutes, and the next retarget is about five months away.
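The arithmetic behind scenario 1 can be checked with a quick sketch (the 10x hashrate drop is the hypothetical from the scenario, and the worst case assumes it happens right after a retarget):

```python
# Back-of-the-envelope check for scenario 1: a 10x hashrate drop
# stretches block times out 10x until the next retarget.
TARGET_BLOCK_MINUTES = 10
RETARGET_BLOCKS = 2016

hashrate_fraction = 0.10  # 90% of miners quit
block_minutes = TARGET_BLOCK_MINUTES / hashrate_fraction  # 100 minutes/block

# Worst case: the drop happens right after a retarget, so all
# 2016 blocks of the period are found at the slow rate.
days_to_retarget = RETARGET_BLOCKS * block_minutes / (60 * 24)
print(f"{block_minutes:.0f} min/block, next retarget in ~{days_to_retarget:.0f} days")
# 2016 blocks * 100 min = 140 days, i.e. roughly five months
```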
Scenario 2: Someone breaks the hashing function or builds a huge mining cluster, and uses it to attack Bitcoin. He drives the difficulty way up and then quits.
Scenario 3: It is decided to change the hashing function. Miners want to protect their investment so they play hardball bargaining and (threaten to) quit.
These can all be solved by hardcoding a new value for the difficulty, but wouldn't it be better to have an adjustment algorithm that is robust against this, especially considering that most Bitcoin activity would freeze until a solution was decided on, implemented, and distributed?
Anyone proposing a change in the feedback controller for difficulty should be required to show the stability region of his proposal. Pretty much everyone who tries to "improve" the PI controller implemented by Satoshi comes up with some hackneyed version of a PID controller and then is surprised that it can be made to oscillate and is not even asymptotically stable in the Lyapunov sense if any nonlinearity is included.
...
If I remember correctly it integrates the error over 2016 time intervals. This is some approximation of PI; a more accurate approximation would be to carry the calculation of the expected block time from block 0 (time = -infinity).
I know enough to understand the ideas of what you're saying, but not the specifics. And I think you're wrong about the last part. Integrating the error over 2016 time intervals is
not an approximation for I, the integral from -infinity. It is an approximation for P, used rather than direct measurements (equivalent to 1 time interval) because the quantity measured is stochastic.
P is what exists currently.
The I term would fix problems with long-term trends. Currently, each halving arrives in less than 4 years because the rising difficulty trend keeps average block times under 10 minutes.
The D term would rapidly adapt to abrupt changes, which is basically what the OP is suggesting.
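The P component that exists currently is Bitcoin's retarget rule: scale difficulty by the ratio of the target timespan to the measured timespan over the last 2016 blocks, clamped to a factor of 4 in either direction. A minimal sketch:

```python
# Sketch of Bitcoin's retarget rule: a pure proportional (P) correction
# on the measured block production rate over the last 2016 blocks.
TARGET_TIMESPAN = 2016 * 10 * 60  # two weeks, in seconds

def retarget(old_difficulty: float, actual_timespan: int) -> float:
    # Bitcoin clamps the measured timespan so that no single
    # adjustment changes difficulty by more than a factor of 4.
    clamped = min(max(actual_timespan, TARGET_TIMESPAN // 4),
                  TARGET_TIMESPAN * 4)
    return old_difficulty * TARGET_TIMESPAN / clamped

# If the last 2016 blocks took twice as long as intended,
# difficulty is halved:
print(retarget(1000.0, 2 * TARGET_TIMESPAN))  # → 500.0
```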
It's possible that the existing linear control theory doesn't apply directly to the block finding system and that P, I and D are only used metaphorically.
People who understand this stuff should gather and design a proper difficulty adjustment algorithm with all these components.
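For illustration only, here is what such a PID-style adjuster could look like. The error signal, the multiplicative update, and the gains are all made-up assumptions, not a tuned design; as noted above, any real proposal would need a stability analysis first.

```python
# Hypothetical PID-style difficulty adjustment, purely illustrative.
# Error signal: relative deviation of the measured block interval from
# the 10-minute target. Gains kp, ki, kd are arbitrary, not tuned.
TARGET_INTERVAL = 600.0  # seconds

def pid_adjust(difficulty, measured_interval, integral, prev_error,
               kp=0.5, ki=0.05, kd=0.1):
    error = (TARGET_INTERVAL - measured_interval) / TARGET_INTERVAL
    integral += error                      # I: accumulated long-term trend
    derivative = error - prev_error        # D: reaction to abrupt change
    correction = kp * error + ki * integral + kd * derivative
    # Multiplicative update keeps difficulty positive.
    new_difficulty = difficulty * (1.0 + correction)
    return new_difficulty, integral, error

# Blocks coming in at 1200 s (hashrate halved) drive difficulty down:
d, i, e = pid_adjust(1000.0, 1200.0, integral=0.0, prev_error=0.0)
```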
I did some simulation of blocktime variance (assuming honest nodes, and after calibration for network growth) for various difficulty adjustment intervals,
posted on the freicoin forums. The nifty chart is copied below. The difference between one-week and two-week intervals was negligible, and one week is what I would recommend if a shorter interval were desired.
A 24-hour interval (144 blocks) would have a variance/clock skew of 8.4%, meaning that one would *expect* the parameters governing difficulty adjustment to be in error by as much as 8.4% (vs bitcoin's current 2.2%). That's a significant difference. A 1-week retarget would have 3.8% variance. Twice-weekly would have 4.4% variance. I certainly wouldn't let it go any smaller than that.
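Most of those figures can be reproduced with a much simpler model than the calibrated simulation: with Poisson block arrival, the relative standard deviation of the measured timespan over N blocks is 1/sqrt(N). A quick check (the residual gap versus the quoted 3.8% for one week presumably comes from the network-growth calibration):

```python
import math

# Relative noise in the measured retarget timespan under ideal Poisson
# block discovery at constant hashrate: std/mean = 1/sqrt(N).
# This alone gives ~2.2% for 2016 blocks and ~8.3% for 144 blocks,
# close to the figures quoted above.
for label, n in [("two weeks (2016 blocks)", 2016),
                 ("one week (1008 blocks)", 1008),
                 ("twice weekly (504 blocks)", 504),
                 ("24 hours (144 blocks)", 144)]:
    print(f"{label}: {100 / math.sqrt(n):.1f}%")
```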
We're not talking about using the same primitive algorithm with a shorter timespan. We're talking about either a new intelligent algorithm, or a specific exception to deal with emergencies.