
Topic: Difficulty adjustment needs modifying

legendary
Activity: 905
Merit: 1012
October 02, 2011, 05:29:33 PM
#17
Unless I'm seriously misunderstanding, no one is discussing asymmetric adjustments here..
kjj
legendary
Activity: 1302
Merit: 1026
October 02, 2011, 05:25:17 PM
#16
Allowing asymmetric adjustments can lead to security problems.  Artforz described a few when this came up on alternate chains.
legendary
Activity: 2128
Merit: 1073
October 02, 2011, 02:26:09 PM
#15
What about ordering blocks with respect to time (just for the purposes of this calculation)?
I think it is still risky. The risk would not be in out-of-order blocks, but in blocks with the same timestamps and a subsequent divide-by-zero, or blocks with almost the same timestamps and a total loss of numerical accuracy.
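To be concrete, here is a minimal sketch (my own illustration in Python, not anybody's client code) of the kind of guard such a re-ordering would need; MIN_INTERVAL is an arbitrary floor I made up:
Code:
MIN_INTERVAL = 1  # seconds; arbitrary floor, purely for illustration

def safe_intervals(timestamps):
    # Re-order just for this calculation, as suggested above.
    ordered = sorted(timestamps)
    deltas = [b - a for a, b in zip(ordered, ordered[1:])]
    # Clamp equal or nearly-equal timestamps so the later division
    # neither divides by zero nor blows up numerically.
    return [max(d, MIN_INTERVAL) for d in deltas]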

If you can provide code or pseudo-code, or even just MATLAB code for this, I can bring it into an altchain for testing. I didn't pay attention enough in controls class to follow the discussion here :\
Yeah, I was thinking about that too. My copy of MATLAB is on an SGI O2 that is in storage, so I won't be able to deal with this yet. If I find some backups on a non-IRIX disk I'll post.
legendary
Activity: 905
Merit: 1012
October 02, 2011, 12:48:10 PM
#14
If the block time was changed from the current esoteric one to something which is monotonically increasing (like NTP time), then the change in the difficulty feedback loop could be considered and implemented safely.
What about ordering blocks with respect to time (just for the purposes of this calculation)?

From this I can see that it is possible to synthesize an absolutely stable 5-octave multirate filter bank that would work in front of a 63-times subsampler. There are two problems
for which I don't know the answer:
1) difficulty uses some custom floating point format with very low precision that may cause limit-cycle oscillations in a theoretically stable filter. If the format uses correct rounding then this wouldn't matter.
2) making changes to the difficulty 32 times more often would increase the probability of network splitting into disjoint block-chains, but I don't know how to estimate that.
If you can provide code or pseudo-code, or even just MATLAB code for this, I can bring it into an altchain for testing. I didn't pay attention enough in controls class to follow the discussion here :\
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
October 02, 2011, 07:21:28 AM
#13
Hmm, firstly, I guess most noticed, but I did not say to do away with the 2016 calculation.

What I am suggesting is specifically a way to handle large swings in the mining community and only that.
That's the reason for firstly waiting a full 144 blocks (normally ~1 day) before allowing it to even be checked, and also for ignoring any changes of less than 50%.

I'm not talking about a general algorithm that would affect the normal working of the difficulty adjustment, but one that would only actually have an effect if something drastic happened (i.e. the network hash rate changed by at least 50%).

Namecoin has even had this specific problem happen already (and some of the scamcoins - I mean - alternate coins - have also seen this happen)

It's not something that would normally affect the difficulty adjustment calculation, i.e. it would give 'false' in any situation except some drastic network change.

That's the reasoning behind the 50% check (and also of course the 144 block delay before even checking helps somewhat with network statistical variance)

At the moment the 2 week estimate is -9% (well I think that's right from the irc bot in the channel I visit) and that's still less than 1/5 of the value necessary for any early change to occur

The point is to have something that cuts in if something does go wrong but normally has no effect.

Waiting for it to happen first and then making a quick hack in Bitcoin is IMO a bad idea; better to come up with something beforehand and, yes, discuss it as well.
However, the point of the original idea is to come up with a reasonable intervention and of course code that doesn't intervene except when needed.
legendary
Activity: 2128
Merit: 1073
October 02, 2011, 05:13:09 AM
#12
I know enough to understand the ideas of what you're saying, but not the specifics. And I think you're wrong about the last part. Integrating the error over 2016 time intervals is not an approximation for I, the integral from -infinity. It is an approximation for P, used rather than direct measurements (equivalent to 1 time interval) because the quantity measured is stochastic.
Let me say what I said using different terminology. The current regulator approximates a PI using a 2016-tap low-pass FIR filter with a sample-and-hold (0th-order extrapolator) doing 2016-times subsampling of the output from the FIR.
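In code form, that description amounts to something like this (a toy model in Python with made-up names, not the actual client code):
Code:
TARGET_INTERVAL = 600   # seconds per block
WINDOW = 2016           # blocks between retargets

def next_difficulty(difficulty, intervals, height):
    # Hold the previous output between retargets (the 0th-order extrapolator).
    if height % WINDOW != 0:
        return difficulty
    # 2016-tap low-pass FIR: a plain moving average of the inter-block intervals.
    avg = sum(intervals[-WINDOW:]) / WINDOW
    # Subsampled correction: scale difficulty so the average moves toward 600 s.
    return difficulty * TARGET_INTERVAL / avg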
D would rapidly adapt to abrupt changes, basically what the OP is suggesting.
Those are famous last words. There was a time when the average engineering student would get splashed with ink from the paper-tape analog data logger after cranking up the D term during his lab work. Nowadays I don't know how people learn the basics of system stability.

ArtForz had shown on one of the alternate chains both persistent oscillation and a lack of asymptotic stability when people implemented the diff-operator or some non-linear and/or time-varying approximation to a differentiator.

I know of only one thing that could be safely done without fixing the problem of acausality or non-monotonicity of the timestamps.
Quote
$ factor 2016
2016: 2 2 2 2 2 3 3 7
From this I can see that it is possible to synthesize an absolutely stable 5-octave multirate filter bank that would work in front of a 63-times subsampler. There are two problems
for which I don't know the answer:
1) difficulty uses some custom floating point format with very low precision that may cause limit-cycle oscillations in a theoretically stable filter. If the format uses correct rounding then this wouldn't matter.
2) making changes to the difficulty 32 times more often would increase the probability of network splitting into disjoint block-chains, but I don't know how to estimate that.
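For anyone checking the arithmetic behind the "5-octave" and "63-times" figures (my reading of the factorization above): 2016 = 2^5 * 3^2 * 7 = 32 * 63, so the five factors of two give the five octave (halving) stages, the remaining factor of 63 is the subsampling ratio, and a retarget every 63 blocks happens 2016 / 63 = 32 times as often as the current once-per-2016 schedule.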
legendary
Activity: 2128
Merit: 1073
October 02, 2011, 04:19:55 AM
#11
I did some simulation of blocktime variance (assuming honest nodes,
It is my understanding that at least the Eligius pool isn't an "honest node" and intentionally produces acausal blocks (or at least blocks as close to acausal as they deem practical).
donator
Activity: 2058
Merit: 1054
October 02, 2011, 02:46:40 AM
#10
Bitcoin does not have the problem with miners coming and going at anywhere near the level that is seen with the alternates.
Scenario 1: Something really bad happens and the Bitcoin exchange rate quickly drops to a tenth of its previous value. Suppose mining was already close to breakeven; for most miners it is then no longer profitable to mine, so they quit. The hashrate drops to a tenth of its previous value, blocks are found every 100 minutes, and the next retarget is 5 months away.

Scenario 2: Someone breaks the hashing function or builds a huge mining cluster, and uses it to attack Bitcoin. He drives the difficulty way up and then quits.

Scenario 3: It is decided to change the hashing function. Miners want to protect their investment so they play hardball bargaining and (threaten to) quit.

These can all be solved by hardcoding a new value for the difficulty, but wouldn't it be better to have an adjustment algorithm robust against this? Especially considering most Bitcoin activity will freeze until a solution is decided, implemented and distributed.

Anyone proposing a change in the feedback controller for difficulty should be required to show the stability region of his proposal. Pretty much everyone who tries to "improve" the PI controller implemented by Satoshi comes up with some hackneyed version of a PID controller and then is surprised that it can be made to oscillate and is not even asymptotically stable in the Lyapunov sense if any nonlinearity is included.
...
If I remember correctly it integrates the error over 2016 time intervals. This is some approximation of PI; a more accurate approximation would be if the calculation of the expected block time were carried out since block 0 (time = -infinity).
I know enough to understand the ideas of what you're saying, but not the specifics. And I think you're wrong about the last part. Integrating the error over 2016 time intervals is not an approximation for I, the integral from -infinity. It is an approximation for P, used rather than direct measurements (equivalent to 1 time interval) because the quantity measured is stochastic.

P is what exists currently.
I would fix problems with long-term trends. Currently, halving comes after less than 4 years because of the rising difficulty trend.
D would rapidly adapt to abrupt changes, basically what the OP is suggesting.

It's possible that the existing linear control theory doesn't apply directly to the block finding system and that P, I and D are only used metaphorically.
People who understand this stuff should gather and design a proper difficulty adjustment algorithm with all these components.
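Purely as an illustration of what the three terms would mean here (a toy sketch with arbitrary gains and sign conventions, not a proposal, and subject to the stability objections quoted above):
Code:
TARGET = 600.0  # seconds per block

class ToyPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, difficulty, measured_interval):
        error = measured_interval - TARGET        # positive => blocks too slow
        self.integral += error                    # I: accumulates long-term trend
        derivative = error - self.prev_error      # D: reacts to abrupt changes
        self.prev_error = error
        correction = (self.kp * error + self.ki * self.integral
                      + self.kd * derivative)
        # Lower the difficulty when blocks are too slow, raise it when too fast.
        # With badly chosen gains the denominator can go non-positive, which is
        # part of the point about stability.
        return difficulty * TARGET / (TARGET + correction)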

I did some simulation of blocktime variance (assuming honest nodes, and after calibration for network growth) for various difficulty adjustment intervals, posted on the freicoin forums. The nifty chart is copied below. The difference between one-week and two-week intervals was negligible, and one week is what I would recommend if a shorter interval was desired.

A 24-hour interval (144 blocks) would have a variance/clock skew of 8.4%--meaning that one would *expect* the parameters governing difficulty adjustment to be in error by as much as 8.4% (vs bitcoin's current 2.2%). That's a significant difference. A 1-week retarget would have 3.8% variance. Twice-weekly would have 4.4% variance. I certainly wouldn't let it go any smaller than that..
We're not talking about using the same primitive algorithm with a shorter timespan. We're talking about either a new intelligent algorithm, or a specific exception to deal with emergencies.
legendary
Activity: 905
Merit: 1012
October 02, 2011, 01:41:40 AM
#9
I did some simulation of blocktime variance (assuming honest nodes, and after calibration for network growth) for various difficulty adjustment intervals, posted on the freicoin forums. The nifty chart is copied below. The difference between one-week and two-week intervals was negligible, and one week is what I would recommend if a shorter interval was desired.

[chart: blocktime variance vs. difficulty adjustment interval]

A 24-hour interval (144 blocks) would have a variance/clock skew of 8.4%--meaning that one would *expect* the parameters governing difficulty adjustment to be in error by as much as 8.4% (vs bitcoin's current 2.2%). That's a significant difference. A 1-week retarget would have 3.8% variance. Twice-weekly would have 4.4% variance. I certainly wouldn't let it go any smaller than that..
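Roughly, the simulation boils down to the following kind of estimate (a simplified sketch assuming ideal honest timestamps, exponentially distributed block times, and no network growth; not the exact script behind the chart):
Code:
import random

TARGET = 600  # target seconds per block

def timespan_error(blocks_per_retarget, trials=2000):
    # Relative spread of the measured retarget-window timespan around its
    # expected value; this is the "error" the difficulty calculation sees.
    expected = blocks_per_retarget * TARGET
    errs = []
    for _ in range(trials):
        span = sum(random.expovariate(1.0 / TARGET)
                   for _ in range(blocks_per_retarget))
        errs.append((span - expected) / expected)
    return (sum(e * e for e in errs) / trials) ** 0.5  # roughly 1/sqrt(N)

print(timespan_error(144))   # ~0.08 for a 24-hour interval
print(timespan_error(2016))  # ~0.02 for the current two-week interval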
legendary
Activity: 2128
Merit: 1073
October 02, 2011, 01:19:27 AM
#8
Then where do you get the integral component from the difficulty re-calculation? To me it looks like a simple proportional scaling algorithm.
If I remember correctly it integrates the error over 2016 time intervals. This is some approximation of PI; a more accurate approximation would be if the calculation of the expected block time were carried out since block 0 (time = -infinity).
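For reference, the retarget step under discussion, paraphrased as a sketch (the real code works on the compact target representation rather than on difficulty directly, and the factor-of-4 clamps are from memory):
Code:
TARGET_TIMESPAN = 14 * 24 * 60 * 60   # two weeks, in seconds

def retarget(old_target, first_block_time, last_block_time):
    actual = last_block_time - first_block_time
    # Clamp the measured timespan to a factor of 4 in either direction.
    actual = max(TARGET_TIMESPAN // 4, min(actual, TARGET_TIMESPAN * 4))
    # The target is the inverse of difficulty: slower-than-expected blocks
    # give a larger target, i.e. a lower difficulty, and vice versa.
    return old_target * actual // TARGET_TIMESPAN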

Due to the esoteric block timekeeping, the typical discrete-time transformation cannot be applied. If I remember correctly, the block chain can be extended by a block dated up to two hours in the past. Because of that, the process under control is marginally acausal. This esoteric timekeeping precludes the use of any standard mathematical tool from the theory of control systems.

So if you wanted to implement a discrete-time approximation of a P controller, you would measure the error over a single inter-block interval. In the case of non-monotonic block time you would have to set the difficulty to some negative number.
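A two-line illustration of that problem, with made-up numbers:
Code:
TARGET = 600  # seconds

def single_interval_p(difficulty, interval):
    # Naive one-interval "P" update: scale difficulty by target/actual interval.
    return difficulty * TARGET / interval

print(single_interval_p(1000.0, 300))   #  2000.0: blocks twice as fast, difficulty doubles
print(single_interval_p(1000.0, -120))  # -5000.0: timestamp went backwards, negative difficulty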

PID, PI and P are terms from the theory of Linear Time-Invariant systems, and pretty much nobody deals with acausal systems. I agree that PID controllers are not that difficult to fine-tune, but any tool (theoretic or practical) to do such tuning assumes causality. Moreover, PID controller designs with discrete time but a variable time step are quite complex mathematically.

If the block time was changed from the current esoteric one to something which is monotonically increasing (like NTP time), then the change in the difficulty feedback loop could be considered and implemented safely.
legendary
Activity: 1232
Merit: 1076
October 02, 2011, 12:31:49 AM
#7
Basically I'd propose that from 144 blocks (~24hrs) after a difficulty adjustment, and tested then and each 12 blocks (~2hrs) after that, if the actual calculated adjustment based on all blocks since the last adjustment is higher or lower by 50% than the current difficulty, then an early difficulty adjustment should kick in.
[...]
Comments? Suggestions?
I have a semi-constructive suggestion:

Anyone proposing a change in the feedback controller for difficulty should be required to show the stability region of his proposal. Pretty much everyone who tries to "improve" the PI controller implemented by Satoshi comes up with some hackneyed version of a PID controller and then is surprised that it can be made to oscillate and is not even asymptotically stable in the Lyapunov sense if any nonlinearity is included.

http://en.wikipedia.org/wiki/Lyapunov_stability
http://en.wikipedia.org/wiki/PID_controller

I may have some software engineering disagreements with some of Satoshi's choices, but the choice of a PI regulator to adjust the difficulty is an example of excellent engineering: PIs may be slow, but they are absolutely stable for every causal process and for every error signal.



PID controllers are completely standard, and the reason to use them (from my experience) is that they can be very easily fine tuned to optimal cybernetic states.

Can you explain your reasoning though for arguing that the difficulty adjustment is a PI controller? P = nActualTimespan / nTargetTimespan? Then where do you get the integral component from the difficulty re-calculation? To me it looks like a simple proportional scaling algorithm.
legendary
Activity: 2128
Merit: 1073
October 01, 2011, 11:49:28 PM
#6
Basically I'd propose that from 144 blocks (~24hrs) after a difficulty adjustment, and tested then and each 12 blocks (~2hrs) after that, if the actual calculated adjustment based on all blocks since the last adjustment is higher or lower by 50% than the current difficulty, then an early difficulty adjustment should kick in.
[...]
Comments? Suggestions?
I have a semi-constructive suggestion:

Anyone proposing a change in the feedback controller for difficulty should be required to show the stability region of his proposal. Pretty much everyone who tries to "improve" the PI controller implemented by Satoshi comes up with some hackneyed version of a PID controller and then is surprised that it can be made to oscillate and is not even asymptotically stable in the Lyapunov sense if any nonlinearity is included.

http://en.wikipedia.org/wiki/Lyapunov_stability
http://en.wikipedia.org/wiki/PID_controller

I may have some software engineering disagreements with some of Satoshi's choices, but the choice of a PI regulator to adjust the difficulty is an example of excellent engineering: PIs may be slow, but they are absolutely stable for every causal process and for every error signal.

legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
October 01, 2011, 11:30:15 PM
#5
-7% seems a lot ...

So, making no changes, each block takes 10 minutes and 36 seconds on average for the 15 days, instead of 10 minutes. Problem?
No problem ... and that is not what I was suggesting either ... Tongue
Is that why you ignored my question? You didn't read the 1st post?
legendary
Activity: 2506
Merit: 1010
October 01, 2011, 11:15:36 PM
#4
-7% seems a lot ...

So, making no changes, each block takes 10 minutes and 36 seconds on average for the 15 days, instead of 10 minutes. Problem?
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
October 01, 2011, 11:03:13 PM
#3
The last and next difficulty adjustment

Bitcoin does not have the problem with miners coming and going at anywhere near the level that is seen with the alternates.

The last four difficulty adjustment periods each were between fourteen and fifteen days in duration:
 - http://docs.google.com/spreadsheet/ccc?key=0AmcTCtjBoRWUdHVRMHpqWUJValI1RlZiaEtCT1RrQmc&authkey=CMrV-rYE&hl=en_US&authkey=CMrV-rYE

That means that instead of 10 minutes, blocks on average took 10 minutes and a few seconds.
Thus, this extra adjustment wouldn't take effect.

However, if the problem ever did occur, it would take effect.

Are you implying that could never happen?

At the moment it's at -7% due to perceived value and a game BF3 (it was -10% earlier today)
-7% seems a lot ...
legendary
Activity: 2506
Merit: 1010
October 01, 2011, 09:58:35 PM
#2
The last and next difficulty adjustment

Bitcoin does not have the problem with miners coming and going at anywhere near the level that is seen with the alternates.

The last four difficulty adjustment periods each were between fourteen and fifteen days in duration:
 - http://docs.google.com/spreadsheet/ccc?key=0AmcTCtjBoRWUdHVRMHpqWUJValI1RlZiaEtCT1RrQmc&authkey=CMrV-rYE&hl=en_US&authkey=CMrV-rYE

That means that instead of 10 minutes, blocks on average took 10 minutes and a few seconds.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
October 01, 2011, 08:52:57 PM
#1
The last and next difficulty adjustment (and anyone who has watched the problems with the other crypto copies and the patches they have attempted) suggests that the 2 week difficulty adjustment should have a second test to adjust it early if needed.

Basically I'd propose that from 144 blocks (~24hrs) after a difficulty adjustment, and tested then and each 12 blocks (~2hrs) after that, if the actual calculated adjustment based on all blocks since the last adjustment is higher or lower by 50% than the current difficulty, then an early difficulty adjustment should kick in.
Of course those approximate hour values for when to do this would be way off if the problem actually occurred; however, there would need to be 144 blocks in order to help ensure it doesn't trigger completely unnecessarily due to likely random variation (as opposed to unlikely random variation).
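In rough pseudo-code, the check would look something like this (names and structure made up for illustration; the 'implied' difficulty would be calculated from all blocks since the last adjustment, reusing the existing retarget maths):
Code:
MIN_BLOCKS = 144    # ~24 hours before the extra test is even allowed
CHECK_EVERY = 12    # then re-test every ~2 hours worth of blocks
THRESHOLD = 0.5     # only act on a swing of at least 50%

def early_retarget_due(blocks_since_adjust, current_difficulty, implied_difficulty):
    if blocks_since_adjust < MIN_BLOCKS:
        return False                      # too soon to trust the sample
    if (blocks_since_adjust - MIN_BLOCKS) % CHECK_EVERY != 0:
        return False                      # only test every 12 blocks
    ratio = implied_difficulty / current_difficulty
    # Trigger on a 50% swing in either direction.
    return ratio >= 1 + THRESHOLD or ratio <= 1 - THRESHOLD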

This may never be needed; however, if it ever is needed and hasn't been implemented, the Bitcoin $ value will obviously drop badly.
A simple example: a short-time-frame drop in network capacity of 50% would mean that the number of days remaining of the 2 weeks would double, extending the slowdown of transaction confirmation in the network and leading to all the roll-on effects that this would cause, including a possible self-perpetuating downward spiral.

It would (as I said) be a test for both upward and downward adjustments, not just downward ones, though the problems caused by a late upward adjustment would not be as severe as those of a late downward adjustment, so the upward test could be removed.

Comments? Suggestions?