
Topic: Difficulty adjustment needs modifying - page 4. (Read 10423 times)

staff
Activity: 4242
Merit: 8672
October 03, 2011, 04:16:57 PM
#57
Sliding window is an interesting idea, you'd obviously have to reduce the per-block adjustment factor and the limits to the 2016-th root to get the same overall adjustment speed as bitcoin, but at first look this should work without introducing new vulnerabilities and result in "smoother" adjustments (obv. not really *faster* if you use factors to end up with bitcoin-equiv adjustment rate, but less... well... blocky).

Consider this attack: I have the ability to isolate the network connectivity of many different nodes and I can pretend to be all of their peers (this isn't an especially hard attack; e.g. it's one people at large ISPs can pull off).

I make a one-time investment to fork the chain someplace after the highest of the currently used checkpoints, and mine enough blocks to get it down to a difficulty I can sustain on my own.  Once I've done this I can isolate fresh nodes and get them onto my fantasy chain (the fact that it's the sum-difficulty used for comparison is irrelevant because I've isolated them), and I can satisfy their prudent must-have-n-confirms behavior before they consider a txn safe.

Say I need to reduce it by 1024x to get it down to where I can mine it on my own (which is about right, 1/1024 puts 10GH at ~11 minutes/block).  This would currently cost 134,268.75 BTC (the simple forgone income from the same amount of computation: 2016*(50/4^0) + 2016*(50/4^1) + 2016*(50/4^2) + 2016*(50/4^3) + 2016*(50/4^4)).

If you switch to a sliding window with the same overall behavior, the change clamp at each block will need to be 0.25^(1/2016).  You would still need to mine ~10080 blocks, but the total cost would be 72615 BTC because far fewer of the blocks are mined at the 'too high' difficulty.

So it would ~halve the cost of this attack.
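
For concreteness, a back-of-the-envelope check of those two figures (a from-scratch sketch using the 50 BTC subsidy and the assumptions above, not anything from the client):

#include <cmath>
#include <cstdio>

int main() {
    // Current rules: five /4 retargets to reach 1/1024, paying the full
    // price for 2016 blocks at each intermediate difficulty.
    double stepwise = 0;
    for (int k = 0; k < 5; k++)
        stepwise += 2016 * (50.0 / std::pow(4.0, k));

    // Sliding window with a per-block clamp of 4^(-1/2016): the difficulty
    // decays a little on every one of the same ~10080 blocks.
    double r = std::pow(4.0, -1.0 / 2016.0), sliding = 0;
    for (int k = 0; k < 5 * 2016; k++)
        sliding += 50.0 * std::pow(r, k);

    // Prints 134268.75 and roughly 72667, the same ballpark as the 72615 above.
    std::printf("stepwise: %.2f BTC, sliding: %.2f BTC\n", stepwise, sliding);
    return 0;
}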

The clamps in bitcoin are what make these attacks costly,  but the clamps also represent exciting non-linearities in the payout of the system which miners could exploit for increased profits.  The fact that the clamps are hard to reach currently makes it a non-issue, but with a sliding window the clamps would have to be very near a factor of 1.0 to preserve the resistance to these forged chain attacks, so the system would almost always be operating in the non-linear region.

Tightening the clamps to keep the attack cost the same would only worsen the non-linearity.

Moreover, (ignoring the screwed-up calculation) the window plus node timestamp enforcement (the limitation against blocks from the future) limits the maximum gain miners can get from lying about the time to a couple percent.  A sliding window would make this worse because it would provide an incentive to lie for every block, and not just the final ones in a cycle.

People need to stop fixating on weirdness in other chains that is more or less irrelevant to bitcoin, and realize that the design of bitcoin isn't an accident. Every one of the features of the distributed algorithm has a complicated relationship with all the others.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
October 03, 2011, 04:00:08 PM
#56
Though, any change to the difficulty algo would require everyone to upgrade anyway.

You'd just have to put it in a specific version (e.g. 4.x release) with the new calculation only being used after block N some time in the future.

As for the 2015/2016 mistake, well, that would have to work the same way, starting from some future block after a new release (but using the old count of 2015 timeframes for everything before that)

The code fix would be to simply count back 2016 blocks instead of 2015, since the time gap currently ignored (from the block 2016 back to the block 2015 back) was also mined at the current difficulty - though the side effect would be to slightly decrease the difficulty, since the current calculation is ever so slightly high
(the correction will make the measured 2016-block time longer, thus the difficulty correction will be calculated to be smaller)
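
For illustration, roughly what that looks like against the GetNextWorkRequired() logic in main.cpp (a paraphrased sketch with stand-in types, not a verbatim patch):

struct CBlockIndex {            // minimal stand-in for the real chain index
    long long nTime;
    CBlockIndex* pprev;
    long long GetBlockTime() const { return nTime; }
};

long long MeasuredTimespan(const CBlockIndex* pindexLast)
{
    const int nInterval = 2016;  // nTargetTimespan / nTargetSpacing

    // The current code walks back nInterval-1 = 2015 blocks, so the measured
    // timespan covers only 2015 of the period's 2016 inter-block gaps, and the
    // gap spanning the previous retarget boundary is never counted:
    //     for (int i = 0; pindexFirst && i < nInterval-1; i++)
    // The fix (from some agreed future block onward) walks back one more:
    const CBlockIndex* pindexFirst = pindexLast;
    for (int i = 0; pindexFirst->pprev && i < nInterval; i++)
        pindexFirst = pindexFirst->pprev;

    // Scale the target by this, clamped to [nTargetTimespan/4, *4], as now.
    return pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
}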

Hmm - need an incentive for everyone out there?
"The difficulty calculation in slightly high - upgrade so everyone can get the new corrected value that is slightly lower" Smiley
sr. member
Activity: 406
Merit: 257
October 03, 2011, 02:45:47 PM
#55
This is getting OT, but judging by the client versions currently seen on the network you'd probably need a prewarning/transitional period measured in years to have a good chance of all old clients upgrading in time...

Back on the original topic: ran some sims, doesn't look like there's a hard point where a faster difficulty adjustment using the original integrator becomes unstable (short of going to extremes so it slams into the limits from normal fluctuations in hashrate); it "merely" makes difficulty adjustment more noisy. Near-identical results with a sliding window and scaled factors (looks smoother, but pretty much the same deviation from optimal, as expected).
For an interesting practical experiment in "just how short can you make the window and how extreme the adjustment", look at the GeistGeld chain (retarget every 16 blocks, adjustment limit roughly */1.8; that comes out to an adjustment limit of */ somewhere around 1e30 over 2016 blocks). Somewhat surprisingly it more-or-less works so far...
legendary
Activity: 1204
Merit: 1015
October 03, 2011, 02:38:35 PM
#54
You can do both calculations and accept either during a transitional period (and obviously do the old calculation for validation of blocks prior to the transition).
Even that wouldn't work. If the fixed calculation was ever lower than the old calculation during the transition period, there would be a permanent chain fork starting with the first block that didn't meet the old difficulty but met the new one, assuming over half of the hash power upgraded. If the fixed calculation was higher than the old calculation, nobody in their right mind would mine under the new calculation. Thus, the only solution is to permanently fork at a specific block in the future.
legendary
Activity: 905
Merit: 1011
October 03, 2011, 02:00:54 PM
#53
You can do both calculations and accept either during a transitional period (and obviously do the old calculation for validation of blocks prior to the transition).
sr. member
Activity: 406
Merit: 257
October 03, 2011, 01:28:25 PM
#52
This is way off-topic from the OP... but Gavin, why couldn't ArtForz's patch be applied to core? Bumping the interval of consideration up from 2015 to 2016 will have no effect unless nodes are acting dishonestly. And this isn't far-fetched; as ArtForz points out, the current situation gives miners significant incentive to collaborate and cheat.
Wrong, it's a forward-and-backward-incompatible change, as at the first retarget 2015 and 2016 have a good chance of disagreeing about what the next target should be.
donator
Activity: 2058
Merit: 1054
October 03, 2011, 12:58:49 PM
#51
if we lose 90% of hashing power in a month then that is a sign something much more serious is wrong with bitcoin than the difficulty algorithm;
"Shit happens" as they say. It would certainly mean there's something wrong, but not necessarily fatal. It will be fatal if we're not prepared. Also, merely preparing for the possibility may make it less likely by preventing a "hash run": price drops, miners quit, transactions are confirmed more slowly, causing loss of confidence and more price drops, more miners quit...

This is way off-topic from the OP... but Gavin, why couldn't ArtForz's patch be applied to core? Bumping the interval of consideration up from 2015 to 2016 will have no effect unless nodes are acting dishonestly. And this isn't far-fetched; as ArtForz points out, the current situation gives miners significant incentive to collaborate and cheat.
I'm all for making the fix but I don't think the incentive is really that great. If miners collude to generate coins ahead of schedule, it will cause a loss of confidence in Bitcoin, making their mined coins worthless.
legendary
Activity: 905
Merit: 1011
October 03, 2011, 12:26:48 PM
#50
This is way off-topic from the OP... but Gavin, why couldn't ArtForz's patch be applied to core? Bumping the interval of consideration up from 2015 to 2016 will have no effect unless nodes are acting dishonestly. And this isn't far-fetched; as ArtForz points out, the current situation gives miners significant incentive to collaborate and cheat.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
October 03, 2011, 10:43:23 AM
#49
Ah well then I guess we better hope it doesn't drop 50% for a reason other than the end of BTC.
Having nothing to handle that will almost certainly cause worse problems in that event Tongue

Current opinion of BTC, the BTC $ value (and BTC $ vs electricity) and the release of BFC appear to be setting up a drop of around 7% to 10% already this 2016-block period
legendary
Activity: 1652
Merit: 2300
Chief Scientist
October 03, 2011, 10:10:15 AM
#48
Nope, bitcoin still has the off-by-1, as for now it's deemed "too big to fail" and fixing it would mean a backwards-incompatible change to the blockchain.

First: sorry for conflating the off-by-1 and the asymmetric adjustment issues.  As I said, I haven't taken the time to consider changing the bitcoin difficulty adjustment algorithm (too big a change for too little benefit, in my opinion; if we lose 90% of hashing power in a month then that is a sign something much more serious is wrong with bitcoin than the difficulty algorithm; speculation on what will happen when the block reward drops seems pointless to me, I don't think we'll know until it happens).

Second: I've written 'discourage blocks' infrastructure:
  https://github.com/gavinandresen/bitcoin-git/tree/discourageblocks
  (code reviews welcome)

... which should give us the ability to nudge miners to Do The Right Thing.  Discouraging blocks that appear to be gaming the off-by-one bug should be enough incentive to prevent 50+% cartels from forming, without requiring a blockchain-splitting change.
donator
Activity: 2058
Merit: 1054
October 03, 2011, 08:36:45 AM
#47
So back on topic... This idea actually looks promising; I have yet to run the numbers, but I suspect it'll work "economically" as long as you keep the total adjustment possible over X blocks symmetrical for up vs. down (or even allow quicker up than down adjustment). Still somewhat worried about Poisson noise causing issues for a faster-adjusting algo, but I guess that *should* average out over time...
I ran some simulations; I don't think there's anything wrong with it, but by itself it won't solve the problem. I tried m=1 (adjustment every block), n=2016 (ratio calculated over the past 2016 blocks) and a correction of ratio^(m/n). It takes 600 days to recover from a x100 drop in hashrate. Using a correction of ratio^(2m/n) it's still stable and improves it to 400 days. Maybe it's a matter of finding the highest stable exponent.
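
For anyone who wants to poke at this, a from-scratch re-creation of that experiment (a sketch under simplifying assumptions: deterministic solve times equal to their expectation, no clamps, so the exact day counts will differ from the 600/400-day figures above):

#include <cmath>
#include <cstdio>
#include <deque>

int main() {
    const int n = 2016;                   // window length
    const double target = 600.0;          // seconds per block
    const double exponent = 1.0 / n;      // try 2.0 / n for the faster variant
    double hashrate = 0.01;               // after the x100 drop
    double difficulty = 1.0;              // was in balance with hashrate 1.0
    std::deque<double> window(n, target); // solve-times of the last n blocks
    double t = 0;
    for (long block = 0;; block++) {
        double solve = difficulty / hashrate * target; // expected solve time
        t += solve;
        window.pop_front();
        window.push_back(solve);
        double actual = 0;
        for (double x : window) actual += x;
        double ratio = (n * target) / actual;          // >1 means raise difficulty
        difficulty *= std::pow(ratio, exponent);
        if (solve < 1.1 * target) {                    // roughly recovered
            std::printf("recovered after %ld blocks, %.0f days\n",
                        block, t / 86400.0);
            break;
        }
    }
    return 0;
}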
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
October 03, 2011, 08:07:13 AM
#46
Ah OK, you're referring to the fact that it's 2015 time ranges between 2016 blocks - a typical start/end-point issue.
Since it starts from the first block after the last difficulty change and counts that as 1 of 2016, which again gives 2015 time ranges, or in average expected time, 2 weeks minus 10 minutes.
Don't worry, I won't ask for the explanation of the problem that causes (I'd guess it relates to alternates, not bitcoin, though my guess is probably wrong)
sr. member
Activity: 406
Merit: 257
October 03, 2011, 07:39:09 AM
#45
It's taking the time over 2015 blocks. Every 2016 blocks. Last time I checked, 2015 != 2016.
In case you still don't get it: is the time taken from the last block with diff X to the first block with diff Y counted? Really?
But honestly, that's just making an elephant out of a simple implementation flaw, nothing to do with the theoretical properties of the proper algorithm.
So back on topic... This idea actually looks promising; I have yet to run the numbers, but I suspect it'll work "economically" as long as you keep the total adjustment possible over X blocks symmetrical for up vs. down (or even allow quicker up than down adjustment). Still somewhat worried about Poisson noise causing issues for a faster-adjusting algo, but I guess that *should* average out over time...
donator
Activity: 2058
Merit: 1054
October 03, 2011, 07:24:11 AM
#44
Wouldn't it be easiest to keep the original retarget algo (with 2016 block window etc), but just do the retargeting more often so that retarget windows overlap? You could retarget every block if you wanted (or every 10 or whatever).

With a system like this, for example Namecoin difficulty would have dropped to "profitable" levels a long long time ago, and it would have suffered only a temporary slowdown.

Is there a downside to a system like this that I don't see? Some problem it doesn't solve or some exploit it introduces? It seems so simple :-)
This only has limited effectiveness against the doomsday problem. Even if you retarget on every block, you still have to wait for that first block, which could take a while if things are really bad. And as ArtForz said, you need to take a root of the correction (otherwise it explodes), so even after the block is found the correction is not enough.
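
To spell out why overlapping windows force taking the root (my summary; suppose a persistent hashrate change makes every full window of n = 2016 blocks report the same ratio r = target time / actual time):

\[ D_{k+1} = D_k \cdot r \quad\Rightarrow\quad D_{k+n} \approx D_k \cdot r^{\,n} \]

i.e. the full correction gets applied by every one of the n overlapping windows that see the deviation, an n-fold overcorrection that oscillates and explodes. With the root,

\[ D_{k+1} = D_k \cdot r^{1/n} \quad\Rightarrow\quad D_{k+n} \approx D_k \cdot r \]

the cumulative correction over one window length matches the original 2016-block algorithm, just spread smoothly.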
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
October 03, 2011, 07:16:22 AM
#43
Well - reading through main.cpp

CBlock::AcceptBlock()
{
  check for a dup hash (yeah, that's an odd check to do - still not sure why it does that)

  get the previous block

  the height of this block will be prev + 1

  check that the block's stated nBits difficulty == GetNextWorkRequired(previous block)

  etc.
}

GetNextWorkRequired(prev)
{
  if (height + 1) isn't a multiple of 2016
    return prev's difficulty

  otherwise find the block old = 2015 back and use the time diff of prev - old to work out the new difficulty
}

Now unless CBlock::AcceptBlock() is the wrong function, I don't see how it can accept the first incorrect-difficulty block.
Where is the off-by-1 issue in what I'm looking at?

I'm not saying it isn't there, I'm simply saying I don't see it.

Not sure about the issues with negative time changes,
but it works the difficulty out based on the previous block compared to the block 2015 back, not the current block (also)

Of course, if my change was added then that's a whole different can of worms, but my change only makes the adjustment interval shrink to 144 blocks in the worst case instead of 2016 blocks

Edit: I guess once per 2016 blocks you could step it down 1/4, but then the next 2016 would step it back up anyway.
The only issue I do see is the negative time issue, and that's real easy to add a check for in GetNextWorkRequired if necessary
sr. member
Activity: 406
Merit: 257
October 03, 2011, 06:23:42 AM
#42
Meni Rosenfeld:
I never claimed otherwise, I only answered how I came to the conclusion that "naive algo with asymmetric limits is possibly disastrous".

Zibbo:
Sliding window is an interesting idea, you'd obviously have to reduce the per-block adjustment factor and the limits to the 2016-th root to get the same overall adjustment speed as bitcoin, but at first look this should work without introducing new vulnerabilities and result in "smoother" adjustments (obv. not really *faster* if you use factors to end up with bitcoin-equiv adjustment rate, but less... well... blocky).

kano:
That's the whole point, the current network will happily accept a chain-of-massive-number-of-low-diff-blocks over a chain-of-fewer-harder-blocks as long as the sum of difficulty of the first is higher and it follows the "rules set in stone" (no invalid tx, generation amount <= calculated amount, difficulty == getNextDifficulty(prevblock), block nTime > median of prev 11 blocks, block nTime can't be more than 2 h in the future, ...).
Assuming someone forks the chain, creating a chain with more work than the real network over the same real time obviously requires the attacker(s) to have more hashrate than the rest of the network... you know, kinda the definition of a 51% attack.
And as I already explained, creating such a low-difficulty chain starting at the same nTime as the real chain without having the nTime of the fork-chain blocks run off into the far future (as real clients wouldn't accept that, see above) *should* be impossible. But it isn't, due to the off-by-1 in getNextDifficulty: the one inter-block gap per retarget period that the calculation skips can absorb a huge timestamp jump, so every measured window can be made to look long while the chain's nTime stays sane. Am I talking Chinese here or something?
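
To make that skipped gap concrete, a toy model of the timestamp pattern (entirely a from-scratch illustration: an 8-block window stands in for 2016, clamps are ignored, and on the real chain the crawling timestamps would also have to respect the median-of-11 rule):

#include <cstdio>
#include <vector>

int main() {
    const int n = 8;                    // toy stand-in for the 2016-block window
    const long jump = 4L * n * 600;     // one far-future timestamp per period
    std::vector<long> t;                // block timestamps on the forged chain
    long now = 0;
    for (int period = 0; period < 3; period++) {
        for (int i = 0; i < n - 1; i++) t.push_back(++now); // crawl, 1 s apart
        t.push_back(now + jump);        // last block of the period: jumped ahead
        // the first block of the next period drops back near 'now', which the
        // median-of-past-11 rule permits since most recent timestamps are low
    }
    const int p = 2;                              // inspect one retarget boundary
    long last          = t[p * n - 1];
    long firstMeasured = t[(p - 1) * n];          // buggy walk-back: n-1 gaps
    long firstFull     = t[(p - 1) * n - 1];      // correct walk-back: n gaps
    std::printf("measured: %ld s (looks slow -> difficulty falls)\n", last - firstMeasured);
    std::printf("full:     %ld s (the chain was actually fast)\n", last - firstFull);
    return 0;
}

Every measured window contains exactly one forward jump, so the calculated difficulty keeps falling; and because the jumps never accumulate, the forged chain's timestamps stay near real time instead of running off into the future.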
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
October 03, 2011, 05:55:49 AM
#41
Yep, all the "fucking with timestamps" is "only" worsening the result of 51% attacks.
The problem is this changes the economic incentives of miners, as "defecting" is suddenly a lot more profitable...
...
Ah OK, you're talking about getting the actual 50 BTC per block too, i.e. there's an actual direct gain from mining those low-difficulty blocks as long as the network accepts the blocks and they are never rolled back (or you transfer them to some exchange and cash them out ... though that's pretty easy to trace as long as TradeHill or MtGox is willing to help)

But as stated further up (and I was going to post this when he posted) there is the issue of the network accepting these low-difficulty blocks.
I'd presume (without delving into the code) that it wouldn't accept them ...
Meh, I'll look ... main.cpp ... yep, GetNextWorkRequired is part of ProcessBlock/AcceptBlock/CheckBlock and it calculates the difficulty, so the low-difficulty blocks would be rejected anyway by the standard client
(and I just noticed it does the checkpoint check there, so you can't fork below the last checkpoint)
newbie
Activity: 59
Merit: 0
October 03, 2011, 05:13:00 AM
#40
Wouldn't it be easiest to keep the original retarget algo (with 2016 block window etc), but just do the retargeting more often so that retarget windows overlap? You could retarget every block if you wanted (or every 10 or whatever).

With a system like this, for example Namecoin difficulty would have dropped to "profitable" levels a long long time ago, and it would have suffered only a temporary slowdown.

Is there a downside to a system like this that I don't see? Some problem it doesn't solve or some exploit it introduces? It seems so simple :-)
donator
Activity: 2058
Merit: 1054
October 03, 2011, 04:46:45 AM
#39
Solidcoin/ixcoin/i0coin/... with *1.1 /4 limits (edit: in theory, as the real chains all also have the same off-by-1 in their retargeting):
N = 1 cooperate, everyone gets 1.
N > 0.5 cooperate, cooperators get 1/N, defectors get 0.
N < 0.5 cooperate, cooperators get 0, defectors get 3.6/(1-N).
N = 0 cooperate, everyone gets 3.6.
Can't this kind of problem be solved with a term corresponding to the I in PID? If more blocks than normal are found consistently, this will kick in and increase the difficulty.
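
One hypothetical shape such an integral term could take (a sketch with made-up gains kP and kI, not a tested proposal): alongside the usual correction from the last window, keep a running error between the chain's actual elapsed time and its 10-minute schedule, and fold a small multiple of it into the retarget.

#include <cmath>

// difficulty:           current difficulty
// lastWindowSeconds:    actual time taken by the last 2016 blocks
// scheduleErrorSeconds: (actual - scheduled) elapsed time since some anchor point
double NextDifficulty(double difficulty,
                      double lastWindowSeconds,
                      double scheduleErrorSeconds)
{
    const double targetWindow = 2016.0 * 600.0;
    const double kP = 1.0;   // the ordinary proportional retarget term
    const double kI = 0.05;  // gain on the accumulated error (made up)

    double p = targetWindow / lastWindowSeconds;                    // proportional
    double i = std::exp(-kI * scheduleErrorSeconds / targetWindow); // integral

    // Blocks consistently found faster than schedule => scheduleError < 0
    // => i > 1 => difficulty is pushed up even when each individual window
    // looks normal, which is exactly the consistent-overproduction case.
    return difficulty * std::pow(p, kP) * i;
}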

Also, the suggestion in the OP isn't to change the limits asymmetrically. It's to expedite the adjustment when the recent history indicates the next adjustment is expected too far in the future. If the hashrate goes back up, the difficulty can rapidly increase just fine. Maybe even make it so that in this scenario the difficulty can even more easily go up, to further decrease any gameability.

All I'm saying is that just because some alts came up with a half-assed algo doesn't mean the doomsday problem is unsolvable.


Under the “bitcoin as planned” alternative there is no incentive either way, so fix the off-by-1 and make sure adjustments are +/- the same maximum ("symmetric" by everyone's definition but kjj's) and we're done, right? In other words, you need the off-by-1 or asymmetry for this to be exploitable, right?
Done with ArtForz's attack, yes. But not with the problem of the OP.

Also, once new coins are not being generated, the issue goes away as well, right? It doesn't matter how many blocks you generate if the entire subsidy is coming from transaction fees (a blocknumber based demurrage would still suffer the same problem, however).
I believe so. But the dynamics of that era are still not understood well enough even without considering potential attacks.
legendary
Activity: 905
Merit: 1011
October 03, 2011, 04:23:16 AM
#38
Under the “bitcoin as planned” alternative there is no incentive either way, so fix the off-by-1 and make sure adjustments are +/- the same maximum ("symmetric" by everyone's definition but kjj's) and we're done, right? In other words, you need the off-by-1 or asymmetry for this to be exploitable, right?

Also, once new coins are not being generated, the issue goes away as well, right? It doesn't matter how many blocks you generate if the entire subsidy is coming from transaction fees (a blocknumber based demurrage would still suffer the same problem, however).