
Topic: Kimoto Gravity Well simpler alternative (Read 5466 times)

full member
Activity: 140
Merit: 100
May 14, 2014, 03:59:30 AM
#42
From what I can tell, the Dark Gravity Wave algorithm calculates an average difficulty and an average time between blocks for the previous 14 blocks.  It also sums the block times for the last 140 blocks.  Once out of the loop, another average is taken that combines the previously found average with that sum.  An actual timespan is derived from this number and the target spacing, and in the end the difficulty is allowed to adjust at most 3x in either direction.  Please correct me if I am wrong on any of this.
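Putting my reading of it into a rough Python sketch (the window sizes, the blend of the two time measurements, and the 2.5-minute target are just my interpretation, not the actual code):
Code:
# Rough sketch of the retarget as I read it: average difficulty and spacing
# over ~14 blocks, a time measurement over ~140 blocks, blend the two, then
# clamp the adjustment to 3x in either direction. Illustrative only.

TARGET_SPACING = 150  # an assumed 2.5-minute target, for illustration

def retarget_sketch(timestamps, difficulties, target_spacing=TARGET_SPACING):
    """timestamps, difficulties: oldest-first lists for the recent chain."""
    short, long_ = 14, 140
    assert len(timestamps) > long_ and len(difficulties) >= short

    # Average difficulty over the short window.
    avg_diff = sum(difficulties[-short:]) / short

    # Average spacing over the short window, total time over the long window.
    avg_spacing = (timestamps[-1] - timestamps[-short - 1]) / short
    long_span = timestamps[-1] - timestamps[-long_ - 1]

    # Blend the two time measurements into one "actual spacing" estimate.
    actual_spacing = (avg_spacing + long_span / long_) / 2

    # Allow at most a 3x adjustment in either direction.
    ratio = target_spacing / actual_spacing
    ratio = max(1 / 3, min(ratio, 3.0))
    return avg_diff * ratio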

Due to exploits found in KGW, I've implemented a new difficulty retargeting algorithm called DarkGravityWave.

What is DarkGravityWave? It uses multiple exponential moving averages and a simple moving average to smoothly adjust the difficulty. This implementation is far simpler and better suited to adjusting difficulty than KGW, and it also fixes all known exploits.

Here's the commit if you're interested:

https://github.com/evan82/darkcoin/commit/07c99052edc617975cdcbe4482e02c52e2d1fbf5

But the problem is that KGW retargets every block yet allows timestamps to go backwards over more than one block. I don't see how this is fixed in DGW; can you explain?
I was looking at this too because I didn't see anything to deal with the really long block separation exploit, but then noticed he clamps the result at the end so it will only allow up to a 3x adjustment in either direction.

I am genuinely curious as to how you have fixed the time warp exploit problem, because I think your algorithm is still susceptible.  Even though you don't include a negative time between blocks in your sum, you still count the block.  Further down, this affects your 'smart average' slightly.  I don't think that this is the main issue, however.  With Dark Gravity Wave I am still able to jump forward 5 blocks and then back in time 1 block.  I don't violate any of the timestamp rules by doing this, and I am able to achieve an actual timespan that is larger than the target timespan, thus lowering the difficulty.
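To put rough numbers on that, here is a toy calculation assuming a retarget that skips negative intervals from its sum but still counts the block (which is how I read it); the 150-second target and the timestamps are made up:
Code:
# Toy illustration of the jump-forward-then-back trick, assuming a retarget
# that skips negative intervals but still counts the block. A 150-second
# target spacing is assumed; the timestamps are invented.

def measured_spacing(timestamps):
    total, count = 0, 0
    for prev, cur in zip(timestamps, timestamps[1:]):
        interval = cur - prev
        if interval > 0:     # negative intervals are skipped...
            total += interval
        count += 1           # ...but the block is still counted
    return total / count

honest   = [0, 150, 300, 450, 600, 750]        # steady 150 s blocks
attacker = [0, 150, 300, 450, 600 + 750, 750]  # one block stamped ~12 min
                                               # ahead, the next jumps back

print(measured_spacing(honest))    # 150.0 -> difficulty stays put
print(measured_spacing(attacker))  # 270.0 -> looks slow, difficulty drops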

Edit: I think you may have done yourself a disservice by not including the negative time between blocks.  Doing so would bring down your average block spacing and calculated sum.  This drops your actual timespan and in essence punishes an attacker for jumping back in time.  I'm not saying this would fix anything.  Am I missing something here?


The simplest way to protect against the time warp exploit, in all its forms, is with a simple rule about blocks whose timestamp is before the previous block.

Just make an adjustment to the difficulty baseline so they don't get the benefit of difficulty adjustments made after their own timestamp.  

In other words,  if a new block is timestamped prior to the previous four, you proceed as if the base difficulty were the same as it was prior to those four blocks.
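In rough Python terms the rule would look something like this (the four-block lookback, the helper names and the data layout are just illustrative):
Code:
# Rough sketch of the rule: a block timestamped before all of the previous
# four blocks doesn't get the benefit of the difficulty adjustments those
# blocks produced; the retarget runs from the older baseline instead.
# Helper names and data layout are illustrative, not any coin's actual code.

LOOKBACK = 4

def difficulty_baseline(new_timestamp, recent_blocks):
    """recent_blocks: oldest-first list of (timestamp, difficulty) pairs.
    Returns the slice of the chain the retarget should be computed from."""
    tail = recent_blocks[-LOOKBACK:]
    if all(new_timestamp < ts for ts, _ in tail):
        # Time-warped block: pretend the last LOOKBACK adjustments
        # never happened.
        return recent_blocks[:-LOOKBACK]
    return recent_blocks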

poof, every time warp exploit in the world becomes impossible.

I'm not sure this is enough. Hasn't he already gotten those 4 blocks cheaper? He can still go far into the future and stay there generating cheap blocks, waiting for the real chain to catch up in time.

There is a rule in DGW that makes it not include the timestamp if that is the case, plus it uses a different strategy that is far simpler. The KGW exploit was run against Darkcoin on the mainchain, so I ran it against DGW and DGW seemed to hold up just fine.

A limit on the difficulty adjustment does not hinder me from exploiting the timestamps.  Remember, if I am working with my own private chain, I have all the time in the world to build a chain that is longer, abides by all the rules, but has a lower difficulty; the exploit gets easier as the difficulty gets lower.  

Could we get BCX in here to confirm that this is still exploitable?
Does that mean Dark Gravity Wave still hasn't fixed anything?
sr. member
Activity: 477
Merit: 500
Bump. Please move the comments you wrote in the other thread over here.
member
Activity: 98
Merit: 10
I've been refining the KGW code, but bugs concern me. I will definitely test as soon as I can, but I really need a beta test to know for sure.
Fdt
full member
Activity: 137
Merit: 100
Forexcoin Developer
Yes, it's a good idea. I am taking it easy now, but I can test the new code soon. :)
full member
Activity: 210
Merit: 100
You should test it out on a coin.
mkz
newbie
Activity: 14
Merit: 0
I don't think I get why the limit is at 2h either.  Generating blocks is probabilistic and does sometimes take that long, but that's the difference from the current time, not the difference from the last block.

1h is needed to prevent a significant part of the network from dropping during DST adjustments.  Yes, there are still some systems that don't have a stable internal clock set to GMT, and those are sensitive to local-time resets which are different from country to country, sometimes automated and sometimes not, often forgotten about, sometimes both automated and done by hand, sometimes done the wrong direction out of confusion, etc.  The DST changeovers are an old network headache, but less and less important as systems standardize more on internal GMT clocks.  It may be time to just drop that hour as having been a bad idea in the first place, but I'm pretty sure that's why at least one hour is there.

It's probably worth adding another 15m or so for people who just have their clocks set wrong.  But that gets us 75 minutes, not 120.  Why the extra 45?



This should only be an issue for miners anyway. They need to accept a new block in order to add new blocks after it. If a non-miner rejects a block because it's too far in the future, they should accept it later anyway, so there's not much harm done. Mining is pretty centralized by mining pools nowadays, so I'd expect there to be multiple paths with reasonably correct GMT clocks connecting most of the mining network.
legendary
Activity: 924
Merit: 1132
I don't think I get why the limit is at 2h either.  Generating blocks is probabilistic and does sometimes take that long, but that's the difference from the current time, not the difference from the last block.

1h is needed to prevent a significant part of the network from dropping during DST adjustments.  Yes, there are still some systems that don't have a stable internal clock set to GMT, and those are sensitive to local-time resets which are different from country to country, sometimes automated and sometimes not, often forgotten about, sometimes both automated and done by hand, sometimes done the wrong direction out of confusion, etc.  The DST changeovers are an old network headache, but less and less important as systems standardize more on internal GMT clocks.  It may be time to just drop that hour as having been a bad idea in the first place, but I'm pretty sure that's why at least one hour is there.

It's probably worth adding another 15m or so for people who just have their clocks set wrong.  But that gets us 75 minutes, not 120.  Why the extra 45?

mkz
newbie
Activity: 14
Merit: 0
- take the last 1.5*N block interval values (the difference between the time stamps of adjacent blocks)
- store the interval values in a double linked list

- go through this list and remove the highest 0.25*N and the lowest 0.25*N interval values (including negative values) which will remove noise and correct time travel intervals (both the before and after intervals).


I think this is the most important point in this algorithm. Bad timestamps provide no information about the block rate and should be ignored. This doesn't solve the problem completely, but hopefully the difficulty is less wrong because we don't make big changes based on bad data. The ideal solution would be a system where timestamps can be trusted.

In general we should be clearer about dividing the problem into two parts: estimating the current block rate, and calculating the difficulty adjustment based on the estimated rate. The control part is not that difficult and a simple proportional controller seems to be good enough in a lot of cases. The difficulty comes from the fact that block timestamps that cannot be trusted are used as input; it's garbage in, garbage out. I usually work with embedded systems, and detecting broken sensors there is pretty straightforward. Here the problem is more difficult because the miners have an incentive to lie about the timestamps.
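As a sketch of what I mean by the control part being simple, a bare-bones proportional controller (the gain and clamp values here are made up):
Code:
# Bare-bones proportional controller on the estimated block spacing. The
# gain and the per-step clamp are made up; the hard part in practice is
# feeding it a spacing estimate that can actually be trusted.

def adjust_difficulty(current_diff, estimated_spacing, target_spacing,
                      gain=0.5, max_step=0.25):
    # Positive error means blocks are coming too fast -> raise difficulty.
    error = (target_spacing - estimated_spacing) / target_spacing
    step = max(-max_step, min(gain * error, max_step))
    return current_diff * (1.0 + step)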

There is some outlier rejection in AcceptBlock() (the timestamp has to be between the median of the last 11 blocks and 2h in the future) and it could be made stronger. I don't understand why the limit is at 2h even for Bitcoin, never mind for the faster altcoins. Extra rejection and smoothing can be done by pre-processing the block timestamps before feeding the blocks to the difficulty controller (like Cor2's algorithm from before), so that we can still accept blocks even though we don't trust their timestamps enough to base the difficulty adjustment on them.
sr. member
Activity: 477
Merit: 500
Plain undo is not enough. When 'undoing', the number of generated blocks should be counted in. If you jump back over 5 blocks, you should get a much bigger jump in diff than if you jump back over only 1 block, even if both jumps land on the same time.


Nope.  I claim that if you're jumping back to a time before the last five blocks, you should increase  difficulty by exactly as much as the difficulty has lowered in the last 5 blocks.  Well, plus whatever upward diff adjustment you get for the minimum-time (actually negative-time) block you're turning in.  

The point is that nobody should be able to make long-term changes to the difficulty (especially nobody should be able to lower the difficulty) by going backward and forward in timestamps.  The way to do it is to undo whatever "progress" they've made toward changing the difficulty whenever they jump backward.  

While it's true that jumping back over more blocks means potentially undoing more adjustments and therefore getting a much bigger total change with your back-timestamped block, it won't always be the case that the last N blocks have all been lowering the difficulty - and making a big adjustment anyway would unduly increase the difficulty.  

If we compare two chains:
1) A chain that has 5 blocks inside one minute, timestamps 00:10, 00:20, 00:30, 00:40, 00:50
2) A chain that has 5 blocks inside one minute, timestamps 00:10, 01:00, 02:00, 03:00, 00:50

They both should have the same total diff; plain undo won't do that. Chain 2 could be generated with a lot less hash power if just the latest block's diff is made the same as in chain 1.

edit: btw, you will generate the 5th block at the difficulty set by the 4th block's timestamp.
Fdt
full member
Activity: 137
Merit: 100
Forexcoin Developer
Interesting thread.

I am a bit sleepy; however, later I can test some scenarios on the test servers.

Also, I am offering 300,000 FRX (provided the code is really as efficient as it claims to be) as a contribution to the new difficulty adjustment algorithm pot.

legendary
Activity: 924
Merit: 1132
Hell, maybe the right thing isn't looking at "intervals" as such at all.  Maybe the right thing is looking at the total number of blocks in the blockchain that have timestamps within, say, an hour of the most recent block's time, completely ignoring whether individual blocks are backward, forward, long, short, or whatever.  If there are a lot, it's time to raise difficulty.  If there aren't, it's time to lower difficulty.

It has at least a couple of "consistency" properties I like, anyway.  No matter how anybody jumps back-and-forth in time, if you build up a lot of blocks that have similar timestamps you start getting seriously increased difficulty until you get completely away from that time. 
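A quick sketch of the idea, with a made-up mapping from block count to difficulty (the one-hour window and the assumed 2.5-minute spacing would obviously need tuning):
Code:
# Sketch of the "count nearby blocks" idea: difficulty is driven by how many
# blocks carry timestamps within an hour of the newest block's timestamp,
# regardless of their order. The window and the scaling are made up.

WINDOW = 3600          # one hour
TARGET_SPACING = 150   # assumed 2.5-minute blocks, so ~24 expected per hour

def retarget_by_density(timestamps, current_diff,
                        window=WINDOW, target_spacing=TARGET_SPACING):
    tip = timestamps[-1]
    # Count every other block whose timestamp lies within the window of the tip.
    nearby = sum(1 for t in timestamps[:-1] if abs(tip - t) <= window)
    expected = window / target_spacing
    # Many blocks clustered near the tip's time -> raise difficulty;
    # few -> lower it.
    return current_diff * (nearby / expected)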

hero member
Activity: 595
Merit: 500
Alternatively, to artificially lower diff, issue time travel block(s) with increased time since last block(s) to trick the re-target algo into thinking the block finding time is too long.
That's a contradiction in terms.  Your time-travel block cannot have an increased time since last block.  If it did then it would not be a time travel block.  You raise the difficulty the maximum amount by turning in a block shorter than x, and negative time is always shorter than positive x. So the time travel block will always incur a maximum upward difficulty adjustment.  
I disagree: submitting blocks with a future time stamp is just as much time travel as a block with a time stamp in the past. Both cause problems with adjacent blocks having a negative time interval. Still, a future time stamp will cause the re-target to think that more time has elapsed than the actual block finding time, so it will try to make the diff lower than it should be. That is, until the next block with a correct time stamp is submitted.
hero member
Activity: 595
Merit: 500
Here is a proposal for a simple algorithm as an alternative to KGW that should be fairly resistant to time-travel exploits,
while producing a stable and fast-re-targeting diff.
The assumption is that most block finders are honest (with their time stamps), since a majority attack can always be designed to wreak havoc;
the issue here is to avoid easy exploitation of the re-target by making it as redundant as possible while still maintaining agility:
 (The integer N should be a multiple of 4, for example 32)

- take the last 1.5*N block interval values (the difference between the time stamps of adjacent blocks)
- store the interval values in a double linked list

- go through this list and remove the highest 0.25*N and the lowest 0.25*N interval values (including negative values) which will remove noise and correct time travel intervals (both the before and after intervals).

- calculate the average of the remaining N block intervals (add all intervals and divide by N)
- calculate the average of the most recent 0.5*N block intervals
- calculate the average of the most recent 0.25*N block intervals
- add the first two averages, divide by 2, add the last average to it and again divide by 2
This is now a nice number for the actual block finding time that is weighted towards the later block intervals, with noise and time travel events filtered out. The diff can now be lowered/increased based directly on the ratio between this avg block finding time and the designed block time.
The response time for new intervals (if they are not caught by the filter) is quick, since they count heavily (4x more than intervals older than 1/2 N).
If the network hash rate were to change dramatically, it might take up to 1/4 N blocks before the diff responds, since the new values are initially filtered out. If N is 32, for example, and the normal block finding time is 1 minute, then an order-of-magnitude increase in hash rate would mean more than 8 blocks found within 1 minute before the diff quickly starts rising, so within 2 minutes the block finding rate will seriously slow down again. When the network suddenly loses 90% of its hash rate, the avg block find time goes to 10 min and about 1.5 hours go by before the filter lets the extremely large intervals show up, but then the diff will quickly lower and the block find time will return to normal.
Time travel effects are essentially eliminated: every single time travel event (up to 8 in a 48-block window when N is 32) will show a very low or negative interval and a very high interval, which are both filtered out.
This simple algo does not protect against an attacker that manages to inject a continuous sequence of blocks showing large or very small time increments, but the assumption is that the majority of the network is submitting blocks with correct time stamps.
It definitely will protect against the random jumps in diff that come from looking only at the last 1 or 2 blocks.
Also, the algo is so simple that it takes hardly any processing time and very little memory.
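Here is a rough Python sketch of the above with N = 32; a plain list stands in for the doubly linked list, and the final adjustment line is only indicative:
Code:
# Rough sketch of the algorithm above with N = 32. A plain list stands in
# for the doubly linked list; the final adjustment line is only indicative.

N = 32

def filtered_block_time(timestamps, n=N):
    """timestamps: the last 1.5*n + 1 block time stamps, oldest first."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    assert len(intervals) == 3 * n // 2

    # Drop the 0.25*n highest and 0.25*n lowest intervals (including any
    # negative ones from time travel), keeping n values in their original
    # order for the recency weighting below.
    cut = n // 4
    ranked = sorted(range(len(intervals)), key=lambda i: intervals[i])
    dropped = set(ranked[:cut]) | set(ranked[-cut:])
    kept = [v for i, v in enumerate(intervals) if i not in dropped]

    # Weighted combination of averages over n, n/2 and n/4 recent intervals.
    avg_n = sum(kept) / n
    avg_half = sum(kept[-n // 2:]) / (n // 2)
    avg_quarter = sum(kept[-n // 4:]) / (n // 4)
    return ((avg_n + avg_half) / 2 + avg_quarter) / 2

def new_difficulty(old_diff, timestamps, target_spacing=60):
    return old_diff * target_spacing / filtered_block_time(timestamps)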
What do you think? Let me know.
legendary
Activity: 924
Merit: 1132

Alternatively, to artificially lower diff, issue time travel block(s) with increased time since last block(s) to trick the re-target algo into thinking the block finding time is too long.

That's a contradiction in terms.  Your time-travel block cannot have an increased time since last block.  If it did then it would not be a time travel block.  You raise the difficulty the maximum amount by turning in a block shorter than x, and negative time is always shorter than positive x. So the time travel block will always incur a maximum upward difficulty adjustment. 

hero member
Activity: 595
Merit: 500
The second fix is even simpler: when you jump back in time, always undo whatever adjustment to difficulty came with any blocks you jump back over. 
Plain undo is not enough. When 'undoing', the number of generated blocks should be counted in. If you jump back over 5 blocks, you should get a much bigger jump in diff than if you jump back over only 1 block, even if both jumps land on the same time.
Nope.  I claim that if you're jumping back to a time before the last five blocks, you should increase  difficulty by exactly as much as the difficulty has lowered in the last 5 blocks.  Well, plus whatever upward diff adjustment you get for the minimum-time (actually negative-time) block you're turning in. 

The point is that nobody should be able to make long-term changes to the difficulty (especially nobody should be able to lower the difficulty) by going backward and forward in timestamps.  The way to do it is to undo whatever "progress" they've made toward changing the difficulty whenever they jump backward. 

While it's true that jumping back over more blocks means potentially undoing more adjustments and therefore getting a much bigger total change with your back-timestamped block, it won't always be the case that the last N blocks have all been lowering the difficulty - and making a big adjustment anyway would unduly increase the difficulty. 
I fear this opens up another possibility for exploitation: select the past block sequence that you would like the re-target algo to trip over, and then send a time-travel block that positions you exactly at that sequence (plus one more block now). This could easily be used to explode the diff to very high numbers: search for (or create) an accidental sequence of very short time intervals, position the re-target right at that sequence with yet another short interval by time travel, and if the re-target algo looks at just those blocks (fast re-target = short range of look-back) it will spit out an extremely high diff...
Alternatively, to artificially lower diff, issue time travel block(s) with increased time since last block(s) to trick the re-target algo into thinking the block finding time is too long.
I am trying to define a simple algorithm to get rid of most of these exploits, give a fast re-target, but also avoid the constant diff swings that I see from algos that only look at a few of the last blocks. Probably a weighted average; I'll put it in a later post. Feel free to comment/correct and especially point out any weaknesses or exploitation risks.
legendary
Activity: 924
Merit: 1132

The second fix is even simpler: when you jump back in time, always undo whatever adjustment to difficulty came with any blocks you jump back over. 

Plain undo is not enough. When 'undoing', the number of generated blocks should be counted in. If you jump back over 5 blocks, you should get a much bigger jump in diff than if you jump back over only 1 block, even if both jumps land on the same time.


Nope.  I claim that if you're jumping back to a time before the last five blocks, you should increase  difficulty by exactly as much as the difficulty has lowered in the last 5 blocks.  Well, plus whatever upward diff adjustment you get for the minimum-time (actually negative-time) block you're turning in. 

The point is that nobody should be able to make long-term changes to the difficulty (especially nobody should be able to lower the difficulty) by going backward and forward in timestamps.  The way to do it is to undo whatever "progress" they've made toward changing the difficulty whenever they jump backward. 

While it's true that jumping back over more blocks means potentially undoing more adjustments and therefore getting a much bigger total change with your back-timestamped block, it won't always be the case that the last N blocks have all been lowering the difficulty - and making a big adjustment anyway would unduly increase the difficulty. 
sr. member
Activity: 477
Merit: 500

The second fix is even simpler: when you jump back in time, always undo whatever adjustment to difficulty came with any blocks you jump back over. 

Plain undo is not enough. When 'undoing', the number of generated blocks should be counted in. If you jump back over 5 blocks, you should get a much bigger jump in diff than if you jump back over only 1 block, even if both jumps land on the same time.

But this might lead to the correct fix: if you have to be able to retarget fast, it should be mainly based on counting time backwards while also counting the blocks that fall inside the time window. I.e.: loop back over blocks until (head time - block time) > n seconds, then retarget based on time / block count.
The time window could be set so that usually you loop over only 1 block.
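Roughly like this, as a sketch; the window length and the simple scaling are just examples:
Code:
# Sketch: walk back from the head until the time window is covered, count
# the blocks inside it, and retarget on time per block. The window length
# and the simple scaling are just examples.

WINDOW = 150  # seconds; pick it so that normally only about 1 block fits

def retarget_by_window(blocks, target_spacing, window=WINDOW):
    """blocks: oldest-first list of (timestamp, difficulty) pairs."""
    head_time, head_diff = blocks[-1]
    count = 0
    for ts, _ in reversed(blocks[:-1]):  # walk back from the head's parent
        if head_time - ts > window:
            break
        count += 1
    if count == 0:
        # No block inside the window: fall back to the actual last interval.
        spacing = max(head_time - blocks[-2][0], 1)
    else:
        spacing = window / count
    return head_diff * target_spacing / spacing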

But limiting the max change might actually make the exploit easier. Retarget to the limit when going forward and go way over the limit when going backward -> you get the maximum benefit and your punishment is limited.
legendary
Activity: 924
Merit: 1132

There are two simple fixes:  First, update the difficulty only seldom (as Bitcoin does), at intervals such that there is no hope of jumping back across more than one difficulty adjustment nor influencing one that you do cross very much when it's recalculated on crossing the critical block height again.

But given the objective of updating difficulty rapidly enough to be resistant to "raider" mining, you don't want to do that. 

The second fix is even simpler: when you jump back in time, always undo whatever adjustment to difficulty came with any blocks you jump back over. 

Either of these fixes kills time warp exploits dead.
member
Activity: 70
Merit: 10
Crypto Craftsman
Due to exploits found in KGW, I've implemented a new difficulty retargeting algorithm called DarkGravityWave.

What is DarkGravityWave? It uses multiple exponential moving averages and a simple moving average to smoothly adjust the difficulty. This implementation is far simpler and better suited to adjusting difficulty than KGW, and it also fixes all known exploits.

Here's the commit if you're interested:

https://github.com/evan82/darkcoin/commit/07c99052edc617975cdcbe4482e02c52e2d1fbf5

But the problem is that KGW retargets every block yet allows timestamps to go backwards over more than one block. I don't see how this is fixed in DGW; can you explain?
I was looking at this too because I didn't see anything to deal with the really long block separation exploit, but then noticed he clamps the result at the end so it will only allow up to a 3x adjustment in either direction.
sr. member
Activity: 350
Merit: 250
Independent Cryptoveloper
I saw the DGW code and it appears that it would run a bit faster in Java than KGW because it doesn't use floating-point math.

Regarding exploits with KGW: is the time warp exploit more likely to work on a coin that sets KGW to calculate over a smaller range of blocks or a longer range? Megacoin, for instance, does its calculations on a number of blocks between 144 and 4032 (theoretically). Other coins are using smaller ranges, such as 28 to about 240.