
Topic: How much can difficulty be adjusted each block? ★ DigiByte v 2.0.0-DigiShield ★ (Read 1169 times)

legendary
Activity: 1722
Merit: 1051
Official DigiByte Account

This looks reasonable now and will work under normal conditions, although an asymmetric algorithm is not recommended: it opens up some unfair mining practices.

If you look at the recent history of Luckycoin and what wafflepool.com did to it, you should realize that your 99%-finished algorithm will not protect you from a similar event.
https://coinplorer.com/Charts/Difficulty/LKY

It doesn't matter how you choose these parameters; they will provide no effective protection.
Run a simulation of what happens when a pool sustains 10 GH/s+ on your coin for some hours.

Thank you for the feedback and the link! We will look into that for sure!
sr. member
Activity: 259
Merit: 260

We are leaving this code as is, now that we see how it is used with checkpoints. It took us several runs to understand exactly what was going on there.
Quote
// Maximum 400% adjustment...
        bnResult *= 4;

I always find it interesting that no one seems to understand what this code does. Combined with the retargeting code below, this still leaves a security hole open.


We discovered the optimum combination was to set different limits for difficulty increases versus decreases. By doing this we are able to recover from, or adjust to, an 8-10x increase or decrease in net hash within just a few blocks. With high limits on both increases and decreases we were able to adjust rapidly to hash swings, but an annoying "oscillation" of block timings emerged: two fast blocks, then two slow. We discovered that lower limits on increases and higher limits on decreases mitigated the oscillation, and overall performance and re-targeting improved dramatically.

Quote
if (nActualTimespan < (retargetTimespan - (retargetTimespan/3)))    // caps per-block difficulty increases
    nActualTimespan = (retargetTimespan - (retargetTimespan/3));
if (nActualTimespan > (retargetTimespan + (retargetTimespan/2)))    // caps per-block difficulty decreases
    nActualTimespan = (retargetTimespan + (retargetTimespan/2));

We are now running tests on the network for a few hours to get a running block average, and we will also simulate a few more hash swings. We are 99% finished, though! With DigiShield we are offering a solution to the re-target problem never before seen in a coin!

This looks reasonable now and will work under normal conditions, although an asymmetric algorithm is not recommended: it opens up some unfair mining practices.

If you look at the recent history of Luckycoin and what wafflepool.com did to it, you should realize that your 99%-finished algorithm will not protect you from a similar event.
https://coinplorer.com/Charts/Difficulty/LKY

It doesn't matter how you choose these parameters; they will provide no effective protection.
Run a simulation of what happens when a pool sustains 10 GH/s+ on your coin for some hours.
legendary
Activity: 1722
Merit: 1051
Official DigiByte Account
Our delay right now is deciding exactly how much we should allow the difficulty to adjust with each block. Our test results indicate that the equivalent of 200x allows the fastest adjustments up and down within a reasonable amount of time.
Quote
// Maximum 400% adjustment...
        bnResult *= 200;

Is there something we are overlooking here? With a much higher (actual) hash load, will this adjustment behave differently? What are the dangers of allowing very large diff swings like this?

Also, at a higher difficulty, like the 10-20 range we currently see, will we see any other phenomena that we can't test in the low-hash test-net environment? Are we on the right path?

I am getting the impression you don't really understand the retargeting code.
The only thing you achieve by changing the number in the quoted code is to open up a big security hole.
And if you also change the other parts of the code to match the change above, the resulting retargeting factor of 200 is insane.
You are just asking someone with an ASIC to push your coin from diff 10 to diff 10,000, leaving you stuck, unable to mine even a single block.

We are leaving this code as is, now that we see how it is used with checkpoints. It took us several runs to understand exactly what was going on there.
Quote
// Maximum 400% adjustment...
        bnResult *= 4;

We discovered the optimum combination was to set different limits for difficulty increases versus decreases. By doing this we are able to recover from, or adjust to, an 8-10x increase or decrease in net hash within just a few blocks. With high limits on both increases and decreases we were able to adjust rapidly to hash swings, but an annoying "oscillation" of block timings emerged: two fast blocks, then two slow. We discovered that lower limits on increases and higher limits on decreases mitigated the oscillation, and overall performance and re-targeting improved dramatically.

Quote
if (nActualTimespan < (retargetTimespan - (retargetTimespan/3)))    // caps per-block difficulty increases
    nActualTimespan = (retargetTimespan - (retargetTimespan/3));
if (nActualTimespan > (retargetTimespan + (retargetTimespan/2)))    // caps per-block difficulty decreases
    nActualTimespan = (retargetTimespan + (retargetTimespan/2));
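
For concreteness, here is the arithmetic these clamps imply, assuming the usual inverse retarget relation newDifficulty = oldDifficulty * retargetTimespan / nActualTimespan (my reading, a sketch rather than the team's numbers):
Code:
// Worked numbers for the timespan clamp above. Assumed 60-second spacing.
#include <cstdio>

int main()
{
    const double T = 60.0;             // assumed 60-second target spacing
    const double floorTs   = T - T/3;  // fastest timespan the clamp allows: 40 s
    const double ceilingTs = T + T/2;  // slowest timespan the clamp allows: 90 s
    printf("max per-block difficulty increase: x%.2f\n", T / floorTs);    // x1.50
    printf("max per-block difficulty decrease: /%.2f\n", ceilingTs / T);  // /1.50
}
Interestingly, with these particular constants the up and down bounds both come out to a factor of 1.5; the asymmetry lives in the timespan clamps themselves (-1/3 versus +1/2) rather than in the resulting multipliers.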

We are now running tests on the network for a few hours to get a running block average, and we will also simulate a few more hash swings. We are 99% finished, though! With DigiShield we are offering a solution to the re-target problem never before seen in a coin!
sr. member
Activity: 259
Merit: 260
Our delay right now is deciding exactly how much we should allow the difficulty to adjust with each block. Our test results indicate that the equivalent of 200x allows the fastest adjustments up and down within a reasonable amount of time.
Quote
// Maximum 400% adjustment...
        bnResult *= 200;

Is there something we are overlooking here? With a much higher (actual) hash load, will this adjustment behave differently? What are the dangers of allowing very large diff swings like this?

Also, at a higher difficulty, like the 10-20 range we currently see, will we see any other phenomena that we can't test in the low-hash test-net environment? Are we on the right path?

I am getting the impression you don't really understand the retargeting code.
The only thing you achieve by changing the number in the quoted code is to open up a big security hole.
And if you also change the other parts of the code to match the change above, the resulting retargeting factor of 200 is insane.
You are just asking someone with an ASIC to push your coin from diff 10 to diff 10,000, leaving you stuck, unable to mine even a single block.
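
Rough arithmetic behind that warning (a sketch with assumed numbers: roughly 700 MH/s of scrypt hash, chosen because it lines up with 60-second blocks near difficulty 10, and the standard relation of difficulty * 2^32 expected hashes per block):
Code:
// Expected solve time per block once a large miner leaves the coin
// at an inflated difficulty. Assumed numbers, not measured data.
#include <cstdio>
#include <initializer_list>

int main()
{
    const double hashRate = 700e6;  // assumed remaining network hash rate, H/s
    for (double diff : {10.0, 10000.0}) {
        double seconds = diff * 4294967296.0 / hashRate;  // diff * 2^32 hashes
        printf("difficulty %7.0f -> ~%6.0f s (%.1f h) per block\n",
               diff, seconds, seconds / 3600.0);
    }
}
Under these assumptions a block at difficulty 10,000 takes on the order of 17 hours to find, which is what "stuck" means in practice here.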
legendary
Activity: 1722
Merit: 1051
Official DigiByte Account
We are officially naming our next update: DigiByte v 2.0.0-DigiShield.

Each successive major release of DigiByte will be accompanied by a "Digi" name.

The DigiShield update will serve two main purposes: to shield against multi-pools, and to curb over-inflation of new coins (via a 0.5% weekly reward reduction). We are testing a few more configurations before we release the update, and we are pushing the block where the changes kick in back to next Friday.
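
For a sense of scale on that reward schedule (my arithmetic, not a team figure), a 0.5% weekly cut compounds to roughly a 23% reduction over a year:
Code:
// Compounding check: 0.5% weekly reward reduction over 52 weeks.
#include <cstdio>
#include <cmath>

int main()
{
    printf("after 52 weeks the reward is %.1f%% of today's\n",
           100.0 * std::pow(1.0 - 0.005, 52));  // ~77.1%
}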

After testing multiple configurations (30+) of our own code, the Kimoto Gravity Well, and Earthcoin's one-minute block configuration, we have learned a few important lessons over the last week.

First, it is essential that a re-target occur with every block; there is just no other way to do it. Second, it is not possible to allow a re-target faster than 60 seconds, as multiple errors occur and the client crashes while mining (with 60-second block spacing). Third, even the Kimoto Gravity Well allows a fair amount of "insta-mining" following a major hash increase, and so does the Earthcoin approach. Neither is very effective: both will still give multi-pools a few minutes of mining, and they are not all they are hyped up to be. Earthcoin still gets hit by multi-pools for 11-12 minutes at a time, and the same would happen to DigiByte even with a KGW implementation.
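
To make the first lesson concrete, here is a minimal sketch of a per-block retarget (assumed names and structure, not the actual DigiByte source), reusing the timespan clamp quoted earlier in the thread:
Code:
// Per-block retarget sketch: the target is recomputed from the most recent
// block spacing instead of once per multi-day window. Assumed names.
#include <cstdint>
#include <cstdio>

static const int64_t retargetTimespan = 60;  // one block per 60 seconds

// Multiplier applied to the previous target (target up = difficulty down).
double NextTargetMultiplier(int64_t prevBlockTime, int64_t prevPrevBlockTime)
{
    int64_t nActualTimespan = prevBlockTime - prevPrevBlockTime;
    if (nActualTimespan < retargetTimespan - retargetTimespan/3)
        nActualTimespan = retargetTimespan - retargetTimespan/3;  // caps difficulty rise at 1.5x
    if (nActualTimespan > retargetTimespan + retargetTimespan/2)
        nActualTimespan = retargetTimespan + retargetTimespan/2;  // caps difficulty fall at /1.5
    return (double)nActualTimespan / (double)retargetTimespan;
}

int main()
{
    // A block found in 30 s instead of 60 s: the clamp allows the target
    // to shrink to 40/60 = 0.67x, i.e. difficulty rises 1.5x.
    printf("target multiplier: %.2f\n", NextTargetMultiplier(1030, 1000));
}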

The truth is, both of those approaches limit the amount the difficulty is allowed to change each time, and this becomes an even bigger issue with a sudden hash decrease. While simulating a sudden 20-40 fold hash decrease, the Gravity Well can become "stuck" for a few hours before evening out again. Since it takes several blocks for the difficulty to come back down, the delay really adds up when each block takes 20-30 minutes to discover. The same goes for the Earthcoin approach; this is why they get "stuck" for 20-40 minutes following an 11-minute hash increase from Hash cow.
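
To put rough numbers on that (a back-of-the-envelope sketch with assumed parameters, not the team's test data): here is how long a per-block retarget whose downward step is capped at a factor of 1.5 would take to recover from a 40-fold hash drop:
Code:
// Recovery time after a 40x hash drop when each downward difficulty
// step is capped at a factor of 1.5. Assumed numbers.
#include <cstdio>

int main()
{
    const double targetSpacing = 60.0;  // 60-second blocks
    const double maxStepDown = 1.5;     // assumed per-block decrease cap
    double difficulty = 40.0;           // 40x too high for the remaining hash
                                        // (difficulty 1.0 == 60 s blocks now)
    double elapsed = 0.0;
    int blocks = 0;
    while (difficulty > 1.05) {
        elapsed += targetSpacing * difficulty;  // expected solve time
        difficulty /= maxStepDown;              // capped downward step
        ++blocks;
    }
    printf("~%d blocks, ~%.1f hours to get back near target\n",
           blocks, elapsed / 3600.0);  // ~9 blocks, ~1.9 hours
}
Raising the cap shortens that recovery from hours to minutes, which is exactly the trade-off being weighed in this thread.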

We know a 20-40 fold hash swing is not usual, but it could happen; more than likely we will only see a 5-10 fold increase. Nonetheless, we want to make sure we can handle sudden extremes very quickly.

With our own custom implementation we have tested many different variations on how far the difficulty is allowed to jump each block. The more we allow the difficulty to jump, the faster it adapts. Pretty much every scrypt-based coin out there only allows a jump by a factor of 4 per retarget, most likely because that is what Litecoin implemented. This makes sense with a multi-day difficulty re-target (3.5 days for Litecoin): anything more could kill a new or smaller coin with limited hash, since a dramatic hash increase could push the difficulty so high that it would take weeks to reach the next re-target.

Litecoin limit code:
Quote
// Maximum 400% adjustment...
        bnResult *= 4;
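
A note on context here (my reading of the old Bitcoin/Litecoin source, so verify against your own tree): the bnResult *= 4; line above sits in ComputeMinWork(), which only sets a sanity floor on the proof-of-work of received blocks; it is not the retarget itself. The per-retarget limit is the nActualTimespan clamp inside GetNextWorkRequired(). The surrounding function looks roughly like this:
Code:
// Sketch of ComputeMinWork() from old Bitcoin/Litecoin main.cpp (from
// memory; verify against your tree). It answers "how easy could the
// difficulty legitimately have become after nTime seconds?" when
// validating received blocks.
unsigned int ComputeMinWork(unsigned int nBase, int64 nTime)
{
    CBigNum bnResult;
    bnResult.SetCompact(nBase);
    while (nTime > 0 && bnResult < bnProofOfWorkLimit)
    {
        // Maximum 400% adjustment...
        bnResult *= 4;
        // ... in best-case exactly 4-times-normal target time
        nTime -= nTargetTimespan*4;
    }
    if (bnResult > bnProofOfWorkLimit)
        bnResult = bnProofOfWorkLimit;
    return bnResult.GetCompact();
}
Raising the 4 to 200 here does not make the difficulty adjust faster; it only lets blocks with far too little work pass this check, which is the security hole raised earlier in the thread.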

We feel pushing this up from a factor of 4 to a factor of 200-2000 is the way to go. This allows very dramatic adjustments with every block, which means very quick responses to hash movements. This approach has far outperformed the Gravity Well and Earthcoin code. There are a few other settings that play into the difficulty adjustment process, but we have seen the most success moving this number upwards in conjunction with them.

We have run instant hash simulations going from 200 to 8,000 kh/s and vice versa. We have also tested 200-4,000, 200-2,000, and a few others. As expected, the 200-8,000 kh/s swings (40x) are the most dramatic and cause most configurations to get "stuck" for an hour or more (KGW included).

KGW works fine for smaller adjustments every block, but offers no additional benefit with major hash swings; it basically "breaks" under very large ones.

Our delay right now is deciding exactly how much we should allow the difficulty to adjust with each block. Our test results indicate that the equivalent of 200x allows the fastest adjustments up and down within a reasonable amount of time.
Quote
// Maximum 400% adjustment...
        bnResult *= 200;

Is there something we are overlooking here? With a much higher (actual) hash load, will this adjustment behave differently? What are the dangers of allowing very large diff swings like this?

Also, at a higher difficulty, like the 10-20 range we currently see, will we see any other phenomena that we can't test in the low-hash test-net environment? Are we on the right path?