We are officially naming our next update:
DigiByte v2.0.0 - DigiShield

Each successive major release of DigiByte will be accompanied by a "Digi" name.
The DigiShield update will serve two main purposes: to shield the network from multi-pools, and to curb over-inflation of new coins via a 0.5% weekly block-reward reduction. We are testing a few more configurations before we release the update, and we are pushing back the activation block so the changes will kick in next Friday.
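For reference, a 0.5% weekly reduction compounds multiplicatively, so the reward after n weeks is the starting reward times 0.995^n. A minimal sketch (the starting reward of 1000 here is a placeholder for illustration, not the actual DigiByte block reward):

```cpp
#include <cmath>

// Block reward after `weeks` weeks under a 0.5% weekly reduction.
// `initialReward` is a hypothetical value, not DigiByte's real reward.
double rewardAfterWeeks(double initialReward, int weeks) {
    return initialReward * std::pow(0.995, weeks);
}
```

With these numbers, after one year (52 weeks) the reward falls to roughly 77% of its starting value (1000 → ≈770.5), so the schedule is gradual rather than a sudden halving.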
After testing more than 30 configurations of our own code, as well as the Kimoto Gravity Well and Earthcoin's one-minute block configuration, we have learned a few important lessons over the last week.
First, it is essential that a re-target occur with every block; there is simply no other way to do it. Second, re-targeting faster than every 60 seconds is not workable: with 60-second block spacing, multiple errors occur and the client crashes while mining. Third, even the Kimoto Gravity Well allows a fair amount of "insta" mining following a major hash increase, and so does the Earthcoin approach. Neither is as effective as it is hyped up to be; both still give multi-pools a few minutes of mining. Earthcoin still gets hit by multi-pools for 11-12 minutes at a time, and the same would happen to DigiByte even with a KGW implementation.
The truth is that both of those approaches limit how far the difficulty is allowed to change with each adjustment. This becomes an even bigger issue with a sudden hash decrease. While simulating a sudden 20-40x hash decrease, the Gravity Well can become "stuck" for a few hours before evening out again. Since it takes several blocks for the difficulty to come back down, the delay really adds up when each of those blocks takes 20-30 minutes to find. The same goes for the Earthcoin approach, which is why it gets "stuck" for 20-40 minutes following an 11-minute hash increase from Hash cow.
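The "stuck" behavior can be reproduced with a toy model (all numbers here are illustrative, not our actual chain parameters): with a per-block cap on how far difficulty may fall, a large hash drop leaves the difficulty far above what the remaining hashrate can sustain, and every block mined at that inflated difficulty takes a long time to find.

```cpp
// Toy per-block retarget with a cap on how far difficulty may move.
// Returns the approximate wall-clock minutes until difficulty settles
// back near the level the remaining hashrate can sustain, assuming a
// 1-minute target spacing and expected block time proportional to
// (difficulty / sustainable). Illustrative only.
double minutesStuck(double maxAdjustFactor, double hashDropFactor) {
    double difficulty = 1.0;                       // pre-drop equilibrium
    double sustainable = 1.0 / hashDropFactor;     // level after the drop
    double minutes = 0.0;
    while (difficulty > 1.5 * sustainable) {
        minutes += 1.0 * (difficulty / sustainable); // this block's expected time
        // Ideal retarget would jump straight to `sustainable`; the cap
        // only lets difficulty fall by maxAdjustFactor per block.
        double next = sustainable;
        if (difficulty / next > maxAdjustFactor)
            next = difficulty / maxAdjustFactor;
        difficulty = next;
    }
    return minutes;
}
```

In this model a 40x drop keeps a factor-of-4 cap stuck for ~52 minutes across three slow blocks, while an uncapped-equivalent (200x) jump pays only the cost of the single unavoidable slow block (~40 minutes). The first slow block cannot be avoided by any retarget scheme, but the cap adds extra slow blocks on top of it.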
We know a 20-40x hash increase is unusual, but it could happen; more than likely we will only see a 5-10x increase. Nonetheless, we want to make sure we can handle sudden extremes very quickly.
With our own custom implementation we have tested many different variations on how far the difficulty is allowed to jump each block. The more we allow the difficulty to jump, the faster it adapts. Pretty much every scrypt-based coin out there only allows a jump by a factor of 4 within a block re-target, most likely because that is what Litecoin implemented. This makes sense with a multi-day re-target window (3.5 days for Litecoin): anything more could kill a new or smaller coin with limited hash, since a dramatic hash increase could push the difficulty so high that it took weeks for the next re-target to occur.
Litecoin limit code:
// Maximum 400% adjustment...
bnResult *= 4;
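For context, the factor-of-4 limit in the Bitcoin/Litecoin lineage is also enforced at re-target time by clamping the measured timespan before the difficulty calculation. A sketch of that clamp (paraphrased from memory of the codebase, not a verbatim quote):

```cpp
#include <cstdint>

// Clamp the measured re-target window so difficulty can move at most
// 4x in either direction per adjustment (Bitcoin/Litecoin-style).
// nTargetTimespan is how long the window should have taken;
// nActualTimespan is how long it really took.
int64_t clampTimespan(int64_t nActualTimespan, int64_t nTargetTimespan) {
    if (nActualTimespan < nTargetTimespan / 4)
        nActualTimespan = nTargetTimespan / 4;  // blocks too fast: diff rises at most 4x
    if (nActualTimespan > nTargetTimespan * 4)
        nActualTimespan = nTargetTimespan * 4;  // blocks too slow: diff falls at most 4x
    return nActualTimespan;
}
```

Because the new difficulty scales with nTargetTimespan / nActualTimespan, bounding the timespan to [target/4, target*4] bounds the per-adjustment difficulty change to that same factor of 4.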
We feel pushing this limit up from a factor of 4 to a factor of 200-2000 is the way to go. This allows for very dramatic adjustments with every block, which means very quick adaptation to hash movements. This approach has far outperformed the Gravity Well and the Earthcoin code. There are a few other settings that play into the difficulty-adjustment process, but we have seen the most success moving this number upwards in conjunction with them.
We have run instant hash-swing simulations going from 200 to 8,000 KH/s and back again, and have also tested 200-4,000, 200-2,000, and a few others. As expected, the 200-8,000 KH/s swings (40x) are the most dramatic and cause most configurations to get "stuck" for an hour or more (KGW included).
KGW works fine for smaller adjustments every block, but offers no additional benefit with major hash swings. It basically "breaks" under very large ones.
Our delay right now is deciding exactly how much we should allow the difficulty to adjust with each block. Our tests indicate that the equivalent of 200x gives the fastest adjustments, both up and down, within a reasonable amount of time.
// Maximum 20,000% adjustment...
bnResult *= 200;
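The practical difference between the two limits can be seen with the same toy model as before (illustrative numbers only, not our chain parameters): after a sudden hash jump, the number of cheap, quickly-found blocks a multi-pool gets to "insta" mine is the number of blocks it takes the difficulty to catch up.

```cpp
// Blocks needed for a per-block retarget to catch up with a sudden
// hashrate increase, under a given per-block adjustment cap.
// Toy model: the ideal next difficulty equals the new equilibrium;
// the cap limits each step. Illustrative only.
int blocksToCatchUp(double maxAdjustFactor, double hashJumpFactor) {
    double difficulty = 1.0;                 // pre-jump equilibrium
    double target = hashJumpFactor;          // equilibrium for the new hashrate
    int blocks = 0;
    while (difficulty < target / 1.5) {      // "caught up" within 1.5x
        double next = target;
        if (next / difficulty > maxAdjustFactor)
            next = difficulty * maxAdjustFactor;  // cap the upward step
        difficulty = next;
        ++blocks;
    }
    return blocks;
}
```

For a 40x jump, a factor-of-4 cap needs three under-priced blocks to catch up, while a 200x cap catches up in a single block, which is why the higher limit cuts the insta-mining window so sharply.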
Is there something we are overlooking here? With a much higher (actual) hash load, will this adjustment behave differently? What are the dangers of allowing very large difficulty swings like this?
Also, with a higher difficulty in the 10-20 range we currently see, will we encounter any other phenomena that we can't reproduce in the low-hash testnet environment? Are we on the right path?