Okay.. And it is increased by 25% then? No other changes are made?
The charts I provided were for 20%. 25% is really aggressive, but it works as well. That is the only real change made besides the retarget time variables.
Forks aren't caused by the diff readjustment algorithm and so they're currently out of scope for the simulator, and are not part of the difficulty re-adjustment problem at all.
The chances of a 1MH miner having three lucky blocks are a lot larger than the chances of a 600GH miner having three lucky blocks.
No, forks aren't caused by the algo, but a fork instantly alters the network hashrate when it splits and then rejoins. With enough hashrate you can fork anything. That's something the simulator won't account for that a testnet can.
I totally agree that having more adjustments (smaller block times) will give the diff re-adjustment more opportunities to re-adjust, so in theory it will work better.. But this is not something the algorithm itself should be aware of. Also, let's not forget that smaller block times introduce a new problem: more chance of chain splits. (The network must converge within the block time; if it doesn't, a split might occur, and if the split isn't handled well by the network it becomes permanent and we're stuck with a fork..)
Correction... the algorithm is aware of the percentage of time elapsed between blocks. That's how DIGI calculates the next difficulty. Go over 150 seconds, the difficulty shifts down. Go under, and the difficulty shoots up. All proportionate to the percentage of time. Time is a huge factor in our issue here. One that has been discussed to the point of completely ignoring blocks under a certain time. By reducing the total block time, you allow for smoother adjustments that happen more often, and DIGI eats that up like roast beast at a Whoville feast.
I know DGW3 doesn't handle the current spikes very well.. But what is this "scaling" factor? And why would Digi handle it well? Is there an example of a coin with Digi that has x30 hashrate increases (e.g.: clevermining) at the same level as Guldencoin currently is?
By scaling, we mean being able to increase in hashrate without issue. Whether we're at 10GH total network hashrate or 10PH. However, once we get past a certain point, the event horizon of sorts, you no longer need a readjustment algo like DIGI or DGW3. You just implement the stock algo and let it ride out until the end.
As far as examples go- any coin, pre-hashlets, that implemented DIGI. I'm sure I could track down a list, but I know POT for instance did it. They shot themselves in the foot with a flawed block halving scheme, but their network hashrate increased substantially after their DIGI implementation, and the multipools dropped off almost completely. There are others that had similar results... just search the forums.
The code you highlighted is not what I based our code on. This is the DIGI function, as it was on our testnet:
unsigned int static GetNextWorkRequired_DIGI(const CBlockIndex* pindexLast, const CBlockHeader *pblock)
{
    unsigned int nProofOfWorkLimit = bnProofOfWorkLimit.GetCompact();

    // Genesis block
    if (pindexLast == NULL)
        return nProofOfWorkLimit;

    int64 retargetTimespan = nTargetTimespan;
    int64 retargetSpacing = nTargetSpacing;
    int64 retargetInterval = nInterval;

    // DIGI: retarget over nTargetTimespanNEW instead of the full nTargetTimespan
    retargetInterval = nTargetTimespanNEW / nTargetSpacing;
    retargetTimespan = nTargetTimespanNEW;

    // Only change once per interval
    if ((pindexLast->nHeight+1) % retargetInterval != 0)
    {
        // Special difficulty rule for testnet:
        if (fTestNet)
        {
            // If the new block's timestamp is more than 3 * nTargetSpacing ahead,
            // then allow mining of a min-difficulty block.
            if (pblock->nTime > pindexLast->nTime + retargetSpacing*3)
                return nProofOfWorkLimit;
            else
            {
                // Return the last non-special-min-difficulty-rules-block
                const CBlockIndex* pindex = pindexLast;
                while (pindex->pprev && pindex->nHeight % retargetInterval != 0 && pindex->nBits == nProofOfWorkLimit)
                    pindex = pindex->pprev;
                return pindex->nBits;
            }
        }
        return pindexLast->nBits;
    }

    // Dogecoin: This fixes an issue where a 51% attack can change difficulty at will.
    // Go back the full period unless it's the first retarget after genesis. Code courtesy of Art Forz.
    int blockstogoback = retargetInterval-1;
    if ((pindexLast->nHeight+1) != retargetInterval)
        blockstogoback = retargetInterval;

    // Go back by the full retarget window worth of blocks
    const CBlockIndex* pindexFirst = pindexLast;
    for (int i = 0; pindexFirst && i < blockstogoback; i++)
        pindexFirst = pindexFirst->pprev;
    assert(pindexFirst);

    // Limit adjustment step
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    printf("  nActualTimespan = %"PRI64d"  before bounds\n", nActualTimespan);

    CBigNum bnNew;
    bnNew.SetCompact(pindexLast->nBits);

    // DigiShield implementation - thanks to RealSolid & WDC for this code
    // Amplitude filter - thanks to daft27 for this code
    nActualTimespan = retargetTimespan + (nActualTimespan - retargetTimespan)/8;

    printf("DIGISHIELD RETARGET\n");
    // Limit the adjustment to -25% / +50% of the retarget timespan
    if (nActualTimespan < (retargetTimespan - (retargetTimespan/4)))
        nActualTimespan = (retargetTimespan - (retargetTimespan/4));
    if (nActualTimespan > (retargetTimespan + (retargetTimespan/2)))
        nActualTimespan = (retargetTimespan + (retargetTimespan/2));

    // Retarget
    bnNew *= nActualTimespan;
    bnNew /= retargetTimespan;

    if (bnNew > bnProofOfWorkLimit)
        bnNew = bnProofOfWorkLimit;

    /// debug print
    printf("GetNextWorkRequired RETARGET\n");
    printf("nTargetTimespan = %"PRI64d"  nActualTimespan = %"PRI64d"\n", retargetTimespan, nActualTimespan);
    printf("Before: %08x  %s\n", pindexLast->nBits, CBigNum().SetCompact(pindexLast->nBits).getuint256().ToString().c_str());
    printf("After:  %08x  %s\n", bnNew.GetCompact(), bnNew.getuint256().ToString().c_str());

    return bnNew.GetCompact();
}
You need this:
static const int64 nTargetTimespan = 3.5 * 24 * 60 * 60; // Guldencoin: 3.5 days
static const int64 nTargetTimespanNEW = 2.5 * 60; // DIGI retarget window: one block (2.5 minutes)
static const int64 nTargetSpacing = 2.5 * 60; // Guldencoin: 2.5 minutes
static const int64 nInterval = nTargetTimespan / nTargetSpacing;
With nTargetTimespanNEW equal to nTargetSpacing, retargetInterval in the function above works out to 1, so the difficulty is recalculated every single block.
And you change this to set the max change:
bnResult = (bnResult * 120) / 100;
That's for 20%. Replace the 120 with 125 for 25%.
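For context, here is roughly where that line might sit for the 25% setting. The surrounding function isn't quoted above, so this ComputeMinWork-style wrapper is an assumption on my part, not the exact Guldencoin source; only the 125/100 line is the actual change:

    // Hypothetical surrounding function for the max-change line (assumed, not quoted above).
    unsigned int ComputeMinWork(unsigned int nBase, int64 nTime)
    {
        CBigNum bnResult;
        bnResult.SetCompact(nBase);
        while (nTime > 0 && bnResult < bnProofOfWorkLimit)
        {
            // Allow the target to grow (difficulty to drop) by at most 25% per step;
            // 120 here gives the 20% behaviour from the charts.
            bnResult = (bnResult * 125) / 100;
            nTime -= nTargetTimespanNEW;
        }
        if (bnResult > bnProofOfWorkLimit)
            bnResult = bnProofOfWorkLimit;
        return bnResult.GetCompact();
    }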
It's pretty straightforward. Compare it to the original LTC algo and the GetNextWorkRequired_original function and you can see why I said it's only 1-2 lines of additional code.
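To put rough numbers on the retarget itself (my own illustration, just plugging the constants into the DIGI function above): with retargetTimespan = 150 seconds, a block that takes 300 seconds gives nActualTimespan = 150 + (300 - 150)/8 ≈ 169 after the amplitude filter, so the target grows by about 12.5% and difficulty drops roughly 11%. A block found in 30 seconds gives 150 + (30 - 150)/8 = 135, so the target shrinks by 10% and difficulty rises about 11%. The -25%/+50% clamps don't come into play for swings this small.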
With the current network stability (which isn't very high) I wouldn't recommend lower block times.. Furthermore, it simply doesn't work: although lower block times mean more blocks and thus more re-adjustments, it's the same calculation being done. Say we halve the block time to 75 seconds; the difficulty would also be halved, so we would get twice the number of blocks within the same amount of time. The income/costs for dedicated miners wouldn't change, they would just get twice the blocks, each with half the reward. This also applies to Clever: they would have to spend the same amount of hashing power to get twice the number of blocks at half the reward per block.. so in the end it's just the same thing.

The only thing that changes is that if the DGW3 or Digi impact isn't halved too, it will cause block time spikes twice the size (both up and down).. So you'd have to halve the DGW3/Digi impact.. which means that in the end you're left with zero effect (apart from a network that's more vulnerable to time warp attacks and chain splits). The faster reaction you're talking about would work if the hashrate changed over the course of a number of blocks. But with a jump pool, it's instantaneous. If I'm missing something, please explain.
DIGI accounts for time, so it scales with the block time. The difficulty doesn't change just because you halve the block time. DIGI says: this is what the block time should be, and this is what percentage of that time the last block actually took. If you go over the target, the difficulty goes down; if you go under, it goes up. With our current code you would be correct, but I'm not talking about our current code. I'm talking about the code that will fix our situation. Halving the block time isn't a necessity; keep it where it is and use a higher max adjustment. The selling point is that with a shorter block time you can reduce the max change and still get more adjustments per period of time. Again though, not a necessity. Just an added step to making things better.
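To put it another way (my own illustration): against a 75-second target, a block found in 150 seconds is the same 2x overshoot that a 300-second block is against a 150-second target, so DIGI takes the same proportional step in both cases; a shorter block time just means more of those steps per hour.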
The fact that Digi doesn't take the false lows into account /does/ sound good.. as that is actually the largest part of the problem we're seeing.
So, I don't really believe in the halved block times, but investigating Digi further does seem worth the time..
Before Christmas I had the idea to write 'GDR': "Guldencoin Difficulty Readjustment". Maybe it's a good approach to take what we've learned so far, including the DGW3 and Digi approaches, and work out a better algorithm. After all, that is how DGW3 and Digi were born too: by applying lessons learned and incremental development.
I would be extremely happy to do this, but not alone. If anyone can provide me with a Digi port for the simulator that would be great; it's pretty simple, but it will take some time.
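For reference, this is roughly the shape I'd expect the port to take: the DIGI step boiled down to a pure function the simulator can call per block. The interface here is my assumption, not the simulator's actual API; the filter and clamps mirror the code posted above.

    // Sketch only: the DIGI retarget step as a standalone function for the simulator.
    #include <stdint.h>

    static const int64_t kRetargetTimespan = 150; // nTargetTimespanNEW: 2.5 minutes, one block per retarget

    // Given the time the last block actually took, return the ratio to apply
    // to the previous target: newTarget = oldTarget * num / den.
    void DigiRetargetStep(int64_t nActualTimespan, int64_t& num, int64_t& den)
    {
        // Amplitude filter: only 1/8 of the deviation feeds into this retarget
        nActualTimespan = kRetargetTimespan + (nActualTimespan - kRetargetTimespan) / 8;

        // Clamp the adjustment to -25% / +50% of the retarget timespan
        if (nActualTimespan < kRetargetTimespan - kRetargetTimespan / 4)
            nActualTimespan = kRetargetTimespan - kRetargetTimespan / 4;
        if (nActualTimespan > kRetargetTimespan + kRetargetTimespan / 2)
            nActualTimespan = kRetargetTimespan + kRetargetTimespan / 2;

        num = nActualTimespan;   // slow block -> target up -> difficulty down
        den = kRetargetTimespan; // fast block -> target down -> difficulty up
    }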
If the community doesn't want a new algorithm and places its bet on Digi, then I WILL help with that. I'd be happy to work together with you guys to apply and deploy the software patch. Just as before, I will provide compiled binaries for all major platforms, update the seeds and send out a network alert. I believe that went pretty well last time, and this time we don't have to do a chain fork.
Please though, understand that we should not do this without being absolutely sure. There are always risks.
Cheers!
Please don't let the port to the simulator hold up the talks. While I agree that the simulator should be used, I need to know whether it is ready to take into account everything needed for real-life results. If it isn't ready to simulate every possible variable, then we need to put it aside for these discussions. If that's not an option, we need to know when the additional features will be implemented so we can press forward.
Additionally, the simulator needs to be verified against a chain. I would seriously suggest creating a post in the development sub-forum here to get it out to the public and let them develop and test it for you. There is no reason why it shouldn't be made public. It will strengthen development of the simulator and it will bring publicity to NLG.
I'll reiterate this point though-
You can't predict the future of mining, so you can't future-proof the algorithm. I've been in crypto for a little over 2 years now, active here for 90% of that time. Things change dramatically over small timeframes in crypto. People need to realize that no algorithm or solution will ever be the final answer. Coins need to adapt to the changes. It's evolution at the code level. If you don't adapt and overcome, you can't survive. But you also can't adapt to something that hasn't happened yet, because the world can take the opposite turn, and then you'll be left exactly where we are right now.
Don't be afraid to make future changes. It's not a sign of weak coin planning, but rather a sign of active adaptation to an ever changing landscape.
-Fuse