
sr. member
Activity: 672
Merit: 250
October 21, 2014, 01:30:24 PM
Thinking a little outside the box here:
How about if the network senses an hour or however long has passed, then automatically drops the diff of the current block by voiding/canceling the current big diff block and releasing a new small diff block? Is there a reason the network can't cancel/re-submit the diff or the current block being mined? Just ideas to keep ppl thinking, food for thought, I don't know if it's even possible. Blockchain and code are capable of so many things, figured I'd ask.

It is 05:30 here and I am off to the markets... thus the short reply... while the above is a great idea... it is wide open to a time warp attack... it would leave the network vulnerable and very insecure... this is essentially proof of time... not proof of work.

I could destroy the NLG network with only a small percentage of the nethash.
sr. member
Activity: 332
Merit: 250
October 21, 2014, 01:16:46 PM
Thinking a little outside the box here:
How about if the network senses an hour or however long has passed, then automatically drops the diff of the current block by voiding/canceling the current big diff block and releasing a new small diff block? Is there a reason the network can't cancel/re-submit the diff or the current block being mined? Just ideas to keep ppl thinking, food for thought, I don't know if it's even possible. Blockchain and code are capable of so many things, figured I'd ask.

I might not be 100% correct on this, but I don't think this would be possible. The code works around submitted blocks. So you need to have blocks created and submitted to the chain to have the numbers calculated against. It might be possible if you had something like POS always generating blocks separately from POW, but that's a whole other can of worms. Frankly, implementing something like that is a huge undertaking.

-Fuse

Yep, Fuse is right. The problem with the current block is that we don't know what's going on till it's solved.

Imagine it this way: you want to do a job and I hand you a calculation to solve. The difficulty of the calculation I give you is estimated so that it will take you 150 seconds. So you start working, and when you're done and have found a solution you shout it to the community. Between the time you asked for work and the moment you shout your result we don't have any contact. I don't know that it's taking you so long, and you don't know what to do other than solving the calculation. The only way to stop you in between is another person shouting the answer for this calculation, so you know it's solved.

Sure there are ways to notify, but as Fuse stated that's a huge undertaking. 
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 12:55:18 PM
I'm no coder, but I can read and understand it.  If the solution is a weighted average, the section of code we would need to adjust would be the following snippet:

Code:
   // loop over the past n blocks, where n == PastBlocksMax
    for (unsigned int i = 1; BlockReading && BlockReading->nHeight > 0; i++) {
        if (PastBlocksMax > 0 && i > PastBlocksMax) { break; }
        CountBlocks++;

        // Calculate average difficulty based on the blocks we iterate over in this for loop
        if(CountBlocks <= PastBlocksMin) {
            if (CountBlocks == 1) { PastDifficultyAverage.SetCompact(BlockReading->nBits); }
            else { PastDifficultyAverage = ((PastDifficultyAveragePrev * CountBlocks)+(CBigNum().SetCompact(BlockReading->nBits))) / (CountBlocks+1); }
            PastDifficultyAveragePrev = PastDifficultyAverage;
        }

        // If this is the second iteration (LastBlockTime was set)
        if(LastBlockTime > 0){
            // Calculate time difference between previous block and current block
            int64 Diff = (LastBlockTime - BlockReading->GetBlockTime());
            // Increment the actual timespan
            nActualTimespan += Diff;
        }
        // Set LastBlockTime to the block time for the block in the current iteration
        LastBlockTime = BlockReading->GetBlockTime();      

        if (BlockReading->pprev == NULL) { assert(BlockReading); break; }
        BlockReading = BlockReading->pprev;
    }

Instead of cycling through all 24 blocks, we need to cycle through the first 6 (or whatever number, lower being better in my mind), and then cycle through the next 18 and count those at a lesser weight.

I ran this theory through an Excel spreadsheet, and the weighted average reacts faster and carries a slightly higher difficulty than a generic average.  Try it yourself and you'll see what I mean.
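For what it's worth, here is a small stand-alone C++ sketch of the same arithmetic (not a drop-in patch for DarkGravityWave3): a plain 24-block average versus one where the newest 6 blocks count double. The difficulty values and the 2x weight are made-up numbers, chosen only to illustrate the idea.

Code:
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical difficulties, newest block first.
    std::vector<double> diff = {
        420, 410, 400, 390, 380, 370,                    // newest 6 blocks
        60, 60, 60, 60, 60, 60, 60, 60, 60,              // older 18 blocks
        60, 60, 60, 60, 60, 60, 60, 60, 60 };

    double plainSum = 0, weightedSum = 0, weightTotal = 0;
    for (int i = 0; i < (int)diff.size(); i++) {
        double w = (i < 6) ? 2.0 : 1.0;   // assumed weight: newest 6 count double
        plainSum    += diff[i];
        weightedSum += w * diff[i];
        weightTotal += w;
    }
    printf("plain average:    %.2f\n", plainSum / diff.size());
    printf("weighted average: %.2f\n", weightedSum / weightTotal);
    return 0;
}

With these made-up numbers the plain average comes out around 144, while the weighted one is around 194, so the weighted version reacts noticeably harder to the recent spike.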

Of course, I'm now starting to wonder if I'm even calculating the correct values, and whether or not I should be looking at time instead.  Can someone steer me in the right direction?

-Fuse
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 12:31:42 PM
Thinking a little outside the box here:
How about if the network senses an hour or however long has passed, then automatically drops the diff of the current block by voiding/canceling the current big diff block and releasing a new small diff block? Is there a reason the network can't cancel/re-submit the diff or the current block being mined? Just ideas to keep ppl thinking, food for thought, I don't know if it's even possible. Blockchain and code are capable of so many things, figured I'd ask.

I might not be 100% correct on this, but I don't think this would be possible. The code works around submitted blocks. So you need to have blocks created and submitted to the chain to have the numbers calculated against. It might be possible if you had something like POS always generating blocks separately from POW, but that's a whole other can of worms. Frankly, implementing something like that is a huge undertaking.

-Fuse
sr. member
Activity: 393
Merit: 250
October 21, 2014, 12:09:05 PM
I also think this is a good plan, as a 24-block average alone can never straighten out the big swings from a multipool.
So in theory a 24-block weighted average with the emphasis on the last 6 will do much better.

hero member
Activity: 938
Merit: 1000
@halofirebtc
October 21, 2014, 11:57:59 AM
Thinking a little outside the box here:
How about if the network senses an hour or however long has passed, then automatically drops the diff of the current block by voiding/canceling the current big diff block and releasing a new small diff block? Is there a reason the network can't cancel/re-submit the diff or the current block being mined? Just ideas to keep ppl thinking, food for thought, I don't know if it's even possible. Blockchain and code are capable of so many things, figured I'd ask.
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 11:52:37 AM
To illustrate the behaviour we experience, look at the blocks below. At 137136 we had a diff of 412, and because the block preceding this took two hours the diff dropped to 118. But the max drop is 1/3, so it should have dropped to 138 and not 118. The diff keeps lowering, and despite block times of tens of seconds it lowers and lowers, till 137159. Then the calculation says hoooo fellas this is too fast, and it raises the diff from 29.8 to 162.1, well beyond the max of three times that was in the DGW design... So why is the diff adjusted by more than the DGW design allows? And second, it could behave much better with a weighted average instead of a plain one.

+1 for the detailed info, mate.  I'm with you 100%.

I really do believe either looking at fewer blocks for the average, or creating a weighted average, is the way to go.  Additionally, there needs to be a limit on the amount of difficulty increase/decrease so we're not throwing the difficulty all over the place.

Again- less extreme changes that happen more often.

-Fuse
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 11:45:36 AM
Fuse, are those numbers of 24 in line with the other DigiShield coins and their target block times? We have a target block time of 2.5 minutes. Those numbers should match up with the target block time, I think. I don't know if it was originally 24. So I'll leave it to Geert-Johan to react further, because I am not a programmer.  

But I wanted to react, as what you say seems plausible.

Additionally, you could use a weighted average where the last 6 blocks carry more weight than the 18 before them.  So essentially you could still focus on the 24 blocks, but with more emphasis on the last 6.

That would be nice as well, if needed.

Digishield doesn't calculate the difficulty based on block averages, but rather on the individual time between blocks.  At least that's the way I interpret the digishield code from POT:

Code:
unsigned int static GetNextWorkRequired_V3(const CBlockIndex* pindexLast, const CBlockHeader *pblock){
 
     unsigned int nProofOfWorkLimit = bnProofOfWorkLimit.GetCompact();
     int nHeight = pindexLast->nHeight + 1;
 
 
     int64 retargetTimespan = nTargetTimespan;
     int64 retargetSpacing = nTargetSpacing;
     int64 retargetInterval = nInterval;
 
     retargetInterval = nTargetTimespanNEW / nTargetSpacing;
     retargetTimespan = nTargetTimespanNEW;
 
     // Genesis block
     if (pindexLast == NULL)
         return nProofOfWorkLimit;
 
     // Only change once per interval
     if ((pindexLast->nHeight+1) % retargetInterval != 0)
     {
         // Special difficulty rule for testnet:
         if (fTestNet)
         {
             // If the new block's timestamp is more than 2* nTargetSpacing minutes
             // then allow mining of a min-difficulty block.
             if (pblock->nTime > pindexLast->nTime + retargetSpacing*2)
                 return nProofOfWorkLimit;
             else
             {
                 // Return the last non-special-min-difficulty-rules-block
                 const CBlockIndex* pindex = pindexLast;
                 while (pindex->pprev && pindex->nHeight % retargetInterval != 0 && pindex->nBits == nProofOfWorkLimit)
                     pindex = pindex->pprev;
                 return pindex->nBits;
             }
         }
 
         return pindexLast->nBits;
     }
 
     // Dogecoin: This fixes an issue where a 51% attack can change difficulty at will.
     // Go back the full period unless it's the first retarget after genesis. Code courtesy of Art Forz
     int blockstogoback = retargetInterval-1;
     if ((pindexLast->nHeight+1) != retargetInterval)
         blockstogoback = retargetInterval;
 
     // Go back by what we want to be 14 days worth of blocks
     const CBlockIndex* pindexFirst = pindexLast;
     for (int i = 0; pindexFirst && i < blockstogoback; i++)
         pindexFirst = pindexFirst->pprev;
     assert(pindexFirst);
 
     // Limit adjustment step
     int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
     printf(" nActualTimespan = %"PRI64d" before bounds\n", nActualTimespan);
 
     CBigNum bnNew;
     bnNew.SetCompact(pindexLast->nBits);
 
     //DigiShield implementation - thanks to RealSolid & WDC for this code
 // amplitude filter - thanks to daft27 for this code
     nActualTimespan = retargetTimespan + (nActualTimespan - retargetTimespan)/8;
     printf("DIGISHIELD RETARGET\n");
     if (nActualTimespan < (retargetTimespan - (retargetTimespan/4)) ) nActualTimespan = (retargetTimespan - (retargetTimespan/4));
     if (nActualTimespan > (retargetTimespan + (retargetTimespan/2)) ) nActualTimespan = (retargetTimespan + (retargetTimespan/2));
     // Retarget
 
     bnNew *= nActualTimespan;
     bnNew /= retargetTimespan;
 
     if (bnNew > bnProofOfWorkLimit)
         bnNew = bnProofOfWorkLimit;
 
     /// debug print
     printf("GetNextWorkRequired RETARGET\n");
     printf("nTargetTimespan = %"PRI64d" nActualTimespan = %"PRI64d"\n", retargetTimespan, nActualTimespan);
     printf("Before: %08x %s\n", pindexLast->nBits, CBigNum().SetCompact(pindexLast->nBits).getuint256().ToString().c_str());
     printf("After: %08x %s\n", bnNew.GetCompact(), bnNew.getuint256().ToString().c_str());
 
     return bnNew.GetCompact();
 
 
 }

To clarify, although I believed digishield was the solution we needed to go with originally, I stand by my opinion that we should try to tweak DGW3 to get the result we want.  I am not advocating for digishield at this time.  This may change in the future, but for now let's try to fix DGW3.

-Fuse
sr. member
Activity: 332
Merit: 250
October 21, 2014, 11:41:59 AM
To illustrate the behaviour we experience, look at the blocks below. At 137136 we had a diff of 412, and because the block preceding this took two hours the diff dropped to 118. But the max drop is 1/3, so it should have dropped to 138 and not 118. The diff keeps lowering, and despite block times of tens of seconds it lowers and lowers, till 137159. Then the calculation says hoooo fellas this is too fast, and it raises the diff from 29.8 to 162.1, well beyond the max of three times that was in the DGW design... So why is the diff adjusted by more than the DGW design allows? And second, it could behave much better with a weighted average instead of a plain one.

 
137184 2014-10-20 19:37:56 1 1000 227.657 307183000 64.5077 4675.82 61.0856%
137183 2014-10-20 19:37:43 1 1000 188.93 307182000 64.5077 4675.82 61.0856%
137182 2014-10-20 19:37:14 6 45685.2034 431.768 307181000 64.5076 4675.82 61.0858%
137181 2014-10-20 19:01:13 1 1000 374.057 307180000 64.4828 4675.79 61.095%
137180 2014-10-20 19:00:42 2 1422.978 332.835 307179000 64.4827 4675.79 61.0951%
137179 2014-10-20 18:53:02 1 1000 302.237 307178000 64.4776 4675.79 61.097%
137178 2014-10-20 18:52:47 1 1000 276.263 307177000 64.4776 4675.79 61.0971%
137177 2014-10-20 18:52:02 1 1000 256.457 307176000 64.4773 4675.79 61.0973%
137176 2014-10-20 18:50:03 1 1000 240.951 307175000 64.4761 4675.78 61.0978%
137175 2014-10-20 18:49:57 3 1975.29793541 228.569 307174000 64.4763 4675.78 61.0978%
137174 2014-10-20 18:47:50 1 1000 218.365 307173000 64.475 4675.78 61.0984%
137173 2014-10-20 18:47:41 1 1000 210.006 307172000 64.4751 4675.78 61.0984%
137172 2014-10-20 18:47:39 1 1000 203.035 307171000 64.4753 4675.78 61.0984%
137171 2014-10-20 18:47:24 1 1000 197.242 307170000 64.4753 4675.78 61.0985%
137170 2014-10-20 18:47:19 1 1000 192.377 307169000 64.4755 4675.78 61.0985%
137169 2014-10-20 18:46:52 1 1000 188.239 307168000 64.4754 4675.78 61.0986%
137168 2014-10-20 18:46:48 1 1000 184.775 307167000 64.4756 4675.78 61.0986%
137167 2014-10-20 18:46:44 1 1000 181.831 307166000 64.4757 4675.78 61.0987%
137166 2014-10-20 18:46:41 1 1000 179.32 307165000 64.4759 4675.78 61.0987%
137165 2014-10-20 18:46:29 1 1000 177.128 307164000 64.476 4675.78 61.0987%
137164 2014-10-20 18:45:32 1 1000 175.297 307163000 64.4755 4675.78 61.099%
137163 2014-10-20 18:45:08 1 1000 173.454 307162000 64.4755 4675.78 61.0991%
137162 2014-10-20 18:43:29 2 6641.46782322 171.971 307161000 64.4745 4675.78 61.0995%
137161 2014-10-20 18:42:48 1 1000 170.843 307160000 64.4744 4675.78 61.0996%
137160 2014-10-20 18:42:27 1 1000 162.133 307159000 64.4744 4675.78 61.0997%
137159 2014-10-20 18:42:16 1 1000 29.857 307158000 64.4744 4675.78 61.0997%
137158 2014-10-20 18:41:59 1 1000 32.026 307157000 64.4744 4675.78 61.0998%
137157 2014-10-20 18:41:53 1 1000 34.314 307156000 64.4746 4675.78 61.0998%
137156 2014-10-20 18:41:40 1 1000 36.323 307155000 64.4746 4675.78 61.0999%
137155 2014-10-20 18:41:37 1 1000 38.742 307154000 64.4748 4675.78 61.0999%
137154 2014-10-20 18:41:34 1 1000 37.496 307153000 64.475 4675.78 61.0999%
137153 2014-10-20 18:41:18 1 1000 40.622 307152000 64.475 4675.78 61.1%
137152 2014-10-20 18:40:59 1 1000 43.965 307151000 64.475 4675.78 61.1001%
137151 2014-10-20 18:40:52 1 1000 47.547 307150000 64.4751 4675.78 61.1001%
137150 2014-10-20 18:40:48 1 1000 50.685 307149000 64.4753 4675.78 61.1001%
137149 2014-10-20 18:40:47 1 1000 54.738 307148000 64.4755 4675.78 61.1001%
137148 2014-10-20 18:40:43 1 1000 58.675 307147000 64.4757 4675.78 61.1001%
137147 2014-10-20 18:40:31 1 1000 63.313 307146000 64.4757 4675.78 61.1002%
137146 2014-10-20 18:40:03 1 1000 68.027 307145000 64.4756 4675.78 61.1003%
137145 2014-10-20 18:39:48 1 1000 72.557 307144000 64.4757 4675.78 61.1004%
137144 2014-10-20 18:39:44 1 1000 78.178 307143000 64.4758 4675.78 61.1004%
137143 2014-10-20 18:39:00 1 1000 83.37 307142000 64.4755 4675.78 61.1006%
137142 2014-10-20 18:38:55 1 1000 88.63 307141000 64.4757 4675.78 61.1006%
137141 2014-10-20 18:38:54 1 1000 92.904 307140000 64.4759 4675.78 61.1006%
137140 2014-10-20 18:38:49 1 1000 99.356 307139000 64.476 4675.78 61.1006%
137139 2014-10-20 18:38:33 1 1000 97.431 307138000 64.476 4675.78 61.1007%
137138 2014-10-20 18:37:44 1 1000 105.188 307137000 64.4757 4675.78 61.1009%
137137 2014-10-20 18:37:00 8 92544.22974057 118.799 307136000 64.4754 4675.78 61.1011%
137136 2014-10-20 18:36:42 3 8363.86928814 412.358 307135000 64.4763 4675.78 61.1006%
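To make the bounds check above concrete, here is a tiny stand-alone C++ snippet that plugs in the difficulties quoted from the block list; the 1/3 drop and 3x rise limits are the DGW design limits referred to above.

Code:
#include <cstdio>

int main() {
    double diffBefore    = 412.358;   // block 137136, quoted above
    double diffAfterDrop = 118.799;   // block 137137, quoted above
    printf("expected floor at 1/3 drop: %.1f (observed %.1f)\n",
           diffBefore / 3.0, diffAfterDrop);

    double diffLow       = 29.857;    // block 137159, quoted above
    double diffAfterRise = 162.133;   // block 137160, quoted above
    printf("expected cap at 3x rise:    %.1f (observed %.1f)\n",
           diffLow * 3.0, diffAfterRise);
    return 0;
}

412.358 / 3 is about 137.5, so the observed 118.8 is below the design floor, and 29.857 * 3 is about 89.6, so the observed 162.1 is above the design cap: exactly the behaviour being questioned here.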
 
legendary
Activity: 952
Merit: 1000
October 21, 2014, 11:30:55 AM
Fuse, are those numbers of 24 in line with the other DigiShield coins and their target block times? We have a target block time of 2.5 minutes. Those numbers should match up with the target block time, I think. I don't know if it was originally 24. So I'll leave it to Geert-Johan to react further, because I am not a programmer.  

But I wanted to react, as what you say seems plausible.

Additionally, you could use a weighted average where the last 6 blocks carry more weight than the 18 before them.  So essentially you could still focus on the 24 blocks, but with more emphasis on the last 6.

That would be nice as well, if needed.
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 11:22:48 AM

Then drop it even further to 6 blocks.  You need the retarget to happen faster than it's happening now, but not as severely.  With our current 3x jump, let's say that we go from 250 difficulty to 750 in a jump.  This knocks the multi off the chain until it drops back down.  But what's to say that a difficulty of 300 wouldn't have done the same?  And then if it isn't sufficient, it ramps up again, until it is.

Lesser changes that happen faster.  That's the way digishield works, as evidenced by almost any difficulty graph for digishield coins.  We just need to emulate that.

-Fuse

I think you are correct on this. Sounds very plausible. But I am not a programmer. Hope it can be that easy to adjust the script  Smiley



Additionally, you could use a weighted average where the last 6 blocks carry more weight than the 18 before them.  So essentially you could still focus on the 24 blocks, but with more emphasis on the last 6.

-Fuse
legendary
Activity: 952
Merit: 1000
October 21, 2014, 11:17:04 AM

Then drop it even further to 6 blocks.  You need the retarget to happen faster than it's happening now, but not as severely.  With our current 3x jump, let's say that we go from 250 difficulty to 750 in a jump.  This knocks the multi off the chain until it drops back down.  But what's to say that a difficulty of 300 wouldn't have done the same?  And then if it isn't sufficient, it ramps up again, until it is.

Lesser changes that happen faster.  That's the way digishield works, as evidenced by almost any difficulty graph for digishield coins.  We just need to emulate that.

-Fuse

I think you are correct on this. Sounds very plausible. But I am not a programmer. Hope it can be that easy to adjust the script  Smiley

legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 10:45:21 AM
@ny2cafuse

The idea is good, but it's not a complete fix. Multipools could still mine up to 11 blocks directly after a high diff block was mined (which probably took some hours).
This is because the actualTimespan is several hours, while the targetTimespan is 12*2.5 = 30 minutes. With these changes, that would cause a diff x2 as soon as the high-diff block is >12 blocks back in the chain (and not taken into the calculation anymore), because at that point only blocks with a timespan of several seconds are in the DGW3 calculation, so the diff factor maxes out at 2.0.

I think this is a step in the right direction, but there's more required for a full fix.

Then drop it even further to 6 blocks.  You need the retarget to happen faster than it's happening now, but not as severely.  With our current 3x jump, let's say that we go from 250 difficulty to 750 in a jump.  This knocks the multi off the chain until it drops back down.  But what's to say that a difficulty of 300 wouldn't have done the same?  And then if it isn't sufficient, it ramps up again, until it is.

Lesser changes that happen faster.  That's the way digishield works, as evidenced by almost any difficulty graph for digishield coins.  We just need to emulate that.

-Fuse
sr. member
Activity: 332
Merit: 250
October 21, 2014, 10:25:18 AM

Besides all these solutions, I still think the problem is the way DGW3 is implemented. It just does not react the way it should, and that has nothing to do with what it was designed for... because it's a counting algo based on the last 24 blocks. One thing that also keeps me busy is the fact that KGW did not work properly either, so maybe the solution is in some totally different place in the code. You sometimes see the same strange behaviour when variables are used outside their memory space and overwritten by some other procedure.
DGW3 was not modified, except for some code comments and alignment. It looks like DGW3 is calculating correctly; the problem is that it was not created for jump pools at this scale. I have been thinking about increasing the number of blocks in the calculation (currently 24). But I'm afraid that it will only result in multipools getting more blocks before the diff spikes. What do you think?
No, what I meant is the fact that two separate dampening solutions that should work pretty well jump around like crazy. That made me wonder: maybe it's not the KGW or DGW code, but something that precedes or supersedes it. Maybe you could ask Rijk what exactly was done about the KGW problem back in April, because then KGW went haywire. It's just a wild guess, but it's strange that we have more problems with those jump pools than other coins.
sr. member
Activity: 409
Merit: 250
October 21, 2014, 09:59:33 AM
@ny2cafuse

The idea is good, but it's not a complete fix. Multipools could still mine up to 11 blocks directly after a high diff block was mined (which probably took some hours).
This is because the actualTimespan is several hours, while the targetTimespan is 12*2.5 = 30 minutes. With these changes, that would cause a diff x2 as soon as the high-diff block is >12 blocks back in the chain (and not taken into the calculation anymore), because at that point only blocks with a timespan of several seconds are in the DGW3 calculation, so the diff factor maxes out at 2.0.

I think this is a step in the right direction, but there's more required for a full fix.
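To put rough numbers on that scenario, here is a small stand-alone C++ illustration using the 12-block window and 150-second spacing from the proposed code; the ~10-second block times are an assumption for illustration only.

Code:
#include <cstdio>
#include <algorithm>

int main() {
    const long long nTargetSpacing  = 150;                            // 2.5-minute target
    const long long countBlocks     = 12;                             // PastBlocksMin/Max in the proposal
    const long long nTargetTimespan = countBlocks * nTargetSpacing;   // 1800 seconds

    long long nActualTimespan = 12 * 10;   // assumption: 12 fast blocks of ~10 seconds each

    // Same clamp as the proposed code: keep actual within [target/2, target*2]
    nActualTimespan = std::max(nActualTimespan, nTargetTimespan / 2);
    nActualTimespan = std::min(nActualTimespan, nTargetTimespan * 2);

    // In the proposal: bnNew *= nActualTimespan; bnNew /= nTargetTimespan;
    // A smaller target means a higher difficulty, so the difficulty factor is:
    double diffFactor = (double)nTargetTimespan / (double)nActualTimespan;
    printf("actual %lld s vs target %lld s -> difficulty factor %.1fx\n",
           nActualTimespan, nTargetTimespan, diffFactor);
    return 0;
}

With 12 fast blocks the actual timespan (~120 s) gets clamped to half the 1800-second target, so the difficulty factor comes out at exactly 2.0, the maximum.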
legendary
Activity: 1582
Merit: 1002
HODL for life.
October 21, 2014, 09:39:37 AM
I believe this is the solution:

unsigned int static DarkGravityWave3(const CBlockIndex* pindexLast, const CBlockHeader *pblock) {
    /* current difficulty formula, darkcoin - DarkGravity v3, written by Evan Duffield - [email protected] */
    const CBlockIndex *BlockLastSolved = pindexLast;
    const CBlockIndex *BlockReading = pindexLast;
    const CBlockHeader *BlockCreating = pblock;
    BlockCreating = BlockCreating;
    int64 nActualTimespan = 0;
    int64 LastBlockTime = 0;
    int64 PastBlocksMin = 12;
    int64 PastBlocksMax = 12;

    int64 CountBlocks = 0;
    CBigNum PastDifficultyAverage;
    CBigNum PastDifficultyAveragePrev;

    if (BlockLastSolved == NULL || BlockLastSolved->nHeight == 0 || BlockLastSolved->nHeight < PastBlocksMin) {
        // This is the first block or the height is < PastBlocksMin
        // Return minimal required work. (1e0fffff)
        return bnProofOfWorkLimit.GetCompact();
    }
   
    // loop over the past n blocks, where n == PastBlocksMax
    for (unsigned int i = 1; BlockReading && BlockReading->nHeight > 0; i++) {
        if (PastBlocksMax > 0 && i > PastBlocksMax) { break; }
        CountBlocks++;

        // Calculate average difficulty based on the blocks we iterate over in this for loop
        if(CountBlocks <= PastBlocksMin) {
            if (CountBlocks == 1) { PastDifficultyAverage.SetCompact(BlockReading->nBits); }
            else { PastDifficultyAverage = ((PastDifficultyAveragePrev * CountBlocks)+(CBigNum().SetCompact(BlockReading->nBits))) / (CountBlocks+1); }
            PastDifficultyAveragePrev = PastDifficultyAverage;
        }

        // If this is the second iteration (LastBlockTime was set)
        if(LastBlockTime > 0){
            // Calculate time difference between previous block and current block
            int64 Diff = (LastBlockTime - BlockReading->GetBlockTime());
            // Increment the actual timespan
            nActualTimespan += Diff;
        }
        // Set LastBlockTime to the block time for the block in the current iteration
        LastBlockTime = BlockReading->GetBlockTime();     

        if (BlockReading->pprev == NULL) { assert(BlockReading); break; }
        BlockReading = BlockReading->pprev;
    }
   
    // bnNew is the difficulty
    CBigNum bnNew(PastDifficultyAverage);

    // nTargetTimespan is the time that the CountBlocks should have taken to be generated.
    int64 nTargetTimespan = CountBlocks*nTargetSpacing;

    // Limit the re-adjustment to 2x or 0.5x
    // We don't want to increase/decrease diff too much.
    if (nActualTimespan < nTargetTimespan/2)
        nActualTimespan = nTargetTimespan/2;
    if (nActualTimespan > nTargetTimespan*2)
        nActualTimespan = nTargetTimespan*2;


    // Calculate the new difficulty based on actual and target timespan.
    bnNew *= nActualTimespan;
    bnNew /= nTargetTimespan;

    // If calculated difficulty is lower than the minimal diff, set the new difficulty to be the minimal diff.
    if (bnNew > bnProofOfWorkLimit){
        bnNew = bnProofOfWorkLimit;
    }
   
    // Some logging.
    // TODO: only display these log messages for a certain debug option.
    printf("Difficulty Retarget - Dark Gravity Wave 3\n");
    printf("Before: %08x %s\n", BlockLastSolved->nBits, CBigNum().SetCompact(BlockLastSolved->nBits).getuint256().ToString().c_str());
    printf("After: %08x %s\n", bnNew.GetCompact(), bnNew.getuint256().ToString().c_str());

    // Return the new diff.
    return bnNew.GetCompact();
}


Essentially we need the difficulty change to happen faster, but with a lesser change each time.  Calculating the difficulty over 24 blocks would mean at least 2 HIGH-difficulty blocks stay in the average, and that jacks up the moving average.  The difficulty on average would be higher, but it should even out a little bit better.  At least in my mind that's how it would work.

-Fuse
sr. member
Activity: 1701
Merit: 308
October 21, 2014, 09:03:15 AM
What I can't understand is why the advice of experienced community members, whose expertise is in mining and network security, is being ignored, and instead the dev team and community keep running simulations and chasing some elusive solution.

The Criptoe group is a mining group; between the 5 of us, we have nearly 10 years of mining experience using CPUs, GPUs, and ASICs across a broad range of coins. We have been involved behind the scenes with a number of coins to test and implement code. I have successfully executed time warp, double spend, and 51% attacks against several coin networks to better learn how to keep a coin network secure and healthy. Criptoe is not a loud and flashy outfit, but we all have real-life experience with mining crypto-currency.

This is not a 'told-you-so' or a 'we-are-so-good' post, but the dev team did approach Criptoe to set up an NLG pool shortly after the launch of NLG. I have been mining NLG since it was at a difficulty of 2 or so. I have sold less than 10% of all the NLG I have mined. I am most likely the largest stakeholder of NLG in the world. I am still mining and buying NLG. I have a very vested interest in seeing NLG succeed. And I believe that NLG will succeed for all the same reasons each of you do.

I am not active on the Gulden Forum nor have I joined the 1680 Club, because I don't have time to visit another forum. Just like I don't have time for Twitter, Facebook or any other social media.

Review the posts of Criptoe members, Fuse, JohnDec2, and myself in particular, and you'll find that our advice was not heeded and that we were quite confident this would be the outcome.

The problem is being overcomplicated and confused. Give those of us with experience the opportunity to help, and I am pretty confident that a solution can be found with minimal pain and delay.

EDIT - the difficulty on the network just jumped from the high 200s to 699... the network will be unusable for a few hours now... meaning that no transactions will be able to be executed or confirmed outside of LitePaid...


24kilo, I am sorry we gave you the feeling you are being ignored; we are not ignoring anyone. Of course we welcome you and the other devs to help us find a solution. Let's continue the following discussion about possible solutions.

replied inline in green


Rejecting blocks
This sounds like a good idea, but I'm afraid it can be bypassed fairly easily. A multi-pool could implement some software to premine blocks with future timestamps, and broadcast those future blocks when the time is right (while the actual mining machines are working on different blocks on a different coin). So in the end, it will still result in multi-pools getting a lot of blocks.
Right now a jump pool gets 80+% of the coins in a couple of minutes. Even if such a pool mines in advance and drops the block after the timeout expires, and we assume a timeout of 75 seconds, he could not mine more than 1 block every 75 seconds, and thus his gains drop dramatically. He would have to adjust his software to act as you described, but would always have 10-fold less profitability than on other coins. The premining of blocks can be further prevented by randomising the timeout. If he mines in advance he can drop the block until it's accepted, but cannot mine the next one, because he does not know in advance if he's at the right time; if another miner was lucky enough to be first, hashing in advance is useless because the next block would not match. So that's not a problem; the real problem could be a higher vulnerability to time warp attacks, and a proper calculation has to be done for that.
How would you randomise the timeout? The consensus algorithm for bitcoin/guldencoin is that the miner creates the blocks, and the data is confirmed by all other nodes. So the miner would decide the 'random' value, and no-one could check if it is truly random. One could use the block hash as input for the timeout calculation; that would make it pseudo-random, I guess. But in both cases the miner knows the new timeout and can directly start mining the next block; he doesn't even have to broadcast the first block yet (giving other miners a huge disadvantage). In the end, a multi-GH/s pool can mine a 'perfect' row of blocks in advance, while the dedicated miners have just a few blocks, and will lose in a chain split.
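For illustration only, here is a minimal stand-alone C++ sketch of the "block hash as input for the timeout" idea. The 0-74 second range, the use of the low hash digits, and the helper name are all assumptions, and as noted above the miner still knows this value in advance, so it does not stop withheld or premined blocks.

Code:
#include <cstdio>
#include <string>

// Hypothetical helper: map the low 8 hex digits of the previous block's hash
// to an extra timeout between 0 and 74 seconds (range assumed for illustration).
unsigned int TimeoutFromPrevHash(const std::string& prevBlockHashHex) {
    unsigned long long lowBits =
        std::stoull(prevBlockHashHex.substr(prevBlockHashHex.size() - 8), NULL, 16);
    return (unsigned int)(lowBits % 75);
}

int main() {
    // Made-up example hash; in reality this would be the hash of the last accepted block.
    std::string prevHash = "0000000000000000000000000000000000000000000000000000000012345678";
    printf("pseudo-random timeout: %u seconds\n", TimeoutFromPrevHash(prevHash));
    return 0;
}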

Time based block reward
When the block reward is based on the seconds that have passed since the previous block, it will discourage multi pools from mining a lot of blocks at once, as it won't be profitable since the reward will be low.
For example: 150 seconds since last block would reward 1000 NLG and 15 seconds would reward 100 NLG.
As pointed out by ny2cafuse: the downside is that "lucky" blocks will have a smaller reward. On the other hand, we could change the max block reward to 2000 NLG (rewarded when a block is 300 seconds after the previous block). This means that in the end exactly 1000 NLG per 150 seconds would be generated, regardless of "lucky" or "bad-luck" blocks or jumping pools. To me it's still unclear if there are exploits with this approach (for instance: invalid block times to receive more coins).
Like I said before: NO limits. If there is a very tough block, the prospect of gaining thousands of NLG is a huge incentive to mine it. Limiting does affect the coin supply. Here again the small miners benefit, because they can also hit the jackpot when their pool finds a tough block. Okay, the luck factor changed from being lucky enough to find a 1000 NLG block in three seconds to finding a 10000 NLG block in 15 minutes, but it's still there. There may be a drawback however: right now Clever stops at high diff, but in this case he can wait till the block time is over 150+ seconds and wham, mine that block fast, wait, leave the easy low-value blocks for the small miners, and hit again when the block time goes over 150-some seconds.
I think there should be a limit, but it should be higher, maybe 10,000 NLG for a block that is 1500 seconds after the previous block. The reason I want a limit at some point is because otherwise a huge pool could exploit the system by creating a huge diff (getting small miners stuck), waiting a few hours, and then mining that one huge-diff block worth 100,000 NLG.
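As a rough sketch of the reward rule described above (assumptions: the function name is made up, the rate is 1000 NLG per 150 seconds as described, and the cap is the 10,000 NLG at 1500 seconds suggested here), it could look something like this in stand-alone C++:

Code:
#include <cstdio>
#include <algorithm>

// Hypothetical reward rule: scale linearly with the seconds since the previous
// block at 1000 NLG per 150 seconds, capped at the suggested 10,000 NLG.
long long GetTimeBasedReward(long long secondsSincePrevBlock) {
    const long long nTargetSpacing = 150;     // 2.5-minute target spacing
    const long long baseReward     = 1000;    // NLG per target spacing
    const long long maxReward      = 10000;   // cap suggested above (reached at 1500 s)

    long long reward = baseReward * secondsSincePrevBlock / nTargetSpacing;
    return std::min(std::max(reward, 0LL), maxReward);
}

int main() {
    const long long samples[] = {15, 150, 300, 7200};
    for (int i = 0; i < 4; i++)
        printf("%5lld s since previous block -> %lld NLG\n",
               samples[i], GetTimeBasedReward(samples[i]));
    return 0;
}

So 15 seconds pays 100 NLG, 150 seconds pays 1000 NLG, 300 seconds pays 2000 NLG, and anything past 1500 seconds is capped at 10,000 NLG.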


Beside all these solutions I still think the problem is the way DGW3 is implemented. It just does not react the way it should and that has nothing to do with being designed for... Because its a counting algo based on the last 24 blocks. One thing that also keeps me busy is the fact that KGW did not work properly either so maybe the solution is at some totally other place in the code. You sometimes see the same strange behaviour when variables are used outside the memory space and overwritten by some other procedure
DGW3 was not modified, except for some code comments and alignment. It looks like DGW3 is calculating correctly, the problem is that it was not created for jump pools at this scale. I have been thinking about increasing the number of blocks into calculation (currently 24). But I'm affraid that it will only result in multipools getting more blocks before the diff spikes. What do you think?

It would be great to get 24Kilo's or one of the Criptoe crew's comments on the above. I also want to see a good solution for Guldencoin, as it's the only altcoin I invest in outside of bitcoin.
sr. member
Activity: 1701
Merit: 308
October 21, 2014, 07:10:36 AM
Sorry, but I am getting the feeling Guldencoin's planning and decision making is influenced a lot by the (foreign) mining pools.

Is that really what the Guldencoin team wants?

Exactly the same issue I have with this as well. I started renting hashing power again (~2 GH/s now) to combat this. It worked before my "forced" leave, and it will work again (check the price ramp-up from 28 Sept to 5 Oct). If more people would do this, the whole switch-mining issue would be non-existent. My last batch of NLG mined this way would need to sell for 630 sat to break even... that is not so much (especially now that clevermining isn't dumping so much anymore).

Good one Mike! Some of us are looking into that as well. I also purchased a couple of the new Zeus miners; expect LTEX on top of the pools again soon ;-)

Maybe we could set up a dedicated 1680 pool for NLG. "Fearless miners ONLY!"  Grin

This could help a lot until the price rises to a point where the added hash rate from dedicated miners is no longer enough, but for the short term it's a great solution.
sr. member
Activity: 1701
Merit: 308
October 21, 2014, 07:07:37 AM
Review the posts of Criptoe members, Fuse, JohnDec2, and myself in particular, and you'll find that our advice was not heeded and that we were quite confident this would be the outcome.

The problem is being overcomplicated and confused. Give those of us with experience the opportunity to help, and I am pretty confident that a solution can be found with minimal pain and delay.

Good post 24Kilo. We need experienced people. We will solve this network issue with you guys!

We will come out of this stronger, and there are good things to come, don't forget.

Hi 24Kilo, we welcome you to add to the solution! I have read your earlier posts more carefully than I did before, and you are absolutely right about your above statement.

Although the 500K won't impress you that much, we hope you will help us out, so we can create one of the most amazing coins in Crypto to date!

It's funny how life is: when you finally find a coin that is the truth and actually has the brightest of futures, it ends up with a real issue. Luckily, it would seem this one can be solved with an effort from the experienced miners and the development team. I would say don't be shy to get other trustworthy and experienced developers to have a look. Since we're using DGW3, maybe the Darkcoin dev could review the algorithm implementation?
legendary
Activity: 1023
Merit: 1000
ltex.nl
October 21, 2014, 06:39:48 AM
Review the posts of Criptoe members, Fuse, JohnDec2, and myself in particular, and you'll find that our advice was not heeded and that we were quite confident this would be the outcome.

The problem is being overcomplicated and confused. Give those of us with experience the opportunity to help, and I am pretty confident that a solution can be found with minimal pain and delay.

Good post 24Kilo. We need experienced people. We will solve this network issue with you guys!

We will come out of this stronger, and there are good things to come, don't forget.

Hi 24Kilo, we welcome you to add to the solution! I have read your earlier posts more carefully than I did before, and you are absolutely right about your above statement.

Although the 500K won't impress you that much, we hope you will help us out, so we can create one of the most amazing coins in Crypto to date!