
Topic: [ANN] Catcoin - 0.9.1.1 - Old thread. Locked. Please use 0.9.2 thread. - page 56. (Read 131010 times)

full member
Activity: 120
Merit: 100
Could anyone explain why the price keeps going down?
full member
Activity: 168
Merit: 100
kuro, I don't think the community is going to reach a consensus on a copy/paste implementation of KGW, for a couple of reasons:

1: it seems hacked together, in that some of its functionality is (looks?) redundant to other code, and overall is styled entirely differently from the rest of the code; this could be fixed by a good integration.

2: the function isn't well understood by the community, and the perception of a black box in an open-source project isn't going to sit well. I don't claim to understand it myself. I know it's an exponential smoothing that puts a lot of weight on recent block times, but "EventHorizonDeviation = 1 + (0.7084 * pow((double(PastBlocksMass)/double(144)), -1.228))" has a lot of hardcoded constants with no clear rationale.

3: the effect the function will have on Catcoin isn't obvious... we don't even have any estimations other than "well it works good on other coins". Catcoin has a slow block time and large block reward relative to other coins; is KGW going to kick the diff up fast enough under hash attack? Will it drop fast enough after the attack? Which of those constants need to be modified if tests show that it doesn't work quite right for Catcoin?

I think it has potential, but the next steps toward community acceptance are to break down the guts at a fundamental level, at least to the point where we can get some kind of working model for it, or pseudocode, so it's easier to understand for those not advanced in C programming.

If you can put together a simple model, even just showing its time-response to a step or impulse change off a baseline constant hashrate (e.g. constant 50 MH/s stepped to constant 400 MH/s), I would love to see that. Or just step through the function and point out what the hardcoded constants and unique variables are doing, so others can replicate it.
hero member
Activity: 657
Merit: 500
50 CAT bounty for the first post that shows that KGW was or is used successfully in a coin with a 10 minute or longer block time.


Disclaimer:  This is my money, not CATs that belong in any way to development or that have been donated by anyone for any reason.
50 CAT bounty for the first post that shows that the Kimoto Gravity Well (KGW) was or is used successfully in a coin with a 10 minute or longer block time.

Anyone?

This was answered before; check out my reply above.

Thank you for your input, kuroman.  When it comes to code, precision is important.  I asked for a positive report, not a negative.  These are not the same.

Your gravity well proposal is on the table.  As far as I'm concerned, the only thing that will get it removed from consideration would be an onslaught of KGW cultists trying to push all other options aside.  The battle will be fought later on the testnet, not as a war of words in this forum.
hero member
Activity: 588
Merit: 501
50 CAT bounty for the first post that shows that KGW was or is used successfully in a coin with a 10 minute or longer block time.


Disclaimer:  This is my money, not CATs that belong in any way to development or that have been donated by anyone for any reason.
50 CAT bounty for the first post that shows that the Kimoto Gravity Well (KGW) was or is used successfully in a coin with a 10 minute or longer block time.

Anyone?

This was answered before; check out my reply above.

1- There is no recent scrypt coin that was released with a 10 min block time.
2- KGW works perfectly for coins from 30 s to 6 min, so by extrapolation, by whatever probability law you want to apply, there is around a 90% chance that KGW will work AS IS, without even a single tweak, for Catcoin, and the probability goes to 99% if you tweak the parameters.

You are just nitpicking right now, trying to discredit KGW as a solution by whatever means; there is no other reason for such a bounty, since you already know the answer.

The dynamic response of a 6 min coin is significantly different from a 10 minute coin. We don't know how KGW will respond, therefore we have to test, just like any other solution. I never said it wouldn't work, just that it's neither simple nor a copy/paste solution.
No it is not. Tell me which difference is greater: from 6 min to 10, or from 30 s to 6 min? KGW is proven to work for every scenario from 30 s to 6 min. By any law of probability you want to use, there is an 80-90% chance it will work from the get-go with a 10 min block time.

There is a huge difference in dynamics.

As an example:
5, 25, 125, 625
10, 100, 1000, 10000

You assume 6 is close to 10, when in fact it is not.

For example, when we do the limit, almost everyone assumes we are adding and subtracting (OMG TWO VALUES). We are multiplying. That means flipping the fraction, not the sign.

I know it's not directly accessible to most people, but compounded effects, which Kimoto has, are extremely divergent at small changes.

Also regarding the copying and pasting of code:
They copied it because they are lazy.
And it is not sufficient to offer constants simply because they work.
We need to know WHY they work.

This isn't a cute little video game app or a messenger app. This is a massive network consensus based technology.

You are totally wrong; to judge whether 6 is close to 10 you need a scale or reference. The reference is that KGW works perfectly for everything from 30 s to 6 min: the difference between 30 s and 6 min is 12x, while from 6 to 10 it is 1.666x.
So yes, 6 is very close to 10 in our case and on our scale. I'm looking forward to your next pseudo-argument (math is absolute).
sr. member
Activity: 364
Merit: 250
The dynamic response of a 6 min coin is significantly different from a 10 minute coin. We don't know how KGW will respond, therefore we have to test, just like any other solution. I never said it wouldn't work, just that it's neither simple nor a copy/paste solution.
No it is not. Tell me which difference is greater: from 6 min to 10, or from 30 s to 6 min? KGW is proven to work for every scenario from 30 s to 6 min. By any law of probability you want to use, there is an 80-90% chance it will work from the get-go with a 10 min block time.

There is a huge difference in dynamics.

As an example:
5, 25, 125, 625
10, 100, 1000, 10000

You assume 6 is close to 10, when in fact it is not.

For example, when we do the limit, almost everyone assumes we are adding and subtracting (OMG TWO VALUES). We are multiplying. That means flipping the fraction, not the sign.

I know it's not directly accessible to most people, but compounded effects, which Kimoto has, are extremely divergent at small changes.

Also regarding the copying and pasting of code:
They copied it because they are lazy.
And it is not sufficient to offer constants simply because they work.
We need to know WHY they work.

This isn't a cute little video game app or a messenger app. This is a massive network consensus based technology.
hero member
Activity: 657
Merit: 500
How about this, everyone - first, we put options on the table.  Then we arm wrestle, fight, measure size, etc.  Then we code some finalists and get them on testnet.  Then we choose one, code it, verify the code, and implement it.

As I see it, we're in phase one:  Fill the table.  This is not the time to start arm wrestling. Wink

peace...

Haha sure. Just pointing out that a one liner might be feasible.
Agreed.   Just trying to keep the gravity well folks from lighting their torches.  No holy wars yet. LOL
hero member
Activity: 657
Merit: 500
50 CAT bounty for the first post that shows that KGW was or is used successfully in a coin with a 10 minute or longer block time.


Disclaimer:  This is my money, not CATs that belong in any way to development or that have been donated by anyone for any reason.
50 CAT bounty for the first post that shows that the Kimoto Gravity Well (KGW) was or is used successfully in a coin with a 10 minute or longer block time.

Anyone?
full member
Activity: 168
Merit: 100
How about this, everyone - first, we put options on the table.  Then we arm wrestle, fight, measure size, etc.  Then we code some finalists and get them on testnet.  Then we choose one, code it, verify the code, and implement it.

As I see it, we're in phase one:  Fill the table.  This is not the time to start arm wrestling. Wink

peace...

Haha sure. Just pointing out that a one liner might be feasible.
hero member
Activity: 657
Merit: 500
How about this, everyone - first, we put options on the table.  Then we arm wrestle, fight, measure size, etc.  Then we code some finalists and get them on testnet.  Then we choose one, code it, verify the code, and implement it.

As I see it, we're in phase one:  Fill the table.  This is not the time to start arm wrestling. Wink

peace...
full member
Activity: 168
Merit: 100
Code complexity...

Kimoto Gravity Well code from the creator's coin - Megacoin.  Main.cpp, beginning on line 1276.  I cannot confirm this is all the code, just the main block I can easily identify.



This comparison is biased, because you don't remove the definitions of the unknowns from KGW or add them for the other code (they are defined at the beginning of the code, if I'm not mistaken).

Fear not, Kuroman - I'm not interested in a code length contest and don't think it should be a factor.  None of the solutions or attempted solutions are 1 line of code. Wink

Actually, I proposed a possible one-line exponentially weighted average solution that uses only previously defined variables and functions. Not that code complexity should be the biggest factor, but it's something that should be considered...

The actual code implementation is to add one line:

Code:
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;

to:

Code:
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    nActualTimespan = nTargetTimespanLocal + nTargetSpacing*log((double)nTargetTimespanLocal/nActualTimespan);
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;
full member
Activity: 168
Merit: 100
We should implement KGW if it's shown to be the best solution, after considering all options. It's not going to be a copy/paste solution but could certainly be done, and there is no hurry for the next update.

No, the average is making matters worse right now, as was explained before. Will removing it solve the issue? It will depend on the other parameters: if the other parameters don't swing, i.e. stay at a constant level, it will most likely solve the problem, but that is a BIG assumption; hence a dynamic value instead of a fixed 12%, which negates the effect of the other parameters.

Block solving is random, so the difficulty HAS to be based on some sort of an average. Obviously the 36-block SMA isn't perfect, but it's working now and could be very good with very minor modifications.

It will converge, and I agree with you here; this is what the current fork was supposed to mean. What you aren't taking into consideration is the other parameters, as I mentioned before (and obviously I was referring to the exponential decrease you've mentioned, which is a wrong assumption).

I'm no longer sure it will converge. I can't get it to converge in my sims; I only get more of the same no matter what the hash input is.

And the function I've found works BEST (not perfect) by experimentation is: nethash = 2681*EXP(-0.059*current_diff)+40. Go pull some historical difficulties and the corresponding hashrates, and see that it's a reasonable approximation. There is certainly a non-linear correlation between difficulty and nethash.

Also, envy, thank you for your effort. I'm not discrediting your work by any means, just giving you my honest analysis and bringing facts to back it up. I feel that if we keep discussing and simulating different scenarios we will be able to find a possible solution, but for now I believe we can use KGW, as it is available and proven to work for every coin it was used on. I personally believe the dev process should not be limited to solving major issues; it should be continuous work to keep improving the coin.

The next update to the coin is not a rush, because it's already functioning well enough to get by. I agree that it should and will be a continuous improvement process.
hero member
Activity: 657
Merit: 500
Code complexity...

Kimoto Gravity Well code from the creator's coin - Megacoin.  Main.cpp, beginning on line 1276.  I cannot confirm this is all the code, just the main block I can easily identify.



This comparison is biased, because you don't remove the definitions of the unknowns from KGW or add them for the other code (they are defined at the beginning of the code, if I'm not mistaken).

Fear not, Kuroman - I'm not interested in a code length contest and don't think it should be a factor.  None of the solutions or attempted solutions are 1 line of code. Wink
member
Activity: 70
Merit: 10


Cryptsy is now in sync.
hero member
Activity: 588
Merit: 501
hero member
Activity: 657
Merit: 500


So... a new fork coming? They clearly have >51%.
Network hashrate is ~300Mh/s and theirs is almost 200Mh/s so they have almost 2/3 of the entire network (~ 65%) which is A LOT.

And also Team CatCoin pool (has around 80Mh/s so ~ 25% of network) is somewhat stuck with stats like this:
PPLNS Target: 195812
Est. Shares: 156687 (done: 302.82%)
Pool Valid: 474480

This is even worse than Bitcoin at the moment with cex.io having almost half of the network...
Don't put a ton of stock in this chart - it doesn't include all the pools or any solo miners.  Just participate in a crowd-sourced attempt to keep things in better balance, that's all.  THANKS!
hero member
Activity: 588
Merit: 501
The dynamic response of a 6 min coin is significantly different from a 10 minute coin. We don't know how KGW will respond, therefore we have to test, just like any other solution. I never said it wouldn't work, just that it's neither simple nor a copy/paste solution.
No it is not. Tell me which difference is greater: from 6 min to 10, or from 30 s to 6 min? KGW is proven to work for every scenario from 30 s to 6 min. By any law of probability you want to use, there is an 80-90% chance it will work from the get-go with a 10 min block time.

I am very familiar with dynamic systems theory, so an explanation is not necessary here. What is needed is proof to back up what you are saying. If that's how you think the system will behave, model it, and show your inputs and results like I did.

And no, removing the trailing average completely won't help, because the goal is to stabilize an inherently unstable system, which is very difficult even if you know where it should be headed (which is the function of the average).
I've added the explanation for those who are not math-initiated. (And I don't know what people want anymore; some complain about complex math or code, others complain about explanations...)
The explanation proves itself on its own; it's just interpolation math. If you think it is wrong, feel free to point out where, and I would be happy to provide argumentation if needed. If you want a graphical representation I can do that as well, although I already did before, considering a one-way limit (a limit for increases and none for decreases); check out my previous comments. And no, the average is making matters worse right now, as was explained before. Will removing it solve the issue? It will depend on the other parameters: if the other parameters don't swing, i.e. stay at a constant level, it will most likely solve the problem, but that is a BIG assumption; hence a dynamic value instead of a fixed 12%, which negates the effect of the other parameters.


I have both the exponential function (manual switchers) AND the instant switching (auto pools) modeled. I'm throwing an extra 1000 MH/s at the simulation the block after diff goes under 45 (ON TOP of the exponentially increasing manual switchers), and keeping it there until diff goes over 45. 45 is an arbitrary approximation, but I can prove that the response is similar if profitability occurs at 30 or 50 or 100.


It will converge, and I agree with you here; this is what the current fork was supposed to mean. What you aren't taking into consideration is the other parameters, as I mentioned before (and obviously I was referring to the exponential decrease you've mentioned, which is a wrong assumption).

Also, envy, thank you for your effort. I'm not discrediting your work by any means, just giving you my honest analysis and bringing facts to back it up. I feel that if we keep discussing and simulating different scenarios we will be able to find a possible solution, but for now I believe we can use KGW, as it is available and proven to work for every coin it was used on. I personally believe the dev process should not be limited to solving major issues; it should be continuous work to keep improving the coin.
full member
Activity: 168
Merit: 100
envy: The relative weighting of the 12% limit is something I've been thinking about. We should try that.

The response isn't perfect, but it's pretty good from what I've seen -- and the implementation (just one line) is so simple that it could be compiled and running on the testnet in minutes.

The response could be further tweaked by using different bases than e. Base 10 or 2 would be easy to try out.

Edit: from a quick look, base 2 is underdamped, 2.5 and 3 look similar to e (not surprising), 5 is well damped, and 10 is very well damped. Anything from e to 10 is worth a second look.
full member
Activity: 168
Merit: 100
Those are still significantly shorter than CAT. I'm not saying it wouldn't work, but it would need as much testing as any other solution, and is much more complex.

30 s is significantly shorter, but 6 min is not; we are talking about the same order of magnitude (60%), while 30 s is obviously significantly shorter (5%). You are just nitpicking; I'm sure if it was 8 min instead of 6 you would have said the same thing. Also, I can return the same question to you: how many new scrypt altcoins can you count that have a 10 min block time? You have your answer...

The dynamic response of a 6 min coin is significantly different from a 10 minute coin. We don't know how KGW will respond, therefore we have to test, just like any other solution. I never said it wouldn't work, just that it's neither simple nor a copy/paste solution.

Quote
It does with the current variation (the current scheme can interpolate polynomially, e.g. a Chebyshev polynomial, or, for those not initiated in numerical mathematics, we can consider a linear interpolation, i.e. straight lines); the diff will converge.

Simplified explanation: take the case of the diff increasing. The 12% limit makes it move in 12% steps. Now imagine the hashrate is such that it should push us to diff = 100. The diff starts increasing in 12% steps, trying to reach that level, but at some point, say diff 60, the coin is no longer profitable and the profitability pools leave, so the peak we get is diff 60 instead of 100, thanks to the fact that the diff increased in limited steps with a retarget each block and never reached the top value it was supposed to reach. The same thing happens on the way down: the coin reaches a point where it is profitable again, say diff 20 (while with no limit the diff might have gone down to 1 due to the low hashrate). So each time, the coin bounces with a smaller minimum and maximum, converging towards the diff (limit) at which the coin is profitable. (No, this doesn't take into consideration the 36-block average, and that is what makes everything jerky, in addition to everything I mentioned before.) So maybe removing the 36-block average can solve the issue, if the other parameters don't swing in a major way.

I am very familiar with dynamic systems theory, so an explanation is not necessary here. What is needed is proof to back up what you are saying. If that's how you think the system will behave, model it, and show your inputs and results like I did.

And no, removing the trailing average completely won't help, because the goal is to stabilize an inherently unstable system, which is very difficult even if you know where it should be headed (which is the function of the average).

Quote
these pools instantly switch all their hashrate from one coin to another (switching ports dynamically for everyone at the same time)

I have both the exponential function (manual switchers) AND the instant switching (auto pools) modeled. I'm throwing an extra 1000 MH/s at the simulation the block after diff goes under 45 (ON TOP of the exponentially increasing manual switchers), and keeping it there until diff goes over 45. 45 is an arbitrary approximation, but I can prove that the response is similar if profitability occurs at 30 or 50 or 100.
hero member
Activity: 657
Merit: 500
Code complexity...

Kimoto Gravity Well code from the creator's coin - Megacoin.  Main.cpp, beginning on line 1276.  I cannot confirm this is all the code, just the main block I can easily identify.

https://github.com/megacoin/megacoin/blob/master/src/main.cpp

Code:
unsigned int static KimotoGravityWell(const CBlockIndex* pindexLast, const CBlockHeader *pblock, uint64 TargetBlocksSpacingSeconds, uint64 PastBlocksMin, uint64 PastBlocksMax) {
    /* current difficulty formula, megacoin - kimoto gravity well */
    const CBlockIndex  *BlockLastSolved = pindexLast;
    const CBlockIndex  *BlockReading = pindexLast;
    const CBlockHeader *BlockCreating = pblock;
    BlockCreating = BlockCreating; // self-assignment only silences an unused-variable warning
    uint64 PastBlocksMass = 0;
    int64 PastRateActualSeconds = 0;
    int64 PastRateTargetSeconds = 0;
    double PastRateAdjustmentRatio = double(1);
    CBigNum PastDifficultyAverage;
    CBigNum PastDifficultyAveragePrev;
    double EventHorizonDeviation;
    double EventHorizonDeviationFast;
    double EventHorizonDeviationSlow;

    if (BlockLastSolved == NULL || BlockLastSolved->nHeight == 0 || (uint64)BlockLastSolved->nHeight < PastBlocksMin) { return bnProofOfWorkLimit.GetCompact(); }

    // Walk backwards through past blocks, keeping an incremental average of difficulty
    for (unsigned int i = 1; BlockReading && BlockReading->nHeight > 0; i++) {
        if (PastBlocksMax > 0 && i > PastBlocksMax) { break; }
        PastBlocksMass++;

        if (i == 1) { PastDifficultyAverage.SetCompact(BlockReading->nBits); }
        else { PastDifficultyAverage = ((CBigNum().SetCompact(BlockReading->nBits) - PastDifficultyAveragePrev) / i) + PastDifficultyAveragePrev; }
        PastDifficultyAveragePrev = PastDifficultyAverage;

        PastRateActualSeconds = BlockLastSolved->GetBlockTime() - BlockReading->GetBlockTime();
        PastRateTargetSeconds = TargetBlocksSpacingSeconds * PastBlocksMass;
        PastRateAdjustmentRatio = double(1);
        if (PastRateActualSeconds < 0) { PastRateActualSeconds = 0; }
        if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
            PastRateAdjustmentRatio = double(PastRateTargetSeconds) / double(PastRateActualSeconds);
        }
        // Acceptance band: stop extending the window once the rate ratio leaves [1/dev, dev]
        EventHorizonDeviation = 1 + (0.7084 * pow((double(PastBlocksMass)/double(144)), -1.228));
        EventHorizonDeviationFast = EventHorizonDeviation;
        EventHorizonDeviationSlow = 1 / EventHorizonDeviation;

        if (PastBlocksMass >= PastBlocksMin) {
            if ((PastRateAdjustmentRatio <= EventHorizonDeviationSlow) || (PastRateAdjustmentRatio >= EventHorizonDeviationFast)) { assert(BlockReading); break; }
        }
        if (BlockReading->pprev == NULL) { assert(BlockReading); break; }
        BlockReading = BlockReading->pprev;
    }

    // Scale the averaged difficulty target by actual/target elapsed time
    CBigNum bnNew(PastDifficultyAverage);
    if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
        bnNew *= PastRateActualSeconds;
        bnNew /= PastRateTargetSeconds;
    }
    if (bnNew > bnProofOfWorkLimit) { bnNew = bnProofOfWorkLimit; }

    /// debug print
    printf("Difficulty Retarget - Kimoto Gravity Well\n");
    printf("PastRateAdjustmentRatio = %g\n", PastRateAdjustmentRatio);
    printf("Before: %08x  %s\n", BlockLastSolved->nBits, CBigNum().SetCompact(BlockLastSolved->nBits).getuint256().ToString().c_str());
    printf("After:  %08x  %s\n", bnNew.GetCompact(), bnNew.getuint256().ToString().c_str());

    return bnNew.GetCompact();
}

unsigned int static GetNextWorkRequired_V2(const CBlockIndex* pindexLast, const CBlockHeader *pblock)
{
    static const int64 BlocksTargetSpacing = 2.5 * 60; // 2.5 minutes
    unsigned int TimeDaySeconds = 60 * 60 * 24;
    int64 PastSecondsMin = TimeDaySeconds * 0.25;
    int64 PastSecondsMax = TimeDaySeconds * 7;
    uint64 PastBlocksMin = PastSecondsMin / BlocksTargetSpacing;
    uint64 PastBlocksMax = PastSecondsMax / BlocksTargetSpacing;

    return KimotoGravityWell(pindexLast, pblock, BlocksTargetSpacing, PastBlocksMin, PastBlocksMax);
}


Code from Phoenixcoin's 4th fork (Oct 2013) where they implemented their difficulty fix.   Their method was to calculate the average difficulty from the last 500 blocks, calculate the average difficulty for the last 100 blocks, average those, then apply a 10% dampening.  Finally, allow a max 2% move each block.  I think I have the appropriate code, but might have missed something.  Beginning on line 937 of main.cpp:
https://github.com/ghostlander/Phoenixcoin/blob/master/src/main.cpp

Code:
    // Basic 100 blocks averaging after the 4th livenet or 1st testnet hard fork
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nInterval *= 5;
        nTargetTimespan *= 5;
    }


   // Extended 500 blocks averaging after the 4th livenet or 1st testnet hard fork
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nInterval *= 5;

        const CBlockIndex* pindexFirst = pindexLast;
        for(int i = 0; pindexFirst && i < nInterval; i++)
          pindexFirst = pindexFirst->pprev;

        int nActualTimespanExtended =
          (pindexLast->GetBlockTime() - pindexFirst->GetBlockTime())/5;

        // Average between the basic and extended windows
        int nActualTimespanAvg = (nActualTimespan + nActualTimespanExtended)/2;

        // Apply 0.1 damping
        nActualTimespan = nActualTimespanAvg + 9*nTargetTimespan;
        nActualTimespan /= 10;

        printf("RETARGET: nActualTimespanExtended = %d (%d), nActualTimeSpanAvg = %d, nActualTimespan (damped) = %d\n",
          nActualTimespanExtended, nActualTimespanExtended*5, nActualTimespanAvg, nActualTimespan);
    }

   // The 4th livenet or 1st testnet hard fork (1.02 difficulty limiter)
    if((nHeight >= nForkFour) || (fTestNet && (nHeight >= nTestnetForkOne))) {
        nActualTimespanMax = nTargetTimespan*102/100;
        nActualTimespanMin = nTargetTimespan*100/102;
    }




Catcoin's current 1-block retarget, 36-block SMA, 12% limit.  Not sure I have all of this... don't shoot me quite yet. Wink  Main.cpp, various sections after line 1045:

https://github.com/CatcoinOfficial/CatcoinRelease/blob/master/src/main.cpp

Code:
static const int64 nTargetTimespan = 6 * 60 * 60; // 6 hours
static const int64 nTargetSpacing = 10 * 60;
static const int64 nInterval = nTargetTimespan / nTargetSpacing;

static const int64 nTargetTimespanOld = 14 * 24 * 60 * 60; // two weeks
static const int64 nIntervalOld = nTargetTimespanOld / nTargetSpacing;


    // after fork2Block we retarget every block  
    if(pindexLast->nHeight < fork2Block){
        // Only change once per interval
        if ((pindexLast->nHeight+1) % nIntervalLocal != 0)


 // Go back by what we want to be 14 days worth of blocks
    const CBlockIndex* pindexFirst = pindexLast;
    for (int i = 0; pindexFirst && i < blockstogoback; i++)
        pindexFirst = pindexFirst->pprev;
    assert(pindexFirst);


    // Limit adjustment step
    int numerator = 4;
    int denominator = 1;
    if(pindexLast->nHeight >= fork2Block){
        numerator = 112;
        denominator = 100;
    }
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;
    printf("  nActualTimespan = %"PRI64d"  before bounds\n", nActualTimespan);
    if (nActualTimespan < lowLimit)
        nActualTimespan = lowLimit;
    if (nActualTimespan > highLimit)
        nActualTimespan = highLimit;

full member
Activity: 306
Merit: 100
HashFaster's Crypto-Mining Network

http://cat.hashfaster.com has upgraded to the latest wallet!!!

Come help us grow.

ZC,
Owner of HashFaster