
Topic: [ANN] Catcoin - 0.9.1.1 - Old thread. Locked. Please use 0.9.2 thread. - page 57. (Read 131010 times)

hero member
Activity: 588
Merit: 501
Those are still significantly shorter than CAT. I'm not saying it wouldn't work, but it would need as much testing as any other solution, and is much more complex.

30s is significantly shorter, but 6 min is not; we are talking about the same order of magnitude (60%), while 30s is obviously significantly shorter (5%). You are just nitpicking; I'm sure if it were 8 min instead of 6 you would have said the same thing. I can also return the question to you: how many new scrypt altcoins can you count that have a 10 minute block time? You have your answer...


Actually, the % limit doesn't help much at all with convergence, even if you assume a fixed hashrate, because the difficulty spends so little time in the narrow band where there is usable feedback. Here's a graph projecting difficulty with a fixed network hashrate of 300 MH/s:


It does. With the current variation (the interpolation can be polynomial, e.g. a Chebyshev polynomial, or, for those not initiated in numerical mathematics, we can just consider linear interpolation = straight lines), the diff will converge.

Simplified explanation: take the case where the diff is increasing. The 12% limit makes it move in 12% steps per block. Now imagine the hashrate is supposed to push us to diff = 100. The diff starts climbing in 12% steps toward that value, but at some point, say diff = 60, the coin is no longer profitable and the profitability pools leave. So the peak we get is diff 60 instead of 100, thanks to the fact that the diff rose in limited steps and retargets every block: with the next block the target diff is already significantly lower, and the diff starts decreasing.

The same thing happens on the way down. The diff falls in 12% steps until the coin becomes profitable again, say at diff 20, whereas without the limit it might have dropped all the way to 1 because of the low hashrate. So each cycle the coin bounces with a smaller minimum and maximum, converging toward the diff at which the coin is profitable. (No, this doesn't take into consideration the 36-block average, which is what makes everything jerky, in addition to everything I mentioned before.) So maybe removing the 36-block average can solve the issue, if the other parameters don't swing in a major way.


I've included this in my model. As difficulty goes up, I recalculate the hashrate dropping off exponentially, which is a reasonable assumption if you look at the pool hashrates here: http://cat.coinium.org/index.php?page=statistics&action=graphs

In my graph on the previous page I assumed that 1 GH/s worth of profit pools would jump in at any difficulty below 45, which is about where the current price/profitability point is. That's a bit arbitrary, but as the graph shows, after 4 cycles the difficulty just stays very slightly above profitability... which is exactly what we want. It would do the same if we assumed profitability occurred at 30 or 50 or 100.

Sadly, this is a wrong assumption, and I understand the reasoning behind it, but it doesn't work like that. Consider the huge part of the increased hashrate that comes from profitability pools as an on/off value, because these pools switch all of their hashrate from one coin to another instantly (switching ports dynamically for everyone at the same time). There is a minority that switches coins manually, and since that is more random it can be approximated or interpolated as an exponential function.
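
To make the bouncing concrete, here is a toy sketch of that dynamic (not CAT code; the 600 s target, the 12% clamp applied every block with no averaging window, the diff-45 profitability threshold, and all hashrate numbers are made-up assumptions):

Code:
 // Toy model: per-block retarget with a 12% clamp and one on/off
 // profitability pool. All numbers are illustrative assumptions.
 #include <cstdio>
 #include <algorithm>

 int main() {
     const double target     = 600.0;   // 10 min target blocktime, seconds
     const double k          = 9000.0;  // scale: blocktime = diff * k / hashrate
     const double baseHash   = 300.0;   // loyal hashrate, MH/s
     const double poolHash   = 1000.0;  // profit pool, MH/s, all-or-nothing
     const double profitDiff = 45.0;    // pool mines only while diff < this
     double diff = 10.0;

     for (int block = 0; block < 60; ++block) {
         double hash = baseHash + (diff < profitDiff ? poolHash : 0.0);
         double blocktime = diff * k / hash;              // expected solve time
         double adjust = target / blocktime;              // raw retarget ratio
         adjust = std::min(1.12, std::max(1.0 / 1.12, adjust)); // 12% clamp
         diff *= adjust;
         std::printf("block %2d: hash %5.0f MH/s, diff %6.2f\n", block, hash, diff);
     }
     return 0;
 }

The diff climbs in 12% steps, overshoots the diff-45 profitability point by one step, the pool drops off, and it settles into hovering one step either side of 45. Note there is no 36-block average here; putting that lag back in is what lets the real chain overshoot far past the profitability point.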
sr. member
Activity: 364
Merit: 250
envy: The relative weighting of the 12% limit is something I've been thinking about. We should try that.
full member
Activity: 168
Merit: 100
This is not true. KGW has been used in different coins, from long to medium to very short block target times. For example, MMC has a 6 min block time, Anon is 3 min, Mega is 2.5 min, Fox is 1 min, and Franko is 30 s.
Those are still significantly shorter than CAT. I'm not saying it wouldn't work, but it would need as much testing as any other solution, and is much more complex.

Quote
As for your graph, that's how the x% limit is supposed to work in theory: it pushes the coin to converge. But it doesn't work like that, due to other external parameters,
Actually, the % limit doesn't help much at all with convergence, even if you assume a fixed hashrate, because the difficulty spends so little time in the narrow band where there is usable feedback. Here's a graph projecting difficulty with a fixed network hashrate of 300 MH/s:



Quote
for example, profitability pools do not have a fixed hashrate: they can hit one time with an average hashrate and another time with a peak hashrate, making the diff peak again. These pools don't mine for a fixed time or until a fixed diff; they keep mining as long as the coin is reasonably profitable... etc. etc. Hence the need for a dynamic function that will adapt to those conditions.

I've included this in my model. As difficulty goes up, I recalculate the hashrate dropping off exponentially, which is a reasonable assumption if you look at the pool hashrates here: http://cat.coinium.org/index.php?page=statistics&action=graphs

In my graph on the previous page I assumed that 1 GH/s worth of profit pools would jump in at any difficulty below 45, which is about where the current price/profitability point is. That's a bit arbitrary, but as the graph shows, after 4 cycles the difficulty just stays very slightly above profitability... which is exactly what we want. It would do the same if we assumed profitability occurred at 30 or 50 or 100.
hero member
Activity: 657
Merit: 500
50 CAT bounty for the first post that shows that KGW was or is used successfully in a coin with a 10 minute or longer block time.


Disclaimer: This is my own money, not CATs that belong in any way to development or that were donated by anyone for any reason.
hero member
Activity: 588
Merit: 501
hero member
Activity: 657
Merit: 500
Everyone - Coinium has more than 51% of the hash - please move miners to avoid a fork.
hero member
Activity: 588
Merit: 501
full member
Activity: 168
Merit: 100
The only remaining problem with the difficulty adjustment is that it doesn't "know" the difference between being off by 2000% and being off by 20%. It serves up the same 12% change for both scenarios, when they should really be treated much differently.

KGW and the double-moving-average are both solutions which would probably work well, but both are complex (and as far as I can tell have not been applied to a long blocktime coin like CAT).

I would like to throw another possible solution out there which is extremely simple (one line of code), readily understandable, and responds well to hash attacks in my simulations. It doesn't change anything about the coin except how it approaches the 12% limit. Block time, max change, etc. all remain the same.

That solution is a logarithmically damped adjustment: instead of feeding the measured 36-block time straight into the retarget, it replaces it with the target time plus a term proportional to the logarithm of the actual/target ratio. In pseudocode:

Modified_Actual_36_Block_Time = Target_36_Block_Time + Target_Blocktime*NaturalLog(Actual_36_Block_Time/Target_36_Block_Time)

(When blocks come fast, the log term is negative, the modified timespan drops below target, and the standard retarget pushes difficulty up.)

When the actual blocktime is 5 minutes (50% of target), the retarget only goes up 1.93%. More examples are shown in the table here:
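
For anyone who wants to recompute the table, here's a small standalone snippet using the 600 s spacing and 36-block window from the pseudocode above; the 0.50x row comes out right around the figure just quoted:

Code:
 // Print the log-damped retarget for a few actual/target ratios,
 // assuming the 600 s spacing and 36-block window (21600 s) above.
 #include <cstdio>
 #include <cmath>

 int main() {
     const double targetTimespan = 36 * 600.0;
     const double targetSpacing  = 600.0;
     const double ratios[] = {0.01, 0.1, 0.5, 1.0, 2.0, 10.0, 100.0};
     for (double r : ratios) {
         // modified timespan per the pseudocode
         double modified = targetTimespan + targetSpacing * std::log(r);
         // standard retarget: new_diff = old_diff * target / actual
         double pct = (targetTimespan / modified - 1.0) * 100.0;
         // anything beyond +/-12% would still be clamped by the limits
         std::printf("actual = %6.2fx target -> diff %+6.2f%%\n", r, pct);
     }
     return 0;
 }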



The modeled system response to 1 GH/s attacks every time the difficulty drops below 45 is shown in the plot:



The actual code implementation is to add one statement:

Code:
 if(pindexLast->nHeight >= fork2Block){
        numerator = 112;
        denominator = 100;
    }
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;
    printf("  nActualTimespan = %"PRI64d"  before bounds\n", nActualTimespan);
    if (nActualTimespan < lowLimit)
        nActualTimespan = lowLimit;
    if (nActualTimespan > highLimit)
        nActualTimespan = highLimit;

to:

Code:
 if(pindexLast->nHeight >= fork2Block){
        numerator = 112;
        denominator = 100;
    }
    int64 nActualTimespan = pindexLast->GetBlockTime() - pindexFirst->GetBlockTime();
    // log() needs math.h; the double casts avoid integer division.
    // Ratio is actual/target: fast blocks give a negative log term, a shorter
    // effective timespan, and therefore a higher difficulty.
    nActualTimespan = nTargetTimespanLocal + (int64)(nTargetSpacing
        * log((double)nActualTimespan / (double)nTargetTimespanLocal));
    int64 lowLimit = nTargetTimespanLocal*denominator/numerator;
    int64 highLimit = nTargetTimespanLocal*numerator/denominator;
    printf("  nActualTimespan = %"PRI64d"  before bounds\n", nActualTimespan);
    if (nActualTimespan < lowLimit)
        nActualTimespan = lowLimit;
    if (nActualTimespan > highLimit)
        nActualTimespan = highLimit;

The limits would still kick in at 12%, but with the damping that now happens only when the actual time is off from the target by a factor of roughly 50-100x in either direction (the log term has to reach about 12% of the 36-block timespan, i.e. |ln(actual/target)| ≈ 0.12 × 36 ≈ 4.3).
newbie
Activity: 17
Merit: 0
http://i.imgur.com/Z8tw1VB.png

So... a new fork coming? They clearly have >51%.
Network hashrate is ~300 MH/s and theirs is almost 200 MH/s, so they have almost 2/3 of the entire network (~65%), which is A LOT.

And the Team CatCoin pool (around 80 MH/s, so ~25% of the network) is somewhat stuck, with stats like this:
PPLNS Target: 195812
Est. Shares: 156687 (done: 302.82%)
Pool Valid: 474480

This is even worse than Bitcoin at the moment with cex.io having almost half of the network...
member
Activity: 196
Merit: 10
sr. member
Activity: 364
Merit: 250
Limiting improvements to what we can all understand as code is stupid. We all contribute what we can: some mine, others promote the currency, and others program the code.

Excuse my English, it is not my native language.

Translation:

Limiting our code development to the weakest link (aka those who do not understand code) is akin to dumbing ourselves down. We all contribute; let's not limit our potential by insisting it be understandable (at least in code form) to everyone. The layperson can still understand code if it is explained properly.


Code is nothing more than the expression of an idea; a talented coder can turn any idea into a function. So whether it is understandable in code is irrelevant: we must take the best idea and transform it into code, assuming that is possible. If it isn't, we should be able to explain why.

The problem is the limited amount of time that coins have to make changes, due to the consensus-based design and the trainwreck structuring of the code. This slowly distances the vision that coders have from the vision that the users have. As soon as the vision dies, the coin becomes exchange dust. That's where it lives, instead of being used and improved. Finally, there ends up being so little evolution that parasites in Congress think they understand it enough to hijack it. Not on my watch.
full member
Activity: 120
Merit: 100
catcoin, up up up!
hero member
Activity: 657
Merit: 500
Just for grins, here are a couple of scattered thoughts in no apparent order.  Hopefully the pictures will make up for the lack of organization. Wink

A number of folks here, as well as a number of coin devs, are enamored with the Kimoto Gravity Well (KGW).  So far, the coins I've found that use this have fairly fast block times.  The longest is Anoncoin's 3.42 minutes.  Others, from Megacoin through Franko, have block times from 30 seconds through 2.5 minutes.  Here's Megacoin with its 2.5 minute block time:

https://coinplorer.com/Charts/Difficulty/MEC



It's pretty clear that this method allows difficulty to track the hash rate fairly well.  Let's look at another coin with a 2.5 minute block time - Phoenixcoin:



Let's skip forward in time for Phoenixcoin - here's a look at performance after the Oct, 2013 fork:




This fork is when the team there moved to an average of the 100 and 500 block SMAs, a 10% damper, a 20% limit, and a 1 block retarget.  I don't know if this can be adjusted to work for CAT, but I hope it shows that KGW is not the only way to go.
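
I don't have the Phoenixcoin source in front of me, but the general shape would be something like this sketch (the 50/50 blend of the two SMAs and applying the damper to the whole move are my assumptions, not their actual code):

Code:
 // Sketch of a dual simple-moving-average retarget: 100/500 block SMAs,
 // 10% damper, 20% limit, retargeting every block. The blend weights are
 // assumptions for illustration only.
 #include <cstdio>

 double NextDifficulty(double currentDiff, double targetSpacing,
                       double sma100, double sma500) {
     double avg = 0.5 * sma100 + 0.5 * sma500;   // assumed 50/50 blend
     double adjust = targetSpacing / avg;        // raw retarget ratio
     adjust = 1.0 + (adjust - 1.0) * 0.10;       // 10% damper on the move
     if (adjust > 1.20) adjust = 1.20;           // 20% limit, both directions
     if (adjust < 1.0 / 1.20) adjust = 1.0 / 1.20;
     return currentDiff * adjust;
 }

 int main() {
     // example: blocks coming roughly twice as fast as a 10 min target
     std::printf("next diff: %.2f\n", NextDifficulty(40.0, 600.0, 300.0, 330.0));
     return 0;
 }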

So - all you smart folks in CATlandia - thinking caps people - we need you! Wink
hero member
Activity: 657
Merit: 500
Envy - will you please tell us the assumptions you're using and/or give an example of what you're 'feeding' your model?  I'd like to understand what environment you're working in.  Thanks in advance!

Absolutely. I'm just running in Excel with a row per block, since I don't have any loops or complex logic; I can move the model to Matlab if it gets any more complex, but haven't seen any need so far.

Thank you very much!
member
Activity: 70
Merit: 10
Limiting improvements to what we can all understand as code is stupid. We all contribute what we can: some mine, others promote the currency, and others program the code.

Excuse my English, it is not my native language.

Translation:

Limiting our code development to the weakest link (aka those who do not understand code) is akin to dumbing ourselves down. We all contribute; let's not limit our potential by insisting it be understandable (at least in code form) to everyone. The layperson can still understand code if it is explained properly.


Code is nothing more than the expression of an idea; a talented coder can turn any idea into a function. So whether it is understandable in code is irrelevant: we must take the best idea and transform it into code, assuming that is possible. If it isn't, we should be able to explain why.
member
Activity: 196
Merit: 10
Limiting improvements to what we can all understand as code is stupid. We all contribute what we can: some mine, others promote the currency, and others program the code.

Excuse my English, it is not my native language.
hero member
Activity: 657
Merit: 500
You know, everyone, as much as I admire open source, distributed, global teams working together for the betterment of all, what we really need right now is to sit around a huge table with pints all around, and bets on the outcome of a programmers VS non-programmers dart match.  LOL

Enjoy your evening!

sr. member
Activity: 364
Merit: 250
kuroman, I'm not trying to be nasty, but I feel as if some people see this project as a piece of software with peculiar switches and knobs and they think that's where it ends.

Unfortunately, it's not. The poorly structured code is inhibiting evolution and innovation. We need something that doesn't smell like 1995.
sr. member
Activity: 364
Merit: 250
So Zerodrama, you are quoting that message because you relate to it, right? So you don't understand C...

Nope. I've edited linux modules to allow servers to be hidden behind invisible routers. I can do C/C++, PHP, Javascript, Lisp, and a few others.
This is about the rest of the community not understanding the code. You keep thinking only coders should understand the code.
Not on my watch.

As for examples and graphs, I've been in the trenches trying to help pull this code out of obscurity and into the light. All I've seen as far as Kimoto goes is the code snippet you posted. I haven't seen anything that talks about what EventHorizon and PastMass mean. If the COMMUNITY has no idea what they mean, that's no good. It doesn't matter if you or I know what it means. If the users don't know, they can't contribute. It separates them from the developers. That's no good for a coin's survival.
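
Since nobody has posted it: from what I can tell from Megacoin-derived sources, PastMass is just the number of past blocks the widening average covers, and EventHorizon is the allowed deviation band, which tightens as more blocks are included. A trimmed sketch, reconstructed from memory (verify the constants 0.7084, -1.228, and 144 against real KGW code before trusting them):

Code:
 // Annotated sketch of the KGW "event horizon" test. Returns true when the
 // widening averaging window should stop growing: the observed rate ratio
 // (target seconds / actual seconds over the window) has escaped the band.
 #include <cmath>

 bool PastWindowComplete(int PastBlocksMass, int PastBlocksMin,
                         double PastRateTargetSeconds,
                         double PastRateActualSeconds) {
     double ratio = 1.0;
     if (PastRateActualSeconds > 0 && PastRateTargetSeconds > 0)
         ratio = PastRateTargetSeconds / PastRateActualSeconds;

     // the band tightens as more blocks are included -- the "gravity well"
     double dev  = 1.0 + 0.7084 * std::pow(double(PastBlocksMass) / 144.0, -1.228);
     double fast = dev;          // upper escape bound
     double slow = 1.0 / dev;    // lower escape bound

     return PastBlocksMass >= PastBlocksMin && (ratio <= slow || ratio >= fast);
 }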
hero member
Activity: 588
Merit: 501