
Topic: [ANN][YAC] YACoin ongoing development - page 166. (Read 380060 times)

sr. member
Activity: 347
Merit: 250
You're a feisty one.

This is true.  I'm not going to dispute that.  Smiley


You're correct on the date; it was during the memcoin thread, which was apparently November of last year, so I guess it's only about six months old.  Guess my memory slipped, but things move quickly these days.

Thanks for the link, I'll have a read of your memcoin thread.  In a quick glance through, I'd say we're both in agreement YACoin's starting N was far too low, at least.


Quote from: WindMaster
If the scrypt-jane library starts having trouble at certain values of N (and it looks fine up through at least N=32768 that I've tested so far), I'd think we would only need to fix the problem in the library and release a new update to the client, rather than hard fork.

It works fine above that; I've used it beyond 32768 (see that thread).

Roger that.
sr. member
Activity: 406
Merit: 250
The cryptocoin watcher
As you can see from the table, it's going to be a couple decades before N gets high enough that even today's typical desktop PC would start swapping.  Even in the year 2421, memory requirements only rise up to 256GB at Nfactor=30, and Nfactor won't go any higher than that (it's capped at 30).

Thanks! So even some existing workstations (ones with 256GB of RAM) could already handle Nfactor=30.
member
Activity: 104
Merit: 10
I did some GPU testing with high N values. The gist is: at N=8192, I couldn't get it to output valid shares any more. Therefore, with current information, it seems that GPU mining will stop on 13 Aug 2013 - 07:43:28 GMT when N goes to 8192.

a) why do you think it stopped working at 8192 (physical GPU limitation with current gen GPUs, or more a limitation of the code that can be more easily overcome)?

I don't know yet. It just stops working on its own.

Quote
b) does CPU mining still work at 8192 and beyond? (if not, then we have a problem on our hands that would at minimum necessitate a hard fork, I'd think).

Yes, in fact I tested CPU mining up to N=2^19=524288, where my puny Phenom x6 1055T got around 12 H/s.
sr. member
Activity: 347
Merit: 250
Since we are doing some Q & A, I've been wondering if an insanely high N would (painfully try to) use swap memory.

Have a look at the table of N increments I posted:
https://bitcointalksearch.org/topic/m.2162620

The memory column shows how much memory is needed to compute a hash (assuming no TMTO shortcut is in use).  Total memory required is equal to that multiplied by the number of hashes you're trying to compute simultaneously.  For CPU mining, it would be the number in that table multiplied by the number of threads you're using.

As you can see from the table, it's going to be a couple decades before N gets high enough that even today's typical desktop PC would start swapping.  Even in the year 2421, memory requirements only rise up to 256GB at Nfactor=30, and Nfactor won't go any higher than that (it's capped at 30).
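
For a rough sanity check of those numbers, here's a minimal sketch of the memory math, assuming the standard scrypt scratchpad cost of 128·r·N bytes per hash with r=1 and N = 2^(Nfactor+1) as in the table; the thread count is just an illustrative example, not anything taken from the client:

Code:
#include <cstdint>
#include <cstdio>

int main() {
    const int threads = 4;  // illustrative number of CPU mining threads
    for (int nFactor = 7; nFactor <= 30; nFactor++) {
        uint64_t N = 1ULL << (nFactor + 1);   // N = 2^(Nfactor+1), per the table
        uint64_t bytesPerHash = 128ULL * N;   // 128 * r * N bytes, r = 1, no TMTO shortcut
        printf("Nfactor=%2d  N=%-11llu  mem/hash=%12.0f kB  total for %d threads=%14.0f kB\n",
               nFactor, (unsigned long long)N,
               bytesPerHash / 1024.0, threads,
               threads * bytesPerHash / 1024.0);
    }
    return 0;
}

At Nfactor=7 that prints 32 kB per hash and at Nfactor=30 it prints 256GB per hash, matching the table's memory column.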
legendary
Activity: 1484
Merit: 1005
I benched at around 100 ms/hash per thread with a 2700k when using 8 MB.  This was more than a year ago using scrypt jane, so maybe the code has been optimized. Also note that above you hash in the hundreds of hashes per second; I said that you would hit "double digits of H/s", making the difference one order of magnitude, not two.

Solid, within a year hashes will go from MH/s to double digit H/s  Roll Eyes  Network difficulty will drop like crazy and block reward will

You were off by a pretty large amount at both ends of the numbers you said hash rate would traverse over the course of a year.  However, with a more thorough calculation:

Best interpretation (most favorable to you) of your statement is that hash rates go from 1MH/sec to 99.999 H/sec over the course of the first year, a decrease in speed of about 10000x.  My benchmark showed a decrease of speed over the course of the first year of 358.77kH/sec to 0.606kH/sec with one particular common server CPU, a decrease in speed of about 592x.

10000 vs 592.  Or we could calculate that as 10000 / 592 = 16.9 and call that a single order of magnitude (plus change).  A strict interpretation of Wikipedia's "order of magnitude" article would suggest that this is the correct way.  So, I concede that you were off by a bit more than an order of magnitude if we analyze the most favorable interpretation of your statement.


I benched at around 100 ms/hash per thread with a 2700k when using 8 MB.  This was more than a year ago using scrypt jane, so maybe the code has been optimized.

Was the scrypt-jane library available somewhere prior to September 13, 2012?  If so, I'd like to snag a copy of that earlier version you benchmarked with.  Which hashing algorithm in the scrypt-jane library was that benchmark performed with?

You're a feisty one.  You're correct on the date; it was during the memcoin thread, which was apparently November of last year, so I guess it's only about six months old.  Guess my memory slipped, but things move quickly these days.

Quote
If the scrypt-jane library starts having trouble at certain values of N (and it looks fine up through at least N=32768 that I've tested so far), I'd think we would only need to fix the problem in the library and release a new update to the client, rather than hard fork.

It works fine above that; I've used it beyond 32768 (see that thread).
sr. member
Activity: 347
Merit: 250
b) does CPU mining still work at 8192 and beyond?

See the benchmark and test run at N=32768 that I ran and posted on the previous page to dispute TacoTime's claim that hash rates will drop to double digits within a year.

(if not, then we have a problem on our hands that would at minimum necessitate a hard fork, I'd think).

If the scrypt-jane library starts having trouble at certain values of N (and it looks fine up through at least N=32768 that I've tested so far), I'd think we would only need to fix the problem in the library and release a new update to the client, rather than hard fork.
sr. member
Activity: 406
Merit: 250
The cryptocoin watcher
Since we are doing some Q & A, I've been wondering if an insanely high N would (painfully try to) use swap memory.
sr. member
Activity: 462
Merit: 250
I did some GPU testing with high N values. The gist is: at N=8192, I couldn't get it to output valid shares any more. Therefore, with current information, it seems that GPU mining will stop on 13 Aug 2013 - 07:43:28 GMT when N goes to 8192.

Very interesting news. Might there be some additional optimizations you could do to get it to output valid shares (perhaps by messing with the lookup gap or some other TMTO hacks)?

I'm wondering two things:
a) why do you think it stopped working at 8192 (physical GPU limitation with current gen GPUs, or more a limitation of the code that can be more easily overcome)?
b) does CPU mining still work at 8192 and beyond? (if not, then we have a problem on our hands that would at minimum necessitate a hard fork, I'd think).

Thanks for your insightful work on the GPU miner for YAC.
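
Out of curiosity about the lookup-gap idea: a lookup-gap style TMTO stores only every gap-th scratchpad entry and recomputes the missing ones on the fly, trading memory for extra compute. Here's a rough back-of-the-envelope sketch of that trade-off at N=8192; the work-factor estimate is my own approximation and isn't taken from any particular miner's code:

Code:
#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t N = 8192;                // the N value where GPU mining reportedly broke
    const uint64_t bytesPerHash = 128 * N;  // standard scrypt scratchpad, r = 1
    for (int gap = 1; gap <= 8; gap *= 2) {
        // Keep only every gap-th V[] entry...
        uint64_t reducedBytes = bytesPerHash / gap;
        // ...and recompute the missing ones: on average (gap-1)/2 extra BlockMix calls
        // per lookup in scrypt's second loop, i.e. roughly a 1 + (gap-1)/4 overall
        // work factor (approximate).
        double workFactor = 1.0 + (gap - 1) / 4.0;
        printf("gap=%d  mem/hash=%llu kB  approx work factor=%.2fx\n",
               gap, (unsigned long long)(reducedBytes / 1024), workFactor);
    }
    return 0;
}

So in principle memory per hash can be pushed down at the cost of extra hashing work; whether that helps the GPU kernel output valid shares again at N=8192 is a separate question.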
sr. member
Activity: 347
Merit: 250
why does n stay at 512 for only a few days?

The code that calculates the Nfactor isn't a nice smooth progression; it combines a couple of calculations that each cause Nfactor to change at different intervals.  The actual code in main.cpp looks like this, if you're curious:

Code:
int64 nChainStartTime = 1367991200;

const unsigned char minNfactor = 4;
const unsigned char maxNfactor = 30;

unsigned char GetNfactor(int64 nTimestamp) {
    int l = 0;

    if (nTimestamp <= nChainStartTime)
        return 4;

    int64 s = nTimestamp - nChainStartTime;
    while ((s >> 1) > 3) {
      l += 1;
      s >>= 1;
    }

    s &= 3;

    int n = (l * 170 + s * 25 - 2320) / 100;

    if (n < 0) n = 0;

    if (n > 255)
        printf( "GetNfactor(%lld) - something wrong(n == %d)\n", nTimestamp, n );

    unsigned char N = (unsigned char) n;
    //printf("GetNfactor: %d -> %d %d : %d / %d\n", nTimestamp - nChainStartTime, l, s, n, min(max(N, minNfactor), maxNfactor));

    return min(max(N, minNfactor), maxNfactor);
}
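
To see concretely why some Nfactor values last only a few days while others last for months, here's a quick throwaway scan over the same formula (repeated so the snippet compiles on its own); the ten-year horizon and one-hour scan step are just arbitrary choices for illustration:

Code:
#include <algorithm>
#include <cstdio>

typedef long long int64;

static const int64 nChainStartTime = 1367991200;

// Same formula as GetNfactor above, duplicated so this compiles standalone.
static unsigned char NfactorAt(int64 nTimestamp) {
    if (nTimestamp <= nChainStartTime)
        return 4;
    int l = 0;
    int64 s = nTimestamp - nChainStartTime;
    while ((s >> 1) > 3) {
        l += 1;
        s >>= 1;
    }
    s &= 3;
    int n = (l * 170 + s * 25 - 2320) / 100;
    if (n < 0) n = 0;
    return (unsigned char) std::min(std::max(n, 4), 30);
}

int main() {
    unsigned char last = NfactorAt(nChainStartTime);
    // Scan ten years of timestamps at one-hour granularity.  The steps are uneven
    // because Nfactor rises ~1.7 per doubling of chain age plus 0.25 per quarter-step
    // in between, so integer boundaries get crossed at irregular intervals.
    for (int64 t = nChainStartTime; t < nChainStartTime + 10LL * 365 * 24 * 3600; t += 3600) {
        unsigned char nf = NfactorAt(t);
        if (nf != last) {
            printf("Nfactor %2d -> %2d  (N = %llu)  %.1f days after chain start\n",
                   last, nf, (unsigned long long)(1ULL << (nf + 1)),
                   (double)(t - nChainStartTime) / 86400.0);
            last = nf;
        }
    }
    return 0;
}

That's why N=512 only hangs around for a few days while neighbouring values last much longer.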
member
Activity: 75
Merit: 10
why does n stay at 512 for only a few days?

Nfactor   N       Memory/hash   Timestamp     Takes effect
7         256     32kB          1369039776    Mon - 20 May 2013 - 08:49:36 GMT
8         512     64kB          1369826208    Wed - 29 May 2013 - 11:16:48 GMT
9         1024    128kB         1370088352    Sat - 01 Jun 2013 - 12:05:52 GMT



I appreciate all the discussion in this thread, and specifically the posts by WindMaster.
legendary
Activity: 1205
Merit: 1010
It's good that YACoin did not change the 1-week averaging window of ppcoin's adjustment. So the attack can only change difficulty by ~2.5% for the next block, which may or may not be the attacker's block.

This is not an exploit specific to the continuous difficulty adjustment of ppcoin. In Bitcoin-style step adjustment, an attacker can manipulate the timestamp of the last block of an adjustment interval. If the attacker manages to mine the last block of an adjustment interval, the entire next interval's difficulty could be lowered. That's why one should stick with a longer adjustment window.

Bitcoin has had a +/- 2 hour maximum block time offset for ages. I think it does not really matter much whether a block time is valid, as long as the miner who added such a block to the blockchain stays under 50% of network hashpower. The difficulty calculation takes block times into the equation, but one or a few skewed blocks can't distort the result much. If some miner had 50% or more of network hashpower he could ruin the difficulty calculation, but he could ruin many other things as well, including much more important stuff than difficulty.

But make no mistake, some miner (or a few of them) is playing with YAC block times on purpose. By putting the block time in the future, he wants to skew the difficulty calculation so that the resulting difficulty is lower than it should be. The difficulty calculation will "think" it took much longer than 60 seconds to find that specific block, so it will adjust the result downward a bit.
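
To put a rough number on how much a single future-dated block can move things, here's a sketch of a ppcoin-style continuous retarget of the usual exponential-moving-average form, using the 60-second spacing, one-week averaging window and 2-hour future-drift limit mentioned above. The exact constants and formula in YACoin's code may differ, so treat this as an illustration rather than a copy of the real retarget code:

Code:
#include <cstdio>

int main() {
    const double nTargetSpacing  = 60.0;            // 60-second blocks
    const double nTargetTimespan = 7 * 24 * 3600.0; // one-week averaging window
    const double nInterval = nTargetTimespan / nTargetSpacing;

    // ppcoin-style continuous retarget (per-block exponential moving average):
    //   newTarget = oldTarget * ((nInterval - 1) * targetSpacing + 2 * actualSpacing)
    //                         / ((nInterval + 1) * targetSpacing)
    // Target up == difficulty down.
    double honestSpacing = 60.0;               // block found right on schedule
    double skewedSpacing = 60.0 + 2 * 3600.0;  // timestamp pushed 2 hours into the future

    double honestFactor = ((nInterval - 1) * nTargetSpacing + 2 * honestSpacing)
                        / ((nInterval + 1) * nTargetSpacing);
    double skewedFactor = ((nInterval - 1) * nTargetSpacing + 2 * skewedSpacing)
                        / ((nInterval + 1) * nTargetSpacing);

    printf("honest block:    target x %.5f (difficulty %+.2f%%)\n",
           honestFactor, (1.0 / honestFactor - 1.0) * 100.0);
    printf("2h-future block: target x %.5f (difficulty %+.2f%%)\n",
           skewedFactor, (1.0 / skewedFactor - 1.0) * 100.0);
    return 0;
}

With these illustrative numbers a single 2-hours-in-the-future block lowers difficulty by roughly 2.3%, in the same ballpark as the ~2.5% figure above.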
legendary
Activity: 2772
Merit: 1028
Duelbits.com
YaCoin creator

I know that, but you said he used his alter ego. Smiley

So pocopoco should be someone else.

I don't have proof of that, but I'm fairly sure he is someone else. He is obviously very knowledgeable, and I don't think he just appeared out of nowhere.

Damn, I guess WindMaster might have some other nick too Smiley
hero member
Activity: 802
Merit: 1003
GCVMMWH
With all these copycats out there recently, it's the only coin that took some time and skill to make.

It's interesting and unique enough (as indicated by the speculation and debates it generates, such as those between WM and taco) that I've kept an eye on it since day one. The only thing the cookie-cutter coins have provided is a point-and-shoot solution for the pump-and-dumpers. YAC has some substance, which, along with active development, should hopefully keep it viable for the long term.
newbie
Activity: 56
Merit: 0
YaCoin creator

I know that, but you said he used his alter ego. Smiley

So pocopoco should be someone else.
legendary
Activity: 2772
Merit: 1028
Duelbits.com
YaCoin creator
newbie
Activity: 56
Merit: 0
So who is pocopoco?
legendary
Activity: 2772
Merit: 1028
Duelbits.com
pocopoco really made a kick-ass coin; it's a shame he used an alter ego for it and never looked back at it.

With all these copycats out there recently, it's the only coin that took some time and skill to make.
newbie
Activity: 56
Merit: 0
So people can GPU mine YACoin now? :O
member
Activity: 104
Merit: 10
I did some GPU testing with high N values. The gist is: at N=8192, I couldn't get it to output valid shares any more. Therefore, with current information, it seems that GPU mining will stop on 13 Aug 2013 - 07:43:28 GMT when N goes to 8192.
sr. member
Activity: 347
Merit: 250
I benched at around 100 ms/hash per thread with a 2700k when using 8 MB.  This was more than a year ago using scrypt jane, so maybe the code has been optimized. Also note that above you hash in the hundreds of hashes per second; I said that you would hit "double digits of H/s", making the difference one order of magnitude, not two.

Solid, within a year hashes will go from MH/s to double digit H/s  Roll Eyes  Network difficulty will drop like crazy and block reward will

You were off by a pretty large amount at both ends of the numbers you said hash rate would traverse over the course of a year.  However, with a more thorough calculation:

Best interpretation (most favorable to you) of your statement is that hash rates go from 1MH/sec to 99.999 H/sec over the course of the first year, a decrease in speed of about 10000x.  My benchmark showed a decrease of speed over the course of the first year of 358.77kH/sec to 0.606kH/sec with one particular common server CPU, a decrease in speed of about 592x.

10000 vs 592.  Or we could calculate that as 10000 / 592 = 16.9 and call that a single order of magnitude (plus change).  A strict interpretation of Wikipedia's "order of magnitude" article would suggest that this is the correct way.  So, I concede that you were off by a bit more than an order of magnitude if we analyze the most favorable interpretation of your statement.


I benched at around 100 ms/hash per thread with a 2700k when using 8 MB.  This was more than a year ago using scrypt jane, so maybe the code has been optimized.

Was the scrypt-jane library available somewhere prior to September 13, 2012?  If so, I'd like to snag a copy of that earlier version you benchmarked with.  Which hashing algorithm in the scrypt-jane library was that benchmark performed with?