
Topic: [ANN][CLAM] CLAMs, Proof-Of-Chain, Proof-Of-Working-Stake, a.k.a. "Clamcoin" - page 332. (Read 1151252 times)

legendary
Activity: 2940
Merit: 1333
It's a very clever concept. Essentially you were able to reduce the amount of work involved in the proof-of-stake hashing, and to do it in a way that doesn't compromise block solving, because the difficulty adjusts to keep the statistics approximately correct.

I have always wondered why the CLAM difficulty chart moves in such smooth waves compared to other coins; I suppose this could have something to do with that. Although now I wouldn't be surprised if you clammers redesigned the difficulty adjustment code as well.

Three things:

1) When the time granularity changed from 1 second to 16 seconds, another accidental change went in at the same time which dropped a factor of 10^8 from the difficulty calculation. It made blocks 100 million times easier to solve, but gave us 16 times fewer chances to try, so the two factors combined made staking 6.25 million times easier than before. As a result everyone was staking every 16 seconds, everyone was orphaning everyone else for a few hours, the network difficulty adjusted upwards very quickly, and after 3 or 4 hours everything was back to normal. It was kind of amazing to watch the difficulty adjustment code do its thing and deal with the accidental 10^8 error. Do you have long-term difficulty charts? If so you'll easily see the time I'm talking about. You need a log scale, or it will look like the difficulty was 0 before the steep rise.

Edit: I don't see how to adjust the date range on your chart, but http://blocktree.io/charts/CLAM shows this:

[image: CLAM difficulty chart from blocktree.io]
The difficulty wasn't really 0 before November 2014, it just looks that way when you don't use a log y-axis.

2) I don't think the CLAM project is responsible for the 16 second granularity idea or code. I think it is from one of the upstream projects. I've never paid attention to any of them, but I think it was dark coin, black coin, or something like that.

3) The difficulty adjustment code was written by me specifically for CLAM, and went live at the same time as the v2 protocol and the 10^8 error. It's not ideal how it waves up and down several times per day, but it's better than the previous system of difficulty adjustment, which was far too reactive to how long the last block took to stake.
legendary
Activity: 1330
Merit: 1000
Blockchain Developer
It's a very clever concept. Essentially you were able to reduce the amount of work involved in the proof-of-stake hashing, and to do it in a way that doesn't compromise block solving, because the difficulty adjusts to keep the statistics approximately correct.

I have always wondered why the CLAM difficulty chart moves in such smooth waves compared to other coins; I suppose this could have something to do with that. Although now I wouldn't be surprised if you clammers redesigned the difficulty adjustment code as well.
legendary
Activity: 2940
Merit: 1333
Wow! Ok now that you say that I see the same 16 second mask in the check stake code. Now this all makes sense in the context dooglus put it in. This is a pretty cool way to hash for proof of stake. Now I know why I was so lost :/

The nSearchInterval variable used to (before v2 of the protocol) be set to how many seconds since the last time we tried hashing, and the hashing function would then iterate through all those missed seconds.

In v2 we don't want it doing that, so we set the variable to 1 so we get a single iteration of the loop.

The 16 second mask thing makes it 16 times easier to find a block than it was before, since we get to try 16 times less often, but still want the same frequency of blocks being solved. This makes it 16 times more likely that two peers will find a block at the same time, and so we end up with 16 times as many orphans as before. Other than that it's cool though.

Oh, and as for SuperClam's "dooglus can verify and expound on this; but", I don't need to. She pretty much nailed it.
legendary
Activity: 1330
Merit: 1000
Blockchain Developer
Awesome work with the stake data chilly2k - we're happy to have you as part of the family Smiley
Deb calls it "the clamily".

Grin Grin Grin



Interesting about the mask, that is the part of the code I was not processing correctly. Ok so I think we have the wait 16 seconds part of it cleared.
No, because look at the lines before that:
Quote
   if (nSearchTime > nLastCoinStakeSearchTime)
    {
        int64_t nSearchInterval = IsProtocolV2(nBestHeight+1) ? 1 : nSearchTime - nLastCoinStakeSearchTime;
It only gets into that block if nSearchTime is greater than the last search time. And nSearchTime has already been rounded down to a multiple of 16 seconds. So it will only get into this block at most once per 16 seconds.
After 16 seconds have passed this code continues, and nSearchInterval becomes 1. I still can't follow exactly where it iterates 16 different timestamps in one hashing session? I might have to add a couple of log prints with timestamps into the code to visualize this better.

dooglus can verify and expound on this; but, the concept is that it doesn't iterate all 16 one-second-increment timestamps.  In fact, if you look at the timestamps of on-chain blocks (approved by network consensus) - all of them fall on 16-second boundaries.

CheckCoinStakeTimestamp() mandates that block timestamps must follow the 16 second mask.
You could hash all 16 seconds if you pleased, but peers would reject the block as a timestamp violation.

Wow! Ok now that you say that I see the same 16 second mask in the check stake code. Now this all makes sense in the context dooglus put it in. This is a pretty cool way to hash for proof of stake. Now I know why I was so lost :/
hero member
Activity: 784
Merit: 1002
CLAM Developer
Awesome work with the stake data chilly2k - we're happy to have you as part of the family Smiley
Deb calls it "the clamily".

Grin Grin Grin



Interesting about the mask, that is the part of the code I was not processing correctly. Ok so I think we have the wait 16 seconds part of it cleared.
No, because look at the lines before that:
Quote
   if (nSearchTime > nLastCoinStakeSearchTime)
    {
        int64_t nSearchInterval = IsProtocolV2(nBestHeight+1) ? 1 : nSearchTime - nLastCoinStakeSearchTime;
It only gets into that block if nSearchTime is greater than the last search time. And nSearchTime has already been rounded down to a multiple of 16 seconds. So it will only get into this block at most once per 16 seconds.
After 16 seconds have passed this code continues, and nSearchInterval becomes 1. I still can't follow exactly where it iterates 16 different timestamps in one hashing session? I might have to add a couple of log prints with timestamps into the code to visualize this better.

dooglus can verify and expound on this; but, the concept is that it doesn't iterate all 16 one-second-increment timestamps.  In fact, if you look at the timestamps of on-chain blocks (approved by network consensus) - all of them fall on 16-second boundaries.

CheckCoinStakeTimestamp() mandates that block timestamps must follow the 16 second mask.
You could hash all 16 seconds if you pleased, but peers would reject the block as a timestamp violation.
legendary
Activity: 1330
Merit: 1000
Blockchain Developer
Interesting about the mask, that is the part of the code I was not processing correctly. Ok so I think we have the wait 16 seconds part of it cleared.



No, because look at the lines before that:

Quote
    if (nSearchTime > nLastCoinStakeSearchTime)
    {
        int64_t nSearchInterval = IsProtocolV2(nBestHeight+1) ? 1 : nSearchTime - nLastCoinStakeSearchTime;

It only gets into that block if nSearchTime is greater than the last search time. And nSearchTime has already been rounded down to a multiple of 16 seconds. So it will only get into this block at most once per 16 seconds.

After 16 seconds have passed this code continues, and nSearchInterval becomes 1. I still can't follow exactly where it iterates 16 different timestamps in one hashing session? I might have to add a couple of log prints with timestamps into the code to visualize this better.
legendary
Activity: 2940
Merit: 1333
Again, I may be picturing this incorrectly in my head, so let me explain my reading of the code and see if it lines up with yours.

Good idea. When you're specific like that it's easier to pinpoint where we're talking at cross purposes.

15 seconds is the future drift limit (which is I think what you are referring to when you say time granularity?).

No, I'm talking about kernel.h:

and the search interval is 1 second.

https://github.com/nochowderforyou/clams/blob/master/src/main.cpp#L2603
Code:
int64_t nSearchInterval = IsProtocolV2(nBestHeight+1) ? 1 : nSearchTime - nLastCoinStakeSearchTime;

No, because look at the lines before that:

I also can't find the relevant part of the code that tells it to pause looking for 16 seconds.

... and now you can! Smiley
legendary
Activity: 1330
Merit: 1000
Blockchain Developer
The JD staking wallet checks something like 30k outputs in 4 seconds.

I believe this means that you are missing 3 variations of hashes for each output per every 4 seconds?

I don't understand what you're asking, sorry.

The CLAM protocol has a time granularity of 16 seconds. Every 16 seconds it checks each of its unspent outputs to see if it can stake, then sleeps until the next 16-second "tick" of the clock.

If it takes you more than 16 seconds to check all your unspent outputs then you'll be missing out on staking opportunities, because you'll be falling further and further behind. JD is able to do all its staking work in 4 seconds and have a 12 second snooze before the next opportunity comes up.

I don't understand the bit about "missing 3 variations of hashes". There's only one hash per output per 16 seconds.

Now if JD was able to check all of its outputs in 0.001 seconds then it would probably have a lower orphan rate, since it is kind of a race. Maybe that's what you're referring to?

Again, I may be picturing this incorrectly in my head, so let me explain my reading of the code and see if it lines up with yours.

15 seconds is the future drift limit (which is I think what you are referring to when you say time granularity?).

https://github.com/nochowderforyou/clams/blob/master/src/main.h#L74
Code:
inline int64_t FutureDriftV2(int64_t nTime) { return nTime + 15; }

and the search interval is 1 second.

https://github.com/nochowderforyou/clams/blob/master/src/main.cpp#L2603
Code:
int64_t nSearchInterval = IsProtocolV2(nBestHeight+1) ? 1 : nSearchTime - nLastCoinStakeSearchTime;

The stake hashing is iterated by the search interval not the future drift.
https://github.com/nochowderforyou/clams/blob/master/src/wallet.cpp#L2108
Code:
for (unsigned int n=0; n<min(nSearchInterval,(int64_t)nMaxStakeSearchInterval) && !fShutdown && pindexPrev == pindexBest; n++)
The search interval appears to be one second after protocol 2, so that is why I am led to the conclusion that if it takes 4 seconds to hash a set of outputs for one timestamp, then you will have skipped 3 timestamps that you could have tested. And of course this would only apply to very very large groups of outputs; for normal stakers this is a non-issue.

I also can't find the relevant part of the code that tells it to pause looking for 16 seconds. I may be wrong, but I think there is no pause (a simple monitoring of the CPU consumption could give us a clue too). All I see is the normal minersleep parameter that is set at 500 milliseconds.

Again I could be totally wrong, and completely missing some of those parts of the code that you mention. The staking code is so all over the place that it's hard to track down everything.

legendary
Activity: 2940
Merit: 1333
4 Seconds? Is that needed for finding a block each time?

That's how long it takes to search for staking opportunities each 16 seconds. Most times you check you find that there isn't any such opportunity. There's only one per minute globally, so even with 100% of the staking weight you'd have around a 1 in 4 chance of any particular search being successful. When we do find a block, it will be at a random point through that 4 second search, and so on average a successful search takes 2 seconds ((0 + 4) / 2).

Given a difficulty leading to 16 seconds, these 4 seconds would be huge. I mean the difference between orphaning blocks and getting your block orphaned. Or isn't it needed to calculate a block?

I don't think you're understanding still. "Difficulty leading to 16 seconds" isn't what's happening. The difficulty adjusts how hard it is to stake a block. It adjusts such that we find around one block per minute. But blocks can only be found when the time (in seconds since some date in 1970) is a multiple of 16. That only happens every 16 seconds. That's fixed by the protocol (until the developers change the protocol again, of course), and isn't related to the difficulty.

So the 4 seconds are real, and you mean it goes through each of those outputs (shouldn't they be inputs as long as they aren't sent out?) and checks whether it finds a hash? Does the output amount matter here? I mean, you described rounding the amount of CLAMs down to an integer. Does this apply to the address these outputs are on, i.e. a big amount of CLAMs, or only to the single output? If the latter, then one could get an advantage by sending the CLAMs in amounts of 1 to a new address. The chance to find a block would be maximized?

It's a real 4 seconds. 4 seconds out of every 16 seconds the CPU on one core of JD's staking wallet server is pegged at 100%. They're outputs of the transactions that created them. They're not the inputs of any transactions yet, or they wouldn't be unspent. They're potential inputs, if you like, but actual outputs. When they stake they become inputs of the staking transaction.

The rounding down to an integer was related to how the age of an output affected its staking power in an older version of CLAM. I think it used to multiply the value by the age and round down to an integer. I don't think it does that rounding any more, or the multiplication by the age. These days the staking power (called the "weight") is just the same as the value in CLAMs. Each output is considered separately. It doesn't matter if you have lots of 1 CLAMs outputs on a single address, or in lots of different addresses. They each get their own individual chance of staking, with a probability proportional to their own individual value in CLAMs.

There is a benefit to splitting your outputs up into several smaller outputs. Suppose you have 1000 CLAMs. It will stake very quickly, and become 1001 CLAMs. But then it will take 8 hours to mature before it can stake again. The best you could hope for is that it will stake 3 times per day (since that's how many 8 hour maturation periods you can fit into a day).

If instead you split it into 1000 outputs of size 1, each one tries to stake independently. Each one has a 1000 times lower chance of staking than the 1000 CLAM output did, but there are 1000 of them, so it takes roughly the same time for one of them to stake, and turn from 1 CLAM to 2 CLAMs. Then, however, only the 2 CLAM output is frozen for 8 hours while it matures. The other 999 CLAMs continue trying to stake. So you have saved yourself an 8 hour wait for 99.9% of your value.

If you split your value up into *too* many outputs, you'll have so much hashing to do every 16 seconds that you won't be able to get through it all. And if you ever want to spend your outputs, having them split up into millions of tiny pieces makes the transaction which spends them very big (and so very expensive in tx fees).

So there's a tradeoff - split enough, but not too much.
legendary
Activity: 2940
Merit: 1333
Awesome work with the stake data chilly2k - we're happy to have you as part of the family Smiley

Deb calls it "the clamily".
legendary
Activity: 2940
Merit: 1333
The JD staking wallet checks something like 30k outputs in 4 seconds.

I believe this means that you are missing 3 variations of hashes for each output per every 4 seconds?

I don't understand what you're asking, sorry.

The CLAM protocol has a time granularity of 16 seconds. Every 16 seconds it checks each of its unspent outputs to see if it can stake, then sleeps until the next 16-second "tick" of the clock.

If it takes you more than 16 seconds to check all your unspent outputs then you'll be missing out on staking opportunities, because you'll be falling further and further behind. JD is able to do all its staking work in 4 seconds and have a 12 second snooze before the next opportunity comes up.

I don't understand the bit about "missing 3 variations of hashes". There's only one hash per output per 16 seconds.

Now if JD was able to check all of its outputs in 0.001 seconds then it would probably have a lower orphan rate, since it is kind of a race. Maybe that's what you're referring to?
legendary
Activity: 1330
Merit: 1000
Blockchain Developer
The JD staking wallet checks something like 30k outputs in 4 seconds.


I believe this means that you are missing 3 variations of hashes for each output per every 4 seconds?
hero member
Activity: 784
Merit: 1002
CLAM Developer
     Today is my 1 year Clammaversary.  One year ago today I got my first Clam stake.    This was pre-lottery and pre-fixed reward.  It was for a whopping 0.00657097 clams. 
      Thanks to all for making the past year an enjoyable one, and wishing all the best for the next year.   

Awesome work with the stake data chilly2k - we're happy to have you as part of the family Smiley
legendary
Activity: 1007
Merit: 1000

     Today is my 1 year Clammaversary.  One year ago today I got my first Clam stake.    This was pre-lottery and pre-fixed reward.  It was for a whopping 0.00657097 clams.  

      Thanks to all for making the past year an enjoyable one, and wishing all the best for the next year.  
legendary
Activity: 1007
Merit: 1000

So to effectively stake I should invest my CLAMs at Just-Dice? Is there any real downside to that?
I had thought they would constantly stake, but I only have my computer on for a few hours a day, so I guess it isn't going to be worth it in the clam wallet.

I updated to the new wallet and now it says I should stake in 1 day! A big difference!

If you invest at Just-dice you will make more money, but you will be contributing to CLAM being centralized.

   I had to see how this played out. 

from 6/22 - 6/29 I staked 41.0029 clams  the starting size was about 2450

So 7 days at an average of 5.8576 per day, which gives about 0.239% per day. 

I always see about 0.2% per day just staking on JD, which would be 4.9 clams per day. 

So in my case staking alone is much better: an extra 6.7 clams in 7 days, or 350 clams a year. 

This was a small sample and your mileage may vary.  But it does show that staking solo can be done and can compete with JD. 

There were 2 orphans in the sample.   And I believe one was when I was staking a new block of 250 clams and splitting the output into blocks of 5.

I think that took 2 tries. 


Try this again. 

Dates                 Start Balance   Stakes    Avg/Day   Percentage   Orphans

6/22 - 6/28 (7 days)  2456            37.0027   5.2861    2.15%        2
6/29 - 7/5            2493            26.0005   3.7144    1.49%        2


   I'll try to keep this up for another 2 weeks, to see how variability comes into play.   My stakes have been all over the board, from a low of 2 per day to 8 per day.  So over time we should see how it levels out. 


legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
*lol* Sounds like a currency that rewards you for NOT using it as a currency. If I had read that without knowing CLAMs, I would have thought this was a scamcoin and the creators did it to get rich. Tongue

Maybe you misunderstood. Staking destroyed age, setting it back to zero. So it's not like old coins keep getting old and staking at the same time. You choose: stake, or gather age. The idea seemed to be that you could load up your wallet once a month and collect all the built up staking that had accumulated due to the age. But when staking is meant to secure the network surely you don't want people only doing it once a month. Hence the abolishment of 'age'...

Oh, that makes more sense then.


Thanks for the explanation. So the sets of parameters for each second are fixed. You can't try different hashes in one second. Or 16 seconds, now. Would one have an advantage when calculating hashes in advance? I guess so, since finding hashes for the same second should be hard. I mean, where do you take the second from? Your second could be some seconds behind the seconds of other miners, so you would be late all the time, never finding a block.

All the inputs are fixed. You can't increase your chances of finding a 'good' hash by throwing more CPU at it. You could calculate hashes somewhat in advance (though I think some of the inputs depend on recent blocks, so not too far in advance) but hashing doesn't take long anyway. The JD staking wallet checks something like 30k outputs in 4 seconds.

4 seconds? Is that needed for finding a block each time? Given a difficulty leading to 16 seconds, these 4 seconds would be huge. I mean the difference between orphaning blocks and getting your block orphaned. Or isn't it needed to calculate a block?


src/kernel.cpp says this:

Code:
// Stake Modifier (hash modifier of proof-of-stake):
// The purpose of stake modifier is to prevent a txout (coin) owner from
// computing future proof-of-stake generated by this txout at the time
// of transaction confirmation. To meet kernel protocol, the txout
// must hash with a future stake modifier to generate the proof.
// Stake modifier consists of bits each of which is contributed from a
// selected block of a given block group in the past.
// The selection of a block is based on a hash of the block's proof-hash and
// the previous stake modifier.
// Stake modifier is recomputed at a fixed time interval instead of every
// block. This is to make it difficult for an attacker to gain control of
// additional bits in the stake modifier, even after generating a chain of
// blocks.

and this:

Code:
// ppcoin kernel protocol
// coinstake must meet hash target according to the protocol:
// kernel (input 0) must meet the formula
//     hash(nStakeModifier + txPrev.block.nTime + txPrev.offset + txPrev.nTime + txPrev.vout.n + nTime) < bnTarget * nCoinDayWeight
// this ensures that the chance of getting a coinstake is proportional to the
// amount of coin age one owns.
// The reason this hash is chosen is the following:
//   nStakeModifier: scrambles computation to make it very difficult to precompute
//                  future proof-of-stake at the time of the coin's confirmation
//   txPrev.block.nTime: prevent nodes from guessing a good timestamp to
//                       generate transaction for future advantage
//   txPrev.offset: offset of txPrev inside block, to reduce the chance of
//                  nodes generating coinstake at the same time
//   txPrev.nTime: reduce the chance of nodes generating coinstake at the same
//                 time
//   txPrev.vout.n: output number of txPrev, to reduce the chance of nodes
//                  generating coinstake at the same time
//   block/tx hash should not be used here as they can be generated in vast
//   quantities so as to generate blocks faster, degrading the system back into
//   a proof-of-work situation.

I've never tried to understand it fully, but it looks like the author went to some lengths to make it hard or impossible to game.

And you said the wallet checks each of your addresses. Does this mean you can have different addresses in your wallet and you could try out each one if it creates a correct hash?

I did? I didn't mean to. It checks each *output*. You can have multiple unspent outputs per address. JD keeps most of its value split into 30k separate outputs at a single address. The wallet loops through all the unspent outputs it controls, does a hash for each of them, trying to find one that hashes low enough to stake a block.

Then I misunderstood your sentence.

So the 4 seconds are real, and you mean it goes through each of those outputs (shouldn't they be inputs as long as they aren't sent out?) and checks whether it finds a hash? Does the output amount matter here? I mean, you described rounding the amount of CLAMs down to an integer. Does this apply to the address these outputs are on, i.e. a big amount of CLAMs, or only to the single output? If the latter, then one could get an advantage by sending the CLAMs in amounts of 1 to a new address. The chance to find a block would be maximized?
legendary
Activity: 2940
Merit: 1333
*lol* Sounds like a currency that rewards you for NOT using it as a currency. If I had read that without knowing CLAMs, I would have thought this was a scamcoin and the creators did it to get rich. Tongue

Maybe you misunderstood. Staking destroyed age, setting it back to zero. So it's not like old coins keep getting old and staking at the same time. You choose: stake, or gather age. The idea seemed to be that you could load up your wallet once a month and collect all the built up staking that had accumulated due to the age. But when staking is meant to secure the network surely you don't want people only doing it once a month. Hence the abolishment of 'age'...

Thanks for the explanation. So the sets of parameters for each second are fixed. You can't try different hashes in one second. Or 16 seconds, now. Would one have an advantage when calculating hashes in advance? I guess so, since finding hashes for the same second should be hard. I mean, where do you take the second from? Your second could be some seconds behind the seconds of other miners, so you would be late all the time, never finding a block.

All the inputs are fixed. You can't increase your chances of finding a 'good' hash by throwing more CPU at it. You could calculate hashes somewhat in advance (though I think some of the inputs depend on recent blocks, so not too far in advance) but hashing doesn't take long anyway. The JD staking wallet checks something like 30k outputs in 4 seconds.

src/kernel.cpp says this:

Code:
// Stake Modifier (hash modifier of proof-of-stake):
// The purpose of stake modifier is to prevent a txout (coin) owner from
// computing future proof-of-stake generated by this txout at the time
// of transaction confirmation. To meet kernel protocol, the txout
// must hash with a future stake modifier to generate the proof.
// Stake modifier consists of bits each of which is contributed from a
// selected block of a given block group in the past.
// The selection of a block is based on a hash of the block's proof-hash and
// the previous stake modifier.
// Stake modifier is recomputed at a fixed time interval instead of every
// block. This is to make it difficult for an attacker to gain control of
// additional bits in the stake modifier, even after generating a chain of
// blocks.

and this:

Code:
// ppcoin kernel protocol
// coinstake must meet hash target according to the protocol:
// kernel (input 0) must meet the formula
//     hash(nStakeModifier + txPrev.block.nTime + txPrev.offset + txPrev.nTime + txPrev.vout.n + nTime) < bnTarget * nCoinDayWeight
// this ensures that the chance of getting a coinstake is proportional to the
// amount of coin age one owns.
// The reason this hash is chosen is the following:
//   nStakeModifier: scrambles computation to make it very difficult to precompute
//                  future proof-of-stake at the time of the coin's confirmation
//   txPrev.block.nTime: prevent nodes from guessing a good timestamp to
//                       generate transaction for future advantage
//   txPrev.offset: offset of txPrev inside block, to reduce the chance of
//                  nodes generating coinstake at the same time
//   txPrev.nTime: reduce the chance of nodes generating coinstake at the same
//                 time
//   txPrev.vout.n: output number of txPrev, to reduce the chance of nodes
//                  generating coinstake at the same time
//   block/tx hash should not be used here as they can be generated in vast
//   quantities so as to generate blocks faster, degrading the system back into
//   a proof-of-work situation.

I've never tried to understand it fully, but it looks like the author went to some lengths to make it hard or impossible to game.

And you said the wallet checks each of your addresses. Does this mean you can have different addresses in your wallet and you could try out each one if it creates a correct hash?

I did? I didn't mean to. It checks each *output*. You can have multiple unspent outputs per address. JD keeps most of its value split into 30k separate outputs at a single address. The wallet loops through all the unspent outputs it controls, does a hash for each of them, trying to find one that hashes low enough to stake a block.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
Though I wonder how blocks are found at all. It sounds like calculating as with Bitcoin, but that's not the case. I heard it's random, but that doesn't account for how orphaned blocks can happen. You don't need to explain if you think I should investigate myself. Wink

It's not particularly easy to find out.

When I first discovered CLAM, I couldn't find out how staking worked, so I read the source code and summarised it like this:

you work out the "clam days" of your outputs. If you have 0.13 CLAM that hasn't moved for 3 days, you have 0.39 "clam days" (multiply value by age). I think that 0.39 is then rounded down to an integer, which might be your problem, since it will go to 0 for you. But suppose you had 13 CLAM that hadn't moved for 3 days.  That's 39 clam days. That gets multiplied by about 4000 (depending on the current difficulty). So you get 39*4000 = 156,000. Then every second your client hashes a bunch of stuff and gets an effectively random number between 0 and 4.3 billion. If the number is less than your 156,000 then you get to stake. 156,000 is about 27,500 times smaller than 4.3 billion, so you get to stake about once per 27,500 seconds (458 minutes, 7.5 hours).

So 13 CLAM that's 3 days old stakes every 7.5 hours or so. As it gets older, its chance of staking increases.

And I think your 0.13 CLAM needs to be 1/0.13 = 7.7 days old before it gets over 1 "clam day", and so even has a chance of staking, though I might be wrong on that point.

*lol* Sounds like a currency that rewards you for NOT using it as a currency. If I had read that without knowing CLAMs, I would have thought this was a scamcoin and the creators did it to get rich. Tongue

It has changed since then. There is no longer the concept of "age" - the weight of an output is simply its size (once it has matured and not been involved in a transaction for 4 hours). And the "every second" changed to "every 16 seconds". And the difficulty became about a million times easier.

But basically, every 16 seconds your client looks at each of your unspent outputs, finds the ones which are mature and haven't moved in the last 4 hours, and hashes a bunch of information together (including the current time, the txid, etc.). If the hash is smaller than the current network-wide target times the value of the output then that output gets to stake a block. Multiplying the target by the size of the output makes the ease of staking proportional to the size of the output.

It's "random" in the same way that Just-Dice rolls are random: it isn't, but it appears to be due to the nature of hashing. Imagine hashing "abc123"+current_time_in_seconds with sha256 until you got a number less than a million as the hash result. There's a particular time in the future when that will happen, but without trying it for every current_time_in_seconds you can't predict when it will happen.

Thanks for the explanation. So the sets of parameters for each second are fixed. You can't try different hashes in one second. Or 16 seconds, now. Would one have an advantage when calculating hashes in advance? I guess so, since finding hashes for the same second should be hard. I mean, where do you take the second from? Your second could be some seconds behind the seconds of other miners, so you would be late all the time, never finding a block.

And you said the wallet checks each of your addresses. Does this mean you can have different addresses in your wallet and you could try out each one if it creates a correct hash?
legendary
Activity: 2940
Merit: 1333
I really hope their update isn't done.

Yeah, you don't just leave dead links like that. I'm sure it's just an oversight.
legendary
Activity: 4004
Merit: 1250
Owner at AltQuick.com

I would bet they made a partnership with Shapeshifter.io or one of the guys involved with Shapeshift, because AltQuick's new coins are exactly what Shapeshift has listed.


Confirmation of my suspicion.

https://twitter.com/BayAreaCoins/status/617480509359132672