Topic: [ANN][CLAM] CLAMs, Proof-Of-Chain, Proof-Of-Working-Stake, a.k.a. "Clamcoin" - page 336. (Read 1151252 times)

legendary
Activity: 2940
Merit: 1333
One thing that just popped into my head. Wouldn't a -rescan reject the previous corrupt block? That's the first thing I tried after a couple of restarts.

If the blk0001.dat file was OK but the txleveldb/ files contained some corruption, then -rescan wouldn't see the corruption. As I understand it, -rescan recreates the txleveldb/ files from the blk0001.dat file.
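
For illustration, that re-read loop is roughly the following - a minimal sketch patterned on the LoadExternalBlockFile() approach used by Bitcoin-era clients, not the actual CLAM source (the magic bytes are placeholders, not CLAM's real network magic):

Code:
#include <cstdio>
#include <cstring>
#include <cstdint>
#include <vector>

static const unsigned char kMagic[4] = {0xde, 0xad, 0xbe, 0xef}; // placeholder value

// blk0001.dat (and bootstrap.dat) is just a stream of
// [magic][length][serialized block] records, so a rescan can rebuild
// txleveldb/ by replaying it - but any corruption already baked into
// blk0001.dat gets faithfully replayed too.
bool ReplayBlockFile(FILE* f)
{
    unsigned char magic[4];
    uint32_t nSize = 0;
    while (fread(magic, 1, 4, f) == 4) {
        if (memcmp(magic, kMagic, 4) != 0)
            return false;                          // stream out of sync
        if (fread(&nSize, 1, 4, f) != 4)
            return false;                          // truncated record
        std::vector<unsigned char> raw(nSize);
        if (nSize != 0 && fread(raw.data(), 1, nSize, f) != nSize)
            return false;                          // truncated block
        // Deserialize `raw` into a block and hand it to ProcessBlock(),
        // which re-validates it and rewrites the tx index entries.
    }
    return true;
}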

I just tried running the Windows binary of v1.4.12 but didn't have much luck. I put the bootstrap.dat file in place and it started syncing, but then this happens after it has loaded a few blocks:

[screenshot: Windows "clamd has encountered a problem" error dialog]

Is that something that happens a lot on Windows still? It's part of why I stopped using Windows in the first place.

I guess I'll try syncing from bootstrap.dat on Linux a few times and see how well that works.

Edit: I'm running 4 clamd processes, all syncing from bootstrap.dat into different datadirs. None of them have "encountered a problem" yet:

Quote
==> /home/clam/.clam.bs3/debug.log <==
SetBestChain: new best=9109e501431fb62e64dfce26112a22703e5feecb56501232bd4a5e8bf09c63a2  height=12200  trust=14866939091  blocktrust=1130602  date=05/28/14 02:23:26
==> /home/clam/.clam.bs1/debug.log <==
SetBestChain: new best=10a98d283d1952cf2ef5cf073faf1b9ec955f785511634fc202b5af0e9ed4bff  height=19200  trust=22450387823  blocktrust=1048577  date=06/12/14 02:06:00
==> /home/clam/.clam.bs2/debug.log <==
SetBestChain: new best=2c47724a65e02a69a103f0bbf5f7e15b17e7915b92052e529b554226972edf4f  height=14800  trust=17681301681  blocktrust=1157520  date=06/02/14 15:45:53
==> /home/clam/.clam.bs4/debug.log <==
SetBestChain: new best=28c04885cd701375d847853b814decb4bf1ef346f8b996703834f1cd0c6f080d  height=10400  trust=12685082747  blocktrust=1231051  date=05/25/14 07:15:07

I'm surprised to see that the processes are CPU bound:

Quote
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                               
16265 clam      20   0  817072  29660  11732 S  95.0  0.4   7:41.56 clamd                                                                 
16295 clam      20   0  823492  32428  11340 S  93.4  0.4   6:20.31 clamd                                                                 
16187 clam      20   0  815872  34408  11760 S  88.4  0.4   9:49.87 clamd                                                                 
16226 clam      20   0  837548  44116  26332 S  80.1  0.5   8:37.20 clamd                                                                 

I would expect the 4 of them, all reading from bootstrap.dat and writing to database files, to hit an HDD bottleneck, but no.
legendary
Activity: 1007
Merit: 1000
So even though you've been sent the new block 1679 times, it's loading the same information about the previous transaction from the disk each time. I'm guessing that it's this previous transaction that is corrupted, and so it doesn't matter how many times you are resent the new transaction, it's never going to be correct.

That's something I hadn't considered. So a block which is verified as valid is corrupted as it's being committed to storage... from that point, verification of the following block, which arrives in perfect condition from peers, will always fail.

I no longer have the corrupt files as I couldn't wait around indefinitely.

It would be handy if there was an RPC command to forget every block past a specified height (or hardened checkpoint). Or even just be able to delete one specific block.
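
Purely as a hypothetical sketch of what I mean - no such RPC exists, and the types here are stand-ins for the client's globals:

Code:
#include <stdexcept>

struct CBlockIndex { int nHeight = 0; CBlockIndex* pprev = nullptr; };

CBlockIndex* pindexBest = nullptr;   // stand-in for the client's chain tip

// Stand-in for "undo the best block": restore the tx index / coin state
// for the tip and step the tip back one block.
bool DisconnectTip()
{
    if (pindexBest == nullptr || pindexBest->pprev == nullptr)
        return false;
    pindexBest = pindexBest->pprev;
    return true;
}

// forgetblocksafter <height>: roll the chain back so that a corrupted
// block can be re-fetched from peers instead of wedging the node forever.
void ForgetBlocksAfter(int nHeight)
{
    if (pindexBest == nullptr || nHeight < 0 || nHeight > pindexBest->nHeight)
        throw std::runtime_error("forgetblocksafter: bad height");
    while (pindexBest->nHeight > nHeight)
        if (!DisconnectTip())
            throw std::runtime_error("forgetblocksafter: disconnect failed");
}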

Along these lines, I seem to be getting a lot of clients connecting to me that are not in sync. They end up killing my bandwidth, and I have to stop/start the client.
Last time I checked, 8 out of my 40 connections were nowhere near the current block height. I'm going to start writing down the connections and their initial block heights before I recycle, and see if there is a pattern.

Otherwise, is there a setting to limit the number of new connections that still need to sync?
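
Something like this hypothetical sketch is what I have in mind - nothing like it exists in the client as far as I know, and all names and thresholds are made up:

Code:
#include <vector>

struct CNode {
    int  nStartingHeight = 0;   // height the peer reported at handshake
    bool fInbound        = false;
    bool fDisconnect     = false;
};

const int MAX_SYNCING_INBOUND = 4;     // made-up knob
const int SYNC_LAG_THRESHOLD  = 1000;  // blocks behind tip to count as "still syncing"

void LimitSyncingPeers(std::vector<CNode*>& vNodes, int nBestHeight)
{
    int nSyncing = 0;
    for (CNode* pnode : vNodes) {
        if (!pnode->fInbound)
            continue;
        if (nBestHeight - pnode->nStartingHeight > SYNC_LAG_THRESHOLD
            && ++nSyncing > MAX_SYNCING_INBOUND)
            pnode->fDisconnect = true;  // shed lagging peers beyond the cap
    }
}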
legendary
Activity: 1512
Merit: 1057
SpacePirate.io
Clam friends! ShapeShift has re-added Clams to our instant altcoin exchange! Sorry for the downtime. Instantly exchange Clams with 35+ altcoins today with ShapeShift.io


Shows as "Unknown exchange pair" this morning... run out of CLAM?
legendary
Activity: 2268
Merit: 1092
So even though you've been sent the new block 1679 times, it's loading the same information about the previous transaction from the disk each time. I'm guessing that it's this previous transaction that is corrupted, and so it doesn't matter how many times you are resent the new transaction, it's never going to be correct.

That's something I hadn't considered. So a block which is verified as valid is corrupted as it's being committed to storage... from that point, verification of the following block, which arrives in perfect condition from peers, will always fail.

One thing that just popped into my head. Wouldn't a -rescan reject the previous corrupt block? That's the first thing I tried after a couple of restarts.
legendary
Activity: 2268
Merit: 1092
So even though you've been sent the new block 1679 times, it's loading the same information about the previous transaction from the disk each time. I'm guessing that it's this previous transaction that is corrupted, and so it doesn't matter how many times you are resent the new transaction, it's never going to be correct.

That's something I hadn't considered. So a block which is verified as valid is corrupted as it's being committed to storage... from that point, verification of the following block, which arrives in perfect condition from peers, will always fail.

I no longer have the corrupt files as I couldn't wait around indefinitely.

It would be handy if there was an RPC command to forget every block past a specified height (or hardened checkpoint). Or even just be able to delete one specific block.
legendary
Activity: 2940
Merit: 1333
Still having problems with ORPHAN BLOCK 751 Sad

I think I've figured out what has happened this time. The client accepted block 516305, but seems to have completely ignored block 516306... and only that block. Subsequent blocks were received, but have been marked as orphans, since there is the gap between 516305 and 516307.
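
To illustrate why a single missing block strands everything after it, here's a much-simplified sketch of the orphan handling in ProcessBlock() - an illustrative stand-in, not the literal client code:

Code:
#include <map>
#include <string>

std::map<std::string, int> mapBlockIndex;            // block hash -> height (stand-in)
std::multimap<std::string, std::string> mapOrphans;  // prev hash -> orphan block hash

bool ProcessBlock(const std::string& hash, const std::string& hashPrev)
{
    if (mapBlockIndex.count(hashPrev) == 0) {
        // "ProcessBlock: ORPHAN BLOCK n, prev=..." - park it until the parent arrives
        mapOrphans.insert({hashPrev, hash});
        return false;
    }
    mapBlockIndex[hash] = mapBlockIndex[hashPrev] + 1;  // "ProcessBlock: ACCEPTED"

    // Any orphans that were waiting on this block can now be connected too.
    auto range = mapOrphans.equal_range(hash);
    for (auto it = range.first; it != range.second; ++it)
        ProcessBlock(it->second, hash);
    mapOrphans.erase(hash);
    return true;
}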

I ended up restarting with a virgin db and a bootstrap.dat, and the same thing happened again, but almost immediately...

ProcessBlock: ORPHAN BLOCK 394, prev=150bbc3b26c30450c72982a4ddd76849f27772f789d4b5547cae84deb35c6916
SetBestChain: new best=ee238d43767cae1369976b95d725a0fefcb43c38eb472fab85bb4518ced9f5e8  height=56532  trust=181383289202  blocktrust=1783141  date=08/05/14 12:33:40
ProcessBlock: ACCEPTED
ProcessBlock: ORPHAN BLOCK 395, prev=1f04a53561e711ed5db2bd3d161a038aa7f537ec203cac8097dbcbe71ae60d21
ProcessBlock: ORPHAN BLOCK 396, prev=0c86aef87def03c22d332206bfbc3076b7205d24fe96462ce46fd1cdbecff6cc
ProcessBlock: ORPHAN BLOCK 397, prev=d7ab32165d83bbdb17706272736e8bffff63dfcd85e497fd30900137d247e43e
ERROR: CheckProofOfStake() : VerifySignature failed on coinstake 010c7653afd5e0d62dee0b077285749ac5e6bbd2e35a6091c333dee69778d4b9


The client is stuck at block 56532; the rejected transaction 010c7653afd5e0d62dee0b077285749ac5e6bbd2e35a6091c333dee69778d4b9 is from block 56533, so it never accepts that block.

Given that it's not failing at a consistent spot, I'm starting to wonder if there's some obscure software bug on my platform (it's an Odroid C1 with an ARM CPU, so probably not so well tested) or maybe a hardware issue.

It's important to note that transaction 010c7653afd5e0d62dee0b077285749ac5e6bbd2e35a6091c333dee69778d4b9 is repeatedly rejected (1679 times in this instance). Is this cached locally, or is it rejecting a "fresh" block from a peer each time? This could help determine whether it's a random bit flip that gets committed to disk, dooming that block forever, or whether there's some other reason it is repeatedly failing verification.

Did you ever get this resolved?

The code that produces that error message is the following:

Quote
    if (!txPrev.ReadFromDisk(txdb, txin.prevout, txindex))
        return tx.DoS(1, error("CheckProofOfStake() : INFO: read txPrev failed"));  // previous transaction not in main chain, may occur during initial download

    // Verify signature
    if (!VerifySignature(txPrev, tx, 0, SCRIPT_VERIFY_NONE, 0, 0, 0))
        return tx.DoS(100, error("CheckProofOfStake() : VerifySignature failed on coinstake %s", tx.GetHash().ToString()));

i.e. to verify the signature on the new block, it loads information about the transaction which created the input to the new block, and checks the signature on the new block against the script specified in that old transaction.

So even though you've been sent the new block 1679 times, it's loading the same information about the previous transaction from the disk each time. I'm guessing that it's this previous transaction that is corrupted, and so it doesn't matter how many times you are resent the new transaction, it's never going to be correct.
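
As a toy model of why resending can never help - the signature commits to the genuine script, so checking it against a corrupted on-disk copy fails on every retry, no matter how many pristine copies of the new block your peers send (illustrative only, nothing like the client's real crypto):

Code:
#include <functional>
#include <iostream>
#include <string>

// Toy "signature": a hash committing to the previous output's script.
std::size_t Sign(const std::string& scriptPubKey)
{
    return std::hash<std::string>{}(scriptPubKey);
}

bool VerifySignature(std::size_t sig, const std::string& storedScript)
{
    return sig == std::hash<std::string>{}(storedScript);
}

int main()
{
    std::string script = "OP_DUP OP_HASH160 <pubkeyhash> OP_EQUALVERIFY OP_CHECKSIG";
    std::size_t sig = Sign(script);        // what the staker actually signed

    std::string corrupted = script;
    corrupted[3] ^= 0x01;                  // a single flipped bit on disk

    std::cout << VerifySignature(sig, script)    << "\n";  // 1: pristine copy verifies
    std::cout << VerifySignature(sig, corrupted) << "\n";  // 0: fails on every retry
}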

This is the transaction you're failing to verify the signature for.

This is the previous transaction - the one that created the 4.70xxx CLAM output that is staking in the rejected block. So I'd like to see the output of:

> getrawtransaction d4fbd093e9f895a13ccdae9ce4c050d823ed1896f5bbec088a61f999d2d3e7fc

as well as the output of:

> getrawtransaction 010c7653afd5e0d62dee0b077285749ac5e6bbd2e35a6091c333dee69778d4b9

I'll compare them with what I see, which might tell me what's going wrong.
legendary
Activity: 2940
Merit: 1333
Yup, there's invisible sell pressure because you can earn interest on JD. And a good thing about CLAM is that transactions confirm quickly.

Also, a reason the buy walls are lopsided is that someone was market making and offering liquidity on both sides of the book. I wouldn't think he put all his CLAMs up for sale, though. Maybe he has more clams to add higher up.

Also, at one point his walls were sold into all the way down past 0.005, and there was a looming sell wall.

I was market-making for a long time. I started off buying at 0.005 and selling at 0.006. When the price got stuck around 0.0054 I dropped my selling price to 0.0055, and kept trading between 0.005 and 0.0055 for a few weeks. Eventually the price broke out of that range and I just let it go. There's no point trying to hold the price in a range it doesn't want to be in. I guess eventually the price will settle down again, and I can start putting up walls in a narrow range again, but I'd like to see where that is first.

I'm guessing a lot of people are waiting to see what happens re. Scabby's 185 BTC buy-in on Friday.
hero member
Activity: 756
Merit: 500
Yup, there's invisible sell pressure because you can earn interest on JD. And a good thing about CLAM is that transactions confirm quickly.

Also, a reason the buy walls are lopsided is that someone was market making and offering liquidity on both sides of the book. I wouldn't think he put all his CLAMs up for sale, though. Maybe he has more clams to add higher up.

Also, at one point his walls were sold into all the way down past 0.005, and there was a looming sell wall.
legendary
Activity: 2940
Merit: 1333
The CLAM market is difficult to read, as so much volume occurs "out-of-band"/over-the-counter (OTC).

True that. Lots of large trades happen on an ad hoc basis in the Just-Dice chat tab. Understandably, some people don't want to leave large amounts of CLAM on the exchanges, where they lose out on the opportunity to profit from staking, and so there's little apparent sell pressure, making the market depth chart look unbalanced. I suspect that there is a lot of "invisible" sell pressure waiting for buy orders.
hero member
Activity: 784
Merit: 1002
CLAM Developer
...
Edit: Also, speaking of exchanges, I have been messing around with a 'master' depth chart. Damn, clams have good buy support.


The CLAM market is difficult to read, as so much volume occurs "out-of-band"/over-the-counter (OTC).

EDIT:
Awesome, btw Grin
legendary
Activity: 1330
Merit: 1000
Blockchain Developer
I'm not familiar with PPC, but I expect it's the same as CLAM in this respect. The client doesn't attempt to merge outputs belonging to different keys, but it will accept it when others do so. So you can unilaterally modify your client to do it, and your peers won't mind.

Poloniex don't stake their wallet. I've made large deposits and watched them sit idle for days, when they would almost certainly have staked many times in that period if the wallet was being staked.

I don't know why they don't. If they staked their wallets and gave a share of the rewards to whoever holds the CLAM balances, that would give them a competitive advantage over the other altcoin exchanges. They seem to be losing ground to cryptsy according to coinmarketcap.com/currencies/clams/#markets - when I looked a couple of weeks ago, poloniex had 98% of the recent volume, with bittrex and cryptsy having less than 1% each. Now they're down to 72%, with cryptsy at 18%. And not staking is really just throwing away free money.

Yes, standard PPC is as you expected. Honestly, the only real difference I find in blackcoin's (and therefore CLAM's) staking kernel is that weight is equal to the size of the UTXO being offered for coinstake.
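
Roughly, the weight enters the kernel check like this - a much-simplified sketch using 64-bit stand-ins for the client's 256-bit arithmetic (ignore the potential overflow; it's only to show where the weight goes):

Code:
#include <cstdint>

// 64-bit stand-ins for the client's 256-bit hash/target arithmetic.
bool StakeKernelHit(uint64_t kernelHash,  // H(stake modifier, prevout, time)
                    uint64_t target,      // derived from the stake difficulty
                    uint64_t utxoValue)   // the weight: size of the staked UTXO
{
    // A bigger UTXO scales the target up proportionally, so it stakes
    // proportionally more often. (PPC instead scales by coin-age.)
    return kernelHash < target * utxoValue;
}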

Concerning exchanges staking: Cryptsy admitted to it a while back and got quite a bit of backlash. They claim they don't any more, but a few very good blockchain detectives think there is sufficient proof that they do in fact stake their coins. If an exchange were to credit its users for such staking, I would imagine most of the backlash would disappear. For smaller coins, having too much staking from one node can create a problem; clams don't seem to have much of an issue with that one, though Wink

Edit: Also, speaking of exchanges, I have been messing around with a 'master' depth chart. Damn, clams have good buy support.
full member
Activity: 132
Merit: 100
willmathforcrypto.com
Just to confirm: I've installed 1.4.12 and now signrawtransaction works as I expected.
legendary
Activity: 2940
Merit: 1333
The difference between your implementation and standard PPC is that you add the ability to combine outputs from multiple keys held by the same wallet? Interesting idea, and of course you have to own those keys in order to sign the transaction. An exchange would then have to stake its wallet in order to take advantage of this. Some exchanges have been known to stake; as far as I know, poloniex does not stake their wallet.

You're doing some really interesting work, keep it up Smiley

I'm not familiar with PPC, but I expect it's the same as CLAM in this respect. The client doesn't attempt to merge outputs belonging to different keys, but it will accept it when others do so. So you can unilaterally modify your client to do it, and your peers won't mind.

Poloniex don't stake their wallet. I've made large deposits and watched them sit idle for days, when they would almost certainly have staked many times in that period if the wallet was being staked.

I don't know why they don't. If they staked their wallets and gave a share of the rewards to whoever holds the CLAM balances, that would give them a competitive advantage over the other altcoin exchanges. They seem to be losing ground to cryptsy according to coinmarketcap.com/currencies/clams/#markets - when I looked a couple of weeks ago, poloniex had 98% of the recent volume, with bittrex and cryptsy having less than 1% each. Now they're down to 72%, with cryptsy at 18%. And not staking is really just throwing away free money.
legendary
Activity: 1330
Merit: 1000
Blockchain Developer
The only argument I can think of would involve the bloat or lack of tx fees.
Might be a reasonable argument for some sanity limits.

That said, I think it would also incentivize services to stake.

But it's removing bloat, in the form of unspent transaction outputs. It leaves a bunch of spent dust, which presumably in the future will be prunable. Unspent outputs sticking around forever aren't prunable.

There should be fees for creating dust, not for tidying it up. Smiley

Edit: as for the lack of fees, fees are paid to whoever stakes the block - so I'd be paying myself...

The difference between your implementation and standard PPC is that you add the ability to combine outputs from multiple keys held by the same wallet? Interesting idea, and of course you have to own those keys in order to sign the transaction. An exchange would then have to stake its wallet in order to take advantage of this. Some exchanges have been known to stake; as far as I know, poloniex does not stake their wallet.

You're doing some really interesting work, keep it up Smiley
hero member
Activity: 784
Merit: 1002
CLAM Developer
The only argument I can think of would involve the bloat or lack of tx fees.
Might be a reasonable argument for some sanity limits.
That said, I think it would also incentivize services to stake.
But it's removing bloat, in the form of unspent transaction outputs. It leaves a bunch of spent dust, which presumably in the future will be prunable. Unspent outputs sticking around forever aren't prunable.
There should be fees for creating dust, not for tidying it up. Smiley
Edit: as for the lack of fees, fees are paid to whoever stakes the block - so I'd be paying myself...

The pruning argument is definitely persuasive.
legendary
Activity: 2940
Merit: 1333
The only argument I can think of would involve the bloat or lack of tx fees.
Might be a reasonable argument for some sanity limits.

That said, I think it would also incentivize services to stake.

But it's removing bloat, in the form of unspent transaction outputs. It leaves a bunch of spent dust, which presumably in the future will be prunable. Unspent outputs sticking around forever aren't prunable.

There should be fees for creating dust, not for tidying it up. Smiley

Edit: as for the lack of fees, fees are paid to whoever stakes the block - so I'd be paying myself...
hero member
Activity: 784
Merit: 1002
CLAM Developer
One thing that seriously has me bothered is the address that is sending outputs on coinstake transactions... I will need to perhaps reindex my db to deal with that.
I forgot to address that.
I created those weird staking transactions yesterday as an experiment.
[...]
So you won't find too many of those weird transactions in the blockchain so far
I just noticed I left my experimental staking code running since I posted that. It has created thousands of ugly multi-input multi-output staking transactions. So much so that the address I was using jumped to #4 in the CLAM rich-list. Ugh. I've now begun merging all those small outputs back to the main JD staking address, a bit at a time:
http://khashier.com/tx/77dd83f67379d7fd01b9665b5bd287e1e6dc63b07361b07279e2beb9ec2af134
Edit: but this whole experiment / mistake gave me an idea...
A few exchanges recently have been having problems with their CLAM wallets, where they fill up with 'dust' from faucets which costs more in transaction fees to spend than it is worth. That's not a good situation - the exchange either has to pay out of pocket to clean up the dust, or leave it polluting the blockchain in the form of unspent outputs forever.
So how about if we add an option to allow the wallet to combine all the dust into a single new output each time it stakes?
I wrote it up here: https://github.com/nochowderforyou/clams/issues/196 and pushed a commit implementing it to the repository. It'll be in the next CLAM client release unless there's a good reason that it shouldn't be.
Here it is cleaning up dust from the Just-Dice hot wallet, and getting paid to do so:
[screenshot omitted]

The only argument I can think of would involve the bloat or lack of tx fees.
Might be a reasonable argument for some sanity limits.

That said, I think it would also incentivize services to stake.
legendary
Activity: 2940
Merit: 1333
One thing that seriously has me bothered is the address that is sending outputs on coinstake transactions... I will need to perhaps reindex my db to deal with that.

I forgot to address that.

I created those weird staking transactions yesterday as an experiment.

[...]

So you won't find too many of those weird transactions in the blockchain so far

I just noticed I left my experimental staking code running since I posted that. It has created thousands of ugly multi-input multi-output staking transactions. So much so that the address I was using jumped to #4 in the CLAM rich-list. Ugh. I've now begun merging all those small outputs back to the main JD staking address, a bit at a time:

http://khashier.com/tx/77dd83f67379d7fd01b9665b5bd287e1e6dc63b07361b07279e2beb9ec2af134

Edit: but this whole experiment / mistake gave me an idea...

A few exchanges recently have been having problems with their CLAM wallets, where they fill up with 'dust' from faucets which costs more in transaction fees to spend than it is worth. That's not a good situation - the exchange either has to pay out of pocket to clean up the dust, or leave it polluting the blockchain in the form of unspent outputs forever.

So how about if we add an option to allow the wallet to combine all the dust into a single new output each time it stakes?

I wrote it up here: https://github.com/nochowderforyou/clams/issues/196 and pushed a commit implementing it to the repository. It'll be in the next CLAM client release unless there's a good reason that it shouldn't be.
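
The gist of the change, as a minimal sketch with hypothetical names (the real implementation is the commit linked from issue #196):

Code:
#include <cstdint>
#include <vector>

struct COutput { int64_t nValue = 0; /* plus outpoint, scripts, ... */ };

const int64_t DUST_THRESHOLD = 1000000;  // made-up cutoff

// Returns the extra value swept into the coinstake's output. The real
// code must also sign each added input, respect size limits, etc.
int64_t AddDustToCoinStake(const std::vector<COutput>& vWalletOutputs,
                           std::vector<COutput>& vStakeInputs)
{
    int64_t nMerged = 0;
    for (const COutput& out : vWalletOutputs) {
        if (out.nValue >= DUST_THRESHOLD)
            continue;                 // not dust; leave it alone
        vStakeInputs.push_back(out);  // spend the dust as an extra input
        nMerged += out.nValue;        // and credit it to the stake output
    }
    return nMerged;
}

Since fees are paid to whoever stakes the block, sweeping dust this way costs the staker nothing - they'd be paying the fee to themselves.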

Here it is cleaning up dust from the Just-Dice hot wallet, and getting paid to do so:

[screenshot of the dust-consolidating stake transaction omitted]

Edit: here's a better example: http://khashier.com/tx/1f879d7896af66e4d533e49d9b49bc7d10f1d0e9b792aa5bcf85c5bda608ebf8