I'm real tired and just read over the last few posts... so going to try and make sense here....
So that TXID I was seeing is the TXID of the send I'm attempting to do, correct? That's some poor code then, because the way it reads, THAT is the txid that was already spent. I screwed with the code trying to figure this out, and added prevout.hash.ToString().c_str() to the line that's printed to the debug.log. Then I ran a createrawtransaction using a TXID that absolutely, for sure, was reported in listunspent. It failed and bombed out, and in the debug.log, the new prevout.hash.ToString().c_str() I added reported the "bad" txid that it says is already spent.
So I tried this with a few other CPU-hogging wallets. With two others I had the same issue: it bombed out, reporting an input was already spent when listunspent was reporting it was available.
One of the coins I tried it on (cornerstonecoin CCX) had not flushed its debug.log in ages (the file was 200+ megs), and I ran my code against it. The first try, collecting 876 inputs into a 100kb transaction, failed. So I kept halving it, getting successes, then halving again on failures, until I had a 4-input transaction that failed, and upon failure I had the script dump the txids of the transactions it was trying to use.
I then grepped the debug.log for them. One was not in there. Two were the same txid with different vouts, and the third was unique. So I had two left. Both hit in there with this line (different txids, obviously):
WalletUpdateSpent found spent coin 43.974794CCX fc1e5b508d66bc4c7f964ff0df195ed949d6a43e82b795b78a3dff195a6d30e2
Which is exactly the line that appears when I successfully SEND using sendrawtransaction. (Or any send, for that matter.)
Unfortunately those lines are not timestamped, so I cannot see WHEN that was sent, but it appears that listunspent is for sure showing me inputs that HAVE already been spent. Unfortunately I've already restarted the CCX wallet and it wiped the debug log, so I can't estimate how far back the inputs had been spent. I'm now trying to restart litecoinplus with -rescan to see if that transaction that was hanging me up earlier is now recognized as spent.
What is a major pain in the ass is that there is no way that I'm aware of to use the RPC to see if a previous input has been spent. My script may need to just bomb out and keep retrying, getting smaller and smaller, until it can flag a small subset of txids as already spent.
Just checked it - the coin started up, and one of the txids which had shown as unspent before the -rescan is now NOT showing. Going to try this on that coin again and see if it bombs out. It for sure seems like some of these wallets are not correctly marking some inputs as spent.
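Lacking a direct "has this output been spent?" RPC, one workaround is to re-query listunspent immediately before building the raw transaction and drop any candidate input that has vanished. A minimal sketch in Python (filter_stale is a hypothetical helper, not part of any wallet's API; it assumes listunspent-style dicts with "txid" and "vout" keys):

```python
def filter_stale(candidates, fresh_unspent):
    """Keep only candidate inputs still present in a fresh listunspent dump."""
    still_unspent = {(u["txid"], u["vout"]) for u in fresh_unspent}
    return [c for c in candidates if (c["txid"], c["vout"]) in still_unspent]

# Canned data standing in for two RPC calls (no daemon involved here):
fresh = [{"txid": "aa" * 32, "vout": 0},
         {"txid": "bb" * 32, "vout": 1}]
candidates = [{"txid": "aa" * 32, "vout": 0},   # still unspent -> kept
              {"txid": "cc" * 32, "vout": 0}]   # vanished -> dropped
print(filter_stale(candidates, fresh))
```

This doesn't close the race entirely (an input could still get staked between the check and the send), but it shrinks the window compared to bombing out and bisecting.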
I've no clue. The blocks that show under 'Stake' in getinfo are all the immature staked blocks. I'm not sure how to see which blocks are being used FOR staking. I turned reservebalance on, but it apparently hasn't let go of that transaction, because I still cannot use it in a send. It's still showing in listunspent - so it hasn't been spent. But I cannot actually spend it, as it says it's "used" already. Every time I try to send it, I get a DIFFERENT hash in the debug.log (this line: ERROR: ConnectInputs() : 4ad396f7f0 prev tx already used at (nFile=1, nBlockPos=53949471, nTxPos=53949629)) - the blockpos and txpos stay the same, but the hash is different each time. So I've no idea what it's referring to, at all.
Pulling out hair.
Then it's definitely broken - once a block is written to the file, its location may not change, and thus neither may the locations of the transactions included in the block. If the wallet reports a different tx id each time, then it must be a bug.
I looked into some coin code; the tx id printed is the tx id of the transaction being submitted. The file positions printed are the positions of the block and tx that already used this input.
The way listunspent picks unspent inputs is different from the one used in ConnectInputs. Try starting the wallet with the -rescan option; it seems there is an inconsistency.
New issue: trying to clean up another wallet, it keeps failing on the send. I thought it was a fee issue and started looking at the code. Had this in the debug log:
ERROR: ConnectInputs() : 4ad396f7f0 prev tx already used at (nFile=1, nBlockPos=53949471, nTxPos=53949629)
I went into the code and saw that 4ad39... is a substr of the first 10 characters, so I had it show the whole thing. It's the same length as a transaction ID, so I wrote out all the txids (and the output from listunspent) before it tried to send. I thought maybe in the 2 seconds between listing the unspent and the send, it got used somehow.
Well, that full string (middle part removed just in case it's part of a private key or god knows what else), 4ad396f7f0b3bdfc30f5f71ab21bxxxxxxxxxxxxxxxxxxxxaa36840c08fe8c93, isn't anywhere in the listunspent output. It's not a block hash. It's not a txid in the wallet. No clue what it is. The nBlockPos and nTxPos aren't helpful either.
Anyone have a clue what the heck that is referring to? Maybe the wallet is broken and is erroneously listing already-spent inputs in listunspent? If I drop the number of transactions batched together to a smaller number, I have some luck.
Another edit: I'm pretty sure this coin is just broken. I reduced the max number of TXs that can be batched together to 10, and now the first 10 go, but when it tries the second 10, it bombs with a 'TX rejected' and there's absolutely nothing in the debug log. Not a size issue (it was 2342 bytes), not a fee issue (the fee was correctly added). I even added a 3-second wait before starting the next one.
Next thing I'm gonna try is grabbing 10 random inputs to use instead of the 10 smallest value inputs and see what happens.
ANOTHER edit: I'm thinking maybe it's because the TX is being used for staking? I know if a wallet is staking coins you can't send them... I isolated one of the 'problem' transactions. I just have no idea how to see which coins the wallet is using for staking at the moment. Hrm
If the input is spent during PoS block generation, it is marked as such the same way as any other input: the block is created, signed, and passed to the ProcessBlock fn. Then, if all checks pass and the block becomes the best chain, ConnectInputs marks everything spent by the block as spent in the tx db.
Ok. So then. Let's back this up a bit. I know a fair amount about mining coins - I was looking into setting AllCrypt up as a pool/exchange and trying to figure out how to merge-mine coins, and while doing that I learned a lot about how mining works. Super short: there is a target, which is based off the difficulty. Miners slam random numbers through the algo and see if the result beats the target number. If so, you solved the block; submit it.
How the hell does POS work then? All POS coins are "advertised" as: you get a % return per year.
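The PoW loop described above can be sketched in a few lines of Python (toy header and an artificially easy target; real coins hash an 80-byte header with their own algorithm):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used by Bitcoin-family block headers."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, target: int, max_nonce: int = 1_000_000):
    """Slam nonces through the hash until the result beats the target."""
    for nonce in range(max_nonce):
        h = sha256d(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(h, "little") < target:
            return nonce  # solved the block: submit it
    return None  # exhausted the nonce range; change the header and retry

# Artificially easy target so the toy loop finishes instantly
print(mine(b"example-header", target=1 << 252) is not None)
```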
Nope. Not just advertised. They are also "mined", but the number of hashes to calculate per second is limited by the number of inputs eligible for staking. The reward for a PoS block (which may be subject to an upper limit) is calculated (usually) as a product of a percentage, the input amount, and the input age. When the input is spent (by any means - a regular payment tx or a stake mint tx), its age is reset to zero. A small set of data, 28 bytes long, which is a concatenation of the following data elements:
Code:
ss << nStakeModifier; ss << nTimeBlockFrom << nTxPrevOffset << txPrev.nTime << prevout.n << nTimeTx;
is sha256d'ed. The resulting hash is multiplied by the time weight of the input (the number of full days multiplied by the input amount in full coins) and is compared to the current proof-of-stake target. If the condition is met, the stake kernel is found. All of the above parameters except nTimeTx are constant for a given input from the moment it becomes eligible for staking; only nTimeTx changes. Thus each input has a chance to find a block only once per second. The probability of that chance depends on its amount, age, the current difficulty, and the current time since Jan 1, 1970 in seconds.
In a sense, if both of us have the same set of 1000 inputs (we copied a wallet over), and you have a powerful computer capable of checking all 1000 inputs every second while mine can only iterate through half of the inputs, you will find stakes twice as often as me. But then, when an input is spent, it becomes ineligible for some time. In the long run, each input in my wallet would also produce a kernel. Provided that the reward is a percentage of the product of the input's age and its value, the overall income would be the same for you and me.
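Putting the description above into a sketch (Python; the struct packing and the hash-versus-weighted-target comparison are simplifications - the real code serializes via CDataStream and uses bignum arithmetic):

```python
import hashlib
import struct

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def check_kernel(stake_modifier, time_block_from, tx_prev_offset,
                 tx_prev_time, prevout_n, time_tx,
                 target, amount_coins, age_days):
    """One kernel check for one input at one second (time_tx)."""
    # 28 bytes total: 8 + 4 + 4 + 4 + 4 + 4 (the little-endian packing
    # here is illustrative; the real serializer has its own rules)
    data = struct.pack("<QIIIII", stake_modifier, time_block_from,
                       tx_prev_offset, tx_prev_time, prevout_n, time_tx)
    h = int.from_bytes(sha256d(data), "little")
    weight = age_days * amount_coins  # coin-day weight of the input
    return h < target * weight        # kernel found?
```

Everything but time_tx is fixed per input, so the staking thread reruns this once per input per second - with thousands of tiny inputs, that alone is thousands of sha256d calls a second before any block or stake-modifier lookups.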
Initially all this code was implemented for PPC, which uses sha256d as the block header hash, so the calculation overhead was not significant. For those who don't have many inputs, this implementation would be just fine even with a scrypt, x11, or even x999 block hash function (devices like the Raspberry Pi are out of scope).
Side note: KGW or DGS has nothing to do with hashing - it is just a formula to determine the next difficulty.
Quote
Again - I've googled everything I can think of trying to find a whitepaper or reference, and nothing gives me the info I'm looking for. Combing the code is ungodly painful when I don't even know what I'm looking for.
Linus said once (I am not sure if he is the original author, and I am not sure I am reproducing it precisely):
As othe correctly mentioned, the block hash function is calculated on each try, as well as sha256d. All these coins have different block hash algos, so they require different amounts of time to compute a block hash. I said that I consider the implementation far from optimal. I have already made some efforts to speed it up (my Raspberry Pi won't mint all my PoS stakes fast enough), but these changes are still raw and not well enough tested to make them publicly available.
Edit: JPC also persistently chews through the list of inputs, but its developer merged in some optimizations/performance fixes from NVC.
So I took one of our worst offenders, CoolCoin. I spent all afternoon learning a hell of a lot about creating raw transactions, signing them, transaction sizes, required fees, etc. I wrote an admin tool that checks a coin, groups up all the tiny unspent inputs, batches them into huge transactions, and sends them back to a new address in the wallet.
I reduced the # of unspent inputs on CoolCoin from 7667 to 85. The CPU usage instantly dropped from 99.8% to 32.9%.
I consider it a partial victory though, because as it continues to stake and we get deposits, the inputs will grow again. In addition, JackpotCoin, a coin that currently has 2432 unspent inputs, is using 34.9% CPU - 30x the number of inputs to check for staking, and using about the same amount of CPU time. LOCK has twice the number of unspent inputs and is almost always below JPC's percentage by at least 5-10%.
So there is absolutely something "wrong" with some of these coins. Going to keep searching. Collecting all the inputs together isn't the best solution. Going to read through the links posted here when I have some more time. Didn't expect to spend my afternoon fixing this mess (but I learned a hell of a lot of cool new stuff - it might even help AllCrypt calculate real transaction fees if we implement internal transaction creation and batching. Not in a hurry to get your withdrawal? Let us batch it up with others and save on fees! Or something similar).
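The selection half of such a cleanup tool is simple enough to sketch (Python; batch_dust and its thresholds are hypothetical, and the actual createrawtransaction / signrawtransaction / sendrawtransaction calls are left out):

```python
def batch_dust(unspent, dust_threshold=0.1, batch_size=500):
    """Group listunspent-style outputs below a threshold into batches,
    smallest first, for consolidation back into the wallet."""
    dust = sorted((u for u in unspent if u["amount"] < dust_threshold),
                  key=lambda u: u["amount"])
    return [dust[i:i + batch_size] for i in range(0, len(dust), batch_size)]

# Each batch then becomes one raw transaction paying (sum - size-based fee)
# back to a fresh address in the same wallet.
utxos = [{"amount": a} for a in (0.01, 5.0, 0.02, 0.03)]
print(batch_dust(utxos, dust_threshold=0.1, batch_size=2))
```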
But I'd love to hear further ideas on the causes (and where to look in the code) of where the staking is happening. Just from my own coding experience, it seems like a runaway thread. Like JPC is well coded and only stakes when it needs to, while CoolCoin is a spastic kid running around screaming "GIMMIE POS COINS WOOWOO!!!1!1!!one!eleven!1!"
So while it might not be used for staking directly, there's definitely a lot of useless overhead, especially if it's no longer mineable.
Peercoin-based forks should be fine; they use SHA for everything.
PS: enough PoS for me for the next few months
Ah, yes, I am sorry, you are correct about the call to GetHash inside CheckStakeKernelHash. I have been talking about the search for the proof-of-stake hash itself. So, on top of that sha256d, we also have a call to the input's block hash calculation that is not cached at all (compare to the optimizations done for CBlockIndex). So the situation is even worse than I described.
Most PoS coins use scrypt for staking, which is of course utterly stupid; that comes from the fact that they are forked from Novacoin. If they were clever they would have killed scrypt and used SHA-256 as the staking PoW...
Quote
0.00 0.00 0.00 2733398524 0.00 0.00 xor_salsa8(unsigned int
This is not true! All PoS coins cloned from PPC, and eventually from NVC, use sha256d to find a stake solution. Read the sources carefully: CBlock::GetHash() and ::Hash() have completely different implementations.
Going to take one of the offenders and try consolidating all the unspent outputs. One of the coins, litecoinplus, is using 99.8% CPU right now, and it's one of our low-volume coins. It has 5609 unspent outputs. Not sure if that's an absurdly high number - we've never bothered to look at things like that.
That being said, Jackpotcoin, one of our most popular coins, is on the same server as litecoinplus, is using 30% CPU, and has... wow, 2411 unspent. I would have sworn it was higher, given it's a higher-traffic coin. But I guess with more withdrawals those unspent outputs are being cleaned up a lot more often.
Going to clean up XLC and post the results. If it works, maybe the key is to just run some cleanup maintenance whenever the unspent inputs reach a certain threshold.
Ok, it does not matter. Just take it that for each input, for each second of time, it has to perform a sha256d over 28 bytes. Then you can estimate how many sha256d operations it has to perform during a day, apart from the other calculations necessary to find a stake.
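That back-of-envelope estimate is easy to run - one 28-byte sha256d per eligible input per second works out to hundreds of millions of hashes a day for wallets like the ones mentioned above (input counts taken from earlier in the thread):

```python
def daily_sha256d(num_inputs: int, seconds_per_day: int = 86_400) -> int:
    """One sha256d over 28 bytes per eligible input per second."""
    return num_inputs * seconds_per_day

print(daily_sha256d(5609))  # litecoinplus: 484,617,600 hashes/day
print(daily_sha256d(2411))  # Jackpotcoin: 208,310,400 hashes/day
```

Even that many sha256d calls is cheap on a modern CPU, which supports the suspicion below that the hashing itself isn't where the time goes.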
Only profiling will tell, but I'm afraid the "other necessary calculations to find a stake" part may be more CPU-intensive than the sha256d. It has to access many blocks to get the "stake modifier" for each tx.
Have you profiled the application? Where's the CPU time being spent? It's entirely possible the algorithm to calculate PoS is broken in a way that still consumes all the CPU time.
It's true PoS should result in lower CPU overhead but any algorithm anywhere, if improperly implemented, can give horrible efficiency.
We actually recompiled one wallet with profiling and analyzed it with gprof, and for some reason all the time %'s were listed as 0. 0% time for everything. Tried it with another tool and it ran so slow, after an hour it still hadn't finished scanning/validating the blockchain. Figured we'd ask here and see if we could find what the issue was in the code itself.
What coin wallet are you talking about? How many unspent inputs do you have in that wallet?
It's a bunch of them. Some we turned POS on for work nicely (Lock, JPC, some others), but many just kill the CPU.
Haven't checked the unspent outputs, and it's 11:22pm here, so I'm not at the servers to check easily.
0% time for everything? Are you running the program inside a virtual machine? Virtual machines don't have profiling hardware, so they'll ruin gprof results; otherwise, I haven't seen that issue.
Nope. Linux box. No VMs. Here's a snippet of the gprof output we got after running it against the gmon.out:
Flat profile:
Each sample counts as 0.01 seconds.
no time accumulated
Although the stake search thread is set to low priority, it will eat up CPU time if the number of transactions in the wallet is significant. For each input tx that meets the staking age conditions, it tries to find a so-called PoS kernel for each second in the past since it completed its last pass through the set of transactions. The way the "standard" most advanced implementation does the search is sub-optimal, IMO.
So, if that is the case, it might behoove us to collect all those small transactions and group them into one huge send back to the wallet to clean them up? I can see how that would get exponentially worse... 300 small transactions lead to 300 small POS rewards, which then lead to 600 small transactions, leading to 600 smaller POS rewards, which turn into 1200 small POS rewards, etc...
Given we are an exchange with tons of small deposits (from mining pools), we have a much larger number of transactions/addresses in the wallet than 'normal' wallets do. Tomorrow I'll take one of the offending coins and try consolidating all of the transactions into one address and see what that does to the wallet's CPU time.
Thanks for the tip.
That being said, JPC has been one of our most active coins. It's still one of the most polite ones as far as CPU time and staking.
You could of course merge small inputs into bigger ones. Just keep in mind that some coins, e.g. Nova, have a maximum reward limit per PoS stake transaction for the sake of maintaining good PoS difficulty. Otherwise everybody would glue all their inputs together and switch on their wallet once a month for 5 minutes.
Edit:
provided that you presently do not do stake minting at all, it will not hurt the coin if you glue your inputs at your discretion and then start staking.