
Topic: [20 BTC] Multithreaded Keep-alive Implementation in Bitcoind - page 7. (Read 31457 times)

member
Activity: 112
Merit: 10
I'll run it on testnet to see what it gives
legendary
Activity: 1596
Merit: 1012
Democracy is vulnerable to a 51% attack.

I have some updates available at:
http://davids.webmaster.com/~davids/updates.diff.txt

This includes a performance improvement for ToHex, some style cleanups, some tuning changes to hub mode to reduce outbound connections, and a huge change to 'getwork'.

The change to 'getwork' is one I attempted before with poor results, but I've rethought how it's implemented, and this version is much simpler. The idea is to update the 'skeleton block' used to build 'getwork' responses when a new transaction or block is received, rather than waiting for the next 'getwork' request to come along and do it. The reason this makes a difference is that the 'getwork' request then doesn't have to wait for the code handling the new block/transaction to release the 'cs_main' lock. Testing shows this really smooths out the 'spikes' in the 'getwork' response time.

However, this is a risky change. It is invasive not just to the code that builds a work unit but also to the code that confirms a valid block. Some people may wish to apply just the ToHex change and the cleanups.
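To illustrate the idea (not the actual patch), here is a minimal C++ sketch of the pattern described above: the handler that processes a new block or transaction eagerly rebuilds a cached 'getwork' skeleton under a cheap dedicated lock, so 'getwork' callers never queue behind cs_main. All names here (WorkSkeleton, RefreshSkeleton, GetWorkFast, cs_skeleton) are invented for this sketch and do not appear in the patch itself.

```cpp
#include <cassert>
#include <mutex>
#include <string>

// Pre-built 'getwork' template; a stand-in for the real skeleton block.
struct WorkSkeleton {
    std::string hashPrevBlock;  // chain tip this template was built on
    int nTransactions = 0;      // transactions included in the template
};

static std::mutex cs_skeleton;   // cheap lock guarding only the cache
static WorkSkeleton g_skeleton;  // the cached template

// Called from the code path that already holds cs_main when a new block
// or transaction arrives: the template is rebuilt eagerly here, so
// getwork callers never have to wait on cs_main themselves.
void RefreshSkeleton(const std::string& tip, int ntx) {
    std::lock_guard<std::mutex> lock(cs_skeleton);
    g_skeleton.hashPrevBlock = tip;
    g_skeleton.nTransactions = ntx;
}

// The getwork fast path: copy the cached template under the cheap lock
// and hand it out without touching cs_main at all.
WorkSkeleton GetWorkFast() {
    std::lock_guard<std::mutex> lock(cs_skeleton);
    return g_skeleton;
}
```

The point of the dedicated cs_skeleton lock is that it is only ever held for a copy or a couple of field writes, so 'getwork' latency no longer depends on how long block/transaction processing holds cs_main.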

member
Activity: 170
Merit: 10
FYI: I renamed the repository to bitcoin4pools; the new URL is:

https://github.com/somebadger/bitcoin4pools

Any additional suggestions are welcome. I'll start a new thread later today, reference this thread, and give credit to JoelKatz Smiley
full member
Activity: 228
Merit: 100
Thanks a lot for your help; so far it is all running perfectly.
But I think the problem was somewhere else: after a little more testing I got an exception.
It had something to do with libboost; I reinstalled that and the problem was solved.

I'll send you a small donation (haven't got that much yet Wink) in the next few days.

And the patch is from jgarzik; I'll make a pull request with your patch and of course credit you.

regards, talpan
legendary
Activity: 1596
Merit: 1012
Democracy is vulnerable to a 51% attack.
I can't find the problem.
I've compiled it many times from scratch, and every time I get the same result.
Can you test it on your end?
Are you saying it works fine with just my patches but if you add the 'getblock' patches, then you have problems? Are you using the 'getblock*' RPC calls that the patch adds? It looks to me like they access mapBlockIndex without holding the appropriate lock. Can you try this change: (And if this fixes it, you may want to share this fix with whoever wrote those patches!)

--- patch/rpc.cpp       2011-07-06 07:39:21.647871060 -0700
+++ rpc.cpp     2011-07-06 07:41:38.033681011 -0700
@@ -277,9 +277,13 @@ Value getblockbycount(const Array& param
 
     string blkname = strprintf("blk%d", height);
 
+    CBlock block;
     CBlockIndex* pindex;
     bool found = false;
 
+    CRITICAL_BLOCK(cs_main)
+    {
+
     for (map<uint256, CBlockIndex*>::iterator mi = mapBlockIndex.begin();
          mi != mapBlockIndex.end(); ++mi)
     {
@@ -295,9 +299,9 @@ Value getblockbycount(const Array& param
             "getblockbycount height\n"
             "Dumps the block existing at specified height");
 
-    CBlock block;
     block.ReadFromDisk(pindex);
     block.BuildMerkleTree();
+    }
 
     return BlockToValue(block);
 }
@@ -312,6 +316,10 @@ Value getblockbyhash(const Array& params
 
     uint256 hash;
     hash.SetHex(params[0].get_str());
+    CBlock block;
+
+    CRITICAL_BLOCK(cs_main)
+    {
 
     map<uint256, CBlockIndex*>::iterator mi = mapBlockIndex.find(hash);
     if (mi == mapBlockIndex.end())
@@ -319,9 +327,9 @@ Value getblockbyhash(const Array& params
 
     CBlockIndex* pindex = (*mi).second;
 
-    CBlock block;
     block.ReadFromDisk(pindex);
     block.BuildMerkleTree();
+    }
 
     return BlockToValue(block);
 }
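For readers unfamiliar with CRITICAL_BLOCK: it is bitcoind's scoped-lock macro, and the bug above is the classic unguarded read of a shared map. Here is a minimal stand-alone C++ sketch of the same fix, with invented names (cs_main_sketch, g_mapIndex, FindHeight) standing in for cs_main and mapBlockIndex:

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <string>

static std::mutex cs_main_sketch;               // stand-in for cs_main
static std::map<std::string, int> g_mapIndex;   // stand-in for mapBlockIndex

// Safe lookup: the lock is held for the whole read, so a concurrent
// writer (e.g. the thread connecting a new block) cannot mutate the map
// and invalidate our iterator mid-lookup.
bool FindHeight(const std::string& hash, int& heightOut) {
    std::lock_guard<std::mutex> lock(cs_main_sketch);
    std::map<std::string, int>::iterator mi = g_mapIndex.find(hash);
    if (mi == g_mapIndex.end())
        return false;
    heightOut = mi->second;
    return true;
}
```

Without the lock, a writer inserting or erasing entries while the reader iterates can invalidate iterators underneath it, which is exactly the kind of intermittent failure reported in this thread.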
hero member
Activity: 755
Merit: 515
B. Not even pool operators should be using this; a pool bitcoind should be accepting incoming connections, and if you do that, you will get plenty of connections really fast.
Yes, but from lots of small nodes. That doesn't help you very much.
True, but those nodes are probably only connected to other large nodes that listen, so reaching them in two hops is really very fast anyway.

If you want to make sure your blocks get accepted more quickly, find out the IPs of other large miner nodes and addnode those, instead of just blindly making a ton of connections as this patch does.
Won't those generally be the nodes that are accepting incoming connections?
Yep, but if you blindly make a ton of connections, you are still fairly unlikely to connect to other miners; better to just addnode the best ones and make 10 connections than to make 30 and only reach a couple of large miners.
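The advice above can be expressed in bitcoin.conf with addnode entries; the addresses here are placeholders, not real miner nodes:

```
# bitcoin.conf sketch: keep a modest number of deliberate connections
# to known large miner nodes instead of many blind outbound ones.
# The addresses below are placeholders.
addnode=203.0.113.10
addnode=203.0.113.11
addnode=198.51.100.7
```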
member
Activity: 112
Merit: 10

Quote
If you want to make sure your blocks get accepted more quickly, find out the IPs of other large miner nodes and addnode those, instead of just blindly making a ton of connections as this patch does.
Won't those generally be the nodes that are accepting incoming connections?

I tend to use https://en.bitcoin.it/wiki/Fallback_Nodes , but the wikibot doesn't seem to update it.
legendary
Activity: 1596
Merit: 1012
Democracy is vulnerable to a 51% attack.
No, this is really not a good change, for several reasons. A. The percentage of nodes accepting incoming connections is already low enough; encouraging people to start making a ridiculous number of outgoing connections, filling up the connection slots of the few nodes that do accept incoming connections, is not a good idea at all.
That's not a point I had considered. I think I may make some adjustments.

Quote
B. Not even pool operators should be using this; a pool bitcoind should be accepting incoming connections, and if you do that, you will get plenty of connections really fast.
Yes, but from lots of small nodes. That doesn't help you very much.

Quote
If you want to make sure your blocks get accepted more quickly, find out the IPs of other large miner nodes and addnode those, instead of just blindly making a ton of connections as this patch does.
Won't those generally be the nodes that are accepting incoming connections?

Thanks for the feedback. You've given me a few ideas for ways to make this work better.
hero member
Activity: 755
Merit: 515
Some of them may not, some of them might. Once I'm convinced this code is stable, I can work on getting some of them into the main distribution. I think the hub mode changes and native long polling could get in. The RPC turbocharge and keepalive fixes could.

Some of the stuff in the latest diff might get in too. Stashing the RPC user/pass, avoiding re-processing partial blocks, faster base64 decode, and faster bin2hex all might get in. (Change 3diff to 4diff to see them.)

Some code should never get in. The JSON bypass code is awful. Perhaps a change to using Jansson is possible. But that's a pretty big change.
1) Micro-optimizations
Haven't had much of a chance to really look through these; some of them look pull-able, others maybe not, but I haven't looked too closely.

2) Native Long Polling Support
This one I'm not sure about. It's a useful feature, to be sure, but I'm not sure about pulling it yet. When the client codebase gets cleaned up a ton, features like this could very well be pulled, but IMHO not so much yet.

3) A fix from Luke Dash Jr.
Almost certainly can be pulled.

4) Hub Mode
No, this is really not a good change, for several reasons. A. The percentage of nodes accepting incoming connections is already low enough; encouraging people to start making a ridiculous number of outgoing connections, filling up the connection slots of the few nodes that do accept incoming connections, is not a good idea at all. B. Not even pool operators should be using this; a pool bitcoind should be accepting incoming connections, and if you do that, you will get plenty of connections really fast. If you want to make sure your blocks get accepted more quickly, find out the IPs of other large miner nodes and addnode those, instead of just blindly making a ton of connections as this patch does.

5) RPC Turbocharge
Absolutely. There is already an asio patch waiting as a pull request, so some version of asio/keepalive support/threaded RPC will absolutely be merged at some point in the not-too-distant future.

6) Resource Leak Fix
Absolutely, thanks somebadger for the pull.
full member
Activity: 228
Merit: 100
I can't find the problem.
I've compiled it many times from scratch, and every time I get the same result.
Can you test it on your end?
legendary
Activity: 1596
Merit: 1012
Democracy is vulnerable to a 51% attack.
That patch looks safe to me. I don't think that explains your issue.
full member
Activity: 228
Merit: 100
Hi,

Sorry, I made a mistake: I'm using a custom .23 with a little patch.
Without this patch (and with your patch) everything's fine.
But I can't figure out what's wrong with this patch:

Code:
diff -up orig/rpc.cpp new/rpc.cpp
--- orig/rpc.cpp
+++ new/rpc.cpp
@@ -184,6 +184,149 @@
 }
 
 
+Value BlockToValue(CBlock &block)
+{
+    Object obj;
+    obj.push_back(Pair("hash", block.GetHash().ToString().c_str()));
+    obj.push_back(Pair("version", block.nVersion));
+    obj.push_back(Pair("prev_block", block.hashPrevBlock.ToString().c_str()));
+    obj.push_back(Pair("mrkl_root", block.hashMerkleRoot.ToString().c_str()));
+    obj.push_back(Pair("time", (uint64_t)block.nTime));
+    obj.push_back(Pair("bits", (uint64_t)block.nBits));
+    obj.push_back(Pair("nonce", (uint64_t)block.nNonce));
+    obj.push_back(Pair("n_tx", (int)block.vtx.size()));
+    obj.push_back(Pair("size", (int)::GetSerializeSize(block, SER_NETWORK)));
+
+    Array tx;
+    for (int i = 0; i < block.vtx.size(); i++) {
+    Object txobj;
+
+ txobj.push_back(Pair("hash", block.vtx[i].GetHash().ToString().c_str()));
+ txobj.push_back(Pair("version", block.vtx[i].nVersion));
+ txobj.push_back(Pair("lock_time", (uint64_t)block.vtx[i].nLockTime));
+ txobj.push_back(Pair("size",
+ (int)::GetSerializeSize(block.vtx[i], SER_NETWORK)));
+
+ Array tx_vin;
+ for (int j = 0; j < block.vtx[i].vin.size(); j++) {
+     Object vino;
+
+     Object vino_outpt;
+
+     vino_outpt.push_back(Pair("hash",
+     block.vtx[i].vin[j].prevout.hash.ToString().c_str()));
+     vino_outpt.push_back(Pair("n", (uint64_t)block.vtx[i].vin[j].prevout.n));
+
+     vino.push_back(Pair("prev_out", vino_outpt));
+
+     if (block.vtx[i].vin[j].prevout.IsNull())
+     vino.push_back(Pair("coinbase", HexStr(
+ block.vtx[i].vin[j].scriptSig.begin(),
+ block.vtx[i].vin[j].scriptSig.end(), false).c_str()));
+     else
+     vino.push_back(Pair("scriptSig",
+ block.vtx[i].vin[j].scriptSig.ToString().c_str()));
+     if (block.vtx[i].vin[j].nSequence != UINT_MAX)
+     vino.push_back(Pair("sequence", (uint64_t)block.vtx[i].vin[j].nSequence));
+
+     tx_vin.push_back(vino);
+ }
+
+ Array tx_vout;
+ for (int j = 0; j < block.vtx[i].vout.size(); j++) {
+     Object vouto;
+
+     vouto.push_back(Pair("value",
+     (double)block.vtx[i].vout[j].nValue / (double)COIN));
+     vouto.push_back(Pair("scriptPubKey",
+ block.vtx[i].vout[j].scriptPubKey.ToString().c_str()));
+
+     tx_vout.push_back(vouto);
+ }
+
+ txobj.push_back(Pair("in", tx_vin));
+ txobj.push_back(Pair("out", tx_vout));
+
+ tx.push_back(txobj);
+    }
+
+    obj.push_back(Pair("tx", tx));
+
+    Array mrkl;
+    for (int i = 0; i < block.vMerkleTree.size(); i++)
+    mrkl.push_back(block.vMerkleTree[i].ToString().c_str());
+
+    obj.push_back(Pair("mrkl_tree", mrkl));
+
+    return obj;
+}
+
+
+Value getblockbycount(const Array& params, bool fHelp)
+{
+    if (fHelp || params.size() != 1)
+        throw runtime_error(
+            "getblockbycount height\n"
+            "Dumps the block existing at specified height");
+
+    int64 height = params[0].get_int64();
+    if (height > nBestHeight)
+        throw runtime_error(
+            "getblockbycount height\n"
+            "Dumps the block existing at specified height");
+
+    string blkname = strprintf("blk%d", height);
+
+    CBlockIndex* pindex;
+    bool found = false;
+
+    for (map<uint256, CBlockIndex*>::iterator mi = mapBlockIndex.begin();
+         mi != mapBlockIndex.end(); ++mi)
+    {
+    pindex = (*mi).second;
+ if ((pindex->nHeight == height) && (pindex->IsInMainChain())) {
+ found = true;
+ break;
+ }
+    }
+
+    if (!found)
+        throw runtime_error(
+            "getblockbycount height\n"
+            "Dumps the block existing at specified height");
+
+    CBlock block;
+    block.ReadFromDisk(pindex);
+    block.BuildMerkleTree();
+
+    return BlockToValue(block);
+}
+
+
+Value getblockbyhash(const Array& params, bool fHelp)
+{
+    if (fHelp || params.size() != 1)
+        throw runtime_error(
+            "getblockbyhash hash\n"
+            "Dumps the block with specified hash");
+
+    uint256 hash;
+    hash.SetHex(params[0].get_str());
+
+    map<uint256, CBlockIndex*>::iterator mi = mapBlockIndex.find(hash);
+    if (mi == mapBlockIndex.end())
+        throw JSONRPCError(-18, "hash not found");
+
+    CBlockIndex* pindex = (*mi).second;
+
+    CBlock block;
+    block.ReadFromDisk(pindex);
+    block.BuildMerkleTree();
+
+    return BlockToValue(block);
+}
+
+
 Value getconnectioncount(const Array& params, bool fHelp)
 {
     if (fHelp || params.size() != 0)
@@ -1423,6 +1566,8 @@
 {
     make_pair("help",                  &help),
     make_pair("stop",                  &stop),
+    make_pair("getblockbycount",       &getblockbycount),
+    make_pair("getblockbyhash",        &getblockbyhash),
     make_pair("getblockcount",         &getblockcount),
     make_pair("getblocknumber",        &getblocknumber),
     make_pair("getconnectioncount",    &getconnectioncount),
@@ -2113,6 +2258,7 @@
         if (strMethod == "listtransactions"       && n > 1) ConvertTo(params[1]);
         if (strMethod == "listtransactions"       && n > 2) ConvertTo(params[2]);
         if (strMethod == "listaccounts"           && n > 0) ConvertTo(params[0]);
+        if (strMethod == "getblockbycount"        && n > 0) ConvertTo(params[0]);
         if (strMethod == "sendmany"               && n > 1)
         {
             string s = params[1].get_str();

regards, talpan
legendary
Activity: 1596
Merit: 1012
Democracy is vulnerable to a 51% attack.
Hello,

I've patched the official .23 with patch 4.
So far so good; no errors during compiling.
But when I start bitcoind, the CPU spikes through the roof after 40-50s, without me even using it.

regards, talpan
Hmm, that's pretty strange. Can you confirm you're using version 0.95?
member
Activity: 112
Merit: 10
So yes, I accept this type 'recentstale' for payout.

So I would be able to resend the same stale share over and over again? Or just keep sending stales? It doesn't help the pool at ALL, but I get paid more? Cheesy
No, seriously: I'm disappointed.

I never said that I accept stales... only those specific recentstales.

Happy disappointment day!

https://www.triplemining.com - Probably the best pool in the world
sr. member
Activity: 403
Merit: 250
So yes, I accept this type 'recentstale' for payout.

So I would be able to resend the same stale share over and over again? Or just keep sending stales? It doesn't help the pool at ALL, but I get paid more? Cheesy
No, seriously: I'm disappointed.
full member
Activity: 228
Merit: 100
Hello,

I've patched the official .23 with patch 4.
So far so good; no errors during compiling.
But when I start bitcoind, the CPU spikes through the roof after 40-50s, without me even using it.

regards, talpan
member
Activity: 112
Merit: 10
So you're actually lying to your users about their number of shares? Shocked
Or are you paying for those invalid shares as well?


Well, the idea of pushpool shares at difficulty 1 is that they serve as proof of work, and I think it is proof of work; it's not the miners' fault that they submit work that has become invalid.

So yes, I accept this type 'recentstale' for payout.

Why feel bad about this? Proof of work is proof of work.
sr. member
Activity: 403
Merit: 250
So you're actually lying to your users about their number of shares? Shocked
Or are you paying for those invalid shares as well?

/ Jim

EDIT: I feel rather bad about that, to be honest.
member
Activity: 112
Merit: 10
Thanks for that report. I'll check the code and see if I can get the SIGUSR1 to be sent faster. I think that will only solve half the problem, though. While pushpool will issue fewer blocks that turn into stale shares, bitcoind will still be processing the new block when all the new 'getwork' requests start piling in, and you'll get miner idles instead.

I have a fix for this that has other benefits as well, but the first time I tried to implement it, it was a disaster. I think I know the right way to do it now, having run out of wrong ways to do it already. What I'd like to do is pre-compute the skeleton 'getwork' block when a new block or new transaction comes in, rather than waiting to hijack the next 'getwork' request to do it. But to make this work, the locking has to be exactly right. The funny thing is, in my day job ten years ago, this was exactly the type of optimization I specialized in. And now I can't seem to quite get it right.


I solved it with what I call the "pushpool feelgood patch". This patch accepts solutions that it knows are from the previous block and marks them that way in the database, rather than rejecting them back to the miner.

This way the users have 0 rejected shares, and they feel good!

Thanks again for all your work, and I'll be following all of your updates Smiley
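The 'feelgood' classification described above can be sketched in a few lines of C++. The enum and function names are invented for illustration, and the real patch works against pushpool's share table rather than an in-memory check:

```cpp
#include <cassert>
#include <string>

// Possible outcomes for a submitted share in this sketch.
enum ShareStatus { ACCEPTED, RECENT_STALE, STALE_REJECTED };

// Classify a share by the chain tip it was built against:
// current tip -> normal accepted share;
// the immediately previous tip -> 'recentstale', still credited;
// anything older -> rejected as truly stale.
ShareStatus ClassifyShare(const std::string& sharePrev,
                          const std::string& curTip,
                          const std::string& prevTip) {
    if (sharePrev == curTip)
        return ACCEPTED;
    if (sharePrev == prevTip)
        return RECENT_STALE;
    return STALE_REJECTED;
}
```

Only shares built against the immediately previous tip get the 'recentstale' credit in this sketch; anything older is still rejected.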
legendary
Activity: 1596
Merit: 1012
Democracy is vulnerable to a 51% attack.
JoelKatz's optimizations here will probably never make it into the official bitcoin github account, will they? They typically only make sense for pool operators, or am I wrong?
Some of them may not, some of them might. Once I'm convinced this code is stable, I can work on getting some of them into the main distribution. I think the hub mode changes and native long polling could get in. The RPC turbocharge and keepalive fixes could.

Some of the stuff in the latest diff might get in too. Stashing the RPC user/pass, avoiding re-processing partial blocks, faster base64 decode, and faster bin2hex all might get in. (Change 3diff to 4diff to see them.)

Some code should never get in. The JSON bypass code is awful. Perhaps a change to using Jansson is possible. But that's a pretty big change.

Quote
The patches, I believe, will definitely make it in because they can benefit everybody. (Great work!)
...
Great work JoelKatz. Thank you for that contribution to the community!
Thanks.