
Topic: [ATTN: POOL OPERATORS] PoolServerJ - scalable java mining pool backend

hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
4hrs left for 300 BTC  Shocked
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
I'm giving Shadders a large bounty for adding Merged-Mine-Proxy support. If anyone wants to donate to help me out, the address is here..

17nk7MqLLNy9Kw3NGqpa4G6qsPtWkUTuUX

It's a wallet I will not spend from for a long time so we can see the results here...
http://blockexplorer.com/address/17nk7MqLLNy9Kw3NGqpa4G6qsPtWkUTuUX
Cheesy
sr. member
Activity: 266
Merit: 254
18hrs... I gotta sleep Wink
18 hrs then triple BTC, 24 hrs double.

After that my offer expires and I will stick to the original agreement.

k well I better stop playing with merkle trees and get some sleep then Smiley
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
18hrs... I gotta sleep Wink
18 hrs then triple BTC, 24 hrs double.

After that my offer expires and I will stick to the original agreement.
sr. member
Activity: 266
Merit: 254
18hrs... I gotta sleep Wink
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
Shadders, I will double my reward if you incorporate merged mining in the next 24 hours.  I will triple it if you get it to me in the next 12 hours.
vip
Activity: 1358
Merit: 1000
AKA: gigavps
Hi Shadders, could you please comment on this post regarding x-roll-ntime -> https://bitcointalksearch.org/topic/m.560731
full member
Activity: 142
Merit: 100
Adjusting maxWorkAgeToFlush and maxCacheSize helps a lot.

Try raising maxWorkAgeToFlush to 300 and lowering maxCacheSize to 1000, for example, and take a look at your CPU usage again.

Make sure you've got bitcoind patched with 4diff.

More detailed documentation:
http://poolserverj.org/documentation/performance-memory-tuning/
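
As a reference point, here is a minimal sketch of how those two settings might look in the server properties file. The property names are taken from the post above, but the exact file layout (and whether a source prefix is required) should be checked against the documentation linked above, and the values tuned for your own load:

Code:
# illustrative values only; tune against your own traffic
maxWorkAgeToFlush=300
maxCacheSize=1000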
full member
Activity: 207
Merit: 100
An update:  BTC Guild is running PoolServerJ for the entire pool.  We were able to push out 10 pushpool/10 bitcoind nodes with load balancing and replace them with a single PoolServerJ and 2 bitcoind nodes. 

you forgot to mention the whopping 16% CPU load... but I'm glad you forgot to mention the memory usage Smiley

Hmm, could you post your configuration, perhaps?  I've been getting a very high CPU load with PoolServerJ and really need a way to reduce it.
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
Quick question:

Is there a way to include the USER_ID found in the worker table when inserting into the shares table?
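
For what it's worth, if PoolServerJ can't do it at insert time, one generic workaround is to resolve the id in SQL afterwards. Below is a rough sketch, outside PoolServerJ entirely, with hypothetical table and column names (worker.id, worker.username, shares.username, shares.user_id) and connection details:

Code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical schema: worker(id, username), shares(username, user_id, ...).
// Not a PoolServerJ feature; just a periodic backfill (or the body of a
// trigger) that copies the worker's id into the shares table.
public class BackfillUserId {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                "jdbc:mysql://localhost/pool", "pooluser", "secret")) {
            String sql =
                "UPDATE shares s JOIN worker w ON w.username = s.username "
              + "SET s.user_id = w.id WHERE s.user_id IS NULL";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                System.out.println("Backfilled " + ps.executeUpdate() + " rows");
            }
        }
    }
}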
sr. member
Activity: 266
Merit: 254
An update:  BTC Guild is running PoolServerJ for the entire pool.  We were able to push out 10 pushpool/10 bitcoind nodes with load balancing and replace them with a single PoolServerJ and 2 bitcoind nodes. 

you forgot to mention the whopping 16% CPU load... but I'm glad you forgot to mention the memory usage Smiley
legendary
Activity: 1750
Merit: 1007
An update:  BTC Guild is running PoolServerJ for the entire pool.  We were able to push out 10 pushpool/10 bitcoind nodes with load balancing and replace them with a single PoolServerJ and 2 bitcoind nodes. 
legendary
Activity: 1750
Merit: 1007
After hammering out the last few bugs we found at BTC Guild with PoolServerJ, I'm almost ready to completely remove pushpool from my servers.

While the CPU load of PoolServerJ is higher than pushpool's, I would not call it inefficient, since PoolServerJ is doing far more work than pushpool.  PoolServerJ is doing full difficulty checks internally, prioritizing getwork responses to known good clients [QoS filtering], organizing work from multiple bitcoind nodes [faster LP delivery], and running a cache of work so miner requests are answered from the server rather than the server proxying the request to bitcoind [faster getwork delivery].  In the end, as long as the servers have enough extra RAM, the performance is outstanding.

PoolServerJ's work caching means that if bitcoind stutters for a second, you have work ready and available to send to your miners, whereas pushpool becomes useless until bitcoind can respond, causing miners to complain.

The tradeoff is simple:  pushpool will run on minimal specs, but becomes slow to respond after a certain level of load, even though CPU & RAM are sitting idle.  PoolServerJ will use significantly more RAM, but as long as it's there, it won't choke until either your bitcoind can't provide work as fast as it's being requested, or your CPU is at full utilization.  A small pool would probably stick with pushpool, due to its low footprint and reasonable performance.  Any pool that is starting to have growing pains (they start around 250 GH/sec but they aren't "a problem" until ~450 GH/sec) would benefit greatly from taking a look at PoolServerJ.
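
To make the caching idea concrete, here is a rough sketch of the pattern described above: a background thread keeps a queue topped up from bitcoind so getwork requests are answered from memory. This is illustrative only and not PoolServerJ source; the WorkSource interface and all names are invented, and staleness/long-poll flushing is omitted.

Code:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of a getwork cache, not PoolServerJ code.
public class WorkCache {

    /** Anything that can produce fresh work, e.g. a bitcoind getwork RPC call. */
    public interface WorkSource {
        String fetchWork() throws Exception;
    }

    private final BlockingQueue<String> cache;
    private final WorkSource upstream;
    private volatile boolean running = true;

    public WorkCache(WorkSource upstream, int maxCacheSize) {
        this.upstream = upstream;
        this.cache = new LinkedBlockingQueue<>(maxCacheSize);
        // Background filler keeps the cache topped up so miner requests
        // never wait on a slow or momentarily stalled bitcoind.
        Thread filler = new Thread(() -> {
            while (running) {
                try {
                    if (cache.remainingCapacity() > 0) {
                        cache.put(upstream.fetchWork());
                    } else {
                        Thread.sleep(50); // cache full, don't hammer bitcoind
                    }
                } catch (Exception e) {
                    // bitcoind stuttered; miners are still served from the cache
                }
            }
        }, "work-filler");
        filler.setDaemon(true);
        filler.start();
    }

    /** Serve a miner's getwork from the cache, falling back to a direct fetch. */
    public String getWork() throws Exception {
        String work = cache.poll();
        return work != null ? work : upstream.fetchWork();
    }

    public void shutdown() {
        running = false;
    }
}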
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
What is "worker cache preloading"?

Well as it says, it's not activated... you'd have to make some code mods and rebuild from source to use it atm...

But in a nutshell... when a busy pool comes up its worker cache is empty.  It suddenly gets hit by a ton of requests, which translates into a ton of single selects to the db.  Preloading dumps the worker IDs from the cache to a file on shutdown.  Then on startup it grabs the worker IDs and does a single bulk select to fill the worker cache.  Much more efficient, but probably not an issue until you get to the terahash range.

This is where memcache can help.  A pool would have a memcache server with all the usernames and passwords cached; when one of the servers reboots, the cache will still have all the users, assuming the users were connecting to another server beforehand.

This allows for clustering, so all servers share the same cached usernames and passwords.

What's your opinion of this design?
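
For what it's worth, here is a rough sketch of the lookup flow being proposed, assuming the spymemcached client library (net.spy.memcached.MemcachedClient) and a hypothetical worker(username, password) table. This is not part of PoolServerJ, just an illustration of the shared-cache idea:

Code:
import java.net.InetSocketAddress;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import net.spy.memcached.MemcachedClient;

// Illustration of a shared worker cache, not PoolServerJ code.
// Assumes a worker(username, password) table and a memcached instance
// that every pool server points at.
public class SharedWorkerCache {

    private final MemcachedClient memcache;
    private final Connection db;

    public SharedWorkerCache(Connection db) throws Exception {
        this.db = db;
        this.memcache = new MemcachedClient(new InetSocketAddress("memcache-host", 11211));
    }

    /** Look up a worker's password: memcache first, database on a miss. */
    public String lookupPassword(String username) throws Exception {
        Object cached = memcache.get("worker:" + username);
        if (cached != null) {
            return (String) cached;          // warm hit, no DB round trip
        }
        try (PreparedStatement ps =
                 db.prepareStatement("SELECT password FROM worker WHERE username = ?")) {
            ps.setString(1, username);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return null;             // unknown worker
                }
                String password = rs.getString(1);
                // Any server that restarts still finds the entry here.
                memcache.set("worker:" + username, 3600, password);
                return password;
            }
        }
    }
}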
sr. member
Activity: 266
Merit: 254
What is "worker cache preloading"?

Well as it says, it's not activated... you'd have to make some code mods and rebuild from source to use it atm...

But in a nutshell... when a busy pool comes up its worker cache is empty.  It suddenly gets hit by a ton of requests, which translates into a ton of single selects to the db.  Preloading dumps the worker IDs from the cache to a file on shutdown.  Then on startup it grabs the worker IDs and does a single bulk select to fill the worker cache.  Much more efficient, but probably not an issue until you get to the terahash range.
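
To illustrate the dump-and-bulk-load idea described above (a sketch only: the file format, table and class names are invented, and the real PoolServerJ implementation may differ):

Code:
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// Sketch of worker-cache preloading: dump cached worker names on shutdown,
// then bulk-select them back in one query on startup instead of thousands
// of single selects as miners reconnect. Not PoolServerJ source.
public class WorkerCachePreloader {

    private static final Path DUMP_FILE = Paths.get("worker-cache.dump");
    private final Map<String, Long> workerIds = new ConcurrentHashMap<>();

    /** On shutdown: write the currently cached worker names to disk. */
    public void dump() throws IOException {
        Files.write(DUMP_FILE, workerIds.keySet(), StandardCharsets.UTF_8);
    }

    /** On startup: reload all previously cached workers with one bulk select. */
    public void preload(Connection db) throws Exception {
        if (!Files.exists(DUMP_FILE)) {
            return;                            // first start, nothing to preload
        }
        List<String> names = Files.readAllLines(DUMP_FILE, StandardCharsets.UTF_8);
        if (names.isEmpty()) {
            return;
        }
        String placeholders = names.stream().map(n -> "?").collect(Collectors.joining(","));
        String sql = "SELECT id, username FROM worker WHERE username IN (" + placeholders + ")";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            for (int i = 0; i < names.size(); i++) {
                ps.setString(i + 1, names.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    workerIds.put(rs.getString("username"), rs.getLong("id"));
                }
            }
        }
    }
}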
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
Changelog:

[0.3.0.FINAL]
- partial implementation of worker cache preloading.  This is not active yet.
- fix: stop checking if continuation state is initial.  It can be if a previous Jetty filter has suspended/resumed the request.  In that case it immediately sends an empty LP response.  This might be the cause of a bug where cgminer immediately sends another LP, which turns into a spam loop.  This only seems to be triggered under heavy load and only seems to happen with cgminer clients connected.
- added commented-out condition to stop manual block checks if native LP is enabled and verification is off.
- remove warning for native LP when a manual block check is fired.  We want this to occur in most circumstances.
- extra trace targets for longpoll empty and expired responses.
- fix: handle clients sending longpoll requests without a trailing slash.  This can result in the LP request being routed through the main handler and returning immediately, setting up a request-spamming loop.  This patch checks for the LP URL from the main handler and redirects to the LP handler if it's found.
- add threadDump method to mgmt interface
- add timeout to notify-lp-clients-executor thread in case dispatch threads do not report back correctly and counters aren't updated.  Solved a problem where a counter mismatch could prevent the thread from ever finishing, thus hogging the executor and preventing future long poll cycles.
- add shutdown check to lp dispatch timeout

What is "worker cache preloading"?
sr. member
Activity: 266
Merit: 254
Changelog:

[0.3.0.FINAL]
- partial implementation of worker cache preloading.  This is not active yet.
- fix: stop checking if continuation state is initial.  It can be if a previous Jetty filter has suspended/resumed the request.  In that case it immediately sends an empty LP response.  This might be the cause of a bug where cgminer immediately sends another LP, which turns into a spam loop.  This only seems to be triggered under heavy load and only seems to happen with cgminer clients connected.
- added commented-out condition to stop manual block checks if native LP is enabled and verification is off.
- remove warning for native LP when a manual block check is fired.  We want this to occur in most circumstances.
- extra trace targets for longpoll empty and expired responses.
- fix: handle clients sending longpoll requests without a trailing slash.  This can result in the LP request being routed through the main handler and returning immediately, setting up a request-spamming loop.  This patch checks for the LP URL from the main handler and redirects to the LP handler if it's found.
- add threadDump method to mgmt interface
- add timeout to notify-lp-clients-executor thread in case dispatch threads do not report back correctly and counters aren't updated.  Solved a problem where a counter mismatch could prevent the thread from ever finishing, thus hogging the executor and preventing future long poll cycles.
- add shutdown check to lp dispatch timeout
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
PoolServerJ is working perfectly with my patched namecoind.  cgminer is unable to take down the pool with scantime set to 1.

Thanks, you have been a big help; in fact you have helped me the most.
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
If you're unable to get it working you could try this one from ArtForz:
https://github.com/ArtForz/namecoin/commit/127deb4aff13965741130dba7304073330a4adea

I think it's only for the duplicate work issue but at a quick glance it looks like it's applicable to the merged mining version.

It does not compile namecoind Cry

Code:
root@bitcoinpool:/home/bitcoinpool/ArtForz-namecoin-127deb4/src# make -f makefile.unix namecoind
g++ -c -O2 -Wno-invalid-offsetof -Wformat -g -D__WXDEBUG__ -DNOPCH -DFOURWAYSSE2 -DUSE_SSL -o obj/nogui/namecoin.o namecoin.cpp
namecoin.cpp: In function ‘json_spirit::Value name_history(const json_spirit::Array&, bool)’:
namecoin.cpp:616:30: error: ‘foreach’ was not declared in this scope
namecoin.cpp:617:9: error: expected ‘;’ before ‘{’ token
namecoin.cpp:1789:84: error: expected ‘}’ at end of input
namecoin.cpp:1789:84: error: expected ‘}’ at end of input
make: *** [obj/nogui/namecoin.o] Error 1

Commenting out the code in the name_history function (it's not needed for mining) does the trick.
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
If you're unable to get it working you could try this one from ArtForz:
https://github.com/ArtForz/namecoin/commit/127deb4aff13965741130dba7304073330a4adea

I think it's only for the duplicate work issue but at a quick glance it looks like it's applicable to the merged mining version.

It does not compile namecoind Cry

Code:
root@bitcoinpool:/home/bitcoinpool/ArtForz-namecoin-127deb4/src# make -f makefile.unix namecoind
g++ -c -O2 -Wno-invalid-offsetof -Wformat -g -D__WXDEBUG__ -DNOPCH -DFOURWAYSSE2 -DUSE_SSL -o obj/nogui/namecoin.o namecoin.cpp
namecoin.cpp: In function ‘json_spirit::Value name_history(const json_spirit::Array&, bool)’:
namecoin.cpp:616:30: error: ‘foreach’ was not declared in this scope
namecoin.cpp:617:9: error: expected ‘;’ before ‘{’ token
namecoin.cpp:1789:84: error: expected ‘}’ at end of input
namecoin.cpp:1789:84: error: expected ‘}’ at end of input
make: *** [obj/nogui/namecoin.o] Error 1