
Topic: PoolServerJ - Tech Support - page 7.

legendary
Activity: 1862
Merit: 1011
Reverse engineer from time to time
October 08, 2011, 09:28:09 PM
#34
Can you please add the X-Roll-Ntime header?
legendary
Activity: 1750
Merit: 1007
October 06, 2011, 03:34:12 PM
#33
If it's not some other fault then something is probably pushing that gap higher than it should be... watch the log during a block change with debug=true.  You should see when the new block is detected, and it should also tell you when all LP responses have been dispatched (and how long it took).  Anything longer than 1000ms is not ideal.  Large pools are able to push several thousand in under a second with psj.

Just to add a reference point, BTC Guild pushes out between 6,000 and 8,000 LPs depending on the time of day.  Our average time is between 600ms and 1000ms.  It can be even faster if both of our bitcoinds detect the new block simultaneously (our record was 490ms for ~7,500 LPs).  This is with two bitcoind clients, running on dedicated servers.  One is local on the same server as PSJ, one is running on another server in the same datacenter which runs the database.

This may improve even more soon; ArtForz has been looking into an extra optimization in bitcoind's getwork code and merging it with JoelKatz's 4diff patch.
sr. member
Activity: 266
Merit: 254
September 30, 2011, 05:22:13 AM
#32
First, awesome work there shad... PoolServerJ performance over pushpool is great.

Just got two short questions and a request.

1)
At the moment I'm running poolserverj in screen and it works great, but if anyone has other suggestions, please post.

2)
When I check the screen it looks like this:

Doing database flush for Shares: 10
Flushed 10 shares to DB in 9.0ms (1111/sec)
Trimmed 14 entries from workmap and 29 entries from duplicate check sets in 0ms
Dropping submit throttle to 2ms
Submit Throttling on: false
Doing database flush for Shares: 7
Flushed 7 shares to DB in 4.0ms (1749/sec)
Submit Throttling on: false
Doing database flush for Shares: 15
Flushed 15 shares to DB in 8.0ms (1874/sec)
Trimmed 16 entries from workmap and 1003 entries from duplicate check sets in 0ms
Submit Throttling on: false
Doing database flush for Shares: 14
Flushed 14 shares to DB in 4.0ms (3499/sec)

I just wonder, what does "Submit Throttling on: false" mean?

Request:
I also have a request about ?method=getsourcestats: it would be nice to have a short description of what each stat really means. A short list would be great.

Sorry for all the dumb questions.

Btw, I run bitcoind 0.4.0; I switched from bitcoind 0.3.24 (patched with JoelKatz's 4diff) and it seems to run great with the new bitcoind.

/Best regards

1/ Daemonizing poolserverj has been on my todo list since day one, but this is actually the first time anyone has asked about it.  I'm not much of a bash expert, but I think if you do something like this:

normalStartCommand > mylogfile.txt &

that should redirect the output to a file and run it as a background process so you can keep using your shell. 
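
If you also want it to survive logging out, a slightly more robust variant (just a sketch; `normalStartCommand` is whatever you currently use to launch psj):

Code:
# run in the background, immune to hangup when the SSH session closes,
# with stderr captured alongside stdout
nohup normalStartCommand > mylogfile.txt 2>&1 &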

2/ submit throttling is a fairly useless feature.  It kicks in if a db flush is triggered and the previous one hasn't finished yet.  It was useful during stress testing when I had a fixed number of fake miners so it would effectively throttle the submit rate.  In the real world though it won't really have any effect aside from giving the miner a short delay before they receive a response to their share submits...  I will get rid of it one of these days.

3/ getsourcestats is also something I built as an aid to testing, so I didn't really give much thought to making it readable.  Most of the stats are useful, some of them do nothing and some give completely rubbish results.  I also have a TODO to turn that into a proper API, probably with JSON output, so when that's done and I've stripped out the useless stuff and put some other more useful things in I'll write up some doco...

But for now here's a brief rundown (my comments follow each group of stats):

Memory used: 4.4375 MB - Freed by GC: 9.900390625MB

This only refers to heap memory.  Depending on how many connection threads you've got assigned you can add about 10-30mb to this to get real memory usage.  The total of 'used' + 'freed' is approx the currently allocated heap size.
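
For the numbers above that works out to roughly 4.4 + 9.9 ≈ 14.3 MB of allocated heap, so with the connection-thread overhead added, real memory usage would land somewhere around 25-45 MB.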

State [bitcoind-patch] - these stats are per daemon; if you have multiple daemons you'll see this section repeated
   
    Current Cache Max: 1000 - min/max: 1/1000
    Current Cache Size: 998
This is the work cache.  min/max are meaningless atm.  There used to be a dynamic cache sizing algorithm but it was a bit crap so I took it out.  The 1000 represents the max number of works it will hold before it stops calling for more; the 998 is the current number in the cache.  The cache can occasionally creep over your max by up to the Concurrent DL Requests value.

    Concurrent DL Requests: 10 - min/max: 1/20

This is a gotcha: currently this gets set to half the value you've configured (hence the 10 above against a max of 20).  It's another hangover from the old dynamic cache sizing algorithm, which also regulated the request rate.

    DL Request Interval (ms): 0 - min/max: 0/100
    Current Ask Rate (works/req): 10

These two are rubbish.

    Consecutive Connect Fails: 0
    Consecutive Http Fails: 0
    Consecutive Http Auth Fails: 0
    Consecutive Http Busy Fails: 0

These do actually work, but they'll always be zero unless your daemon crashes or you have network problems between psj and bitcoind.

    Cache Excess: 1,002.5
    Cache Excess Trend: 0.06

rubbish

    Cache Retreival Age: 13885

Average age of work when retrieved from the cache, measured in millis from when it was fetched from the daemon.

    Incoming Rate: 36.69/sec

How many works/sec you're getting from the daemon.  This doesn't represent the maximum, but if you look for a few seconds straight after a block change it will probably be close to it.

    Incoming Fullfillment Rate: 100%

The number of works received versus the number requested.  This will drop under 100% if you have HTTP errors or the daemon stops responding to requests for any reason.

    Outgoing Requested Rate: 0.78/sec
    Outgoing Delivered Rate: 0.78/sec
    Outgoing Fullfillment Rate: 100%

Same thing basically, but for the miner side.  If fulfillment drops below 100% it most likely means psj can't get work from the daemon fast enough.

Longpoll Connections: 1 / 1000

Ignore the 1000.  There used to be a limit but there isn't anymore.  The count includes connections that have been silently dropped by the client, but not connections that have expired.

WorkSource Stats:
      Stats for source: [bitcoind-patch]
        Current Block: 147243
        Cache:
          Work Received: 42965
          Work Delivered: 978

Fairly obvious; duplicated further below.

          Upstream Request Fail Rate: 0%
          Upstream Request Fail Rate Tiny: 0%
rubbish

          Immediately Serviced Rate: 91.2%

This one is worth a comment.  'Immediately serviced' means a miner requested work, psj checked the cache and found one already available.  If the cache is empty it will wait up to a few seconds for one to arrive, and that counts as 'not immediate', even though in reality the wait might only be a millisecond or two.

          MultiGet Delivery Rate: ?%
          Delayed Serviced Rate: 100%
          Not Serviced Rate: 0%
          Expired Work Rate: 100%
rubbish

          Duplicate Work Rate: 0%

Usually 0, but if you have an unpatched daemon there's a bug that causes duplicates... If you ever see this higher than 0.01% keep an eye on it; it's a definite indicator that something is wrong.

          Cache Growth Rate: 17.228%
          Cache Growth Rate Short: 29.76%

rubbish

        Work Submissions:
          Work Supplied: 978
          Work Submitted: 0
          Work Submitted Invalid: 0
          Work Submitted Unknown: 0
          Work Submitted Valid: 0
          Work Submitted Valid Real: 0

not rubbish but doesn't work.

        HTTP:
          Requests Issued: 42972
          Fail Rate: 0%
          Success trip time: 24.56 ms
          Header trip time: 24.55 ms
          Fail trip time: ? ms
          Expire trip time: ? ms

These are about the HTTP connection between psj and the daemon, nothing to do with the miner side of the server.  The trip times are useful for tuning max concurrent connections... once the latency starts to go up dramatically you've probably got too many.

        Cache Age:
          Entries: 998 Oldest: 10337 Newest: 4032 Avg: 5418 Reject Rate: 0%

Stats on the current contents of the cache.  Oldest, Newest and Avg are ages in millis.  Reject Rate is basically the same thing as the Duplicate Work Rate, except with a different moving average period.
sr. member
Activity: 266
Merit: 254
September 30, 2011, 04:46:34 AM
#31
Quick question?

Is there a way to include the USER_ID found in the worker table when inserting into the shares table?

It's easy to do but I'd have to make a minor code change and add it as a config option so it doesn't break backward compatibility.

If you log a feature request on the source code repo site I'll add it to the todo list.
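
In the meantime, if you only need the id for reporting, a join in the frontend query gets you the same thing (just a sketch; the pool_worker table name and its id/username columns are assumptions, so adjust to your actual schema):

Code:
-- attach the worker's id to each share via the username the miner authenticated with
SELECT s.*, w.id AS user_id
FROM shares s
JOIN pool_worker w ON w.username = s.username;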
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
September 29, 2011, 02:18:40 PM
#30

2)
When I check the screen it looks like this:

Dude, I have it running with the "screen" command; how do you check the screen?

newbie
Activity: 41
Merit: 0
September 28, 2011, 10:45:26 AM
#29
First, awesome work there shad... PoolServerJ performance over pushpool is great.

Just got two short questions and a request.

1)
At the moment I'm running poolserverj in screen and it works great, but if anyone has other suggestions, please post.

2)
When I check the screen it looks like this:

Doing database flush for Shares: 10
Flushed 10 shares to DB in 9.0ms (1111/sec)
Trimmed 14 entries from workmap and 29 entries from duplicate check sets in 0ms
Dropping submit throttle to 2ms
Submit Throttling on: false
Doing database flush for Shares: 7
Flushed 7 shares to DB in 4.0ms (1749/sec)
Submit Throttling on: false
Doing database flush for Shares: 15
Flushed 15 shares to DB in 8.0ms (1874/sec)
Trimmed 16 entries from workmap and 1003 entries from duplicate check sets in 0ms
Submit Throttling on: false
Doing database flush for Shares: 14
Flushed 14 shares to DB in 4.0ms (3499/sec)

I just wonder, what does "Submit Throttling on: false" mean?

Request:
I also have a request about ?method=getsourcestats: it would be nice to have a short description of what each stat really means. A short list would be great.

Sorry for all the dumb questions.

Btw, I run bitcoind 0.4.0; I switched from bitcoind 0.3.24 (patched with JoelKatz's 4diff) and it seems to run great with the new bitcoind.

/Best regards
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
September 28, 2011, 09:29:07 AM
#28
Quick question?

Is there a way to include the USER_ID found in the worker table when inserting into the shares table?
sr. member
Activity: 266
Merit: 254
September 22, 2011, 05:40:52 PM
#27
Is it likely, then, that it is caused by the overall server performance?

OK, looking at your top output it looks like your system is definitely paging.  You should probably restrict Java's max heap size, as its default is quite greedy.  Take a look at: http://poolserverj.org/documentation/performance-memory-tuning/

Particularly the very last section: "Limit the JVM Heap Size"
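
For reference, the heap cap is just a flag on the JVM launch command (a sketch only; the jar name and the 256MB figure are placeholders, so size it to your pool):

Code:
# cap the heap at 256MB so the JVM can't grow until the box starts paging
java -Xmx256m -jar poolserverj.jar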

sr. member
Activity: 266
Merit: 254
September 22, 2011, 06:50:29 AM
#26
OK, I've looked at your logs and I can only see one possibility at the moment if you're running the 4diff patch...

source.local.1.blockmonitor.maxPollInterval=20

With a 20ms poll interval that's a constant 50 requests/sec, which is loading your bitcoind as well as the pool... It should be fairly trivial for reasonable hardware though.
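
If the polling does turn out to be the problem, backing the interval off is a one-line change (the 250ms value here is just a sketch; it trades ~4 checks/sec against slightly slower new-block detection):

Code:
# check the daemon for a new block every 250ms instead of every 20ms
source.local.1.blockmonitor.maxPollInterval=250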

What is the spec of yr server and what else is running on it?
sr. member
Activity: 266
Merit: 254
September 22, 2011, 05:37:19 AM
#25
Is it likely, then, that it is caused by the overall server performance?

I doubt it, unless you're running on very limited hardware.  70 LPs should take 500ms *max* unless you're running on a pocket calculator with dialup.

Perhaps you should try collecting some logs and sending them to me as described in this post: https://bitcointalksearch.org/topic/m.538639
member
Activity: 118
Merit: 10
BTCServ Operator
September 22, 2011, 05:15:30 AM
#24
I actually have that patch running. It takes > 5 secs to dispatch the LP responses, though, on a ~20 GHash/s load with 70 workers. I never saw the incoming rate exceed 500/s, even after a restart with a big max cache.

Is it likely, then, that it is caused by the overall server performance?
sr. member
Activity: 266
Merit: 254
September 21, 2011, 08:30:15 PM
#23
Hey shadders,

I have the following problem:

Some users experience a couple of rejected shares after each LP. This is what I can see in debug mode. Do you have an explanation or maybe even a solution for me here?

Thanks for your help

Quote
LP continuation reached LP servlet but is not in 'initial' state: AsyncContinuation@89e2f1@REDISPATCHED,resumed
LP continuation reached LP servlet but is not in 'initial' state: AsyncContinuation@3228a1@REDISPATCHED,resumed

I could be wrong, but I think those log messages are a separate and possibly unrelated thing.  Actually they shouldn't be a problem; what I think is happening is that the request was previously suspended and resumed by the QoS filter before it reached the LP servlet.  I didn't take that into account and expected any LP request to arrive in an initial state.  I'll log it as a bug.   A workaround for now would be to disable the QoS filter.  Actually I'd be interested to see if that makes a difference.

As for the rejected shares, the ideal sequence of events is something like:
1/ worker gets work
2/ worker submits share
3/ worker gets work
4/ psj detects new block
5/ psj collects fresh work from bitcoind
6/ psj sends LP response
7/ worker receives LP response
8/ worker submits share

The time between 4 and 7 should be minimal, but if a worker happens to submit a share in that window it will be rejected.  It should be no more than 1-2 seconds on a busy server, provided the bitcoind is patched and able to feed new work to the poolserver fast enough.

If it's not some other fault then something is probably pushing that gap higher than it should be... watch the log during a block change with debug=true.  You should see when the new block is detected, and it should also tell you when all LP responses have been dispatched (and how long it took).  Anything longer than 1000ms is not ideal.  Large pools are able to push several thousand in under a second with psj.  A likely candidate for slowing this down is an unpatched bitcoind.  Please see https://bitcointalksearch.org/topic/m.384157 if you don't have the multithreaded rpc patch.
member
Activity: 118
Merit: 10
BTCServ Operator
September 21, 2011, 04:23:52 PM
#22
Hey shadders,

I have the following problem:

Some users experience a couple of rejected shares after each LP. This is what I can see in debug mode. Do you have an explanation or maybe even a solution for me here?

Thanks for your help

Quote
LP continuation reached LP servlet but is not in 'initial' state: AsyncContinuation@89e2f1@REDISPATCHED,resumed
LP continuation reached LP servlet but is not in 'initial' state: AsyncContinuation@3228a1@REDISPATCHED,resumed
sr. member
Activity: 266
Merit: 254
September 20, 2011, 05:58:33 PM
#21
I am currently 'testing' the latest 0.3.0rc1. The first thing I notice is the CPU usage. I am running in pushpoold compatibility mode. So why this CPU usage? Pushpoold rarely used any CPU but PSJ uses one full core.

The problem is most likely the default cache size being set too high.  See: source.local.1.maxCacheSize and source.local.1.cacheWaitTimeout

also: http://poolserverj.org/documentation/performance-memory-tuning/

I'm going to change the default settings in the next release to be more suitable for a small pool, as the current defaults come from some extreme load tests and I keep getting this question.  Suitable for a small pool is probably the sanest default since most people will fire it up in a low-usage test environment first.  I figure anyone evaluating psj for a pool larger than a couple of hundred GH is more likely to read the documentation and make the high-performance adjustments needed.
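
As a rough idea of what a 'small pool' setting looks like (just a sketch, and the value is a placeholder; size the cache to a bit more than the work you hand out between block changes):

Code:
# a few hundred cached works is plenty for a small pool and keeps the
# background getwork traffic (and CPU) down
source.local.1.maxCacheSize=200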

Quote
The second thing I notice is that PSJ inserts shares quite a bit slower than pushpoold, and that affects the speed counted on my frontend. I am currently getting 40 MH/s less detected in the frontend, so people WILL lose coins.

Shares are logged to the database more slowly by design.

shares.maxEntrysToQueueBeforeCommit=5000

shares.maxEntryAgeBeforeCommit=10

You can effectively disable this delayed writing by setting these to 0.
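
That is, something like this in the properties file if you want shares visible in the DB the instant they're submitted (at the cost of many more small commits):

Code:
# flush every share immediately instead of batching (defaults shown above)
shares.maxEntrysToQueueBeforeCommit=0
shares.maxEntryAgeBeforeCommit=0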

You are not getting any fewer hashes on your pool.  If anything you should be getting slightly more.  I'd lay money that the problem is with reporting.  The work is still being dished out, hashed, returned and, if valid, submitted to the bitcoind, so nothing is going missing.  The timestamps are added when the work is submitted by the worker, not when it's written to the database.  Is your database using a timestamp column with a CURRENT_TIMESTAMP default value, by any chance?

legendary
Activity: 1862
Merit: 1011
Reverse engineer from time to time
September 20, 2011, 02:41:57 PM
#20
I am currently 'testing' the latest 0.3.0rc1. The first thing I notice is the CPU usage. I am running in pushpoold compatibility mode. So why this CPU usage? Pushpoold rarely used any CPU but PSJ uses one full core.

The second thing I notice is that PSJ inserts shares quite a bit slower than pushpoold, and that affects the speed counted on my frontend. I am currently getting 40 MH/s less detected in the frontend, so people WILL lose coins.
sr. member
Activity: 266
Merit: 254
September 18, 2011, 08:00:44 PM
#19
Hi wtfman,

I assume you're running 0.2.9 then?  If so, those errors aren't a problem.  It's an expected exception and probably shouldn't be logged.  I'm pretty sure it isn't unless you've got debug enabled.

However, I really would recommend you get 0.3.0 working.  The longpolling has been rewritten from scratch, which has given it huge performance boosts as well as fixing a number of obscure errors.

Post some detail about the SQL errors and I'll see if I can help.  There should be no problem using a different share table name since you can specify the query in the properties file.  Post the query you're using, the table CREATE statement and any exceptions you're seeing in the log.
member
Activity: 118
Merit: 10
BTCServ Operator
September 18, 2011, 01:59:14 PM
#18
Hey, I tried to use the latest poolserverj in pushpool-compatible mode but it didn't work for me. I have a share table that isn't called 'shares' and I also set it up in the properties file, but it still tried to insert data into 'shares'.

I adjusted the DB to your design and it works so far, but just now I found this in the stdout:

Code:
Flushed 61 shares to DB in 35.0ms (1742/sec)
Pausing Share logging due to block change.
java.lang.IllegalStateException: IDLE,initial
        at org.eclipse.jetty.server.AsyncContinuation.complete(AsyncContinuation.java:449)
        at com.shadworld.poolserver.BlockChainTracker$NotifyLongpollClientsThread.run(BlockChainTracker.java:178)
java.lang.IllegalStateException: IDLE,initial
        at org.eclipse.jetty.server.AsyncContinuation.complete(AsyncContinuation.java:449)
        at com.shadworld.poolserver.BlockChainTracker$NotifyLongpollClientsThread.run(BlockChainTracker.java:178)
java.lang.IllegalStateException: IDLE,initial
        at org.eclipse.jetty.server.AsyncContinuation.complete(AsyncContinuation.java:449)
        at com.shadworld.poolserver.BlockChainTracker$NotifyLongpollClientsThread.run(BlockChainTracker.java:178)
java.lang.IllegalStateException: IDLE,initial
        at org.eclipse.jetty.server.AsyncContinuation.complete(AsyncContinuation.java:449)
        at com.shadworld.poolserver.BlockChainTracker$NotifyLongpollClientsThread.run(BlockChainTracker.java:178)
java.lang.IllegalStateException: IDLE,initial
        at org.eclipse.jetty.server.AsyncContinuation.complete(AsyncContinuation.java:449)
        at com.shadworld.poolserver.BlockChainTracker$NotifyLongpollClientsThread.run(BlockChainTracker.java:178)
java.lang.IllegalStateException: IDLE,initial
        at org.eclipse.jetty.server.AsyncContinuation.complete(AsyncContinuation.java:449)
        at com.shadworld.poolserver.BlockChainTracker$NotifyLongpollClientsThread.run(BlockChainTracker.java:178)
java.lang.IllegalStateException: IDLE,initial
        at org.eclipse.jetty.server.AsyncContinuation.complete(AsyncContinuation.java:449)
        at com.shadworld.poolserver.BlockChainTracker$NotifyLongpollClientsThread.run(BlockChainTracker.java:178)
java.lang.IllegalStateException: IDLE,initial
        at org.eclipse.jetty.server.AsyncContinuation.complete(AsyncContinuation.java:449)
        at com.shadworld.poolserver.BlockChainTracker$NotifyLongpollClientsThread.run(BlockChainTracker.java:178)
member
Activity: 118
Merit: 10
BTCServ Operator
September 12, 2011, 06:45:27 AM
#17
thanks!
sr. member
Activity: 266
Merit: 254
September 12, 2011, 06:32:54 AM
#16
Prepare Statement from Conf
Code:
### native - ensure usePushPoolCompatibleFormat=false
db.stmt.insertShare=INSERT INTO shares (rem_host, username, our_result, upstream_result, reason, solution, time, source) VALUES (?, ?, ?, ?, ?, ?, ?, ?)


Now, the insert for block_num does seem to be missing, but I have no idea where I should put it in. I have looked a bit at the source code but didn't get exact info. Pls help

It's trying to write a null to a NOT NULL column... try

INSERT INTO shares (rem_host, username, our_result, upstream_result, reason, solution, time, source, block_num, prev_block_hash) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

or alternatively remove the NOT NULL restriction from the block_num and prev_block_hash columns.

The supplied SQL scripts in 0.2.9 set those columns to NOT NULL, which they probably shouldn't have.  This will be changed in the next release.
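
If you go the second route, the change is something like this (a sketch only; I'm guessing the column types, so match whatever your CREATE statement actually used):

Code:
-- allow nulls so the existing insert statement keeps working
ALTER TABLE shares
  MODIFY block_num INT NULL,
  MODIFY prev_block_hash VARCHAR(64) NULL;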

BTW, unless you've got a good reason for it I'd leave out the prev_block_hash column.
member
Activity: 118
Merit: 10
BTCServ Operator
September 12, 2011, 06:21:30 AM
#15
Hey, I am trying to test PoolServerJ but I have a problem with the db inserts.

Using v0.29a

Error:
Code:
work submit success, result: false
Submit Throttling on: false
Doing database flush for Shares: 1
Failed to commit to database.
java.sql.BatchUpdateException: Field 'block_num' doesn't have a default value

Query being executed when exception was thrown:
INSERT INTO shares (rem_host, username, our_result, upstream_result, reason, solution, time, source) VALUES ('xx.xx.xx.xx', 'test', 1, 0, null, '0001000131dcbc30ad5ae6021834e79139879666f1e628de6c78e9e600007d8a000000003c7fef66abefaba3bd824e32ab44351b8f0186196944b909c198b96ee01fc77a4e6de5471b00b269cce138bf000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000', '2011-09-12 11:09:38', 'namecoind-localhost')


        at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:2024)
        at com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1449)
        at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
        at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
        at com.shadworld.poolserver.db.shares.DefaultPreparedStatementSharesDBFlushEngine.flushToDatabase(DefaultPreparedStatementSharesDBFlushEngine.java:111)
        at com.shadworld.poolserver.logging.ShareLoggingThread.run(ShareLoggingThread.java:156)
Caused by: java.sql.SQLException: Field 'block_num' doesn't have a default value

Query being executed when exception was thrown:
INSERT INTO shares (rem_host, username, our_result, upstream_result, reason, solution, time, source) VALUES ('xx.xx.xx.xx', 'test', 1, 0, null, '0001000131dcbc30ad5ae6021834e79139879666f1e628de6c78e9e600007d8a000000003c7fef66abefaba3bd824e32ab44351b8f0186196944b909c198b96ee01fc77a4e6de5471b00b269cce138bf000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000', '2011-09-12 11:09:38', 'namecoind-localhost')


        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2625)
        at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2119)
        at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2415)
        at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:1976)
        ... 5 more
Flushed 1 shares to DB in 35.0ms (28/sec)



Prepare Statement from Conf
Code:
### native - ensure usePushPoolCompatibleFormat=false
db.stmt.insertShare=INSERT INTO shares (rem_host, username, our_result, upstream_result, reason, solution, time, source) VALUES (?, ?, ?, ?, ?, ?, ?, ?)


Now, the insert for block_num does seem to be missing, but I have no idea where I should put it in. I have looked a bit at the source code but didn't get exact info. Pls help