
Topic: PoolServerJ - Tech Support - page 6. (Read 27532 times)

sr. member
Activity: 266
Merit: 254
November 01, 2011, 10:30:42 AM
#54
The block_num column in the sample db script is set to 'not null'... change it and it should be fine.
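
For example, assuming block_num is an integer column in the sample schema (the column type here is an assumption, so check your actual schema), either of these MySQL statements should do:

Code:
-- make the column nullable:
ALTER TABLE pool_shares MODIFY block_num INT NULL;
-- or keep it NOT NULL but give it a default:
ALTER TABLE pool_shares MODIFY block_num INT NOT NULL DEFAULT 0;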
sr. member
Activity: 403
Merit: 250
November 01, 2011, 10:28:29 AM
#53
I'm going to throw that in soon. I restarted PSJ and now I'm getting this error...

Quote
Failed to commit to database.
java.sql.BatchUpdateException: Field 'block_num' doesn't have a default value

Query being executed when exception was thrown:
INSERT INTO pool_shares (rem_host, username, our_result, upstream_result, reason, solution, time) VALUES ('213.112.59.222', 'Ly5pv6', 1, 0, null, '00000001763915294e20 .... 000000000080020000', UNIX_TIMESTAMP('2011-11-01 16:26:22'))



        at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:2024)
        at com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1449)
        at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
        at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
        at com.shadworld.poolserver.db.shares.DefaultPreparedStatementSharesDBFlushEngine.flushToDatabase(DefaultPreparedStatementSharesDBFlushEngine.java:127)
        at com.shadworld.poolserver.logging.ShareLoggingThread.run(ShareLoggingThread.java:156)

Any ideas?

My config (snippet):

Quote
usePushPoolCompatibleFormat=false

...

db.stmt.insertShare=INSERT INTO pool_shares (rem_host, username, our_result, upstream_result, reason, solution, time) VALUES (?, ?, ?, ?, ?, ?, UNIX_TIMESTAMP(?))
sr. member
Activity: 266
Merit: 254
November 01, 2011, 10:23:54 AM
#52
Quote
Hmm, strange.. I noticed the JDBC URL, which contains autoReconnect=true.
I'm having issues with writing shares to the db... I get the error message above in the log.

aaahh, when I had that issue previously it was with the workerSql connection, not the shares one, so whatever I changed may have only affected that. Add that same line but where I said 'workerSql' use 'sharesSql'.
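
That is, assuming the shares connection is built the same way in the Conf class, the added line would be:

Code:
sharesSql.getJdbcOptionMap().put("autoReconnect", "true");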

sr. member
Activity: 266
Merit: 254
November 01, 2011, 10:21:36 AM
#51
I could have sworn I'd set that because I've seen this problem before. I've just changed it as a default setting, which will require a full release since it's buried in one of the support libs... Since you're obviously building from source you can set it in the Conf class.

Add the line:
workerSql.getJdbcOptionMap().put("autoReconnect", "true");

just after:
workerSql = new MySql(whost, wport == null ? "3306" : String.valueOf(wport), wschema, wuser, wpassword);

sr. member
Activity: 403
Merit: 250
November 01, 2011, 10:20:41 AM
#50
Hmm, strange.. I noticed the JDBC URL, which contains autoReconnect=true.
I'm having issues with writing shares to the db... I get the error message above in the log.
sr. member
Activity: 403
Merit: 250
November 01, 2011, 10:14:27 AM
#49
If the connection between MySQL and poolserverj goes down (timeout and/or other reasons), the server has issues reconnecting. Please see the following error message.

Quote
java.sql.BatchUpdateException: The last packet successfully received from the server was 4,664,997 milliseconds ago.  The last packet sent successfully to the server was 1,950,645 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.

How do I set autoReconnect=true?

I can increase the timeout on the server side, but that feels kinda non-optimal.

/ Jim


EDIT:
This is for the non-MM version.
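
For reference, autoReconnect is a standard Connector/J connection property, so it can also be appended directly to the JDBC URL (host, port and schema below are placeholders):

Code:
jdbc:mysql://localhost:3306/pooldb?autoReconnect=true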
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
October 25, 2011, 08:25:48 AM
#48
Quote
MM edition .06 is now on the downloads page. This is the first version of MM that I'm reasonably happy with.

It should now be sending longpolls if either block changes, and the server is now actively monitoring the state of all block chains. Issues with high CPU load caused by PSJ rejecting valid work should also be resolved.

If you are going to try it I'd appreciate it if you could set the following:
debug=true
trace=true
traceTargets=blockmon

If anything odd happens this should give me some useful info to play with.


100 GHs and looking good!
I have everything set except:
traceTargets=blockmon


I have a good feeling about this one.
full member
Activity: 142
Merit: 100
October 25, 2011, 05:18:59 AM
#47
Running very smoothly now! Great work! Even LP is happening quite often and it seems to work out.

I'll report back after some more hours under load.
member
Activity: 118
Merit: 10
BTCServ Operator
October 25, 2011, 02:55:35 AM
#46
Quote
MM edition .06 is now on the downloads page. This is the first version of MM that I'm reasonably happy with.

It should now be sending longpolls if either block changes, and the server is now actively monitoring the state of all block chains. Issues with high CPU load caused by PSJ rejecting valid work should also be resolved.

If you are going to try it I'd appreciate it if you could set the following:
debug=true
trace=true
traceTargets=blockmon

If anything odd happens this should give me some useful info to play with.


great, testing it right now
sr. member
Activity: 266
Merit: 254
October 25, 2011, 02:26:44 AM
#45
MM edition .06 is now on the downloads page. This is the first version of MM that I'm reasonably happy with.

It should now be sending longpolls if either block changes, and the server is now actively monitoring the state of all block chains. Issues with high CPU load caused by PSJ rejecting valid work should also be resolved.

If you are going to try it I'd appreciate it if you could set the following:
debug=true
trace=true
traceTargets=blockmon

If anything odd happens this should give me some useful info to play with.
sr. member
Activity: 266
Merit: 254
October 24, 2011, 08:26:36 PM
#44
Thanks for the info Urstroyer...  I think this is the indicator I needed:

Code:
Incoming Rate: 3,387.53/sec
Incoming Fullfillment Rate: 100%
Outgoing Requested Rate: 29.5/sec

Somewhere in the chain it's discarding valid work and getting more from the daemon, not because of duplicates in this case. One of the validation checks it does is to ensure the work is from the current block before sending it out. It seems having different sets of block numbers is causing a bit of confusion.

I'm actually nearly ready to release the fix that should make longpolling work for both chains, and I've replaced the blocknum field (which was just a long int) with a new BlockNumbers class that tracks blocks for all chains, so that should be the end of it with a bit of luck.
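
As a rough sketch of that idea (hypothetical only; the real BlockNumbers class in the PSJ source may look different), it amounts to a per-chain map of latest block numbers that work can be validated against:

Code:
// hypothetical sketch - tracks the latest known block per chain so work
// validation can check against the right chain instead of a single long int
public class BlockNumbers {
    private final java.util.concurrent.ConcurrentMap<String, Long> latest =
            new java.util.concurrent.ConcurrentHashMap<String, Long>();

    // record a new block for a chain, e.g. "btc" or "nmc"
    public void update(String chain, long blockNum) {
        latest.put(chain, Long.valueOf(blockNum));
    }

    // is this work unit built on the chain's current block?
    public boolean isCurrent(String chain, long blockNum) {
        Long cur = latest.get(chain);
        return cur != null && cur.longValue() == blockNum;
    }
}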
full member
Activity: 142
Merit: 100
October 24, 2011, 11:58:40 AM
#43
Quote
hey again.

there was even another issue with running mm-0.5. It worked great until the first Long Poll, then CPU usage skyrocketed and didn't drop any more. After a couple of mins the work queue was emptied out, so miners only received a "No Work available" message and the pool hash rate dropped to 0.0 Hash/s.

I have tried it before with only 2 miners in testnet and there was no problem. The problem occurred with approximately 15 GHash/s and 60 workers.

I have exactly the same issue: poolserverj mm is running smoothly at low cpu usage and finding both nmc and btc blocks until a network block is found and an lp happens.
Then bitcoind is under heavy cpu usage until I restart poolserverj. We currently use a 4diff patched version of vinced bitcoind.

Quote
When you see this happen can you try checking http://localhost:8997/?method=getsourcestats and report back if the Duplicate rate or Reject rate are above zero?

If not, can you set debug=true trace=true traceTargets=merged and throw the output onto pastebin?


Running psj mm for an hour now with a 35 GH/s load, and I've got some new test results.

It seems like the heavy cpu load (70-80%) on bitcoind CAN happen on every lp that was triggered by a new network block.

When this happens, I noticed that the field block_num in the shares table gets the current NMC block number!

It also happened that on another network block and lp the cpu usage went back to the normal 3-5% load, and guess what! The field block_num in the shares table got the BTC block number again on every share.

The situation can actually switch on every lp.

Here are two snapshots of getsourcestats:
#### 1 minute after starting psj (heavy cpu load, nmc block_num in shares table) ####
http://pastebin.com/BMAx0vgG

#### 30 minutes after starting psj (heavy cpu load, nmc block_num in shares table) ####
http://pastebin.com/zViA7K5b

#### Console on lp (normal cpu load to heavy cpu load) ####
http://pastebin.com/zhfQG1ES

I couldn't figure out what circumstances cause this effect; it seems pretty random to me. Maybe this can help.
hero member
Activity: 780
Merit: 510
Bitcoin - helping to end bankster enslavement.
October 23, 2011, 09:20:46 PM
#42
Quote
hey again.

there was even another issue with running mm-0.5. It worked great until the first Long Poll, then CPU usage skyrocketed and didn't drop any more. After a couple of mins the work queue was emptied out, so miners only received a "No Work available" message and the pool hash rate dropped to 0.0 Hash/s.

I have tried it before with only 2 miners in testnet and there was no problem. The problem occurred with approximately 15 GHash/s and 60 workers.

I have exactly the same issue: poolserverj mm is running smoothly at low cpu usage and finding both nmc and btc blocks until a network block is found and an lp happens.
Then bitcoind is under heavy cpu usage until I restart poolserverj. We currently use a 4diff patched version of vinced bitcoind.

This also happens to me with mm-psj; one suggestion from shadders was to set forceAllSubmitsUpstream to false.

I have not tried it. Also, you should lower your work cache sizes big time.

I know you were upset with me in the chat room for not using/testing it, but I had a working part of my pool and many other parts that were not working; I had to prioritize them as people wanted me to expand into other countries.

Also I was hoping you would find the bug and issue a new version before I got a chance to retry your suggestions.
sr. member
Activity: 266
Merit: 254
October 23, 2011, 07:30:22 PM
#41
Quote
hey again.

there was even another issue with running mm-0.5. It worked great until the first Long Poll, then CPU usage skyrocketed and didn't drop any more. After a couple of mins the work queue was emptied out, so miners only received a "No Work available" message and the pool hash rate dropped to 0.0 Hash/s.

I have tried it before with only 2 miners in testnet and there was no problem. The problem occurred with approximately 15 GHash/s and 60 workers.

I have exactly the same issue: poolserverj mm is running smoothly at low cpu usage and finding both nmc and btc blocks until a network block is found and an lp happens.
Then bitcoind is under heavy cpu usage until I restart poolserverj. We currently use a 4diff patched version of vinced bitcoind.


When you see this happen can you try checking http://localhost:8997/?method=getsourcestats and report back if the Duplicate rate or Reject rate are above zero?

If not, can you set debug=true trace=true traceTargets=merged and throw the output onto pastebin?
sr. member
Activity: 266
Merit: 254
October 23, 2011, 07:21:57 PM
#40
Quote
I've started testing poolserverj at bitcoins.lc for handling larger loads better (having issues with LPs against a large number of connections).
But before rolling out anything public, I really need to get rid of the DATETIME fields in MySQL. Is that possible?

I'd like to have everything in GMT unix timestamps. One "hackish" way would be to do the conversion in the statement, but I'd actually like to make poolserverj insert a unix timestamp instead of having to do a TO_UNIXTIME(?) in the statement.

Even better would be to drop MySQL entirely and finally use a better-scaling database (MongoDB or another NoSQL DB), let Mongo take care of timestamps on its own, and also let MongoDB take care of replication and load balancing / sharding.

Any planned NoSQL support?

Well, before the advent of merged mining PSJ had exceptional longpoll performance, but as you can see from the last few posts in this thread there are a few issues to be ironed out...

There is one good reason why you'd want to have timestamps set on the psj side rather than the DB side. Because psj caches shares and bulk-writes them, there can be a delay between when they came in and when the DB sees them. PSJ timestamps the share as soon as it's received and uses this timestamp when writing to the db. So if accurate share times are important to you, that's something to consider.

I'm not really familiar with mongo or no-sql. If they have JDBC drivers then adding support would be fairly trivial. However it won't happen until mm is stabilised. Dropping mysql support isn't likely since it's the most commonly used.

Having poolserverj insert a timestamp directly should also be fairly trivial. The internal representation is the same as a unix timestamp (GMT) but in millis instead of seconds. If you're comfortable building from source it would only need a couple of lines modded in DefaultPreparedStatementSharesDBFlushEngine. If you really want crazy performance and your share writes don't update existing rows, have a look at the bulkloader engines in the source.

I presume what you're actually after is just an integer column? If so, have you tried just changing the column type to see if it works? The code that sets it is this:
stmt.setTimestamp(7, new Timestamp(entry.createTime));

And I have a feeling that if the target column type is a BIGINT it will probably just convert it.

If not, the change you'd need to make would be something like:
stmt.setLong(7, entry.createTime / 1000);
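
Put together, the change in DefaultPreparedStatementSharesDBFlushEngine would be roughly this (a sketch only; column index 7 is taken from the lines above, and entry.createTime being millis since epoch is described earlier in this post):

Code:
// old: writes a java.sql.Timestamp, expects a DATETIME/TIMESTAMP column
// stmt.setTimestamp(7, new Timestamp(entry.createTime));

// new: write plain seconds-since-epoch into an integer (BIGINT) column
stmt.setLong(7, entry.createTime / 1000);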
sr. member
Activity: 266
Merit: 254
October 23, 2011, 07:05:52 PM
#39
Quote
Running mm-0.5 I stumble upon this error every now and then.

org.eclipse.jetty.io.RuntimeIOException: org.eclipse.jetty.io.EofException
        at org.eclipse.jetty.io.UncheckedPrintWriter.setError(UncheckedPrintWriter.java:107)
        at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:280)
        at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:295)
        at java.io.PrintWriter.append(PrintWriter.java:977)
        at com.shadworld.poolserver.LongpollHandler.completeLongpoll(LongpollHandler.java:177)

This is a normal/expected exception.  It happens when psj tries to send a longpoll response but the client has silently dropped the connection.  In this case psj will recycle the work for the next LP connection in the queue.

The reason you're seeing it now and may not have before is that while psj-mm is in alpha I'm dumping a lot more events to the log so I can see better what's going on inside. This particular exception happens all the time with the pre-mm version, but isn't logged.
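
The pattern described above is roughly this (a hypothetical sketch, not PSJ's actual code; writer, longpollResponse, work and workQueue are placeholder names):

Code:
try {
    writer.append(longpollResponse); // push the LP response to the client
    writer.flush();
} catch (org.eclipse.jetty.io.RuntimeIOException e) {
    // the client silently dropped the connection (EofException / broken pipe
    // underneath), so recycle the work for the next LP connection in line
    workQueue.offer(work);
}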
full member
Activity: 142
Merit: 100
October 23, 2011, 10:17:58 AM
#38
Quote
hey again.

there was even another issue with running mm-0.5. It worked great until the first Long Poll, then CPU usage skyrocketed and didn't drop any more. After a couple of mins the work queue was emptied out, so miners only received a "No Work available" message and the pool hash rate dropped to 0.0 Hash/s.

I have tried it before with only 2 miners in testnet and there was no problem. The problem occurred with approximately 15 GHash/s and 60 workers.

I have exactly the same issue: poolserverj mm is running smoothly at low cpu usage and finding both nmc and btc blocks until a network block is found and an lp happens.
Then bitcoind is under heavy cpu usage until I restart poolserverj. We currently use a 4diff patched version of vinced bitcoind.
newbie
Activity: 4
Merit: 0
October 23, 2011, 09:37:32 AM
#37
hey again.

there was even another issue with running mm-0.5. It worked great until the first Long Poll, then CPU usage skyrocketed and didn't drop any more. After a couple of mins the work queue was emptied out, so miners only received a "No Work available" message and the pool hash rate dropped to 0.0 Hash/s.

I have tried it before with only 2 miners in testnet and there was no problem. The problem occurred with approximately 15 GHash/s and 60 workers.
sr. member
Activity: 403
Merit: 250
October 23, 2011, 08:56:34 AM
#36
I've started testing poolserverj at bitcoins.lc for handling larger loads better (having issues with LPs against a large number of connections).
But before rolling out anything public, I really need to get rid of the DATETIME fields in MySQL. Is that possible?

I'd like to have everything in GMT unix timestamps. One "hackish" way would be to do the conversion in the statement, but I'd actually like to make poolserverj insert a unix timestamp instead of having to do a TO_UNIXTIME(?) in the statement.

Even better would be to drop MySQL entirely and finally use a better-scaling database (MongoDB or another NoSQL DB), let Mongo take care of timestamps on its own, and also let MongoDB take care of replication and load balancing / sharding.

Any planned NoSQL support?
member
Activity: 118
Merit: 10
BTCServ Operator
October 23, 2011, 06:36:04 AM
#35
Running mm-0.5 I stumble upon this error every now and then.

org.eclipse.jetty.io.RuntimeIOException: org.eclipse.jetty.io.EofException
        at org.eclipse.jetty.io.UncheckedPrintWriter.setError(UncheckedPrintWriter.java:107)
        at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:280)
        at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:295)
        at java.io.PrintWriter.append(PrintWriter.java:977)
        at com.shadworld.poolserver.LongpollHandler.completeLongpoll(LongpollHandler.java:177)
        at com.shadworld.poolserver.LongpollHandler$LongpollTimeoutTask.call(LongpollHandler.java:340)
        at com.shadworld.poolserver.LongpollHandler$LongpollTimeoutTask.call(LongpollHandler.java:1)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:636)
Caused by: org.eclipse.jetty.io.EofException
        at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:911)
        at org.eclipse.jetty.http.AbstractGenerator.flush(AbstractGenerator.java:431)
        at org.eclipse.jetty.server.HttpOutput.flush(HttpOutput.java:89)
        at org.eclipse.jetty.server.HttpConnection$Output.flush(HttpConnection.java:1139)
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:168)
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:96)
        at java.io.ByteArrayOutputStream.writeTo(ByteArrayOutputStream.java:126)
        at org.eclipse.jetty.server.HttpWriter.write(HttpWriter.java:283)
        at org.eclipse.jetty.server.HttpWriter.write(HttpWriter.java:107)
        at org.eclipse.jetty.io.UncheckedPrintWriter.write(UncheckedPrintWriter.java:271)
        ... 12 more
Caused by: java.io.IOException: Broken pipe
        at sun.nio.ch.FileDispatcher.writev0(Native Method)
        at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
        at sun.nio.ch.IOUtil.write(IOUtil.java:182)
        at sun.nio.ch.SocketChannelImpl.write0(SocketChannelImpl.java:383)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:406)
        at java.nio.channels.SocketChannel.write(SocketChannel.java:384)
        at org.eclipse.jetty.io.nio.ChannelEndPoint.gatheringFlush(ChannelEndPoint.java:347)
        at org.eclipse.jetty.io.nio.ChannelEndPoint.flush(ChannelEndPoint.java:285)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint.flush(SelectChannelEndPoint.java:259)
        at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:843)
        ... 21 more