Sounds reasonable shads. Thanks for clearing some things up.
My biggest hurdle with PSJ right now is that it has no support for non-static SQL schemas... not that my schema is dynamic, but it isn't a pushpool schema either (and of course not the PSJ schema). I have lots of internal tricks going on inside the getwork server to handle DGM, and it seems I would have to make those same changes to PSJ. I would really like to be able to change the SQL without having to muck about in the PSJ source code... that is honestly the biggest issue stopping me from trialing PSJ.
If that's the case then you might be interested in the new column mappings used in the latest version. Previously it was built in such a way that if you wanted data field 9 included in your query, you had to include fields 1-8 as well, which was very restrictive and inefficient if you happened to only need columns 1, 2 and 9, for example.
Now you can provide a mapping string which lets you pick and choose data fields and map them to positions in your query. This has opened up the possibility of adding any number of additional data fields and dynamic calculations (a sample mapping is sketched further down). The current 'menu' includes:
### 1 - remote_host - TEXT or VARCHAR
### 2 - username - TEXT or VARCHAR
### 3 - our_result - BOOLEAN or INT(1), if pushpoolCompatibility mode: ENUM(Y,N) or TEXT or VARCHAR
### 4 - upstream_result - BOOLEAN or INT(1), if pushpoolCompatibility mode: ENUM(Y,N) or TEXT or VARCHAR
### 5 - reason - TEXT or VARCHAR
### 6 - solution - TEXT or VARCHAR (length 257)
### 7 - time - TIMESTAMP
### 8 - source - TEXT or VARCHAR
### 9 - block_num - INT or TEXT or VARCHAR
### 10 - prev_block_hash - TEXT or VARCHAR(65) - this is just solution.substring(8, 72) - may be useful for indexing (see the note just after this list)
### 11 - useragent - TEXT or VARCHAR
### 12 - unique_part - TEXT or VARCHAR(88) - the part of solution that's unique in the block: merkleroot, time, difficulty, nonce
### 13 - nonce - TEXT or VARCHAR(8) - nonce in hex
### 14 - hash - TEXT or VARCHAR(64)
### 15 - unix_time - BIGINTEGER
###
### 16 - 50 reserved
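For anyone wondering where the lengths on fields 10, 12 and 13 come from: they're just fixed slices of the 160-hex-character block header at the start of solution. A rough Java illustration (the helper names are mine for the example, not PSJ's actual API; the offsets follow the standard 80-byte header layout):

// Illustration only - invented helper names. Hex offsets within the header:
// version 0-8, prev hash 8-72, merkle root 72-136, time 136-144, bits 144-152, nonce 152-160.
public final class SolutionSlices {
    static String prevBlockHash(String solution) { return solution.substring(8, 72); }    // field 10, 64 chars
    static String uniquePart(String solution)    { return solution.substring(72, 160); }  // field 12, 88 chars
    static String nonce(String solution)         { return solution.substring(152, 160); } // field 13, 8 chars
}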
There are also per-chain our_result_ fields. You can expect this menu to expand considerably in the near future.
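To make that concrete, here's a rough sketch of what a mapping and its matching query could look like, mirroring the 1, 2 and 9 example above. The property names are placeholders for illustration only (check the sample properties file shipped with PSJ for the real keys); the idea is simply that each position in the mapping string names a menu number, and that value gets bound to the corresponding ? in your statement:

# Illustrative only - assumed property names, not necessarily PSJ's actual keys.
# Bind the three parameters to data fields 1 (remote_host), 2 (username) and 9 (block_num),
# skipping fields 3-8 entirely:
shareLogging.columnMapping=1,2,9
shareLogging.insertQuery=INSERT INTO my_shares (remote_host, username, block_num) VALUES (?, ?, ?)

The point being that the table layout and column order are now yours to choose; PSJ only needs to know which menu number feeds each parameter.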
Adding new ones is now trivial. For example, there are a number of stats that have been tracked internally by the workers for some time; these can now be exposed to the DB engine easily.
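Just to illustrate why it's trivial (the class and field names here are invented for the sketch, not PSJ's real code): each menu number ultimately resolves to a single getter on the logged share entry, so exposing another internally tracked stat is essentially one more case.

// Sketch only - invented names, not the actual PSJ classes.
public final class DataFieldResolver {
    Object resolve(int fieldId, ShareEntry share) {
        switch (fieldId) {
            case 1:  return share.remoteHost;      // menu field 1
            case 2:  return share.username;        // menu field 2
            case 16: return share.workerShareRate; // hypothetical new worker stat from the reserved range
            default: throw new IllegalArgumentException("unknown data field: " + fieldId);
        }
    }
    // minimal stub so the sketch stands alone
    static final class ShareEntry { String remoteHost; String username; double workerShareRate; }
}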
Obviously, if you want to do additional calculations for DGM inside the server you can't really avoid building from source, although it's no longer as daunting as it once was. I published this step-by-step guide to getting a build environment ready a couple of weeks ago: http://poolserverj.org/documentation/guide-to-setting-up-poolserverj-in-eclipse-3-7/
I don't know a lot about DGM aside from what it stands for, but if you can give a rundown of what you need to do, I can probably tell you where in the code the best place to achieve it would be. The DB API itself has an awful lot of data exposed to it; in most cases it's a single method that needs modifying.