
Topic: CKPOOL - Open source pool/proxy/passthrough/redirector/library in c for Linux - page 19. (Read 124144 times)

legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
There are 2 git folders html and pool
html is the web folder.
If you read index.php (which is only 1 line of code) it should be abundantly clear where pool goes :)

The web/db side of the pool is not designed for the non-technical, on purpose.

ok so I see that; it looks to only work with the db version. I wondered if it could be a way of seeing proxy stats. I'm guessing with the standalone proxy (-A -p) you won't get any stats other than the logs. I might have a go at getting the db set up.

Edit: ohhh of course, I need to move pool to the web directory lol
No, the web based interface is connected to ckdb.
i.e. you need to run ckpool in full mode with ckdb to be able to get full web stats out of it.

You could use the worker.php and address.php I wrote that uses Con's solo pool stats interface.
However, very much by design, ckpool itself isn't a store of mining information, just as ckdb itself isn't a work generator for a mining pool.

The separation makes for a much more powerful pool.
CKPool's job is to generate work, hand it out to miners and report the work and hash rate that the miners do.
This removes all the overhead of data storage of a pool and means ckpool concentrates on being the best that it is at handling work.

CKDB deals with all the data storage and web interface needed and thus you need ckdb also if you want all that.
hero member
Activity: 672
Merit: 500
http://fuk.io - check it out!
u rock man, loving your work
sr. member
Activity: 308
Merit: 250
Decentralize your hashing - p2pool - Norgz Pool
There are 2 git folders html and pool
html is the web folder.
If you read index.php (which is only 1 line of code) it should be abundantly clear where pool goes :)

The web/db side of the pool is not designed for the non-technical, on purpose.

ok so I see that; it looks to only work with the db version. I wondered if it could be a way of seeing proxy stats. I'm guessing with the standalone proxy (-A -p) you won't get any stats other than the logs. I might have a go at getting the db set up.

Edit: ohhh of course, I need to move pool to the web directory lol
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
There are 2 git folders html and pool
html is the web folder.
If you read index.php (which is only 1 line of code) it should be abundantly clear where pool goes :)

The web/db side of the pool is not designed for the non-technical, on purpose.
sr. member
Activity: 308
Merit: 250
Decentralize your hashing - p2pool - Norgz Pool
ok so firstly, my Linux skills are not as good as my Windows skills. I've compiled ckpool and bitcoind, and I'm trying to set up the web interface: I point Apache at the http directory but get Forbidden. Apache works with the default site so the config is good. I have chmod 755 on both the http and pool directories.

What am i missing?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Milestone 7 was recently tagged as a nice stable point in the code to use, although we have continued to develop in the interim.

New features are the addition of the maxdiff and maxclients options.

Stratum redirect can now be sent via ckpmsg on the console to a URL of choice instead of just a blank redirect. Reconnect by itself will just issue a reconnect but url and port can be added:
reconnect:url,port

A substantial amount of performance tuning was performed on the code now that I've seen it working on kano's ckpool, allowing me to find the largest CPU users and concentrate scalability improvements there. I've fixed a number of upper client limit issues and now it should only really be limited by available RAM and open file limits on the system.

The main connector was rewritten to use epoll, and extra threads based on the number of CPUs are now recruited for the share processing and stratum message processing workqueues. CPU usage of any one component on ckpool is currently less than 5% for 2500 clients, but even if the pool was completely CPU bound at 100% it should still perform fine. If we get to 50 thousand clients I'll be able to see where the next bottleneck is and concentrate further improvements there.

In the meantime, more work needs to be done on ckdb to address its memory usage so it is suitable to support ckpool's growth, and that's what Kano is currently working on.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
I thought I might explain a little of the CKDB share storage for anyone curious.
I was gonna post this in the pool thread, but of course realised here is more appropriate.
(It relates to a change/fix I'm doing at the moment and thought I'd write out the current details at the same time)

CKDB has a 2 level structure for storing shares.
ShareSummary and MarkerSummary

ShareSummary is implemented.
MarkerSummary isn't yet - this simply means that I use a LOT more ram until I do implement it - and fast reporting at the 'Marker' level of course isn't possible until it's done.

CKPool creates work per WorkInfo which is effectively each block template we get from bitcoin.
We do this by default every 30s (faster than 'certain' other pools) since that is both good for Bitcoin and good for the pool.
For Bitcoin it means faster expected average transaction confirmation.
For the pool it means higher expected average transaction fees.
An obvious Win-Win for both.

CKDB groups shares per WorkInfoID (the internal ID of the WorkInfo) and creates per-worker ShareSummary records of these in RAM and in the DB.
You could compare this to what some other pools call "Shifts", with a ShareSummary record being a shift of work for a worker, of usually 30 seconds.
ShareSummary, per worker, per 30s, means a lot of ShareSummaries :)

The PPLNS code currently uses the ShareSummary information to calculate payouts, so the N value actually used for the payout will include the full 30s of work for the starting WorkInfoID calculated to be the start of the payout; i.e. the N value used is expanded to the full first and last WorkInfoID. So, in concept, it is similar to a PPLNS "Shift" payout structure.

The MarkerSummary design allows any arbitrary range of WorkInfoIDs.
The design is to later be able to determine the WorkInfoID ranges necessary to ensure all required summary calculations are possible, then summarise the ShareSummaries into the MarkerSummaries, and then be able to delete the ShareSummaries.
The original idea was that the Markers would be the block WorkInfoIDs and the PPLNS WorkInfoIDs and mean a much smaller amount of data required in RAM and in the database.
However, the timing of doing the summarisation to the Markers isn't possible until the end Marker is older than what is needed by other code.
For the current PPLNS used, that would mean: not until the end Marker is more than the N in PPLNS old.
i.e. all ShareSummaries would have to exist back to N share diff ago.

This actually runs pretty well on hashmine since most of the workers are high hash rate devices and have many high diff shares in each WorkInfo.
If the average number of shares per WorkInfo is low or the average difficulty of the shares is low, this of course leads to a much larger number of ShareSummaries - as is happening now on the kano.is pool.
Thus I've now decided to use the MarkerSummary to make larger "Shifts" that are both a subset of the original idea, i.e. more of them, but also at a size where it's OK to use for the PPLNS calculation - for which I'll choose some value between 20 and 100 WorkInfoIDs (probably 50).
The MarkerSummary changes are pretty much next on the todo list.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I've updated the documentation to reflect the changes to the get_transactions, get_txnhashes and suggest_diff calls. The original documentation placed the parameter within the method, which is not really a meaningful use of JSON, so ckpool supports the old form but expects the actual parameter within the params key.
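Using suggest_diff as the example, the difference between the two forms looks roughly like this (the exact method string, id and difficulty value here are illustrative, not quoted from the documentation):

```
Old form (parameter embedded in the method string):
{"id": 0, "method": "mining.suggest_difficulty(64)", "params": []}

New form (parameter carried in the params key):
{"id": 0, "method": "mining.suggest_difficulty", "params": [64]}
```

The new form lets a generic JSON-RPC handler pull the arguments out of params without having to parse the method name itself.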
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Code:
[2014-10-27 20:05:53] ckproxy stratifier ready
[2014-10-27 20:06:03] Failed to receive line in auth_stratum
[2014-10-27 20:06:14] Failed to receive line in auth_stratum
[2014-10-27 20:06:24] Failed to receive line in auth_stratum
[2014-10-27 20:06:34] Failed to receive line in auth_stratum
[2014-10-27 20:06:45] Failed to receive line in auth_stratum
[2014-10-27 20:07:00] Failed to receive line in auth_stratum
[2014-10-27 20:07:11] Failed to receive line in auth_stratum

This is after a fresh update.
What config and upstream pool? Seems to be working fine here. Maybe the pool you're trying to proxy is actually dead?
full member
Activity: 195
Merit: 100
Mining since bitcoin was $1
Code:
[2014-10-27 20:05:53] ckproxy stratifier ready
[2014-10-27 20:06:03] Failed to receive line in auth_stratum
[2014-10-27 20:06:14] Failed to receive line in auth_stratum
[2014-10-27 20:06:24] Failed to receive line in auth_stratum
[2014-10-27 20:06:34] Failed to receive line in auth_stratum
[2014-10-27 20:06:45] Failed to receive line in auth_stratum
[2014-10-27 20:07:00] Failed to receive line in auth_stratum
[2014-10-27 20:07:11] Failed to receive line in auth_stratum

This is after a fresh update.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Fixed a bug that would previously crash ckpool if you had 1024 clients or more connected concurrently, due to a known limitation of the select() system call, along with a fix for an incorrect worker count. It's now being used live on kano's ckpool with more than 1500 active workers.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
This just happened - but otherwise had been up and running for like 5 or 7 days without a problem.  I'll do a git pull to update - but in case this helps
Thanks, likely the same as the previous report of a trigger-happy shutdown of the proxy, which a git pull should fix.
full member
Activity: 195
Merit: 100
Mining since bitcoin was $1
This just happened - but otherwise had been up and running for like 5 or 7 days without a problem.  I'll do a git pull to update - but in case this helps
Code:
[2014-10-24 13:22:36] Failed to recv in read_socket_line
[2014-10-24 13:22:36] Failed to read_socket_line in proxy_recv, attempting reconnect
[2014-10-24 13:22:36] Upstream socket invalidated, will attempt failover
[2014-10-24 13:22:36] Killing proxy
[2014-10-24 13:22:36] Nonce2 length 3 means proxied clients can't be >5TH each
[2014-10-24 13:22:41] Failed to receive line in auth_stratum
[2014-10-24 13:22:41] Failed initial authorise to pool:port with user !
[2014-10-24 13:22:42] Connected to upstream server pool2:port as proxy
[2014-10-24 13:22:42] Successfully reconnected to pool2:port as proxy
[2014-10-24 13:22:42] Killing proxy
[2014-10-24 13:22:42] Nonce2 length 3 means proxied clients can't be >5TH each
[2014-10-24 13:22:42] JSON-RPC decode failed: (unknown reason)
[2014-10-24 13:22:42] Failed to get a json result in parse_subscribe, got: {"id": null, "method": "mining.set_difficulty", "params": [8192.0]}
[2014-10-24 13:22:42] Failed all subscription options in subscribe_stratum
[2014-10-24 13:22:43] generator process dead! Relaunching
[2014-10-24 13:22:43] File /tmp/ckproxy/generator.pid exists
[2014-10-24 13:22:43] ckproxy generator starting
[2014-10-24 13:22:43] ckproxy generator ready
[2014-10-24 13:22:43] Nonce2 length 3 means proxied clients can't be >5TH each
[2014-10-24 13:22:48] Failed to receive line in auth_stratum
[2014-10-24 13:22:48] Failed initial authorise to pool:port with user !
[2014-10-24 13:22:55] Connected to upstream server pool2:port as proxy
[2014-10-24 13:22:55] Attempting to send message getnotify to dead process generator with errno 3: No such process
[2014-10-24 13:22:55] Failure in send_recv_proc from stratifier.c update_notify:852 with errno 3: No such process
[2014-10-24 13:22:55] Failed to get notify from generator in update_notify
[2014-10-24 13:22:55] Attempting to send message getsubscribe to dead process generator with errno 3: No such process
[2014-10-24 13:22:55] Failure in send_recv_proc from stratifier.c update_subscribe:809 with errno 3: No such process
[2014-10-24 13:22:55] Failed to get subscribe from generator in update_notify
[2014-10-24 13:22:55] ckproxy stratifier exiting with return code 1, shutting down!
[2014-10-24 13:22:55] Listener received shutdown message, terminating ckpool
[2014-10-24 13:22:55] Failed to write 4 byte length in send_unix_msg (32) with errno 32: Broken pipe
[2014-10-24 13:22:55] Failure in send_unix_msg from ckpool.c listener:255 with errno 32: Broken pipe
[2014-10-24 13:22:55] Parent process ckproxy received signal 15, shutting down
sr. member
Activity: 476
Merit: 250
Bytecoin: 8VofSsbQvTd8YwAcxiCcxrqZ9MnGPjaAQm
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I keep getting shut down with this message:

[2014-10-22 00:02:03] Failure in send_proc from stratifier.c ssend_process:2661 with errno 3: No such process
[2014-10-22 00:02:03] Child process received signal 15, forwarding signal to ckpool main process
[2014-10-22 00:02:03] Parent process ckpool received signal 15, shutting down

I've run without a hitch 24/7 until now.  What did I do?
You probably had an upstream outage. The code was a little trigger happy at shutting itself down under those circumstances. Git pull the latest code.
sr. member
Activity: 476
Merit: 250
Bytecoin: 8VofSsbQvTd8YwAcxiCcxrqZ9MnGPjaAQm
I keep getting shut down with this message:

[2014-10-22 00:02:03] Failure in send_proc from stratifier.c ssend_process:2661 with errno 3: No such process
[2014-10-22 00:02:03] Child process received signal 15, forwarding signal to ckpool main process
[2014-10-22 00:02:03] Parent process ckpool received signal 15, shutting down

I've run without a hitch 24/7 until now.  What did I do?
sr. member
Activity: 476
Merit: 250
Bytecoin: 8VofSsbQvTd8YwAcxiCcxrqZ9MnGPjaAQm
Is there a way to tell ckpool to re-read its configuration file without a restart?

Apologies if this is answered somewhere; I looked through the FAQ and tried to search the whole thread.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
any ideas for these errors:

Code:
[2014-10-15 16:27:07] Failed to get a json result in parse_subscribe, got: {"id": null, "method": "mining.set_difficulty", "params": [16384.0]}
[2014-10-15 16:27:07] Failed all subscription options in subscribe_stratum
[2014-10-15 16:27:22] Nonce2 length 3 too small to be able to proxy

ckpool -A -p
proxy setup with 4 pools (ghash, p2p, eligius and something else I forgot)
Looks like some pool responds with a command before it responds with the subscription, while another has a very small nonce2 (looks like f2pool or something).

The pool with a small nonce2 will likely only work with devices <5TH if I do this though.

I'll try and add workarounds for these in the code, thanks.

EDIT: Added nonce2 size 3 support with a warning.
full member
Activity: 195
Merit: 100
Mining since bitcoin was $1
any ideas for these errors:

Code:
[2014-10-15 16:27:07] Failed to get a json result in parse_subscribe, got: {"id": null, "method": "mining.set_difficulty", "params": [16384.0]}
[2014-10-15 16:27:07] Failed all subscription options in subscribe_stratum
[2014-10-15 16:27:22] Nonce2 length 3 too small to be able to proxy

ckpool -A -p
proxy setup with 4 pools (ghash, p2p, eligius and something else I forgot)
member
Activity: 78
Merit: 10
Perfect, thanks. I am running in a datacenter and I have a public and a private IP address. I placed my private IP address in the config file and I was able to reach it and solo mine on it via my public IP address.