
Topic: CKPOOL - Open source pool/proxy/passthrough/redirector/library in c for Linux (Read 123941 times)

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Locking this thread as well I'm afraid. The era where this software mattered is over, and I've lost interest in supporting it any further.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Yes, I did it from the logs, but are you aware that the kill command is dropped most of the time?


ckpool -k

doesn't kill the process; I tried various other combinations too.
Buggy proxy code. Never got around to fixing it. Lost any interest thanks to other events.
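Until that is ever fixed, a generic workaround sketch using plain Linux tools (this is not a ckpool/ckproxy option) is to terminate the process by name:
Code:
# Send SIGTERM first; follow up with SIGKILL if the process lingers.
# Adjust the name to ckpool if that is the binary you started.
pkill ckproxy
sleep 2
pkill -9 ckproxy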
jr. member
Activity: 198
Merit: 2
ckproxy -p runs fine!


What would be the best option to see the proxy status in a nice web UI?
No such thing exists. You'd have to build your own, sorry.

Yes, I did it from the logs, but are you aware that the kill command is dropped most of the time?


ckpool -k

doesn't kill the process; I tried various other combinations too.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
ckproxy -p runs fine!


What would be the best option to see the proxy status in a nice web UI?
No such thing exists. You'd have to build your own, sorry.
jr. member
Activity: 198
Merit: 2
About ckproxy

Ckproxy uses the same core code as ckpool and acts as a proxy with a simple switch (-p). It acts as a stratum-to-stratum proxy, or stratum-through-stratum proxy in passthrough mode (see below).

It is most useful to regular miners who wish to consolidate their mining hardware into one connection for minimal upstream/downstream bandwidth in standalone mode (with the additional -A switch). Ckproxy uses multiple modes to keep as many miners as possible communicating with the upstream pool. It works by splitting up the nonce2 space, if the upstream pool's nonce2 is large enough, and then recruiting extra upstream connections as needed so the number of downstream miners can scale indefinitely.

Ckproxy can be started in "userproxy" mode, which monitors the login names of attaching miners that don't match the master proxy username, recruits a new upstream connection for each unique username, and proxies all workers with the same username over that dedicated upstream connection.

It can be configured to run with a database exactly as an actual pool would, thus acting as a child pool for a parent pool elsewhere, or simply for miners who want full logging of every detail of their mining operation.

It can also run in a unique stratum-through-stratum passthrough mode (the -P switch), which absolutely requires a ckpool as the parent pool. In this mode it does not decrease the bandwidth to the upstream pool; instead it minimises the number of open connections, and it requires extremely few resources to run. This mode is most useful for pools wishing to isolate their main pool instance from the outside world and set up multiple VPSs as a kind of front-end proxy/firewall to the outside world. It also removes any realistic limit on the number of open connections, since each passthrough instance could easily handle 10k connections, consolidating them into just one connection to the upstream pool.

In addition to the passthrough mode, there is a more advanced "node" mode, which connects as a passthrough but needs a local bitcoind of its own. It monitors all traffic and shares between the pool and the miners, can display local hashrates, and will submit any blocks found to the local bitcoind in addition to sending the shares to the upstream pool, circumventing the block-submission delay that would otherwise occur with remote nodes.

Finally, ckproxy can be started in "redirector" mode, which acts as a regular passthrough but monitors share responses from the upstream pool; once a valid share is recognised from the upstream pool, it will redirect miners that support redirection (all cgminer-based clients do) to the URL of your choice. Should the miners not support redirection, as with rental services, the redirector will continue to act as an ordinary passthrough.
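To make the switches above concrete, here is a minimal invocation sketch; the config file name is a placeholder and the -c option is assumed to work as it does for ckpool itself (the userproxy, node and redirector modes have their own start-up switches not shown here):
Code:
# Standalone proxy (no database), consolidating local miners into one
# upstream connection, per the -A switch described above:
ckproxy -A -c ckproxy.conf

# Passthrough mode (-P), which requires a ckpool instance as the parent pool:
ckproxy -P -c ckproxy.conf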


ckproxy -p runs fine!


What would be the best option to see the proxy status in a nice web UI?
legendary
Activity: 2405
Merit: 1459
-> morgen, ist heute, schon gestern <-
What does your test miner setup look like?
And what kind of miner did you use?

As far as I can see, the pool works as normal.
Only the transferred shares from the miner get rejected.
You may look deeper into the ckpool log files to debug the data transferred from the miner to ckpool.

Edit:
Quote
Client 7 rejecting for 120s
It looks like the miner won't communicate.
jr. member
Activity: 61
Merit: 1
Code:
"btcaddress" : "ADDRESS",

Did you put a valid BTC address in, or did you leave it as ADDRESS?
It will only work with a proper BTC address.

Yes. And my test miner is obviously not configured as “ADDRESS.worker.”
legendary
Activity: 2405
Merit: 1459
-> morgen, ist heute, schon gestern <-
Code:
"btcaddress" : "ADDRESS",

Did you put a valid BTC address in, or did you leave it as ADDRESS?
It will only work with a proper BTC address.
jr. member
Activity: 61
Merit: 1
It seems I have the pool compiled and running with a remote node and block notify; however, it rejects all shares and continuously reconnects my test miner. Is there something obvious I missed? The below is using ckpool-splns, but the result is the same with ckpool.

stdout
Code:
[2019-07-13 07:12:13.289] ckpool generator starting
[2019-07-13 07:12:13.293] ckpool stratifier starting
[2019-07-13 07:12:13.294] ckpool connector starting
[2019-07-13 07:12:13.294] ckpool connector ready
[2019-07-13 07:12:13.344] ckpool generator ready
[2019-07-13 07:12:13.344] Connected to bitcoind: 192.168.1.14:18333
[2019-07-13 07:12:13.364] ckpool stratifier ready
[2019-07-13 07:32:22.491] | 0.00H/s  0.0 SPS  1 users  1 workers

ckpool.conf
Code:
{
"btcd" :  [
    {
        "url" : "192.168.1.14:18333",
        "auth" : "user",
        "pass" : "pass",
        "notify" : true
    }
],
"btcaddress" : "ADDRESS",
"serverurl" : [
    "192.168.1.23:3333"
]
}

log sample
Code:
[2019-07-13 07:35:13.431] Pool:{"runtime": 1380, "lastupdate": 1563003313, "Users": 1, "Workers": 1, "Idle": 1, "Disconnected": 0}
[2019-07-13 07:35:13.431] Pool:{"diff": "0.0", "accepted": 0, "rejected": 7405568, "lns": 0.1, "herp": 0.1, "reward": 0.390625}
[2019-07-13 07:35:23.617] Client 7 rejecting for 120s, reconnecting
[2019-07-13 07:35:23.623] Dropped client 7 192.168.1.52 user ADDRESS worker ADDRESS.worker
[2019-07-13 07:35:23.629] Authorised client 8 worker ADDRESS.worker as user ADDRESS
[2019-07-13 07:35:38.642] Stored local workbase with 3 transactions
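For reference, a minimal sketch of pointing a cgminer-based test miner at the serverurl above; ADDRESS is only the placeholder from the config and must be replaced with a real BTC address for the pool to authorise the worker:
Code:
# -o pool URL, -u username.worker, -p password (any value for testing).
cgminer -o stratum+tcp://192.168.1.23:3333 -u ADDRESS.worker -p x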
newbie
Activity: 7
Merit: 0
Too funny. I thought I had installed the 64-bit version. Burning the new .iso now...
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
/usr/bin/ld: i386:x86-64 architecture of input file `libckpool.a(sha256_sse4.A)' is incompatible with i386 output
It might be an Intel 64-bit machine, but it looks like you're trying to compile it for a 32-bit userspace. Ckpool is 64-bit only.



I'm relatively new to Linux.  Is there a way I can direct it to compile as 64 bit?  I just assumed it would do that somehow on its own.



By the way, I've been using your cgminer on another machine and it's been rock solid for a couple years.  Nice work.

It just means you've installed a 32-bit Linux distribution. Make sure to use a 64-bit one. There is no reason to use 32-bit anywhere any more.
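A quick way to check which case you are in before reinstalling; these are stock commands, nothing ckpool-specific:
Code:
# Prints x86_64 for a 64-bit kernel, i686/i386 for a 32-bit one.
uname -m
# On Debian/Ubuntu, prints amd64 for a 64-bit userspace, i386 for 32-bit.
dpkg --print-architecture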
newbie
Activity: 7
Merit: 0
/usr/bin/ld: i386:x86-64 architecture of input file `libckpool.a(sha256_sse4.A)' is incompatible with i386 output
It might be an Intel 64-bit machine, but it looks like you're trying to compile it for a 32-bit userspace. Ckpool is 64-bit only.



I'm relatively new to Linux.  Is there a way I can direct it to compile as 64 bit?  I just assumed it would do that somehow on its own.



By the way, I've been using your cgminer on another machine and it's been rock solid for a couple years.  Nice work.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
/usr/bin/ld: i386:x86-64 architecture of input file `libckpool.a(sha256_sse4.A)' is incompatible with i386 output
It might be an Intel 64-bit machine, but it looks like you're trying to compile it for a 32-bit userspace. Ckpool is 64-bit only.
newbie
Activity: 7
Merit: 0
Could use a little help, please. I've been trying to compile/install ckpool, but it errors out during 'make'.
I have tried on two separate machines with the same results.

Here's where it all ends:

make[3]: Entering directory '/home/v3/ckpool/ckpool/src'
  CC       libckpool.o
  CC       sha2.o
yasm -f x64 -f elf64 -X gnu -g dwarf2 -D LINUX -o sha256_code_release/sha256_sse4.A sha256_code_release/sha256_sse4.asm
  AR       libckpool.a
ar: `u' modifier ignored since `D' is the default (see `U')
  CC       ckpool.o
  CC       generator.o
  CC       bitcoin.o
  CC       stratifier.o
stratifier.c: In function ‘read_poolstats’:
stratifier.c:8512:17: warning: passing argument 1 of ‘json_get_int64’ from incompatible pointer type [-Wincompatible-pointer-types]
  json_get_int64(&last.tv_sec, val, "lastupdate");
                 ^
In file included from stratifier.c:23:0:
ckpool.h:369:6: note: expected ‘int64_t * {aka long long int *}’ but argument is of type ‘__time_t * {aka long int *}’
 bool json_get_int64(int64_t *store, const json_t *val, const char *res);
      ^~~~~~~~~~~~~~
  CC       connector.o
  CCLD     ckpool
/usr/bin/ld: i386:x86-64 architecture of input file `libckpool.a(sha256_sse4.A)' is incompatible with i386 output
collect2: error: ld returned 1 exit status
Makefile:502: recipe for target 'ckpool' failed
make[3]: *** [ckpool] Error 1
make[3]: Leaving directory '/home/v3/ckpool/ckpool/src'
Makefile:570: recipe for target 'all-recursive' failed
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory '/home/v3/ckpool/ckpool/src'
Makefile:410: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/v3/ckpool/ckpool'
Makefile:342: recipe for target 'all' failed
make: *** [all] Error 2


Can anyone please tell me what steps I should take to get this to work? Both machines are Intel 64-bit with Debian.
I'd be happy to donate some satoshis for some help. Thank you.

Oh, and I'm using the .zip file from:      https://bitbucket.org/ckolivas/ckpool/get/5d4dbe166c31.zip
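As an aside, the json_get_int64 warning in the build output above is unrelated to the link failure. Since ckpool.h declares the function as taking an int64_t pointer, the usual shape of a fix would be along these lines (a sketch only, not a maintainer patch):
Code:
/* Read into an int64_t first, then assign to the time field, avoiding the
 * __time_t* vs int64_t* pointer mismatch the compiler warns about. */
int64_t lastupdate = 0;

json_get_int64(&lastupdate, val, "lastupdate");
last.tv_sec = lastupdate;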
member
Activity: 82
Merit: 11
Does ckpool also adjust the diff down, or in general, is that something that pools do?
For example, if a miner's hashrate drops by 50% in the current session because a hashboard died.
Yes.

Great, thanks   Smiley
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Does ckpool also adjust the diff down, or in general, is that something that pools do?
For example, if a miner's hashrate drops by 50% in the current session because a hashboard died.
Yes.
member
Activity: 82
Merit: 11
Hey -ck

Thanks for your work and for sharing this great piece of pool software.

I played around with your ckpool to learn and better understand the mining process.

Now one question (so far) comes up: why does the pool re-adjust the vardiff from scratch for every known client after reconnects / instance takeover (with your -H option) / ...?
Doesn't that put a lot of stress on a pool server with thousands of clients connected if, for example, you do an instance takeover and all thousand clients start to re-adjust vardiff at x shares per second?
Since you already have per-worker information stored, wouldn't it be better to also store the last vardiff and start from there on the next connection of the same worker?

Thanks


That's a lot of data to hand over. The handover only hands over sockets, not the vardiff state etc., and all of them have to reconnect, so all the previous connections become invalid and you won't know who's on which socket after the reconnect anyway. Worker names do not tell you which socket they're connected to.

OK, got it, that's for the handover.
But why not serve the previous difficulty after a reconnect?
Example:
1) Miner A with workerName MinerA connects and mines for a while at a stable diff.
2) The pool stores this diff information per worker.
3) Miner A disconnects and reconnects after a few minutes.
4) The pool identifies the miner by workerName -> fetches the old diff value from the log and serves the stored diff instead of starting a new vardiff ramp?

Very likely I'm overlooking a detail; like I said, learning phase... so thanks for your answers Wink


One workername can have ten thousand separate workers all using the same workername, but each with a separate diff.

Amen... of course that makes sense; the worker name is not a unique identifier. Thanks for that.

Does ckpool also adjust the diff down, or in general, is that something that pools do?
For example, if a miner's hashrate drops by 50% in the current session because a hashboard died.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Hey -ck

Thanks for your work and for sharing this great piece of pool software.

I played around with your ckpool to learn and better understand the mining process.

Now one question (so far) comes up: why does the pool re-adjust the vardiff from scratch for every known client after reconnects / instance takeover (with your -H option) / ...?
Doesn't that put a lot of stress on a pool server with thousands of clients connected if, for example, you do an instance takeover and all thousand clients start to re-adjust vardiff at x shares per second?
Since you already have per-worker information stored, wouldn't it be better to also store the last vardiff and start from there on the next connection of the same worker?

Thanks


That's a lot of data to hand over. The handover only hands over sockets, not the vardiff state etc., and all of them have to reconnect, so all the previous connections become invalid and you won't know who's on which socket after the reconnect anyway. Worker names do not tell you which socket they're connected to.

OK, got it, that's for the handover.
But why not serve the previous difficulty after a reconnect?
Example:
1) Miner A with workerName MinerA connects and mines for a while at a stable diff.
2) The pool stores this diff information per worker.
3) Miner A disconnects and reconnects after a few minutes.
4) The pool identifies the miner by workerName -> fetches the old diff value from the log and serves the stored diff instead of starting a new vardiff ramp?

Very likely I'm overlooking a detail; like I said, learning phase... so thanks for your answers Wink


One workername can have ten thousand separate workers all using the same workername, but each with a separate diff.
member
Activity: 82
Merit: 11
Hey -ck

Thanks for your work and for sharing this great piece of pool software.

I played around with your ckpool to learn and better understand the mining process.

Now one question (so far) comes up: why does the pool re-adjust the vardiff from scratch for every known client after reconnects / instance takeover (with your -H option) / ...?
Doesn't that put a lot of stress on a pool server with thousands of clients connected if, for example, you do an instance takeover and all thousand clients start to re-adjust vardiff at x shares per second?
Since you already have per-worker information stored, wouldn't it be better to also store the last vardiff and start from there on the next connection of the same worker?

Thanks


That's a lot of data to hand over. The handover only hands over sockets, not the vardiff state etc., and all of them have to reconnect, so all the previous connections become invalid and you won't know who's on which socket after the reconnect anyway. Worker names do not tell you which socket they're connected to.

OK, got it, that's for the handover.
But why not serve the previous difficulty after a reconnect?
Example:
1) Miner A with workerName MinerA connects and mines for a while at a stable diff.
2) The pool stores this diff information per worker.
3) Miner A disconnects and reconnects after a few minutes.
4) The pool identifies the miner by workerName -> fetches the old diff value from the log and serves the stored diff instead of starting a new vardiff ramp?

Very likely I'm overlooking a detail; like I said, learning phase... so thanks for your answers Wink

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Hey -ck

Thanks for your work and for sharing this great piece of pool software.

I played around with your ckpool to learn and better understand the mining process.

Now one question (so far) comes up: why does the pool re-adjust the vardiff from scratch for every known client after reconnects / instance takeover (with your -H option) / ...?
Doesn't that put a lot of stress on a pool server with thousands of clients connected if, for example, you do an instance takeover and all thousand clients start to re-adjust vardiff at x shares per second?
Since you already have per-worker information stored, wouldn't it be better to also store the last vardiff and start from there on the next connection of the same worker?

Thanks


That's a lot of data to hand over. The handover only hands over sockets, not the vardiff state etc., and all of them have to reconnect, so all the previous connections become invalid and you won't know who's on which socket after the reconnect anyway. Worker names do not tell you which socket they're connected to.
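To illustrate why the previous diff cannot simply be looked up by workername, here is a purely hypothetical sketch (not ckpool's actual data structures): the vardiff state lives on each connection, and many connections can present the same workername:
Code:
#include <stdint.h>

/* Hypothetical illustration only. */
struct miner_conn {
    int64_t id;            /* unique per accepted connection */
    int fd;                /* the socket; the only thing a -H handover transfers */
    char workername[128];  /* e.g. "ADDRESS.worker"; NOT a unique identifier */
    double diff;           /* per-connection vardiff state, lost on disconnect */
};
/* After a handover or reconnect only the socket survives, so the diff has to
 * be re-learned from the share rate: thousands of clients may present the
 * same workername, each at a different hashrate. */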