Topic: KanoPool kano.is lowest 0.9% fee 🐈 since 2014 - Worldwide - 2432 blocks - page 2239. (Read 5350045 times)

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Is the worker count correct for the mining-to-address link? Mine is currently showing 6 workers, but I really only have 5. Maybe someone is being kind, except the hashrate looks correct.
The worker count is correct. If you issued a "restart" on cgminer et al. then the socket may still be around from the old instance. It's something I'm fixing in cgminer now that I'm aware it happens.
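For reference, if you want to confirm that a leftover connection from the old instance is what's inflating the count, something like this on the mining host will list the TCP sessions to the pool (the port is only an example; use whatever port you actually connect on):

Code:
ss -tn | grep :3333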
hero member
Activity: 689
Merit: 501
Question: how do I withdraw the funds?
I registered a user and mined for about two days. A block was found, and it has now matured.
I've entered one of my BTC addresses in Account Settings, but there is no withdrawal button. I guess it is done manually at this stage of pool development. Should I send a PM with my username?
newbie
Activity: 54
Merit: 0
Would be nice to have a minimum payout setting (which obviously must be higher than the pool minimum payout Grin); near-dust transactions are disliked by small miners too Tongue
I mined with an address for a little while, then registered and continued to use that address. I don't know if the addition of the funds is automated, so I'll keep an eye on it at the next block!

Cheers, great work  Wink
legendary
Activity: 1218
Merit: 1001
Is the worker count correct for the mining-to-address link? Mine is currently showing 6 workers, but I really only have 5. Maybe someone is being kind, except the hashrate looks correct.
legendary
Activity: 1610
Merit: 1000
And a mini bounce for further scalability improvements.

Just so people know what the improvements are, if you check what's gone into the latest ckpool code you'll see changes that correspond with the pool upgrades (doesn't get more open than that, does it?).

The ckpool code was not remotely limiting, but being able to see the pool working live and profiling where the CPU is used makes for excellent instant development of whatever is currently the biggest CPU user. The main connection event handler was converted from poll to epoll, reducing its CPU usage to a tenth, and the share processing workqueue was broken up into multiple threads (proportional to the number of CPUs on the machine) to better distribute share processing. We're now prepared for about 10x as many workers as we were yesterday, so keep em coming...
Nice work as always
Could you or kano be so kind as to share the total memory usage of the pool at the moment?
thank you

hero member
Activity: 689
Merit: 501
https://blockchain.info/tx/7b47a5415f69a092f5d2c252ff6bc60f34c333e52ef04cdc24adf485f9db270d

Code:
[10:25:54]  blockchain only showing 1 block so that's a good sign :)
[10:26:03] yoouu pew pew pew!!!

QG
Nice. 107.41%. May the next one come more quickly.
sr. member
Activity: 308
Merit: 250
Decentralize your hashing - p2pool - Norgz Pool
Nice work. I noticed the diff reset and the hashrate go to 0, but it came back up pretty quickly. Working well now. A question: since I am in Australia and have several miners, would I be better off pointing them at a local ckpool proxy and sending that to ckpool, with regard to latency and reduced rejects/stales?
If your internet bandwidth is often maxed out, especially upstream (such as on ADSL, which most Australians have), then there is a potential advantage to combining them behind a local proxy. On the other hand, if you're not maxing it out, you're better off leaving them all connected separately, as every extra step in the chain adds its own form of latency. The server is on the west coast of the USA though, so it's about as close as you can get to Australia and still be in the States.
Got them on good fibre so all good. Thanks for clearing that up.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Nice work. I noticed the diff reset and the hashrate go to 0, but it came back up pretty quickly. Working well now. A question: since I am in Australia and have several miners, would I be better off pointing them at a local ckpool proxy and sending that to ckpool, with regard to latency and reduced rejects/stales?
If your internet bandwidth is often maxed out, especially upstream (such as on ADSL, which most Australians have), then there is a potential advantage to combining them behind a local proxy. On the other hand, if you're not maxing it out, you're better off leaving them all connected separately, as every extra step in the chain adds its own form of latency. The server is on the west coast of the USA though, so it's about as close as you can get to Australia and still be in the States.
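For anyone weighing the same choice, here is a hedged illustration of the two setups using cgminer's standard -o/-u/-p options (the hostnames, ports and worker names are placeholders, not the pool's actual details):

Code:
# Each miner holds its own connection to the pool:
cgminer -o stratum+tcp://pool.example.com:3333 -u username.worker1 -p x

# Each miner points at a proxy on the LAN, and only the proxy holds the
# upstream connection - mainly useful when upstream bandwidth is tight:
cgminer -o stratum+tcp://192.168.1.10:3334 -u username.worker1 -p x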
sr. member
Activity: 308
Merit: 250
Decentralize your hashing - p2pool - Norgz Pool
And a mini bounce for further scalability improvements.

Just so people know what the improvements are, if you check what's gone into the latest ckpool code you'll see changes that correspond with the pool upgrades (doesn't get more open than that, does it?).

The ckpool code was not remotely limiting, but being able to see the pool working live and profiling where the CPU is used makes for excellent instant development of whatever is currently the biggest CPU user. The main connection event handler was converted from poll to epoll, reducing its CPU usage to a tenth, and the share processing workqueue was broken up into multiple threads (proportional to the number of CPUs on the machine) to better distribute share processing. We're now prepared for about 10x as many workers as we were yesterday, so keep em coming...

Nice work. I noticed the diff reset and the hashrate go to 0, but it came back up pretty quickly. Working well now. A question: since I am in Australia and have several miners, would I be better off pointing them at a local ckpool proxy and sending that to ckpool, with regard to latency and reduced rejects/stales?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
And a mini bounce for further scalability improvements.

Just so people know what the improvements are, if you check what's gone into the latest ckpool code you'll see changes that correspond with the pool upgrades (doesn't get more open than that, does it?).

The ckpool code was not remotely limiting, but being able to see the pool working live and profiling where the CPU is used makes for excellent instant development of whatever is currently the biggest CPU user. The main connection event handler was converted from poll to epoll, reducing its CPU usage to a tenth, and the share processing workqueue was broken up into multiple threads (proportional to the number of CPUs on the machine) to better distribute share processing. We're now prepared for about 10x as many workers as we were yesterday, so keep em coming...
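For anyone curious what the poll-to-epoll change looks like in practice, here is a minimal, hypothetical C sketch of the pattern described above: a single epoll instance watching every client socket, with ready events handed off for processing instead of scanning a large poll array each wakeup. It is not the ckpool source; error handling is omitted and the port and function names are placeholders.

Code:
/* Illustrative only - a toy epoll accept/read loop, not ckpool's code. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <unistd.h>

#define MAX_EVENTS 64

/* Hypothetical handler - a real pool would parse and validate the share
 * here, ideally on one of a pool of worker threads sized from
 * sysconf(_SC_NPROCESSORS_ONLN) so share processing spreads across CPUs. */
static void process_client(int fd)
{
	char buf[4096];

	if (recv(fd, buf, sizeof(buf), 0) <= 0)
		close(fd);
}

int main(void)
{
	int listenfd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in sa = { .sin_family = AF_INET,
				  .sin_port = htons(3333),
				  .sin_addr.s_addr = INADDR_ANY };
	struct epoll_event ev, events[MAX_EVENTS];
	int epfd, i, n;

	bind(listenfd, (struct sockaddr *)&sa, sizeof(sa));
	listen(listenfd, SOMAXCONN);

	/* One epoll fd replaces a big poll() array: the kernel tracks the
	 * interest set, so the per-wakeup cost no longer grows with the
	 * total number of connected workers. */
	epfd = epoll_create1(0);
	ev.events = EPOLLIN;
	ev.data.fd = listenfd;
	epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);

	for (;;) {
		n = epoll_wait(epfd, events, MAX_EVENTS, -1);
		for (i = 0; i < n; i++) {
			int fd = events[i].data.fd;

			if (fd == listenfd) {
				int client = accept(listenfd, NULL, NULL);

				ev.events = EPOLLIN;
				ev.data.fd = client;
				epoll_ctl(epfd, EPOLL_CTL_ADD, client, &ev);
			} else {
				/* Real code would queue this to a worker thread. */
				process_client(fd);
			}
		}
	}
}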
legendary
Activity: 4102
Merit: 7765
'The right to privacy matters'
Sorry, full restart took a little longer than planned. Up properly now.

seems to be working well.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Sorry, full restart took a little longer than planned. Up properly now.
legendary
Activity: 4102
Merit: 7765
'The right to privacy matters'
So I have been mining with an S3 for about 40 of the 80 hours it took to find the block.

So do we wait about 10 hours for payouts?
sr. member
Activity: 351
Merit: 250
There'll be a little blip in mining as we upgrade the ckpool code. You may see a small burst of rejects and a redirect notice, but you probably won't fail over during the upgrade due to the way ckpool handles changeover; if you do, it won't last long.

I'm noticing it Shocked
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
There'll be a little blip in mining as we upgrade the ckpool code. You may see a small burst of rejects and a redirect notice, but you probably won't fail over during the upgrade due to the way ckpool handles changeover; if you do, it won't last long.
legendary
Activity: 1540
Merit: 1001
full member
Activity: 152
Merit: 100
member
Activity: 71
Merit: 10
We just found another. Smiley
legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
... and it confirmed shortly after that - so all good Smiley
legendary
Activity: 1218
Merit: 1001
Thank you sir, may I have another!