
Topic: KanoPool kano.is lowest 0.9% fee 🐈 since 2014 - Worldwide - 2432 blocks - page 171. (Read 5352633 times)

member
Activity: 658
Merit: 21
4 s9's 2 821's
gee...
We had a real quick block, but right now it seems we have one that's hard as a diamond again...
Is it me, or are we having bad luck lately?
Last week, only 3 blocks...  Huh

Law of averages. 
legendary
Activity: 1638
Merit: 1005
gee...
We had a real quick block, but right now it seems we have one that's hard as a diamond again...
Is it me, or are we having bad luck lately?
Last week, only 3 blocks...  Huh

This game requires patience; look at the horizon, not your feet.
member
Activity: 434
Merit: 30
gee...
We had a real quick block, but right now it seems we have one that's hard as a diamond again...
Is it me, or are we having bad luck lately?
Last week, only 3 blocks...  Huh
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
My iPad app, which runs on an iPad 2 with iOS 9.3.1, is not showing my miners any longer. This started happening a day or so ago. It worked great for a month or so. Over on the Kano postings I see others reporting similar results. Thoughts?
Not sure where anyone else has said his app has a problem in the last couple of days, but the person who does the iApp is here:
https://bitcointalksearch.org/topic/ckpool-stats-iphone-app-1344360
Post there and he should reply.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
There was a failover on the JP node for about a minute at 05:09 UTC.
It reconnected OK and seems all OK again ... so most mining on the JP node will have failed over to pool2 or pool3 for 5 minutes.
Though it looks like one user didn't fail back, so if you're one of the few mining on JP, you should check whether you're the one who didn't fail back.
newbie
Activity: 103
Merit: 0
There's an over-300-percent block out there waiting to be cracked.  Let's get this one...haha
Nah, we can let your other pool have that one.  Wink
I believe he was referring to the network block, not the pool.

(I could be wrong)

I'd be all-in for a 300% network block...lots of transactions.

Nope. I am talking about the pool. Just joshing with him about it!!! Ha!!! Playful banter, no harm no foul.


Yes, I knew what Shazam was talking about....haha
newbie
Activity: 103
Merit: 0
There's an over-300-percent block out there waiting to be cracked.  Let's get this one...haha
Nah, we can let your other pool have that one.  Wink
I believe he was referring to the network block, not the pool.

(I could be wrong)

I'd be all-in for a 300% network block...lots of transactions.

Yes, it was a 300 percent network block.  I should have been clearer.  Thanks

full member
Activity: 350
Merit: 158
#takeminingback
There's an over-300-percent block out there waiting to be cracked.  Let's get this one...haha
Nah, we can let your other pool have that one.  Wink
I believe he was referring to the network block, not the pool.

(I could be wrong)

I'd be all-in for a 300% network block...lots of transactions.

Nope. I am talking about the pool. Just joshing with him about it!!! Ha!!! Playful banter, no harm no foul.
jr. member
Activity: 104
Merit: 5
There's an over-300-percent block out there waiting to be cracked.  Let's get this one...haha
Nah, we can let your other pool have that one.  Wink
I believe he was referring to the network block, not the pool.

(I could be wrong)

I'd be all-in for a 300% network block...lots of transactions.
yxt
legendary
Activity: 3528
Merit: 1116
OK, thanks, so just a different freq...

I played around with undervolting over the last few days.
I'll try to combine that.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
the heater I run, a Canaan A741, is 40dB or less in my quiet heating mode

What options are you using for heating mode?
https://bitcointalksearch.org/topic/m.19893916
But I'm running it at --avalon7-freq 452, which is around 4.8TH/s, so it's quieter than last year.
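
For anyone wanting to try something similar, a minimal sketch of a cgminer start line along those lines is below. The pool URL and worker name are placeholders rather than real kano.is settings, and the only Avalon option shown is the frequency mentioned above; see the linked post for the full set of options.

# Minimal sketch: run an Avalon 741 at a lower frequency for quieter "heater" duty.
# The stratum URL and worker name are placeholders - substitute your own pool details.
cgminer \
  -o stratum+tcp://pool.example.com:3333 \
  -u yourusername.worker1 \
  -p x \
  --avalon7-freq 452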
yxt
legendary
Activity: 3528
Merit: 1116
the heater I run, a Canaan A741, is 40dB or less in my quiet heating mode

What options are you using for heating mode?
copper member
Activity: 232
Merit: 2
Kano, any news of supporting dragonminers / ASICboost?
Hopefully I'll get time to debug that crappy code in a few days.
The T1 is hellishly loud even in its quietest mode - minimum 62dB - so I'm a little limited on when I can test with it also.
(whereas the heater I run, a Canaan A741, is 40dB or less in my quiet heating mode)
The web interface sux though; it's missing the two most important numbers, DA and DR, and the developer helping create a miner that you are locked out of sux also - open source my ass.
Clearly you can pay someone to do anything Tongue

For a couple of days, I've just been working on the after-effects of moving to a new server - that's all done now for everything that matters.
However, I've found a rather interesting event occurring on the server over the last few days that I need to sort out - the SPS often doubles to more than 7000. It works OK, but I've realised a straightforward fix to reduce that in case it doubles again Tongue

thank you, waiting with 20 dragonminers to join kano!
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Kano, any news of supporting dragonminers / ASICboost?
Hopefully I'll get time to debug that crappy code in a few days.
The T1 is hellishly loud even in its quietest mode - minimum 62dB - so I'm a little limited on when I can test with it also.
(whereas the heater I run, a Canaan A741, is 40dB or less in my quiet heating mode)
The web interface sux though; it's missing the two most important numbers, DA and DR, and the developer helping create a miner that you are locked out of sux also - open source my ass.
Clearly you can pay someone to do anything Tongue

For a couple of days, I've just been working on the after-effects of moving to a new server - that's all done now for everything that matters.
However, I've found a rather interesting event occurring on the server over the last few days that I need to sort out - the SPS often doubles to more than 7000. It works OK, but I've realised a straightforward fix to reduce that in case it doubles again Tongue
jr. member
Activity: 284
Merit: 3
Kano, any news of supporting dragonminers / ASICboost?

I think he tried it out on a test server and found bugs nobody else knew about, so he is holding off on implementing it for now.
copper member
Activity: 232
Merit: 2
Kano, any news of supporting dragonminers / ASICboost?
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
There will be a very short failover/failback on the UK node in about 1 hour, at 02:00 UTC.
This will ONLY be the UK node, no others.
It will be 5 minutes at the most.
If your pool2 points to another kano.is node then you should just fail over to the 2nd node and then back again.
So I took advantage of this to test the cgminer failover setup--shenanigans, I tell ya Wink--which proved to be interesting to watch in real time.

First, I set Pool 1 to UK and waited for its cgminer Status=Dead; however, I was surprised to see both 2 & 3 seemingly simultaneously go StratumActive=true and both of their GetWorks starting to increment...

This is when it got interesting.

Instead of Pool 2 (US) taking off and doing its thing, Pool 3 seemed to win out, and Pool 2 gave up. After refreshing some more, it then switched to Pool 2, and eventually I reconfigured my Pool 1 back to NYA.

This concludes my testing. Make of it what you will... Shenanigans. Wink
Yeah, it's been an ongoing thing for literally years.
Dodgy software that a certain someone doesn't want to fix ... or maybe they can't Tongue
Thanks for doing a clear-cut test of it to show it in detail!
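
For reference, a minimal sketch of a three-pool cgminer failover setup like the one tested above - the URLs and worker names are placeholders, not the actual kano.is node addresses:

# Pools are tried in the order listed; --failover-only keeps all work on the
# highest-priority live pool instead of leaking work to the backup pools.
cgminer --failover-only \
  -o stratum+tcp://uk-node.example.com:3333 -u yourusername.worker1 -p x \
  -o stratum+tcp://us-node.example.com:3333 -u yourusername.worker1 -p x \
  -o stratum+tcp://jp-node.example.com:3333 -u yourusername.worker1 -p x

With this, cgminer marks an unreachable pool Dead, switches to the next live pool, and fails back when the first pool recovers - the behaviour being poked at in the test above.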
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
There will be a very short failover/failback on the UK node in about 1 hour, at 02:00 UTC.
This will ONLY be the UK node, no others.
It will be 5 minutes at the most.
If your pool2 points to another kano.is node then you should just fail over to the 2nd node and then back again.

AWS has found a problem with the underlying LS hardware, so I just need to restart it and it will move off the problematic hardware.
The UK node restart was longer than anticipated due to the node losing its IP address.
It took 18 minutes. It's all OK now.

My mistake Sad The default AWSLS is to have a dynamic IP, so any restart loses the IP address ... oh well, my mistake I guess for not knowing that in advance - but they didn't mention it when they told me I had to restart it, or mention it anywhere on the Instance creation page: you just get an instance with a normal IP address.

Looks like I'm going to have to plan to restart all the nodes in the near future and change their IP addresses to static (a new IP).
I'll let everyone know when I'm going to do it; it may be a week or more until I do, and of course I'll stagger them so that only one node at a time has a transient IP when I do them all.

The normal operation is to never restart them; alas, when problems occur with the provider (AWS in this case) and they require me to restart a node, it will lose its IP, so there'll be another 10+ minutes of DNS change outage - so I'll plan to do that once for all nodes, to switch to a static IP on restart, in the near future.

... and having said all that ... I realised the obvious Smiley

No, I won't do it to any of the nodes unless they require restarting in the future.

So each node itself will only have the IP updated once, IF it ever needs restarting - the nodes normally run non-stop forever, so until one requires a restart - and I'd give the usual restart warning about it - there's no need for me to actually update the IPs.
Mine on! Smiley
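
For anyone curious what the switch involves when a node does eventually need it, on Lightsail it is just an allocate-and-attach with the AWS CLI - the instance and static-IP names below are placeholders, not the real node setup:

# Sketch: pin a Lightsail instance to a static IP so later restarts keep the same address.
aws lightsail allocate-static-ip --static-ip-name uk-node-ip
aws lightsail attach-static-ip --static-ip-name uk-node-ip --instance-name uk-node
# After attaching, the DNS A record needs updating once; restarts after that keep this IP.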
member
Activity: 490
Merit: 16
1xA921 + 1xA741 + Backup-->1xA6 ;)
There will be a very short failover/failback on the UK node in about 1 hour, at 02:00 UTC.
This will ONLY be the UK node, no others.
It will be 5 minutes at the most.
If your pool2 points to another kano.is node then you should just fail over to the 2nd node and then back again.
So I took advantage of this to test the cgminer failover setup--shenanigans, I tell ya Wink--which proved to be interesting to watch in real time.

First, I set Pool 1 to UK and waited for its cgminer Status=Dead; however, I was surprised to see both 2 & 3 seemingly simultaneously go StratumActive=true and both of their GetWorks starting to increment...

This is when it got interesting.

Instead of Pool 2 (US) taking off and doing its thing, Pool 3 seemed to win out, and Pool 2 gave up. After refreshing some more, it then switched to Pool 2, and eventually I reconfigured my Pool 1 back to NYA.

This concludes my testing. Make of it what you will... Shenanigans. Wink
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
There will be a very short failover/failback on the UK node in about 1 hour, at 02:00 UTC.
This will ONLY be the UK node, no others.
It will be 5 minutes at the most.
If your pool2 points to another kano.is node then you should just fail over to the 2nd node and then back again.

AWS has found a problem with the underlying LS hardware, so I just need to restart it and it will move off the problematic hardware.
The UK node restart was longer than anticipated due to the node losing its IP address.
It took 18 minutes. It's all OK now.

My mistake Sad The default AWSLS is to have a dynamic IP, so any restart loses the IP address ... oh well, my mistake I guess for not knowing that in advance - but they didn't mention it when they told me I had to restart it, or mention it anywhere on the Instance creation page: you just get an instance with a normal IP address.

Looks like I'm going to have to plan to restart all the nodes in the near future and change their IP addresses to static (a new IP).
I'll let everyone know when I'm going to do it; it may be a week or more until I do, and of course I'll stagger them so that only one node at a time has a transient IP when I do them all.

The normal operation is to never restart them; alas, when problems occur with the provider (AWS in this case) and they require me to restart a node, it will lose its IP, so there'll be another 10+ minutes of DNS change outage - so I'll plan to do that once for all nodes, to switch to a static IP on restart, in the near future.