Author

Topic: KanoPool kano.is lowest 0.9% fee 🐈 since 2014 - Worldwide - 2432 blocks - page 171. (Read 5352229 times)

legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
the heater I run Canaan A741 is 40db or less in my heating quiet mode

What options are you using for heating mode?
https://bitcointalksearch.org/topic/m.19893916
But I'm running it at --avalon7-freq 452, which is around 4.8 TH/s, so it's quieter than last year.
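For anyone wanting to copy a heating/quiet setup, here's a rough sketch of the kind of stock cgminer invocation that option belongs to - the pool URL and worker name below are just placeholders, and kano's actual option set is in the linked post:

Code:
# quiet/heating mode: lower avalon7 frequency = less hash, heat and fan noise
./cgminer --avalon7-freq 452 \
  -o stratum+tcp://stratum.kano.is:3333 \
  -u YourUsername.worker1 -p x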
yxt
legendary
Activity: 3528
Merit: 1116
the heater I run Canaan A741 is 40db or less in my heating quiet mode

What options are you using for heating mode?
copper member
Activity: 232
Merit: 2
Kano any news of supporting dragonminers / ASICboost?
Hopefully get time to debug that crappy code in a few days.
The T1 is hellishly loud even in its quietest mode - minimum 62 dB - so I'm a little limited on when I can test with it also.
(whereas the heater I run Canaan A741 is 40db or less in my heating quiet mode)
The web interface sux though: it's missing the two most important numbers, DA and DR, and a developer helping create a miner that you are locked out of sux also - open source my ass.
Clearly you can pay someone to do anything Tongue

For a couple of days, I've just been working on the after effects of moving to a new server - that's all done now for everything that matters.
However, I've found a rather interesting event occurring on the server over the last few days that I need to sort out a solution for - the SPS often doubles to more than 7000. It works OK, but I've realised a straightforward fix to reduce that in case it doubles again Tongue

thank you, waiting with 20 dragonminers to join kano!
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Kano any news of supporting dragonminers / ASICboost?
Hopefully get time to debug that crappy code in a few days.
The T1 is hellishly loud even in its quietest mode - minimum 62 dB - so I'm a little limited on when I can test with it also.
(whereas the heater I run Canaan A741 is 40db or less in my heating quiet mode)
The web interface sux though: it's missing the two most important numbers, DA and DR, and a developer helping create a miner that you are locked out of sux also - open source my ass.
Clearly you can pay someone to do anything Tongue

For a couple of days, I've just been working on the after effects of moving to a new server - that's all done now for everything that matters.
However, I've found a rather interesting event occurring on the server over the last few days that I need to sort out a solution for - the SPS often doubles to more than 7000. It works OK, but I've realised a straightforward fix to reduce that in case it doubles again Tongue
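As an aside, the DA and DR numbers mentioned above (difficulty accepted / rejected) can usually be pulled straight from cgminer's own API rather than a vendor web UI - a minimal sketch, assuming the miner runs cgminer (or a cgminer-based firmware) started with the API enabled, and using a made-up LAN address:

Code:
# on the miner: enable the read-only API for the local network
./cgminer ... --api-listen --api-allow R:192.168.1.0/24

# from another machine on that network:
echo -n "summary" | nc 192.168.1.50 4028
# the reply includes Difficulty Accepted=... and Difficulty Rejected=... fields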
jr. member
Activity: 284
Merit: 3
Kano any news of supporting dragonminers / ASICboost?

I think he tried it out on a test server and found bugs nobody else knew about, so he is avoiding implementing it for now.
copper member
Activity: 232
Merit: 2
Kano any news of supporting dragonminers / ASICboost?
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
There will be a very short failover - failback on the UK node in about 1 hour 02:00 UTC
This will ONLY be the UK node, no others.
It will be 5 minutes at the most.
If your pool2 points to another kano.is node then you should just failover to the 2nd node then back again.
So I took advantage of this to test the cgminer failover setup--shenanigans, I tell ya Wink--which proved to be interesting to watch in real-time.

First, I set Pool 1 to UK and waited for its cgminer Status=Dead; however, I was surprised to see both 2 & 3 seemingly simultaneously go StratumActive=true and both of their GetWorks starting to increment...

This is when it got interesting.

Instead of Pool 2 (US) taking off and doing its thing, Pool 3 seemed to win out, and Pool 2 gave up. After refreshing some more, it then switched to Pool 2, and in the end I reconfigured my Pool 1 back to NYA.

This concludes my testing. Make of it what you will... Shenanigans. Wink
Yeah it's been an ongoing thing for literally years.
Dodgy software that a certain someone doesn't want to fix ... or maybe they can't Tongue
Thanks for doing a clear cut test of it to show it in detail!
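For anyone wanting to repeat that failover test, here's a rough sketch of a three-pool cgminer setup of the kind described above - the node hostnames and worker name are placeholders, not necessarily the exact ones used in the test:

Code:
./cgminer \
  -o stratum+tcp://uk.kano.is:3333 -u YourUsername.worker1 -p x \
  -o stratum+tcp://us.kano.is:3333 -u YourUsername.worker1 -p x \
  -o stratum+tcp://sg.kano.is:3333 -u YourUsername.worker1 -p x \
  --failover-only

cgminer's default pool strategy is already failover in the order the pools are listed; --failover-only just stops it leaking work to the backup pools while the primary is merely lagging rather than dead.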
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
There will be a very short failover - failback on the UK node in about 1 hour 02:00 UTC
This will ONLY be the UK node, no others.
It will be 5 minutes at the most.
If your pool2 points to another kano.is node then you should just failover to the 2nd node then back again.

AWS has found a problem with the underlying LS hardware, so I just need to restart it and it will move off the problematic hardware.
The UK node restart was longer than anticipated due to the node losing its IP address.
It took 18 minutes. It's all ok now.

My mistake Sad The default for AWS LS is a dynamic IP, so any restart loses the IP address ... oh well, my mistake I guess for not knowing that in advance - but they didn't mention it when they told me I had to restart it, or mention it anywhere on the instance creation page: you just get an instance with a normal IP address.

Looks like I'm going to have to plan to restart all the nodes in the near future and change their IP addresses to static (a new IP).
I'll let everyone know when I'm going to do it; it may be a week or more until I do, and of course I'll stagger them so that only one node at a time has a transient IP when I do them all.

Normal operation is to never restart them, but when problems occur with the provider (AWS in this case) and they require me to restart a node, it will lose its IP, so there'll be another 10+ minutes of DNS change outage - so I'll plan to do that once for all nodes, switching to a static IP on restart, in the near future.

... and having said all that ... I realised the obvious Smiley

No, I won't do it to any of the nodes unless they require restarting in the future.

So each node will only have its IP updated once, IF it ever needs restarting - the nodes normally run non-stop forever, so until one requires a restart - and I'd give the usual restart warning about it - there's no need for me to actually update the IPs.
Mine on! Smiley
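For the record, if/when a node does need that one-off restart, pinning a Lightsail instance to a static IP is roughly a two-command job with the AWS CLI - the names and region below are made-up examples, not the pool's actual instances:

Code:
# reserve a static IP in the instance's region
aws lightsail allocate-static-ip --static-ip-name uk-node-ip --region eu-west-2

# attach it; the instance then keeps this IP across restarts
aws lightsail attach-static-ip --static-ip-name uk-node-ip \
  --instance-name uk-node --region eu-west-2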
member
Activity: 490
Merit: 16
1xA921 + 1xA741 + Backup-->1xA6 ;)
There will be a very short failover - failback on the UK node in about 1 hour 02:00 UTC
This will ONLY be the UK node, no others.
It will be 5 minutes at the most.
If your pool2 points to another kano.is node then you should just failover to the 2nd node then back again.
So I took advantage of this to test the cgminer failover setup--shenanigans, I tell ya Wink--which proved to be interesting to watch in real-time.

First, I set Pool 1 to UK and waited for its cgminer Status=Dead; however, I was surprised to see both 2 & 3 seemingly simultaneously go StratumActive=true and both of their GetWorks starting to increment...

This is when it got interesting.

Instead of Pool 2 (US) taking off and doing its thing, Pool 3 seemed to win out, and Pool 2 gave up. After refreshing some more, it then switched to Pool 2, and in the end I reconfigured my Pool 1 back to NYA.

This concludes my testing. Make of it what you will... Shenanigans. Wink
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
There will be a very short failover - failback on the UK node in about 1 hour 02:00 UTC
This will ONLY be the UK node, no others.
It will be 5 minutes at the most.
If your pool2 points to another kano.is node then you should just failover to the 2nd node then back again.

AWS has found a problem with the underlying LS hardware, so I just need to restart it and it will move off the problematic hardware.
The UK node restart was longer than anticipated due to the node losing its IP address.
It took 18 minutes. It's all ok now.

My mistake Sad The default for AWS LS is a dynamic IP, so any restart loses the IP address ... oh well, my mistake I guess for not knowing that in advance - but they didn't mention it when they told me I had to restart it, or mention it anywhere on the instance creation page: you just get an instance with a normal IP address.

Looks like I'm going to have to plan to restart all the nodes in the near future and change their IP addresses to static (a new IP).
I'll let everyone know when I'm going to do it; it may be a week or more until I do, and of course I'll stagger them so that only one node at a time has a transient IP when I do them all.

Normal operation is to never restart them, but when problems occur with the provider (AWS in this case) and they require me to restart a node, it will lose its IP, so there'll be another 10+ minutes of DNS change outage - so I'll plan to do that once for all nodes, switching to a static IP on restart, in the near future.
member
Activity: 490
Merit: 16
1xA921 + 1xA741 + Backup-->1xA6 ;)
Unfortunately, I hit a snag with my farm move.

Won’t be back online until after the 8th Sad

Missing these blocks with 1PH hurts like hell now
Anybody heard from SmurfBerry?

Hope he's getting everything straightened out.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
There will be a very short failover - failback on the UK node in about 1 hour 02:00 UTC
This will ONLY be the UK node, no others.
It will be 5 minutes at the most.
If your pool2 points to another kano.is node then you should just failover to the 2nd node then back again.

AWS has found a problem with the underlying LS hardware, so I just need to restart it and it will move off the problematic hardware.
full member
Activity: 350
Merit: 158
#takeminingback


over 300 percent block out there waiting to be cracked.  Let's get this one...haha

Nah, we can let your other pool have that one.  Wink

I'm not really in another pool unless you count 10 gh...lol

 Roll Eyes
newbie
Activity: 103
Merit: 0


over 300 percent block out there waiting to be cracked.  Let's get this one...haha

Nah, we can let your other pool have that one.  Wink

I'm not really in another pool unless you count 10 gh...lol
full member
Activity: 350
Merit: 158
#takeminingback


over 300 percent block out there waiting to be cracked.  Let's get this one...haha

Nah, we can let your other pool have that one.  Wink
newbie
Activity: 103
Merit: 0


over 300 percent block out there waiting to be cracked.  Let's get this one...haha
newbie
Activity: 48
Merit: 0
I'm somewhat screwing myself on my electric bill right now with 2 of my miners being S7s. I could replace them with 1 S9 and have more hash with less power consumption. S7s just aren't worth anything anymore for me to even feel like dealing with getting rid of them.

I have a storage locker with a bunch of outdated and inefficient miners (M3s, S7s, A742, etc) - if prices do go crazy again, they'll be sellable.

Not a bad plan. I may copy that.

If anyone wants to donate me an S7 + power supply I can give it a good home over here and put it to use.
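For what it's worth, the rough arithmetic behind that S7-to-S9 swap, using approximate nameplate specs (assumed figures, not measurements):

Code:
2 x S7: ~2 x 4.7 TH/s = ~9.5 TH/s at ~2 x 1300 W = ~2600 W (~0.27 J/GH)
1 x S9: ~13.5 TH/s at ~1350 W (~0.10 J/GH)

So roughly 40% more hash for about half the wall power, before PSU efficiency.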
member
Activity: 490
Merit: 16
1xA921 + 1xA741 + Backup-->1xA6 ;)
I have a storage locker with a bunch of outdated and inefficient miners (M3s, S7s, A742, etc) - if prices do go crazy again, they'll be sellable.
Not a bad plan. I may copy that.
Winter or the Moon, whichever comes first! Grin
member
Activity: 266
Merit: 13
I'm somewhat screwing myself on my electric bill right now with 2 of my miners being S7s. I could replace them with 1 S9 and have more hash with less power consumption. S7s just aren't worth anything anymore for me to even feel like dealing with getting rid of them.

I have a storage locker with a bunch of outdated and inefficient miners (M3s, S7s, A742, etc) - if prices do go crazy again, they'll be sellable.

Not a bad plan. I may copy that.
legendary
Activity: 952
Merit: 1003
...
Best of luck with all that volcano action.

Stay safe!
Ah! Thank you for the kind thought. That's on Big Island (Hawai'i Island), several hundred miles from Kaua'i. Actually, the major portion of Big Island really isn't nonfunctional, but it ain't fun, either.

I've got friends (an MD with Red Cross and several who are Team Rubicon) who have deployed to the Puna area. No one has really been injured by this event, though (well, one fellow...but we won't count that one...)

Mine on...  Kiss