Author

Topic: KanoPool kano.is lowest 0.9% fee 🐈 since 2014 - Worldwide - 2432 blocks - page 1484. (Read 5352420 times)

sr. member
Activity: 546
Merit: 253
I am thinking that this orphan issue is just the tip of the iceberg - beyond this we have our extraordinary bad luck for many days now.

I suspect something has changed and this pool is getting locked out of blocks somehow.  The orphans may/may not be related.

This is exactly what happened at Slush before I moved here.  I'm having a deja vu.

I did not want to say this, but I did think it.


What is worse is this: if this is true, it confirms it:


...


Until you spell our name correctly, you won't be able to understand why you always get orphaned. We haven't had any orphans for 7 weeks, or more than 1400 blocks.

Interesting; I wonder what kano's thoughts are. I did notice slush's pool has been killing blocks lately. Rigging the system seems difficult.
legendary
Activity: 4354
Merit: 9201
'The right to privacy matters'
I am thinking that this orphan issue is just the tip of the iceberg - beyond this we have our extraordinary bad luck for many days now.

I suspect something has changed and this pool is getting locked out of blocks somehow.  The orphans may/may not be related.

This is exactly what happened at Slush before I moved here.  I'm having a deja vu.

I did not want to say this, but I did think it.


What is worse is this: if this is true, it confirms it:


...


Until you spell our name correctly, you won't be able to understand why you always get orphaned. We haven't had any orphans for 7 weeks, or more than 1400 blocks.


Now take a step back and look at this: the leader of the largest pool said this.

  Forget everything about kano.is

 Think overall about what the claim is: we can crush you, or orphan you, or waste you ...


I know of a few ways they can do it.   

So to everyone here that is renting from NH / WH, consider that:

   f2pool can send dead hashers to WH/NH with ease.
   f2pool can mine on its own blocks out of proper order. They actually did this and caused a fork.
   f2pool claims they can always orphan us; see above.

sr. member
Activity: 324
Merit: 260
So ... ... fuck, crap, &#$^*# *(#$&*(&$(* (*#&$(* &#*(&$ #( (*#&*( $&(*#, someone find something I can kill ... ... ... ...

OK, now for the details.

The CN node did indeed do its work properly.
But that still wasn't good enough.

Every single relay shows our block before the f2pool block, even the CN relay.
Every single relay shows our block sent back to us before the f2pool block, even the CN relay.
... the us relays don't show us sending it to the relay coz the solo pool did that faster.

Our time to process the block was 280ms.

I've shortened all the block hashes below to 4 zeros and 4 hex since that's enough to see which is which.

00003e80 is 410418 BTCC in china - all pools worked on this block

*00002b07* is 410419 our block

-00001f67- is 410419 FUPool
00003007 is 410420 FUPool confirming their block

Code:
bjs [2016-05-05 23:35:17.374+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
bjs [2016-05-05 23:38:01.073+00] *00002b07* sent, size 985717 with 37638 bytes on the wire
bjs [2016-05-05 23:38:01.209+00] *00002b07* recv'd, size 985693 with 37546 bytes on the wire
bjs [2016-05-05 23:38:02.227+00] -00001f67- recv'd, size 989649 with 935353 bytes on the wire
bjs [2016-05-05 23:44:40.138+00] 00003007 recv'd, size 755724 with 4191 bytes on the wire

jpy [2016-05-05 23:35:17.561+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
jpy [2016-05-05 23:38:01.122+00] *00002b07* sent, size 985717 with 37638 bytes on the wire
jpy [2016-05-05 23:38:01.291+00] *00002b07* recv'd, size 985693 with 37546 bytes on the wire
jpy [2016-05-05 23:38:08.164+00] -00001f67- recv'd, size 989649 with 935353 bytes on the wire
jpy [2016-05-05 23:44:40.200+00] 00003007 recv'd, size 755724 with 4191 bytes on the wire

eu [2016-05-05 23:35:17.641+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
eu [2016-05-05 23:38:01.118+00] *00002b07* sent, size 985717 with 37638 bytes on the wire
eu [2016-05-05 23:38:01.163+00] *00002b07* recv'd, size 985693 with 37546 bytes on the wire
eu [2016-05-05 23:38:05.960+00] -00001f67- recv'd, size 989649 with 935353 bytes on the wire
eu [2016-05-05 23:44:40.240+00] 00003007 recv'd, size 755724 with 4191 bytes on the wire

us-east [2016-05-05 23:35:17.741+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
us-east [2016-05-05 23:38:01.601+00] *00002b07* recv'd, size 985693 with 37546 bytes on the wire
us-east [2016-05-05 23:38:08.310+00] -00001f67- recv'd, size 989649 with 935353 bytes on the wire
us-east [2016-05-05 23:44:40.268+00] 00003007 recv'd, size 755724 with 4191 bytes on the wire

us-west [2016-05-05 23:35:17.741+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
us-west [2016-05-05 23:38:01.601+00] *00002b07* recv'd, size 985693 with 37546 bytes on the wire
us-west [2016-05-05 23:38:08.310+00] -00001f67- recv'd, size 989649 with 935353 bytes on the wire
us-west [2016-05-05 23:44:40.268+00] 00003007 recv'd, size 755724 with 4191 bytes on the wire

The winner was of course decided by the next block (00003007), but the problem is: why was FUPool not working on our block?

So basically the cause was one or more of the following:
FUPool is REAL slow getting their blocks to the CN relay.
FUPool works on their own blocks after they see other pool blocks if they are found soon after.
FUPool had most of their 410419 block transactions not in the relay, so was really slow getting their block into the relay (compression was almost zero as you can see). Edit: but I don't think this is it, coz it was prolly the relay that did that, since it already had our block using those transactions.

Until you spell our name correctly, you won't be able to understand why you always get orphaned. We haven't had any orphans for 7 weeks, or more than 1400 blocks.
legendary
Activity: 1590
Merit: 1002
I am thinking that this orphan issue is just the tip of the iceberg - beyond this we have our extraordinary bad luck for many days now.

I suspect something has changed and this pool is getting locked out of blocks somehow.  The orphans may/may not be related.

This is exactly what happened at Slush before I moved here.  I'm having a deja vu.
legendary
Activity: 4354
Merit: 9201
'The right to privacy matters'

So basically the cause was one or more of the following:
FUPool works on their own blocks after they see other pool blocks if they are found soon after.

Wow, I would call this an exploit. It would give them the ability to keep mining until the next block is found by the network, praying they get the next one so they can flip the orphan race they lost for the previous one.

I hope this is not the case


That's the one that is truly gaming the system. I'm not sure they did it, but if it were to happen over and over and over it would be flat out wrong.
legendary
Activity: 1638
Merit: 1005

So basically the cause was one or more of the following:
FUPool works on their own blocks after they see other pool blocks if they are found soon after.

Wow, I would call this an exploit. It would give them the ability to keep mining until the next block is found by the network, praying they get the next one so they can flip the orphan race they lost for the previous one.

I hope this is not the case

EDIT: Indeed, the bigger you are, the greater your chance of being able to use the "exploit".
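
As an aside, here is a rough Monte Carlo sketch of that point (purely illustrative Python; the hashrate shares and the rescue_probability helper are made up, and nothing here is real pool data): if a pool keeps mining on its own tied block while the rest of the network mines on the competitor, it wins the race roughly in proportion to its hashrate share, so a bigger pool gets much more out of this behaviour.

Code:
import random

def rescue_probability(pool_share, trials=100_000, seed=1):
    # The pool keeps extending its own tied block; everyone else extends the
    # rival block. Block-finding times are exponential with rates proportional
    # to hashrate, so the pool wins the race with probability ~ pool_share.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        t_pool = rng.expovariate(pool_share)        # time for the pool to extend its own block
        t_rest = rng.expovariate(1.0 - pool_share)  # time for the rest of the network to extend the rival
        if t_pool < t_rest:
            wins += 1
    return wins / trials

for share in (0.05, 0.15, 0.25):
    print(f"{share:.0%} of the hashrate -> wins the tie ~{rescue_probability(share):.1%} of the time")

Which just puts numbers on the EDIT above: at 25% of the network the "exploit" pays off about one time in four, at 5% it hardly ever does.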
hero member
Activity: 1610
Merit: 538
I'm in BTC XTC
I agree with the argument that pools shouldn't exceed 10% of the network. But in situations like this, the argument can be made that a large pool outside of cn is needed.
Most heartily fucking agree, and THIS is one of the pools that should do it IMHO.  Though I will always defer to the opinions and decisions that -ck and kano-san deem appropriate.  It's their pool after all, and as with anything in life, "love it, or leave it!"

Our benevolent dictators, huzzah!
legendary
Activity: 1834
Merit: 1080
---- winter*juvia -----
Huh.... rubbing my eyes.... waking up to RED dashboard this morning.... 2 orphans.... Fish...

DEFCON1 !!!!!

I am sure kano and ck have something up their sleeves to counter this.... I want to elaborate further but I am sure the mac guy can see our posts.....

I leave it to the good hands of kano and ck --- they are probably midway crafting the next move.... 1, 2, 3, 4, 5, 6, 7.....
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Well, I will add what some may see as an unexpected follow up comment.

This does actually mean we should expect to see fewer orphans.
The times shown on the non-GFW relays were quite a long time after the GFW relay (bjs).
So it does mean our blocks do get into the GFW faster.

... It just didn't help for this orphan due to the short timing.
Without the CN node this still would have happened just the same, and not having the GFW relay info would have had me ranting even more about it, since the non-GFW relay numbers are shocking to say the least ... the FUPool block arrived 4-7 seconds after ours.
sr. member
Activity: 546
Merit: 253
I agree with the argument that pools shouldn't exceed 10% of the network. But in situations like this, the argument can be made that a large pool outside of cn is needed.
legendary
Activity: 4354
Merit: 9201
'The right to privacy matters'
So ... ... fuck, crap, &#$^*# *(#$&*(&$(* (*#&$(* &#*(&$ #( (*#&*( $&(*#, someone find something I can kill ... ... ... ...

OK, now for the details.

The CN node did indeed do its work properly.
But that still wasn't good enough.

Every single relay shows our block before the f2pool block, even the CN relay.
Every single relay shows our block sent back to us before the f2pool block, even the CN relay.
... the us relays don't show us sending it to the relay coz the solo pool did that faster.

Our time to process the block was 280ms.

I've shortened all the block hashes below to 4 zeros and 4 hex since that's enough to see which is which.

00003e80 is 410418 BTCC in china - all pools worked on this block

*00002b07* is 410419 our block

-00001f67- is 410419 FUPool
00003007 is 410420 FUPool confirming their block
Code:
I deleted this

The winner was of course decided by the next block (00003007), but the problem is: why was FUPool not working on our block?

So basically the cause was one or more of the following:
FUPool is REAL slow getting their blocks to the CN relay.
FUPool works on their own blocks after they see other pool blocks if they are found soon after.
FUPool had most of their 410419 block transactions not in the relay, so was really slow getting their block into the relay (compression was almost zero as you can see)

This could kill BTC very dead very soon.  If they get much bigger they will do it more often than every hundred blocks or so.

Oh well. The bolded one (working on their own blocks after seeing ours) is the one I see as true cheating.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
So ... ... fuck, crap, &#$^*# *(#$&*(&$(* (*#&$(* &#*(&$ #( (*#&*( $&(*#, someone find something I can kill ... ... ... ...

OK, now for the details.

The CN node did indeed do its work properly.
But that still wasn't good enough.

Every single relay shows our block before the f2pool block, even the CN relay.
Every single relay shows our block sent back to us before the f2pool block, even the CN relay.
... the us relays don't show us sending it to the relay coz the solo pool did that faster.

Our time to process the block was 280ms.

I've shortened all the block hashes below to 4 zeros and 4 hex since that's enough to see which is which.

00003e80 is 410418 BTCC in china - all pools worked on this block

*00002b07* is 410419 our block

-00001f67- is 410419 FUPool
00003007 is 410420 FUPool confirming their block

Code:
bjs [2016-05-05 23:35:17.374+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
bjs [2016-05-05 23:38:01.073+00] *00002b07* sent, size 985717 with 37638 bytes on the wire
bjs [2016-05-05 23:38:01.209+00] *00002b07* recv'd, size 985693 with 37546 bytes on the wire
bjs [2016-05-05 23:38:02.227+00] -00001f67- recv'd, size 989649 with 935353 bytes on the wire
bjs [2016-05-05 23:44:40.138+00] 00003007 recv'd, size 755724 with 4191 bytes on the wire

jpy [2016-05-05 23:35:17.561+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
jpy [2016-05-05 23:38:01.122+00] *00002b07* sent, size 985717 with 37638 bytes on the wire
jpy [2016-05-05 23:38:01.291+00] *00002b07* recv'd, size 985693 with 37546 bytes on the wire
jpy [2016-05-05 23:38:08.164+00] -00001f67- recv'd, size 989649 with 935353 bytes on the wire
jpy [2016-05-05 23:44:40.200+00] 00003007 recv'd, size 755724 with 4191 bytes on the wire

eu [2016-05-05 23:35:17.641+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
eu [2016-05-05 23:38:01.118+00] *00002b07* sent, size 985717 with 37638 bytes on the wire
eu [2016-05-05 23:38:01.163+00] *00002b07* recv'd, size 985693 with 37546 bytes on the wire
eu [2016-05-05 23:38:05.960+00] -00001f67- recv'd, size 989649 with 935353 bytes on the wire
eu [2016-05-05 23:44:40.240+00] 00003007 recv'd, size 755724 with 4191 bytes on the wire

us-east [2016-05-05 23:35:17.741+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
us-east [2016-05-05 23:38:01.601+00] *00002b07* recv'd, size 985693 with 37546 bytes on the wire
us-east [2016-05-05 23:38:08.310+00] -00001f67- recv'd, size 989649 with 935353 bytes on the wire
us-east [2016-05-05 23:44:40.268+00] 00003007 recv'd, size 755724 with 4191 bytes on the wire

us-west [2016-05-05 23:35:17.741+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
us-west [2016-05-05 23:38:01.601+00] *00002b07* recv'd, size 985693 with 37546 bytes on the wire
us-west [2016-05-05 23:38:08.310+00] -00001f67- recv'd, size 989649 with 935353 bytes on the wire
us-west [2016-05-05 23:44:40.268+00] 00003007 recv'd, size 755724 with 4191 bytes on the wire

The winner was of course decided by the next block (00003007), but the problem is: why was FUPool not working on our block?

So basically the cause was one or more of the following:
FUPool is REAL slow getting their blocks to the CN relay.
FUPool works on their own blocks after they see other pool blocks if they are found soon after.
FUPool had most of their 410419 block transactions not in the relay, so was really slow getting their block into the relay (compression was almost zero as you can see). Edit: but I don't think this is it, coz it was prolly the relay that did that, since it already had our block using those transactions.
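
For anyone who wants to redo the comparison from the raw log lines, here is a minimal parsing sketch (Python, written against the exact "relay [timestamp] hash verb, size N with M bytes on the wire" layout quoted above; the hash markers are copied from the post and everything else is assumed). It prints, per relay, how long after our block the FUPool block was received, and the wire-bytes/size ratio that shows the near-zero compression on their block.

Code:
import re
from datetime import datetime

# Matches lines like:
# bjs [2016-05-05 23:35:17.374+00] 00003e80 recv'd, size 989168 with 6717 bytes on the wire
LINE = re.compile(
    r"^(?P<relay>\S+) \[(?P<ts>[^\]]+)\] (?P<hash>\S+) (?P<verb>\S+), "
    r"size (?P<size>\d+) with (?P<wire>\d+) bytes on the wire"
)
OURS, THEIRS = "*00002b07*", "-00001f67-"

def report(log_text):
    received = {}  # relay -> {hash: (time received, block size, bytes on the wire)}
    for line in log_text.splitlines():
        m = LINE.match(line.strip())
        if not m or m["verb"] != "recv'd":
            continue
        ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S.%f+00")
        received.setdefault(m["relay"], {})[m["hash"]] = (ts, int(m["size"]), int(m["wire"]))
    for relay, blocks in received.items():
        if OURS in blocks and THEIRS in blocks:
            delay = (blocks[THEIRS][0] - blocks[OURS][0]).total_seconds()
            ours = blocks[OURS][2] / blocks[OURS][1]
            theirs = blocks[THEIRS][2] / blocks[THEIRS][1]
            print(f"{relay}: FUPool block {delay:+.3f}s after ours; "
                  f"wire bytes {ours:.1%} of block size for ours vs {theirs:.1%} for theirs")

Fed the lines quoted above, it shows the FUPool block arriving about 1 second after ours at bjs and roughly 5-7 seconds after ours at the other relays, with their block going over the wire at ~95% of its size versus under 4% for ours.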
member
Activity: 62
Merit: 10
legendary
Activity: 1638
Merit: 1005
2 orphans Cry What is happening? So unlucky.

This is the second time I recall that we lost an orphan race because the pool that was against us found the next block in the chain.

This is pure bad luck.
legendary
Activity: 1726
Merit: 1018
These orphans are arguments against block size increases.  Larger blocks will mean slower propagation, which means the Chinese pools will actually do this even more than they already do (as long as they continue to monopolize the network hashrate the way they do now, anyway).
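
To put a rough number on that (a textbook back-of-envelope only, not a measurement): assuming Poisson block arrivals with a 600-second mean interval, the chance that a competing block turns up somewhere during a propagation delay of d seconds is about 1 - e^(-d/600), so every extra second of propagation adds roughly 0.17% orphan race risk per block. The delays below are picked purely for illustration.

Code:
import math

MEAN_BLOCK_INTERVAL = 600.0  # average seconds between blocks

def orphan_risk(delay_seconds):
    # Probability that some other miner finds a competing block during the
    # time it takes our block to propagate, assuming Poisson block arrivals.
    return 1.0 - math.exp(-delay_seconds / MEAN_BLOCK_INTERVAL)

for delay in (1, 5, 15, 60):
    print(f"{delay:>3}s propagation delay -> ~{orphan_risk(delay):.2%} orphan race risk per block")

That ignores who actually wins the race, which, as the relay logs above show, comes down to who finds the next block, but it is enough to see why bigger, slower-to-propagate blocks make these races more common.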
legendary
Activity: 1484
Merit: 1004
2 orphans Cry What is happening? So unlucky.
sr. member
Activity: 261
Merit: 250
Kano, how close was this one?

sr. member
Activity: 276
Merit: 250
Blockchain has it as Kano, Blocktrail as DiscusFish / F2Pool Smiley

Since f2pool hit the following block, they win...
Arrgh! Angry
To self: Ok...practice what I preach....must have patience.
hero member
Activity: 770
Merit: 523