Topic: [~32 TH] HHTT - Selected Diff/Stratum/PPLNS/Paid Stales/High Availability/Tor

legendary
Activity: 1379
Merit: 1003
nec sine labore
It seems we are doing way better with orphans; we still need a massive infusion of hashing power, though.

BTW, given that you're now using a faster virtual server, could you, fireduck, increase the block size?

This would increase the fees paid to the block's solver, which in the end should give people one more reason to join us.
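For context, the size of the blocks a pool's node builds is governed by bitcoind's block-creation settings. A minimal sketch of the relevant bitcoin.conf options, assuming a 2013-era bitcoind; the values are illustrative, not HHTT's actual configuration:

    # bitcoin.conf -- illustrative values only, not HHTT's actual settings
    blockmaxsize=500000        # maximum size (bytes) of blocks this node creates
    blockprioritysize=50000    # space reserved for high-priority (often low-fee) transactions
    blockminsize=0             # no forced minimum block size

Raising blockmaxsize lets more fee-paying transactions into each block the pool solves.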

spiccioli
sr. member
Activity: 392
Merit: 251
Good start to the swing.

Well, let's hope 1E8xj fixes his troubles fast... or we're down 2 TH

spiccioli
Yeah, looks like he was sending a bunch of h-not-zero (an odd error message I copied from pushpoold). It means the user was submitting shares that didn't match the assigned difficulty. My guess is some software/firmware update did not go well.
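For anyone curious what that check amounts to: a stratum pool rejects a share when its block-header hash does not meet the target implied by the difficulty assigned to that worker. A minimal generic sketch in Python (not SockThing's or pushpoold's actual code):

    # Generic share-difficulty check -- illustrative, not the pool's actual code.
    # "Difficulty 1" target commonly used by pools for share targets:
    DIFF1_TARGET = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

    def share_meets_difficulty(header_hash_hex, assigned_difficulty):
        """True if the share's block-header hash satisfies the assigned difficulty."""
        hash_value = int(header_hash_hex, 16)
        share_target = DIFF1_TARGET // int(assigned_difficulty)  # assumes difficulty >= 1
        return hash_value <= share_target

A miner whose firmware ignores the assigned difficulty keeps submitting hashes above that target, and the pool drops every one of them.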
legendary
Activity: 1379
Merit: 1003
nec sine labore
Good start to the swing.

Well, let's hope 1E8xj fixes his troubles fast... or we're down 2 TH

spiccioli
sr. member
Activity: 257
Merit: 250
Good start to the swing.
sr. member
Activity: 392
Merit: 251

Yeah, my GCE migration didn't fix everything.  I've now added a bunch of logging to SockThing so that I can track down exactly what was happening or not happening.

I saw an instance where two of my nodes didn't see a new block for over a minute and then eventually learned of it.  So I took some steps to improve the p2p connectivity and ensure that my nodes are at least connected to each other, to increase coverage.
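A minimal sketch of what "connected at least to each other" can look like in bitcoin.conf; the hostnames are the pool's public node names mentioned elsewhere in this thread, used here purely as an illustration:

    # bitcoin.conf on each pool node -- illustrative, not the actual configuration
    addnode=us-central-1.hhtt.1209k.com   # keep persistent connections to the other pool nodes
    addnode=us-central-2.hhtt.1209k.com
    addnode=eu-west-1.hhtt.1209k.com
    maxconnections=40                     # leave room for plenty of outside peers as well

addnode keeps bitcoind retrying those peers, so a new block found by any pool node reaches the others without depending on the wider p2p mesh.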
legendary
Activity: 1379
Merit: 1003
nec sine labore
sr. member
Activity: 257
Merit: 250
I think one of the causes of orphans is that my bitcoind instances are taking 5-10 seconds to do a getblocktemplate, so we end up working on the old block longer than we should.

I've done some testing and this seems to be due to poor disk latency on AWS EC2 instances.  I've done testing with Google Compute Engine and the numbers there are much better.

I've set up a test instance, us-central on GCE, and am doing some production testing there.  If it goes well, I'll switch everyone over to GCE, probably this week.


New nodes are up.  stratum.hhtt.1209k.com is pointing to them.

They are:
us-central-1.hhtt.1209k.com
us-central-2.hhtt.1209k.com
eu-west-1.hhtt.1209k.com


Extremely efficient pool operator, that fireduck.
member
Activity: 61
Merit: 10
But we're still missing those 1.5 TH/s.

I'd like to know: why did he leave, and where has he gone?

Talked with him again this past week. My understanding is that he no longer controls the hashpower.
sr. member
Activity: 392
Merit: 251
I think one of the causes of orphans is that my bitcoind instances are taking 5-10 seconds to do a getblocktemplate, so we end up working on the old block longer than we should.

I've done some testing and this seems to be due to poor disk latency on AWS EC2 instances.  I've done testing with Google Compute Engine and the numbers there are much better.

I've set up a test instance, us-central on GCE, and am doing some production testing there.  If it goes well, I'll switch everyone over to GCE, probably this week.


New nodes are up.  stratum.hhtt.1209k.com is pointing to them.

They are:
us-central-1.hhtt.1209k.com
us-central-2.hhtt.1209k.com
eu-west-1.hhtt.1209k.com
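A quick way to confirm which nodes the pool name currently resolves to (standard dig, nothing pool-specific):

    dig +short stratum.hhtt.1209k.com        # the round-robin name miners point at
    dig +short us-central-1.hhtt.1209k.com   # or check an individual node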
sr. member
Activity: 392
Merit: 251
I think one of the causes of orphans is that my bitcoind instances are taking 5-10 seconds to do a getblocktemplate, so we end up working on the old block longer than we should.

I've done some testing and this seems to be due to poor disk latency on AWS EC2 instances.  I've done testing with Google Compute Engine and the numbers there are much better.

I've set up a test instance, us-central on GCE, and am doing some production testing there.  If it goes well, I'll switch everyone over to GCE, probably this week.


Are you running the latest bitcoind?  5-10 seconds is a very long time, even for the crappy performance you get with AWS.  GBT is very single-core-CPU and HDD intensive, and AWS uses many cores with relatively slow clock rates, and disk access speeds are always garbage.

Best specs you can get for GBT: a high-clock-speed modern CPU (the E3-1230v2 is beastly for this since it has a 3.7 GHz turbo, 3.3 GHz base) and an SSD.  Another alternative is storing the blockchain on a ramdisk, but that's only an option if you've got a dedicated machine with 32 GB+ of RAM.

Yep, latest bitcoind.  Yeah, GBT seems to be entirely bound by disk latency.  I can't really afford the high-memory machines for this project.

Anyway, with GCE I am getting 0.05 seconds for getblocktemplate, which is quite an improvement at a very similar price.
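For anyone who wants to reproduce that measurement, a rough way to time a single getblocktemplate call against a local bitcoind over JSON-RPC (the rpcuser/rpcpassword and port below are placeholders, not the pool's):

    # Time one getblocktemplate call -- credentials and port are placeholders
    time curl -s --user rpcuser:rpcpassword \
      --data-binary '{"jsonrpc": "1.0", "id": "gbt", "method": "getblocktemplate", "params": [{}]}' \
      -H 'content-type: text/plain;' http://127.0.0.1:8332/ > /dev/null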
legendary
Activity: 1750
Merit: 1007
I think one of the causes of orphans is that my bitcoind instances are taking 5-10 seconds to do a getblocktemplate, so we end up working on the old block longer than we should.

I've done some testing and this seems to be due to poor disk latency on AWS EC2 instances.  I've done testing with Google Compute Engine and the numbers there are much better.

I've set up a test instance, us-central on GCE, and am doing some production testing there.  If it goes well, I'll switch everyone over to GCE, probably this week.


Are you running the latest bitcoind?  5-10 seconds is a very long time, even for the crappy performance you get with AWS.  GBT is very single-core-CPU and HDD intensive, and AWS uses many cores with relatively slow clock rates, and disk access speeds are always garbage.

Best specs you can get for GBT: a high-clock-speed modern CPU (the E3-1230v2 is beastly for this since it has a 3.7 GHz turbo, 3.3 GHz base) and an SSD.  Another alternative is storing the blockchain on a ramdisk, but that's only an option if you've got a dedicated machine with 32 GB+ of RAM.
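For completeness, a sketch of the ramdisk approach mentioned above, assuming Linux with tmpfs and far more RAM than the chain data needs; anything on tmpfs is lost at reboot, so an on-disk copy has to be kept and resynced:

    # Illustrative only -- sizes and paths are assumptions
    sudo mkdir -p /mnt/bitcoin-ram
    sudo mount -t tmpfs -o size=24g tmpfs /mnt/bitcoin-ram
    cp -a ~/.bitcoin/. /mnt/bitcoin-ram/          # seed the ramdisk from the on-disk copy
    bitcoind -datadir=/mnt/bitcoin-ram -daemon    # point bitcoind at the ramdisk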
sr. member
Activity: 392
Merit: 251
I think one of the causes of orphans is that my bitcoind instances are taking 5-10 seconds to do a getblocktemplate, so we end up working on the old block longer than we should.

I've done some testing and this seems to be due to poor disk latency on AWS EC2 instances.  I've done testing with Google Compute Engine and the numbers there are much better.

I've set up a test instance, us-central on GCE, and am doing some production testing there.  If it goes well, I'll switch everyone over to GCE, probably this week.
legendary
Activity: 1078
Merit: 1005
Here's the discussion on my pool about the same short block/orphan issue.
sr. member
Activity: 257
Merit: 250
It looks like Blockchain.info does not update orphaned blocks fast enough; "20 SEP" is the last orphan listed. Or it could be here: https://blockchain.info/rejected
legendary
Activity: 1078
Merit: 1005
It has the same timestamp as a found block, just a few seconds later, so was it not even submitted to the Bitcoin network?

Is it a real orphan?
I have had the same on my pool, which uses the same stratum software as HHTT. I wonder if there's an issue somewhere. I'd get a 'block' within 20 seconds of another which is never submitted to the network.
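One way to tell whether such a 'block' ever made it out is to ask a bitcoind that is not one of the pool's own nodes whether it has ever seen the hash, via the standard getblock RPC (credentials and port are placeholders; the hash is the one quoted elsewhere in this thread):

    # Ask an independent node about the suspect hash; a "Block not found" error
    # means that node never received it.
    curl -s --user rpcuser:rpcpassword \
      --data-binary '{"jsonrpc": "1.0", "id": "chk", "method": "getblock", "params": ["00000000000000078b9fc485c97a58595752ef178dedb8ada97eb634212e21f4"]}' \
      -H 'content-type: text/plain;' http://127.0.0.1:8332/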
legendary
Activity: 1379
Merit: 1003
nec sine labore
Three blocks, but another "strange" orphan, strange in that it is not found on blockchain.info:


http://blockchain.info/search?search=00000000000000078b9fc485c97a58595752ef178dedb8ada97eb634212e21f4


It has the same timestamp as a found block, just a few seconds later, so was it not even submitted to the Bitcoin network?

Is it a real orphan?

spiccioli
legendary
Activity: 1379
Merit: 1003
nec sine labore

It seems we're talking about 6 TH/s, which would double us (if they ever come) and make HHTT only a little smaller than p2pool.

spiccioli



sr. member
Activity: 257
Merit: 250
It looks like miners have their rigs on rotate.
legendary
Activity: 1379
Merit: 1003
nec sine labore
Oh well,

I've left a call to arms...

If it really happens that a sizeable chunk of BitCentury customers point their miners here... well, dreaming costs nothing, doesn't it?

spiccioli

ps. A small pool like this finds it hard to have users join one after the other, because new users prefer to point their gear at a bigger pool for reduced variance; but if five or ten TH/s (I don't know how many TH/s BitCentury sold) come here all at the same time, it should work, since everyone reduces the variance that the others feel.
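To put rough numbers on that, here is an illustrative calculation assuming a network difficulty of around 1.5e8 (roughly where it was at the time; the exact value is an assumption):

    # Expected blocks per day for a pool of a given hashrate -- illustrative only
    DIFFICULTY = 1.5e8  # assumed network difficulty

    def blocks_per_day(hashrate_ths):
        hashes_per_block = DIFFICULTY * 2**32
        return hashrate_ths * 1e12 * 86400 / hashes_per_block

    print(blocks_per_day(6))   # ~0.8 blocks/day at 6 TH/s
    print(blocks_per_day(12))  # ~1.6 blocks/day at 12 TH/s: half the average wait between blocks

Twice the hashrate halves the average wait for a block, which is exactly the variance reduction a new joiner feels.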

legendary
Activity: 1379
Merit: 1003
nec sine labore
Don't know what block you are looking at.  Don't see that in the last few days of found blocks....

Redacted,

it's the last orphan.

In March we had 6 orphans though, so this is not the first time...

spiccioli