
Topic: [500 GH/s]HHTT -Selected Diff/Stratum/PPLNS/Paid Stales/High Availability/Tor - page 20. (Read 56565 times)

legendary
Activity: 1379
Merit: 1003
nec sine labore
I have reason to believe it is exactly that.  Seems to be working well.

Given the big numbers these things throw up now, I'm thinking I should add a rule to the payout logic that pays out when more than 1 BTC is owed, rather than waiting 24 hours.


I'd say, find a way for users to set their own payout level :)

maybe 1address_difficulty_payoutLevel ?
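A minimal sketch of how such a username-encoded setting could be parsed, combined with the 1 BTC / 24-hour payout rule mentioned above (all field names, separators, and defaults here are assumptions for illustration, not SockThing's actual code):

```python
# Sketch: parse a worker name of the form "<address>_<difficulty>_<payoutLevel>"
# as suggested above. Field order, separators, and defaults are assumptions.

def parse_worker_name(worker):
    """Split a worker name into (address, difficulty, payout_level).

    Missing fields fall back to assumed pool defaults: difficulty 128
    and a 1 BTC payout threshold (values discussed in the thread).
    """
    parts = worker.split("_")
    address = parts[0]
    difficulty = int(parts[1]) if len(parts) > 1 else 128
    payout_level = float(parts[2]) if len(parts) > 2 else 1.0
    return address, difficulty, payout_level

def should_pay(owed_btc, payout_level, hours_since_last_payout):
    # Pay immediately once the balance passes the user's threshold,
    # otherwise stick to the daily payout cycle.
    return owed_btc >= payout_level or hours_since_last_payout >= 24
```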

spiccioli
sr. member
Activity: 392
Merit: 251
There is a 66 GH user now... 66... maybe it is an Avalon unit :)

It is using difficulty 128, though, which is rather low for such a big hashing power.

spiccioli

I have reason to believe it is exactly that.  Seems to be working well.

Given the big numbers these things throw up now, I'm thinking I should add a rule to the payout logic that pays out when more than 1 BTC is owed, rather than waiting 24 hours.
legendary
Activity: 1379
Merit: 1003
nec sine labore
There is a 66 GH user now... 66... maybe it is an Avalon unit :)

It is using difficulty 128, though, which is rather low for such a big hashing power.

spiccioli
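For context on why difficulty 128 is low here: a miner's expected share rate is hashrate / (difficulty × 2^32), so 66 GH/s at difficulty 128 produces a share roughly every 8 seconds.

```python
# Expected share rate for a miner: hashrate / (difficulty * 2**32).
# Figures from the post above: 66 GH/s at difficulty 128.

def shares_per_second(hashrate_hs, difficulty):
    return hashrate_hs / (difficulty * 2**32)

rate = shares_per_second(66e9, 128)   # ~0.12 shares/s
seconds_per_share = 1 / rate          # ~8.3 s between shares
```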
sr. member
Activity: 392
Merit: 251
Pool has been spotty for the last hour.  I was upgrading the database schema again.

Now SockThing (the stratum pool software) saves share data to the AWS Simple Notification Service, which in turn queues it in the Simple Queue Service.  This way share data can build up in SQS in the event of a database outage or slowdown.
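The decoupling this buys can be sketched with an in-memory queue standing in for SNS/SQS (the real system uses the AWS services; this only illustrates the buffering behaviour during a database outage):

```python
# Sketch of the SNS -> SQS decoupling described above, using an
# in-memory queue as a stand-in for the AWS services: shares are
# buffered even while the database is down, then drained later.
import json
import queue

share_queue = queue.Queue()  # stand-in for SNS/SQS

def submit_share(share):
    # Publish step: always succeeds, regardless of database health.
    share_queue.put(json.dumps(share))

def drain_to_database(db, db_up):
    # Consumer step: only drains the queue when the database is up,
    # so shares accumulate safely during an outage.
    drained = 0
    while db_up and not share_queue.empty():
        db.append(json.loads(share_queue.get()))
        drained += 1
    return drained
```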

sr. member
Activity: 392
Merit: 251
So a share is considered "quite stale" if it is from a job at least two blocks behind.  This should make it very rare unless your miner is ignoring the clear=true flag on new jobs.  I very much doubt that is happening.

I have code in place to mark shares from recent jobs as "slightly stale" and still pay on them, but I've never seen that happen, so I'm guessing I have a bug in that code.


Found the bug, I had some logic inverted:

https://github.com/fireduck64/SockThing/commit/acc07a8ef3d407c45afd9570807516aac6a896ed

Stales should now show up as "slightly stale" and be paid correctly.
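The classification and payment rule described in these posts can be sketched as follows (function names are assumptions; the thresholds of one and two blocks behind come from the posts above):

```python
# Sketch of the stale-share classification discussed above.

def classify_share(current_height, job_height):
    behind = current_height - job_height
    if behind >= 2:
        return "quite stale"      # too old, not paid
    if behind == 1:
        return "slightly stale"   # recent job, still paid
    return "current"

def is_paid(category):
    # The linked commit fixed inverted logic around this rule:
    # slightly-stale shares are paid, quite-stale shares are not.
    return category in ("current", "slightly stale")
```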
sr. member
Activity: 392
Merit: 251
So a share is considered "quite stale" if it is from a job at least two blocks behind.  This should make it very rare unless your miner is ignoring the clear=true flag on new jobs.  I very much doubt that is happening.

I have code in place to mark shares from recent jobs as "slightly stale" and still pay on them, but I've never seen that happen, so I'm guessing I have a bug in that code.
full member
Activity: 140
Merit: 100
Aren't stale shares paid anymore on stratum?


The stratum protocol does not support resuming after a lost connection unless both pool and miner support the experimental mining.resume extension, so your 2 shares are lost... :(
Lem
newbie
Activity: 78
Merit: 0
Aren't stale shares paid anymore on stratum?

I have two issues. First:
Code:
2013-02-19 02:37:18	N		quite stale	999	0.00000000	
My cgminer says:
Code:
[2013-02-19 03:36:20] Stratum from pool 0 detected new block
 [2013-02-19 03:36:20] Rejected 0028e704 Diff 1.6K/999 GPU 0 pool 0


Second:
Code:
2013-02-19 03:58:39	N		quite stale	999	0.00000000	
2013-02-19 03:58:35 N quite stale 999 0.00000000
This time the situation is a bit different, with a possible stratum pool problem too. My cgminer says:
Code:
[2013-02-19 04:57:26] Switching to http://rpc.hhtt.1209k.com:3333
 [2013-02-19 04:57:40] JSON stratum auth failed: (null)
 [2013-02-19 04:57:40] Pool 0 http://rpc.hhtt.1209k.com:3333 not responding!
 [2013-02-19 04:57:40] Switching to http://pit.deepbit.net:8332
 [2013-02-19 04:58:51] Accepted c54857d1 Diff 1/1 GPU 1 pool 2
 [2013-02-19 04:58:54] Accepted 1485902d Diff 12/1 GPU 1 pool 2
 [2013-02-19 04:58:54] Accepted 49c168d7 Diff 3/1 GPU 1 pool 2
 [2013-02-19 04:58:57] Accepted 5f504435 Diff 2/1 GPU 0 pool 2
 [2013-02-19 04:59:04] Accepted e71eac09 Diff 1/1 GPU 1 pool 2
 [2013-02-19 04:59:06] Accepted 99bc9666 Diff 1/1 GPU 1 pool 2
 [2013-02-19 04:59:11] Pool 0 http://rpc.hhtt.1209k.com:3333 alive
 [2013-02-19 04:59:11] Switching to http://rpc.hhtt.1209k.com:3333
 [2013-02-19 04:59:27] Lost 2 shares due to stratum disconnect on pool 0

Thanks.
Lem
newbie
Activity: 78
Merit: 0
sr. member
Activity: 438
Merit: 291
All working fine on stratum for me.
Crashed when you switched over but fine since then.
Below are stats for a BFL Single with 128-diff shares since 15th Feb, when it crashed.

Pool: http://rpc.hhtt.1209k.com:8337
Has own long-poll support
 Queued work requests: 9925
 Share submissions: 499
 Accepted shares: 498
 Rejected shares: 1
 Accepted difficulty shares: 63744
 Rejected difficulty shares: 128
 Reject ratio: 0.2%
 Efficiency (accepted / queued): 5%
 Discarded work due to new blocks: 19817
 Stale submissions discarded due to new blocks: 1
 Unable to get work from server occasions: 9
 Submitting work remotely delay occasions: 1

Only issue is all the old stats before 30th Jan have gone. Not that I really care that much!
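As a quick sanity check, the reject ratio and efficiency figures in those stats can be reproduced from the raw counts:

```python
# Reproduce the percentages printed in the cgminer stats above.
accepted_diff = 63744
rejected_diff = 128
reject_ratio = rejected_diff / (accepted_diff + rejected_diff)  # ~0.2%

queued = 9925
accepted = 498
efficiency = accepted / queued                                  # ~5%
```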
sr. member
Activity: 392
Merit: 251
As a cool new feature to give a little bonus to miners:

For any blocks found using the stratum protocol with HHTT, half of the transaction fees will be sent to the miner in the block coinbase transaction.  This bonus goes to the miner who actually found the block.
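A minimal sketch of such a split, in satoshis to keep the arithmetic exact (the output structure and names are assumptions for illustration, not HHTT's actual coinbase-building code):

```python
# Sketch of the fee split described above: half of the block's
# transaction fees go to the finding miner's own coinbase output;
# the rest stays with the pool for normal PPLNS distribution.
# All amounts are in satoshis.

def coinbase_outputs(block_subsidy, total_fees, pool_addr, miner_addr):
    miner_bonus = total_fees // 2
    return {
        pool_addr: block_subsidy + (total_fees - miner_bonus),
        miner_addr: miner_bonus,
    }
```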


donator
Activity: 543
Merit: 500
sr. member
Activity: 392
Merit: 251
Yes, that is why they were rejected.  They should start with zeros, and if they don't have enough zeros they are rejected as something your client should never have submitted.  That said, this shouldn't happen: for it to happen there needs to be some misunderstanding between the client and the server about what data you should be hashing.  I'm not sure what caused that.
Anything I can do to help you debugging? I'm back on getwork for now, but can switch to stratum again if it helps.

Yes, it would be great to have you on stratum.  I'd like to see if it recurs.  If it is what I suspect it might have been, it should be fixed now.
donator
Activity: 543
Merit: 500
Yes, that is why they were rejected.  They should start with zeros, and if they don't have enough zeros they are rejected as something your client should never have submitted.  That said, this shouldn't happen: for it to happen there needs to be some misunderstanding between the client and the server about what data you should be hashing.  I'm not sure what caused that.
Anything I can do to help you debugging? I'm back on getwork for now, but can switch to stratum again if it helps.
Lem
newbie
Activity: 78
Merit: 0
Don't feel too bad.  Your winning hash wouldn't have been valid anywhere but that pool.

Of course.

Quote
If you had been solo, you may not have found a single block yet :).

Almost surely I wouldn't have. Luck is blind, but misfortune sees very well... which is just another way to spell the notorious Murphy's law: if something can go wrong, it will. ;)

legendary
Activity: 1750
Merit: 1007
Damn, block! :)

This is my second block on HHTT, which is my favourite backup pool when hopping. I'm mining with two 7970s, one of which is now on half-service. Usually 1.35 GH/s, now 1.05 GH/s.

In less than 10 months of mining (with a pause from mid-December to late January, during which I've just been hopping and chilling out), I've found a total of five blocks (that I know of, at least). Two with deepbit, two with HHTT, one with slush. It would have been 225 BTC on solo mining. Moreover, and much to my dismay, at current prices that would have been about 4500€, versus the roughly 1400€ I've made so far with pool mining. Damn! :(

Don't feel too bad.  Your winning hash wouldn't have been valid anywhere but that pool.  If you had been solo, you may not have found a single block yet :).
Lem
newbie
Activity: 78
Merit: 0
Damn, block! :)

This is my second block on HHTT, which is my favourite backup pool when hopping. I'm mining with two 7970s, one of which is now on half-service. Usually 1.35 GH/s, now 1.05 GH/s.

In less than 10 months of mining (with a pause from mid-December to late January, during which I've just been hopping and chilling out), I've found a total of five blocks (that I know of, at least). Two with deepbit, two with HHTT, one with slush. It would have been 225 BTC on solo mining. Moreover, and much to my dismay, at current prices that would have been about 4500€, versus the roughly 1400€ I've made so far with pool mining. Damn! :(
sr. member
Activity: 392
Merit: 251
In case anyone was paying attention, HHTT just had a 30 minute outage as a database table change took longer than I thought it would.

I'm going to change the stratum server to save shares via SQS rather than a database so that the pool can stay up for an outage like this.  In that case, it would just delay processing and payments for shares but not block solving and all the shares would eventually be counted correctly.

Sorry for the trouble.
sr. member
Activity: 392
Merit: 251
Is it possible that your client is reconnecting and trying to submit shares for jobs it learned about on a previous connection?
I'm using the latest cgminer. If cgminer does that, then: yes.

But the hashes actually look very strange, don't they? Shouldn't the first 4 bytes always be zero?

Yes, that is why they were rejected.  They should start with zeros, and if they don't have enough zeros they are rejected as something your client should never have submitted.  That said, this shouldn't happen: for it to happen there needs to be some misunderstanding between the client and the server about what data you should be hashing.  I'm not sure what caused that.
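The "enough zeros" check amounts to comparing the share hash, read as a 256-bit number, against the target for its difficulty. A sketch using the standard difficulty-1 pool target:

```python
# Sketch of the check described above: a valid share's hash, taken as
# a 256-bit number, must be at or below the target for its difficulty.
# DIFF1_TARGET is the standard Bitcoin pool difficulty-1 target.

DIFF1_TARGET = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

def meets_difficulty(hash_hex, difficulty):
    # Higher difficulty -> smaller target -> more leading zero bits required.
    return int(hash_hex, 16) <= DIFF1_TARGET // difficulty

# A hash that does not start with enough zeros (like the rejected
# 0028e704... shares in the logs above) fails even at difficulty 1.
```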

donator
Activity: 543
Merit: 500
Is it possible that your client is reconnecting and trying to submit shares for jobs it learned about on a previous connection?
I'm using the latest cgminer. If cgminer does that, then: yes.

But the hashes actually look very strange, don't they? Shouldn't the first 4 bytes always be zero?