Topic: [4+ EH] Slush Pool (slushpool.com); Overt AsicBoost; World First Mining Pool

legendary
Activity: 1441
Merit: 1000
Live and enjoy experiments
I understand the desire to cut off those abusing the system, which is a good thing, but you should make sure it doesn't affect those playing by the rules.
Well, slush is changing the rule since he is the ruler Smiley

However, I do think the new rule is fair, because if a block is found within seconds, those slow workers submitting their shares for the previous block won't have a chance anyway -- with or without a pool.
sr. member
Activity: 1344
Merit: 264
bit.ly/3QXp3oh | Ultimate Launchpad on TON
Today's pool update introduced a small change in share counting. Only submitted shares that are valid for the current Bitcoin block are counted. So if your miner asks for a job and submits a share, it isn't counted if a new Bitcoin block arrived in the meantime and your share can no longer be a candidate for the next Bitcoin block. This does not affect the fairness of the pool when your miners are configured correctly. Please check that your miners do not use a custom getwork timeout; the default getwork period (typically 5 seconds) is the best setting. That way, you should see 'stale' shares with only ~1% probability.

Please note that other miner settings can also affect the time between getwork() and share submission. For example, the "-f 1" parameter in Diablo miner raised the latency between getwork and submit significantly (from <5 s to >10 s on an ATI 5970). I solved this with "-f 5".

Please look at http://nullvoid.org/bitcoin/statistix.php
Look at the 100-block duration.

SEVERAL BLOCKS ARE FOUND IN LESS THAN 60 SECONDS.

You should change the system to accept submitted answers for both the last block and the current block (see the sketch below). That way, anyone who submits old work from the previous block doesn't get an invalid share.

I understand the desire to cut off those abusing the system, which is a good thing, but you should make sure it doesn't affect those playing by the rules.
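A minimal sketch of the dual acceptance being suggested here (hypothetical; the class name and the grace window are invented, and this is not the pool's actual server code):
Code:
import time

GRACE_SECONDS = 60  # made-up grace window for late previous-block shares

class ShareChecker:
    def __init__(self):
        self.current_prev_hash = None  # block the pool is mining on now
        self.last_prev_hash = None     # block it was mining on before
        self.switched_at = 0.0

    def on_new_block(self, new_prev_hash):
        """Called when the network finds a block and work is rotated."""
        self.last_prev_hash = self.current_prev_hash
        self.current_prev_hash = new_prev_hash
        self.switched_at = time.time()

    def accept(self, share_prev_hash):
        if share_prev_hash == self.current_prev_hash:
            return True  # normal share for the current round
        # late share computed against the previous block: still counted
        # if it arrives shortly after the round changed
        return (share_prev_hash == self.last_prev_hash
                and time.time() - self.switched_at < GRACE_SECONDS)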
legendary
Activity: 1386
Merit: 1097
Today's pool update introduced a small change in share counting. Only submitted shares that are valid for the current Bitcoin block are counted. So if your miner asks for a job and submits a share, it isn't counted if a new Bitcoin block arrived in the meantime and your share can no longer be a candidate for the next Bitcoin block. This does not affect the fairness of the pool when your miners are configured correctly. Please check that your miners do not use a custom getwork timeout; the default getwork period (typically 5 seconds) is the best setting. That way, you should see 'stale' shares with only ~1% probability.

Please note that other miner settings can also affect the time between getwork() and share submission. For example, the "-f 1" parameter in Diablo miner raised the latency between getwork and submit significantly (from <5 s to >10 s on an ATI 5970). I solved this with "-f 5".
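As a sanity check on the ~1% figure: assuming Poisson block arrivals at Bitcoin's 600-second network average, work that is at most 5 seconds old goes stale with probability
Code:
import math

BLOCK_INTERVAL = 600.0  # seconds, Bitcoin's average time between blocks
GETWORK_PERIOD = 5.0    # default refresh, so work is at most ~5 s old

# Probability that at least one new network block lands while your
# current work is outstanding:
p_stale = 1.0 - math.exp(-GETWORK_PERIOD / BLOCK_INTERVAL)
print(f"stale share probability <= {p_stale:.2%}")  # ~0.83%, i.e. ~1%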
legendary
Activity: 1386
Merit: 1097
Thanks for the explanation; is this latency an exploitable vulnerability, though? Just wondering.

I don't think so, but I'm not an expert on this. You can ask in #bitcoin-dev, because this is not a pool-related question.
legendary
Activity: 1441
Merit: 1000
Live and enjoy experiments
When two independent miners find a block with the same previous hash at about the same time, only one of them can become a valid Bitcoin block. So the pool announced its new block a second after someone else...
Thanks for the explanation; is this latency an exploitable vulnerability, though? Just wondering.
legendary
Activity: 1386
Merit: 1097
I just restarted the pool server application. All of you who changed worker passwords on the website recently, please check that your workers are working correctly. I see some "Bad password" messages in the server log. This is because I still haven't fixed reloading worker credentials from the database into the running application, so if you changed your password a few days ago, it only got applied NOW O:-).
sr. member
Activity: 322
Merit: 250
Do The Evolution
The pool is now over 30 GHashes/sec. Smiley
legendary
Activity: 1386
Merit: 1097
Can anyone explain what possible scenarios cause an invalid block to be created?

When two independent miners find a block with the same previous hash at about the same time, only one of them can become a valid Bitcoin block. So the pool announced its new block a second after someone else...
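A toy illustration of that race (deliberately simplified: it models only first-seen tie-breaking and ignores the longest-chain rule that finally settles it):
Code:
# Toy model: a node keeps the first block it sees for a given previous
# hash; a second block on the same parent loses the race.
first_seen = {}  # prev_hash -> winning block

def receive_block(prev_hash, block_id):
    if prev_hash in first_seen:
        return f"{block_id}: invalid/orphaned, {first_seen[prev_hash]} was first"
    first_seen[prev_hash] = block_id
    return f"{block_id}: accepted"

print(receive_block("00000000...", "someone else's block"))  # arrives first
print(receive_block("00000000...", "pool's block"))          # a second late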
legendary
Activity: 1441
Merit: 1000
Live and enjoy experiments
How often are blocks invalid? First time I've seen an invalid block.

AFAIK it is the second invalid block in the whole pool history.
Can anyone explain what possible scenarios cause an invalid block to be created?
legendary
Activity: 1386
Merit: 1097
How often are blocks invalid? First time I've seen an invalid block.

AFAIK it is the second invalid block in the whole pool history.
sr. member
Activity: 286
Merit: 250
587   2011-01-29 03:11:38   0:54:20   23619   0.99919554   105101    invalid

How often are blocks invalid? First time I've seen an invalid block.
legendary
Activity: 1386
Merit: 1097
In the case of a block with a transaction fee (e.g. 10492), what happens to the fee? Does it get shared out along with the 50 BTC generated?

As I've stated many times before, the pool currently keeps the fees for itself. AFAIK, in the whole pool history there have been only ~0.05 BTC in fees. Once it becomes a significant amount, I'll start including fees in participant rewards.

Btw, including fees in rewards is difficult right now, because the pool does not see generated blocks through the current JSON API (so I don't know how much in fees belongs to which block). Hacking the Bitcoin client for 0.05 BTC is quite worthless at this time. I hope the next Bitcoin release will fix that, so adding fees to rewards will be much easier.
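Once per-block fees become visible through the API, folding them into rewards is simple arithmetic; a sketch (worker names and amounts invented):
Code:
BLOCK_SUBSIDY = 50.0  # BTC per block at this time

def split_reward(shares_by_worker, fees=0.0):
    """Pay out (subsidy + fees) in proportion to shares submitted this round."""
    total = sum(shares_by_worker.values())
    return {w: (BLOCK_SUBSIDY + fees) * s / total
            for w, s in shares_by_worker.items()}

print(split_reward({"alice": 700, "bob": 300}, fees=0.01))
# {'alice': 35.007, 'bob': 15.003}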
legendary
Activity: 1386
Merit: 1097
Please feel free to do whatever you like with the 0.00763751 BTC remaining in my mining pool account.
Cheers.

Thanks Smiley
newbie
Activity: 4
Merit: 0
Maybe I am just lucky, but I feel like the 80 BTC I've earned is well behind the 200 BTC I would've earned all on my own.

Yes, this is just luck. There is no real reason for such a big difference; you have only a small statistical population to draw conclusions from.

Well, there is a reason: my luck has made me an outlier in the positive direction, and correspondingly there is (collectively) a worse-than-average performance for the rest of the network -- it's just statistics. I find it interesting that being part of such a large pool essentially pegs you to average performance. Joining the pool is like putting your resources into a bank that pays interest, whereas going it alone is more like playing the lotto with your resources. It just feels unfortunate, perceptually, to have had really good luck in the pool... maybe it would be better not to tell me how many blocks I found. Wink
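The bank-vs-lotto comparison can be made concrete with a quick simulation (illustrative only; the 1% hashrate figure is invented, and the large pool's own variance is treated as negligible):
Code:
import random

ROUNDS = 10_000     # blocks found by the whole network
MY_SHARE = 0.01     # hypothetical fraction of total network hashrate
SUBSIDY = 50.0

random.seed(1)
solo = sum(SUBSIDY for _ in range(ROUNDS) if random.random() < MY_SHARE)
pooled = ROUNDS * MY_SHARE * SUBSIDY  # a big pool pays ~the expectation

print(f"expectation (pool-like): {pooled:.0f} BTC, solo this run: {solo:.0f} BTC")
# Re-run with different seeds: the solo figure swings widely, the pooled
# one doesn't -- exactly the "pegged to average" effect described above.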
member
Activity: 112
Merit: 11
In the case of a block with a transaction fee (e.g. 10492), what happens to the fee? Does it get shared out along with the 50 BTC generated?
legendary
Activity: 2940
Merit: 1330
I'm retiring from my career in bitcoin mining.

I've been working flat out for a few weeks now and have finally managed to mine a whole coin.

Thanks slush for making this possible - I think I'd have been waiting a long long time to get my first coin without your help.

Please feel free to do whatever you like with the 0.00763751 BTC remaining in my mining pool account.

Cheers.

Chris.
legendary
Activity: 1386
Merit: 1097
I'm afraid I don't have a hash that was submitted many times, each one successful, but I'll note it if I see it happen again. Each of the exchanges where my client calculated the same hash twice and the server rejected it looked like this:

Great, this is enough. From the server logs:

Share found by xloem.t0, nonce 06b344ff
duplicate nonce 06b344ff
duplicate nonce 06b344ff

That means the miner tried to submit the same proof 3x, but only the first share was accepted (which is OK). So it looks fine from the pool side. The remaining question is why the miner tried to submit one job multiple times.
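A minimal sketch of the server-side check behind those "duplicate nonce" lines (hypothetical, not the pool's real code): remember every (job, nonce) pair and accept only the first submission.
Code:
seen = set()

def submit_share(job_id, nonce):
    key = (job_id, nonce)
    if key in seen:
        print(f"duplicate nonce {nonce:08x}")
        return False  # the miner sees {"result": false}
    seen.add(key)
    print(f"Share found, nonce {nonce:08x}")
    return True

for _ in range(3):                     # the same proof submitted 3x...
    submit_share("job-1", 0x06b344ff)  # ...only the first is accepted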
newbie
Activity: 2
Merit: 0
I'm afraid I don't have a hash that was submitted many times, each one successful, but I'll note it if I see it happen again. Each of the exchanges where my client calculated the same hash twice and the server rejected it looked like this:
Code:
Sending to server: {"method":"getwork","params":["00000001ac490b07cf60a40113378138499a874c5f8041fadf537e150000bf3d00000000470ed189b06be2ae6299181c7dfdcc41b7681a65a7c1c028580fe5eb10ff03e44d41ade61b02fa2906b344ff000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000"],"id":1}
Server sent: {"result": false, "id": "1", "error": null}
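For reference, the 256-hex-character "data" parameter above is the standard getwork payload: an 80-byte block header plus fixed SHA-256 padding, in getwork's wire byte order. A hedged decode (field layout from the getwork convention, not from any pool documentation) shows the very nonce slush's log flags:
Code:
data = (
    "00000001"                                                          # version
    "ac490b07cf60a40113378138499a874c5f8041fadf537e150000bf3d00000000"  # previous block hash
    "470ed189b06be2ae6299181c7dfdcc41b7681a65a7c1c028580fe5eb10ff03e4"  # merkle root
    "4d41ade6"                                                          # ntime
    "1b02fa29"                                                          # nbits (difficulty target)
    "06b344ff"                                                          # nonce
)   # the remaining 48 bytes of the original string are SHA-256 padding
print(data[152:160])  # -> 06b344ff, the duplicate nonce in the server log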
legendary
Activity: 1386
Merit: 1097
I decided not to report this when I noticed it, because I was concerned somebody would find a way to exploit it, but I imagine it's better to get it fixed:

Hi, it's absolutely fine that you're reporting this stuff.

Quote
I used rpcminer-cuda, which reports the hashes it finds.  Within the past few days, one of my miners found the same hash a number of times in a row, and the server accepted it each time.
It seems the server is sometimes assigning duplicate work?

I'm pretty sure the server isn't giving out the same job twice. I don't know rpcminer's internals, but recalculating/resubmitting the same job might happen when some error appears. For example, the miner can resubmit the same hash when the first attempt fails at the network level.

Quote
EDIT: This just happened again, but this time the duplicate hashes (including the first) were rejected by the server (result: false, error: null). Is this an error on my side or on the server's side?

Could you send me the specific hash that was submitted many times? I'll search the server logs...
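To illustrate slush's resubmission hypothesis above: a sketch of how an innocent retry-on-network-error turns into duplicate submissions (illustrative only, not rpcminer's actual code; plain urllib is assumed):
Code:
import json
import urllib.request

def submit(url, data_hex, attempts=3):
    """Submit solved work; retry on network errors."""
    body = json.dumps({"method": "getwork", "params": [data_hex], "id": 1})
    for _ in range(attempts):
        try:
            req = urllib.request.Request(
                url, body.encode(), {"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=10) as resp:
                return json.load(resp)["result"]
        except OSError:
            # The request may have reached the server before the connection
            # dropped -- this retry then shows up as a duplicate share.
            continue
    return False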
newbie
Activity: 2
Merit: 0
Hi,

I decided not to report this when I noticed it, because I was concerned somebody would find a way to exploit it, but I imagine it's better to get it fixed:

I used rpcminer-cuda, which reports the hashes it finds.  Within the past few days, one of my miners found the same hash a number of times in a row, and the server accepted it each time.  It seems the server is sometimes assigning duplicate work?

EDIT: This just happened again, but this time the duplicate hashes (including the first) were rejected by the server (result: false, error: null). Is this an error on my side or on the server's side?