
Topic: Mining inefficiency due to discarded work (Read 12335 times)

member
Activity: 84
Merit: 11
February 14, 2011, 10:34:59 AM
#51
With an askrate of 5 seconds, your likelihood of finding a share in a getwork within 5 seconds on slower miners is so incredibly low that you're basically nullifying their chances of doing so.

The chance of finding a share is still exactly the same whether you spend 50 seconds on 1 getwork or 5 seconds each on 10 getworks in a row. 
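To see why, treat each hash as an independent lottery ticket with a 1-in-2^32 chance of being a difficulty-1 share. A back-of-the-envelope sketch (my own, not poclbm code; 55M hash/s is the HD4850 figure quoted elsewhere in this thread):

Code:
# Each hash is an independent trial with p = 1/2**32 of being a share.
p = 1.0 / 2**32
rate = 55000000  # hashes per second (HD4850 figure from this thread)

# 50 seconds spent on a single getwork:
p_one_long = 1 - (1 - p) ** (rate * 50)

# 5 seconds each on 10 getworks in a row (nothing carries over):
p_none_per_getwork = (1 - p) ** (rate * 5)
p_ten_short = 1 - p_none_per_getwork ** 10

print(p_one_long, p_ten_short)  # both ~0.47 -- exactly equal

The two probabilities are identical because ((1-p)^(5R))^10 = (1-p)^(50R); only the total number of hashes matters, not how they are split across getworks.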

I don't know about you, but if I were the one running a pool, having my bandwidth and server resources reduced by a significant amount would make me happy.

I do run a pool, and I don't pay for stale shares since they are of no help to me in solving a block.  Slush does not pay for them for the same reason.

If your goal is to win, you should be playing today's lottery, not last week's.

If this still doesn't make sense, please re-read the technical specifications of Bitcoin block generation and summarize them in a post; then I can correct any misunderstandings you may have about the process.
sr. member
Activity: 258
Merit: 250
February 14, 2011, 06:53:08 AM
#50
That is really only a concern if I'm looking to solve "the block" with an answer. I realize that the likelihood of me solving the block is extremely low. My concern lies in finding a "share" in a pool environment similar to slush's pool.

With an askrate of 5 seconds, your likelihood of finding a share in a getwork within 5 seconds on slower miners is so incredibly low that you're basically nullifying their chances of doing so.

Both of my previous examples cited GPUs: one at a fairly decent speed, and one much slower. That didn't even take CPU mining into consideration, where you may only be getting ~5M hash/s. There, it would take 859 seconds to process a single getwork, and 5 seconds would cover only 0.58% of it.

This causes significantly higher traffic on the server due to repetitive, constant getwork requests that yield little to no gain for the pool. Even 10 seconds would be low for a lot of miners.

If you adjusted the askrate based on the speed of the card, and brought anything slower than ~86M hash/s (50 seconds for a full 2^32 getwork) up to a more reasonable interval (around 20-25 seconds, perhaps), you could significantly reduce traffic to the server, still yield roughly the same number of shares submitted over the same period of time, and likely allow slower clients to discover results they would normally have missed or never reached (anything in the other 99.42% of the getwork).
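Concretely, a sketch of what that adjustment could look like (my own illustration, not anything in poclbm; the 40% target coverage and the 5-25 s clamp are numbers I picked from the figures above):

Code:
NONCE_SPACE = 2**32

def suggested_askrate(hashrate, target_coverage=0.4, lo=5.0, hi=25.0):
    """Seconds between getworks so roughly target_coverage of the
    2**32 nonce space gets searched, clamped to [lo, hi]."""
    full_pass = NONCE_SPACE / hashrate  # seconds to iterate every nonce
    return max(lo, min(hi, full_pass * target_coverage))

print(suggested_askrate(350000000))  # fast GPU  -> 5 s (clamped low)
print(suggested_askrate(55000000))   # HD4850    -> 25 s (clamped high)
print(suggested_askrate(5000000))    # CPU miner -> 25 s (clamped high)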

Likewise, working off of the logic that it's all a matter of luck on whether or not those shares may solve the block, you would have the exact same chance of solving the block.

I don't know about you, but if I were the one running a pool, having my bandwidth and server resources reduced by a significant amount would make me happy.
member
Activity: 84
Merit: 11
February 14, 2011, 01:04:06 AM
#49
If I run with an askrate of 5 seconds on an HD4850, which yields an average of 55M hash/s, it will take me roughly 78 seconds to iterate the entire 2^32 nonce range. Therefore I'm looking at 6.41% of the getwork, and the assumption is that I will find an answer, and that it will likely be within that 5-second span of time, each time. Correct? Theoretically that's what is being said, right?

I can see that working for a 6950 getting ~350M hash/s (12-13 seconds for a full 2^32), because I'd still be covering about 40% of that getwork, but for slower cards, or CPU miners, it seems ridiculous to assume you would find an answer in the first 5, 10 or even 20 seconds.

m0mchil's miner, as the code appears on GitHub, forces the askrate to be between 1 and 10 seconds; i.e., if I set my askrate to 20 seconds, the miner will automatically set it back to 10.

It's inefficient to do so. Faster cards can iterate a getwork quickly enough for 5 (or 10) seconds to be worthwhile; slower cards cannot.

Let's assume for a moment that I have two cards, one running at ~55M hash/s and one at 350M hash/s, and that I process 1000 getworks with each at a 5-second askrate.

350M hash/s card:
2^32 / 350,000,000 = 12.3 s
40.7% of each getwork completed in 5 seconds.
Essentially, 407 of 1000 getworks were searched entirely.

55M hash/s card:
2^32 / 55,000,000 = 78.1 s
6.41% of each getwork completed in 5 seconds.
Essentially, 64 of 1000 getworks were searched entirely.

The statistical probability of an answer being found in the first 6.41% of a getwork is so low that you may as well not bother mining with an askrate that low, or switch to a CPU miner.

You are assuming that you are able to iterate through some significant percentage of the entire available search space, which you cannot, even if you mined for the rest of your life on all of the hardware on Earth combined.

There is a set chance of finding a winning block on any single hash that you do, like playing the lottery. It does not matter in what order you choose the tickets out of a near-infinite pile, one out of every X of which happens to be a winner. Your chance of picking a winner is always the same on every draw.

One 32-bit getwork is equivalent to about 3.7 × 10^-66 % of the entire search space (2^32 out of 2^256 hashes).
sr. member
Activity: 258
Merit: 250
February 13, 2011, 08:58:15 AM
#48
If I run with an askrate of 5 seconds on an HD4850, which yields an average of 55M hash/s, it will take me roughly 78 seconds to iterate the entire 2^32 nonce range. Therefore I'm looking at 6.41% of the getwork, and the assumption is that I will find an answer, and that it will likely be within that 5-second span of time, each time. Correct? Theoretically that's what is being said, right?

I can see that working for a 6950 getting ~350M hash/s (12-13 seconds for a full 2^32), because I'd still be covering about 40% of that getwork, but for slower cards, or CPU miners, it seems ridiculous to assume you would find an answer in the first 5, 10 or even 20 seconds.

m0mchil's miner, as the code appears on GitHub, forces the askrate to be between 1 and 10 seconds; i.e., if I set my askrate to 20 seconds, the miner will automatically set it back to 10.

It's inefficient to do so. Faster cards can iterate a getwork quickly enough for 5 (or 10) seconds to be worthwhile; slower cards cannot.

Let's assume for a moment that I have two cards, one running at ~55M hash/s and one at 350M hash/s, and that I process 1000 getworks with each at a 5-second askrate.

350M hash/s card:
2^32 / 350,000,000 = 12.3 s
40.7% of each getwork completed in 5 seconds.
Essentially, 407 of 1000 getworks were searched entirely.

55M hash/s card:
2^32 / 55,000,000 = 78.1 s
6.41% of each getwork completed in 5 seconds.
Essentially, 64 of 1000 getworks were searched entirely.

The statistical probability of an answer being found in the first 6.41% of a getwork is so low that you may as well not bother mining with an askrate that low, or switch to a CPU miner.
member
Activity: 84
Merit: 11
February 09, 2011, 06:15:48 AM
#47
That sounds about right.  You're the first one to explain this in that level of detail.  Thank you for clarifying.  Sure wish someone else had said it like that.
So if there are 1.224e+63 answers, the miner just says "10 seconds have passed so fuck it, move to the next getwork and hope we find something."  Something like that??

Yep, you are losing a negligible percentage of valid answers by skipping the rest of a getwork, but you would invalidate a full 5% of the correct answers you find by waiting 30 seconds between getworks.

In other words, you reduce your chances of solving a block by 1/600 for every second spent working on a potentially stale getwork.

You'd have to skip 4.49 x 10^64 full getworks in order to skip 1/600th of all valid solutions.

If I read this right, you're confirming that it's potentially more efficient to abandon possibly stale getworks at 10 s rather than wait longer to run through the whole getwork request, right?

Correct; you lose 1.67% efficiency with a 10 s getwork interval. You might even want to lower it to 5 s.
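For reference, the 1/600-per-second figure simply reflects blocks arriving on average every 600 seconds. Under that approximation (the same one used throughout this thread; a sketch of my own):

Code:
BLOCK_INTERVAL = 600.0  # average seconds between blocks

# A getwork held for N seconds is invalidated by a new block with
# probability ~N/600 under this thread's approximation.
for askrate in (5, 10, 30):
    print(askrate, "s ->", askrate / BLOCK_INTERVAL * 100, "% stale")
# 5 s -> ~0.83%, 10 s -> ~1.67%, 30 s -> 5.0%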
sr. member
Activity: 302
Merit: 250
February 07, 2011, 04:37:15 PM
#46
That sounds about right.  You're the first one to explain this in that level of detail.  Thank you for clarifying.  Sure wish someone else had said it like that.
So if there are 1.224e+63 answers, the miner just says "10 seconds have passed so fuck it, move to the next getwork and hope we find something."  Something like that??

Yep, you are losing a negligible percentage of valid answers by skipping the rest of a getwork, but you would invalidate a full 5% of the correct answers you find by waiting 30 seconds between getworks.

In other words, you reduce your chances of solving a block by 1/600 for every second spent working on a potentially stale getwork.

You'd have to skip 4.49 x 10^64 full getworks in order to skip 1/600th of all valid solutions.

If I read this right, you're confirming that it's potentially more efficient to abandon possibly stale getworks at 10 s rather than wait longer to run through the whole getwork request, right?
member
Activity: 84
Merit: 11
February 01, 2011, 04:20:25 AM
#45
That sounds about right.  You're the first one to explain this in that level of detail.  Thank you for clarifying.  Sure wish someone else had said it like that.
So if there are 1.224e+63 answers, the miner just says "10 seconds have passed so fuck it, move to the next getwork and hope we find something."  Something like that??

Yep, you are losing a negligible percentage of valid answers by skipping the rest of a getwork, but you would invalidate a full 5% of the correct answers you find by waiting 30 seconds between getworks.

In other words, you reduce your chances of solving a block by 1/600 for every second spent working on a potentially stale getwork.

You'd have to skip 4.49 x 10^64 full getworks in order to skip 1/600th of all valid solutions.
sr. member
Activity: 1344
Merit: 264
February 01, 2011, 03:58:08 AM
#44

And no, the title of this topic should not be 'how is poclbm skipping blocks', because it isn't true.

You misquoted me.
I never said "how is poclbm skipping blocks". 
What I did say was "How Python OpenCL (poclbm) is mining inefficiently".
Can you read/see the difference?  If you're going to quote someone, get it right.

poclbm has found 10 blocks for your pool on my machines.
It's the number of answers per getwork that I've been questioning, NOT BLOCKS.
legendary
Activity: 1386
Merit: 1097
February 01, 2011, 03:51:13 AM
#43
only 1 answer was the correct answer to find the block, and that the getwork:solved ratio was always 1:1.

Man, everybody is telling you that finding a share from a getwork is random. There is no such thing as 'exactly one share hidden in every getwork'. There is 1 share in every getwork on AVERAGE, because difficulty 1 means there is one solution per 2^32 attempts on average. And 2^32 is the size of the nonce space, which miners iterate. Nothing more.

And no, the title of this topic should not be 'how is poclbm skipping blocks', because it isn't true.
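A numerical restatement of the point above (my own sketch): each of the 2^32 nonces in a getwork is a difficulty-1 share with probability 2^-32, so the share count per getwork is approximately Poisson with mean 1 - an average, not a guarantee:

Code:
import math

# Expected difficulty-1 shares in one full getwork: 2**32 nonces,
# each a share with probability 1/2**32.
expected_shares = 2**32 * (1.0 / 2**32)  # exactly 1.0

# Poisson(1) approximation: chance a full getwork contains NO share.
p_zero = math.exp(-1)
print(expected_shares, round(p_zero, 3))  # 1.0, 0.368 -> ~37% have none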
sr. member
Activity: 1344
Merit: 264
February 01, 2011, 03:16:44 AM
#42
Only 1 answer gets the block.  So when you say "You can skip an answer by going on to the next work", are you saying that the likelihood of that skipped getwork/answer being *the answer* is sooooooooo small that it's just not worth it to find it, and we should just move on to the next getwork?

I'm saying that there are 1.224 × 10^63 answers which get the block, so skipping one of them is inconsequential.

The answer which counts is simply the first one of those 1.224 × 10^63 valid answers which is found, but it is by no means the only answer.

We simply stop looking after we find it.  If we had wanted to, we could hash the same block for a month and find about 4,380 valid answers for it in that time.

OK, my mind just fucking flipped.  Nobody but you has said there are 1.224 × 10^63 answers to a block.  I've been under the impression (due to information provided by others) that only 1 answer was the correct answer to find the block, and that the getwork:solved ratio was always 1:1.
We've been talking about multiple solutions to a getwork... but that now makes a bit more sense as to why I'm seeing more than 1 answer in a getwork.

So ANY one of the possible 1.224 × 10^63 answers would net me 50 bitcoins?


SO....

2^256 = 1.1579 × 10^77 total hashes
1.1579 × 10^77 / 1.224 × 10^63 valid answers ≈ 9.46 × 10^13 hashes per valid answer

9.46 × 10^13 / 30,000,000,000 (pool speed) ≈ 3,153 seconds
3,153 seconds / 60 ≈ 52.6 minutes

That sounds about right.  You're the first one to explain this in that level of detail.  Thank you for clarifying.  Sure wish someone else had said it like that.
So if there are 1.224e+63 answers, the miner just says "10 seconds have passed so fuck it, move to the next getwork and hope we find something."  Something like that??
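For reference, the arithmetic above in code form (a sketch of my own; the difficulty and pool speed are the figures quoted in this thread):

Code:
SEARCH_SPACE = 2**256
valid_answers = 2**224 / 22012.4941572  # ~1.224e63 at the quoted difficulty
pool_rate = 3.0e10                      # ~30 GH/s, as quoted above

hashes_per_answer = SEARCH_SPACE / valid_answers  # ~9.46e13
seconds = hashes_per_answer / pool_rate           # ~3150 s
print(seconds / 60)                               # ~52.5 minutes per block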
member
Activity: 84
Merit: 11
February 01, 2011, 03:05:45 AM
#41
Only 1 answer gets the block.  So when you say "You can skip an answer by going on to the next work", are you saying that the likelihood of that skipped getwork/answer being *the answer* is sooooooooo small that it's just not worth it to find it, and we should just move on to the next getwork?

I'm saying that there are 1.224 × 10^63 answers which get the block, so skipping one of them is inconsequential.

The answer which counts is simply the first one of those 1.224 × 10^63 valid answers which is found, but it is by no means the only answer.

We simply stop looking after we find it.  If we had wanted to, we could hash the same block for a month and find about 4,380 valid answers for it in that time.
sr. member
Activity: 1344
Merit: 264
February 01, 2011, 03:00:34 AM
#40
Not every getwork is equal. Not every hash is equal. It's not random. There is a single answer per block. If that answer is never provided because you skipped it, it doesn't fucking matter how many wrong answers you provide to try and make up for it. They will still be wrong.

The entire search-space for a single block is 2^256 hashes.
There are 2^224 answers per block when difficulty is 1.
At the present difficulty of 22012.4941572, there are 2^224 / 22012.4941572 valid answers.
That's 1.224 × 10^63 valid answers.
Out of 1.157 × 10^77 possible hashes.
The pool can process about 30,000,000,000 hashes per second.
That's 3.0 × 10^10 hashes per second.
It would take 3.859 × 10^66 seconds for the pool to search through every possible hash in a single block.
That's 6.432 × 10^64 minutes.
That's 1.072 × 10^63 hours.
That's 4.467 × 10^61 days.
That's 1.223 × 10^59 years.
That's 8.901 × 10^48 times the age of the universe.

You can skip an answer by going on to the next work.
This will leave you with a possible 1.224 × 10^63 - 1 valid answers.
That's 1.224 × 10^39 times more valid answers than there are stars in the universe.

We are searching through a possible 115 quattuorvigintillion, 792 trevigintillion, 89 duovigintillion, 237 unvigintillion, 316 vigintillion, 195 novemdecillion, 423 octodecillion, 570 septendecillion, 985 sexdecillion, 8 quindecillion, 687 quattuordecillion, 907 tredecillion, 853 duodecillion, 269 undecillion, 984 decillion, 665 nonillion, 640 octillion, 564 septillion, 39 sextillion, 457 quintillion, 584 quadrillion, 7 trillion, 913 billion, 129 million, 639 thousand and 936 hashes for every block.
Trying to find, at the current difficulty, one of 1 vigintillion, 224 novemdecillion, 756 octodecillion, 562 septendecillion, 96 sexdecillion, 912 quindecillion, 245 quattuordecillion, 974 tredecillion, 145 duodecillion, 865 undecillion, 520 decillion, 4 nonillion, 272 octillion, 488 septillion, 786 sextillion, 20 quintillion, 128 quadrillion, 241 trillion, 774 billion, 877 million, 474 thousand and 816 valid answers.



Only 1 answer gets the block.  So when you say "You can skip an answer by going on to the next work", are you saying that the likelihood of that skipped getwork/answer being *the answer* is sooooooooo small that it's just not worth it to find it, and we should just move on to the next getwork?

member
Activity: 84
Merit: 11
February 01, 2011, 02:53:50 AM
#39
Not every getwork is equal. Not every hash is equal. It's not random. There is a single answer per block. If that answer is never provided because you skipped it, it doesn't fucking matter how many wrong answers you provide to try and make up for it. They will still be wrong.

The entire search-space for a single block is 2^256 hashes.
There are 2^224 answers per block when difficulty is 1.
At the present difficulty of 22012.4941572, there are 2^224 / 22012.4941572 valid answers.
That's 1.224 × 10^63 valid answers.
Out of 1.157 × 10^77 possible hashes.
The pool can process about 30,000,000,000 hashes per second.
That's 3.0 × 10^10 hashes per second.
It would take 3.859 × 10^66 seconds for the pool to search through every possible hash in a single block.
That's 6.432 × 10^64 minutes.
That's 1.072 × 10^63 hours.
That's 4.467 × 10^61 days.
That's 1.223 × 10^59 years.
That's 8.901 × 10^48 times the age of the universe.

We are searching through a possible 115 quattuorvigintillion, 792 trevigintillion, 89 duovigintillion, 237 unvigintillion, 316 vigintillion, 195 novemdecillion, 423 octodecillion, 570 septendecillion, 985 sexdecillion, 8 quindecillion, 687 quattuordecillion, 907 tredecillion, 853 duodecillion, 269 undecillion, 984 decillion, 665 nonillion, 640 octillion, 564 septillion, 39 sextillion, 457 quintillion, 584 quadrillion, 7 trillion, 913 billion, 129 million, 639 thousand and 936 hashes for every block.

Trying to find, at the current difficulty, one of 1 vigintillion, 224 novemdecillion, 756 octodecillion, 562 septendecillion, 96 sexdecillion, 912 quindecillion, 245 quattuordecillion, 974 tredecillion, 145 duodecillion, 865 undecillion, 520 decillion, 4 nonillion, 272 octillion, 488 septillion, 786 sextillion, 20 quintillion, 128 quadrillion, 241 trillion, 774 billion, 877 million, 474 thousand and 816 valid answers.

On average, you skip one valid answer for every 22,012 full getworks you skip, since only about one getwork in 22,012 contains a block solution.
This will leave you with a possible 1.224 × 10^63 - 1 valid answers.
That's 1.224 × 10^39 times more valid answers than there are stars in the universe.
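A compact check of those figures (my own sketch; the ~1.37 × 10^10-year age of the universe is an assumed constant for the final ratio):

Code:
DIFFICULTY = 22012.4941572
AGE_OF_UNIVERSE = 1.374e10  # years (assumption for the last line)

valid_answers = 2**224 / DIFFICULTY  # ~1.224e63
seconds = 2**256 / 3.0e10            # pool at 30 GH/s: ~3.86e66 s
years = seconds / (3600 * 24 * 365)  # ~1.22e59 years

print(valid_answers, years, years / AGE_OF_UNIVERSE)
# ~1.224e63 valid answers; ~1.22e59 years; ~8.9e48 universe-ages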
sr. member
Activity: 1344
Merit: 264
February 01, 2011, 01:43:22 AM
#38
This is the key point right here:
every box is equal - checking 100 tickets from a new box completely compensates for not checking 100 tickets from the previous box.

Due to the way hashes are essentially random, it doesn't matter if you switch before completing work.

Yes it does.  If you don't complete your work, you *might* be skipping the correct answer/hash/nonce for the block.
If every getwork is only issued once, and the nonce/answer is skipped because your miner quit looking after finding only 1 answer (when more than 1 is possible), then YOU COULD HAVE SKIPPED THE ANSWER FOR THE BLOCK.
sr. member
Activity: 258
Merit: 250
February 01, 2011, 01:40:46 AM
#37
Not every getwork is equal. Not every hash is equal. It's not random. There is a single answer per block. If that answer is never provided because you skipped it, it doesn't fucking matter how many wrong answers you provide to try and make up for it. They will still be wrong.
legendary
Activity: 3878
Merit: 1193
February 01, 2011, 01:32:35 AM
#36
This is the key point right here:
every box is equal - checking 100 tickets from a new box completely compensates for not checking 100 tickets from the previous box.

Due to the way hashes are essentially random, it doesn't matter if you switch before completing work.
sr. member
Activity: 1344
Merit: 264
February 01, 2011, 01:31:01 AM
#35
I split this topic from python OpenCL bitcoin miner because, as m0mchil pointed out, the posts were largely unrelated to poclbm. Is this title OK?

Perhaps "How Python OpenCL (poclbm) is mining inefficiently"
sr. member
Activity: 258
Merit: 250
February 01, 2011, 01:22:59 AM
#34
I split this topic from python OpenCL bitcoin miner because, as m0mchil pointed out, the posts were largely unrelated to poclbm. Is this title OK?

Again, I kindly ask to open another thread for discussions not related to poclbm.

Aside from the few times that direct questions were posed to Slush, all aspects of this conversation have been directly related to, or in connection with, the functionality of poclbm. Likewise, the new topic title is actually about as far off as you can get.

We're not discussing duplicate work. At all. We're discussing how POCLBM is skipping work.

It would be better to name the topic "The inefficiencies of poclbm".

Quote
With bitcoin you have many more 'boxes' than 'tickets'. To be exact, the boxes and the small-prize tickets (pool shares) both number 2^224. Grand-prize tickets are currently ~2^209. Only one in 2^15 boxes contains a grand-prize ticket. Deciding to begin another box is a probabilistic win - see the Monty Hall problem. OK, not a win, but every box is equal - checking 100 tickets from a new box completely compensates for not checking 100 tickets from the previous box.

The Monty Hall problem is not exactly what is going on here, though. You're saying that moving on to the next getwork is the statistically better choice because you'd have a higher chance of solving the block by picking a different answer, but the core premise of that argument doesn't hold here.

You can't really say "I have a higher chance of winning if I change my decision" if you're not looking at all the choices. To use the Monty Hall problem as an example, this would be as if you were presented with 3 doors, you chose the first door, and before the host could open a different door (thus changing your odds in favor of switching), you were presented with a completely new set of doors. Rinse and repeat, over and over.

Yes, you may get "lucky" and pick the correct door the first time, but in this instance, you have the option of opening all three doors, taking whatever prizes they may contain, and then moving on to the next set.

Using that as a basis, I could just as easily compute ((total # of 2^32 keyspaces) / (collective pool hashrate)) / (number of active workers) = N, then hash an entire block myself, processing only every Nth nonce, and be just as effective as the collective pool is, skipping to the next getwork on each found hash.
administrator
Activity: 5222
Merit: 13032
February 01, 2011, 01:14:48 AM
#33
I split this topic from python OpenCL bitcoin miner because, as m0mchil pointed out, the posts were largely unrelated to poclbm. Is this title OK?
full member
Activity: 171
Merit: 127
February 01, 2011, 12:44:29 AM
#32
May I kindly ask to move this discussion to another/separate thread, please?

I had a private discussion with geebus already. I will try to explain one more time here.

Quote
So you have 4 winning tickets, and these tickets would make you eligible to win the grand prize of 50 bitcoins, but only 1 of the 4 tickets is the grand prize winning ticket.
If I choose to quit looking in box 1 after finding just 1 ticket, and do the same for box 2, I only find half the tickets. The grand-prize ticket might have been left behind in the boxes, but I might equally get lucky and win the grand prize.

I don't know about you, but I'd like to find all 4 tickets, not half of them.

With bitcoin you have many more 'boxes' than 'tickets'. To be exact, the boxes and the small-prize tickets (pool shares) both number 2^224. Grand-prize tickets are currently ~2^209. Only one in 2^15 boxes contains a grand-prize ticket. Deciding to begin another box is a probabilistic win - see the Monty Hall problem. OK, not a win, but every box is equal - checking 100 tickets from a new box completely compensates for not checking 100 tickets from the previous box.

@FairUser - poclbm makes some assumptions which are counter-intuitive at first look. Because it pulls jobs, there is an assumption that a job should live at most N seconds, because otherwise you risk solving an already-solved block. The probability of this is roughly N/600, but in practice it is always worse because the network is growing.

Because no single GPU is capable of exhausting the 2^32 nonces in 5 (or even 10) seconds, poclbm does not check for nonce overflow.
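For illustration, here is roughly what such a guard could look like in a Python miner's inner loop (a sketch of mine, not actual poclbm code; get_work and scan_range are hypothetical stand-ins):

Code:
import time

NONCE_MAX = 2**32

def mine(get_work, scan_range, askrate=10):
    """Refresh work when the askrate expires OR the nonce would wrap."""
    while True:
        work = get_work()  # fetch a fresh getwork from the pool
        nonce = 0
        deadline = time.time() + askrate
        while time.time() < deadline and nonce < NONCE_MAX:
            # scan_range hashes a chunk starting at nonce and returns
            # (next_nonce, share_or_None)
            nonce, share = scan_range(work, nonce)
            if share is not None:
                yield share  # caller submits this to the pool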

Again, I kindly ask to open another thread for discussions not related to poclbm.