Topic: Mining inefficiency due to discarded work - page 2. (Read 12335 times)

sr. member
Activity: 258
Merit: 250
February 01, 2011, 12:16:50 AM
#31
Quote
There are an unlimited number of unique getwork responses (nearly).

No, there is a fixed number. If every getwork request is unique, and each covers a portion of the total block as a 2^32 keyspace, that's a large number (4,294,967,296 nonces per getwork, to be exact), but not "unlimited".

At a speed of 24 billion hashes/s (roughly the pool's current speed), it would take ~45 minutes to iterate through an entire block, assuming the answer was not found until the last share of the last getwork.

In either instance, over the same 45-minute period, each miner working in the pool would contribute roughly the same number of shares with either miner, so your payout per round is not affected. The frequency at which rounds are solved, however, stands to be improved by iterating through each getwork in its entirety and submitting all of the possible answers, instead of ignoring a large percentage of them with a hit-and-run method.
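For scale, a quick back-of-the-envelope sketch in Python. The only hard number assumed here is that one getwork spans exactly 2^32 nonces; the two hash rates are the ones quoted in this thread:

Code:
# Back-of-the-envelope on getwork sizes; assumes one getwork = 2^32 nonces.
KEYSPACE = 2 ** 32

pool_rate = 24e9   # hashes/s, the pool speed quoted above
gpu_rate = 50e6    # hashes/s, the single-GPU figure used later in the thread

print(pool_rate / KEYSPACE)  # ~5.6 getworks' worth of nonces per second, pool-wide
print(KEYSPACE / gpu_rate)   # ~86 seconds for one GPU to exhaust one getwork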
administrator
Activity: 5222
Merit: 13032
February 01, 2011, 12:05:16 AM
#30
Quote
This assumes there is only one answer per getwork. In some instances there are none, whereas in other instances there are multiple (for the sake of explanation we'll say there could be 5).

The boxes are getwork responses. I said:
Quote
a single box may contain zero, one, or more winning tickets

Quote from: geebus
Also, you have a fixed number of boxes, as there are only a fixed number of 2^32 chunks of a whole block, so I'll ignore your "endless number" logic.

There are an unlimited number of unique getwork responses (nearly).
sr. member
Activity: 258
Merit: 250
January 31, 2011, 11:53:59 PM
#29
Quote
My metaphor was flawed due to my attempt at simplicity, so I will expand it:

There are an endless number of boxes. Each contains 100 tickets. The entire endless set of boxes as a whole has 1 winning ticket for every 99 non-winning tickets, though a single box may contain zero, one, or more winning tickets. The chance of drawing a winning ticket from any box is therefore 1 in 100. It does not matter whether you draw continuously from one box or draw only one ticket from each box: the odds are still 1 in 100.

Completing an entire work is like emptying each box in order. Getting new work after finding something is like moving to the next box after finding a winning ticket. You could also move to the next box every few minutes. There is no concept of efficiency, however, as the chance is always 1 in 100.

Likewise, there are an endless number of works. The entire set as a whole has an exact chance per hash. It doesn't matter how many hashes you do per work: the chance is always the pre-set amount.

This assumes there is only one answer per getwork. In some instances there are none, whereas in other instances there are multiple (for the sake of explanation we'll say there could be 5).

Also, you have a fixed number of boxes, as there are only a fixed number of 2^32 chunks of a whole block, so I'll ignore your "endless number" logic.

So, let's say our "fixed number" is 100 boxes, and we'll assume that there are a total of 100 winning tickets (shares), and of those 100 tickets, only 1 is the grand prize (the block). We'll also assume that 50 of the boxes are empty.

You take 1 ticket from each box and you have a total of 50 winning tickets, with each of those winning tickets having a 0.5% chance that it will win the grand prize, and a 50% chance that you never drew the grand prize ticket.

- OR -

You could grab EVERY winning ticket from each one of the boxes, and have a 1% chance that each winning ticket could be the grand prize winner, and a 100% chance that at least one of your tickets is going to be the grand prize winner.

Note: This also doesn't take into consideration that one of the first boxes you drew tickets from could contain the grand prize ticket without it being the first ticket drawn. In those instances, one of the ignored tickets (the second or later ticket drawn from the box) could have been the grand prize winner. Using FairUser's fork of m0mchil's miner, the pool would get the block. Using m0mchil's miner, the pool would not.
administrator
Activity: 5222
Merit: 13032
January 31, 2011, 11:42:53 PM
#28
My metaphor was flawed due to my attempt at simplicity, so I will expand it:

There are an endless number of boxes. Each contains 100 tickets. The entire endless set of boxes as a whole has 1 winning ticket for every 99 non-winning tickets, though a single box may contain zero, one, or more winning tickets. The chance of drawing a winning ticket from any box is therefore 1 in 100. It does not matter whether you draw continuously from one box or draw only one ticket from each box: the odds are still 1 in 100.

Completing an entire work is like emptying each box in order. Getting new work after finding something is like moving to the next box after finding a winning ticket. You could also move to the next box every few minutes. There is no concept of efficiency, however, as the chance is always 1 in 100.

Likewise, there are an endless number of works. The entire set as a whole has an exact chance per hash. It doesn't matter how many hashes you do per work: the chance is always the pre-set amount.
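A quick Monte Carlo sketch of the box metaphor (hypothetical Python, with the box size and odds exactly as given above) shows the two strategies converging to the same 1-in-100 rate:

Code:
import random

# Boxes of 100 tickets; each ticket independently wins with probability 1/100,
# so a box may hold zero, one, or several winners, as described above.
def win_rate(tickets_per_box, total_draws=200_000):
    wins = draws = 0
    while draws < total_draws:
        box = [random.random() < 0.01 for _ in range(100)]
        for ticket in box[:tickets_per_box]:
            wins += ticket
            draws += 1
    return wins / draws

print(win_rate(100))  # empty each box before moving on: ~0.01
print(win_rate(1))    # one ticket per box:              ~0.01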
sr. member
Activity: 1344
Merit: 264
January 31, 2011, 11:17:41 PM
#27
Quote
SHA-256 returns a random number that is impossible to know before actually doing the work. Since the number returned is random, doing the hashes for one work gives you the exact same chance of solving a block as doing the hashes for another work.

It's like having two boxes full of raffle tickets. Each contains 2 winning tickets. If you find a winning ticket in one box, it doesn't help you (or hurt you) to continue drawing from that box. Nor is it more "efficient" in any way.

So you have 4 winning tickets, and these tickets make you eligible to win the grand prize of 50 bitcoins, but only 1 of the 4 is the grand prize ticket.
If I choose to quit looking in box 1 after finding just 1 ticket, and do the same for box 2, I only find half the tickets. The grand prize ticket might have been left behind in the boxes, but I might equally get lucky and win the grand prize.

I don't know about you, but I'd like to find all 4 tickets, not half of them.
sr. member
Activity: 258
Merit: 250
January 31, 2011, 11:14:29 PM
#26
Quote
SHA-256 returns a random number that is impossible to know before actually doing the work. Since the number returned is random, doing the hashes for one work gives you the exact same chance of solving a block as doing the hashes for another work.

It's like having two boxes full of raffle tickets. Each contains 2 winning tickets. If you find a winning ticket in one box, it doesn't help you (or hurt you) to continue drawing from that box. Nor is it more "efficient" in any way.

If it takes you the same amount of time to draw 15 tickets from 1 box as it does to draw 1 ticket each from 15 boxes, you still have the same 15 tickets, but your chances of having a winning ticket from 1 box are higher if you hold 15 tickets from that box.

Let's say there are 100 tickets in a box, and 2 are winners. You have a 2% chance that the ticket you draw from that box will be a winning ticket.

If you draw 15 tickets from that box, you have a 15% chance.

Now, if you have 15 boxes, each with 100 tickets, and 2 winners, and you draw 1 ticket from each, you still only have a 2% chance that it will be the winner.

If it takes you the same amount of time to draw 15 tickets from 1 box as it does to draw 1 ticket each from 15 boxes, would you rather have a 2% chance, or a 15% chance?
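For what it's worth, the two strategies can be simulated directly (hypothetical Python; 100 tickets per box, 2 winners, and the same 15 total draws in both cases, which is the crux of the disagreement). The expected number of winning tickets comes out the same either way:

Code:
import random

BOX = [1] * 2 + [0] * 98   # 100 tickets, 2 winners

def avg_winners_one_box(trials=100_000):
    # draw 15 tickets from a single box, without replacement
    return sum(sum(random.sample(BOX, 15)) for _ in range(trials)) / trials

def avg_winners_15_boxes(trials=100_000):
    # draw 1 ticket from each of 15 independent boxes
    return sum(sum(random.choice(BOX) for _ in range(15))
               for _ in range(trials)) / trials

print(avg_winners_one_box())   # ~0.30 winners per 15 draws
print(avg_winners_15_boxes())  # ~0.30 winners per 15 draws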
administrator
Activity: 5222
Merit: 13032
January 31, 2011, 10:57:14 PM
#25
SHA-256 returns a random number that is impossible to know before actually doing the work. Since the number returned is random, doing the hashes for one work gives you the exact same chance of solving a block as doing the hashes for another work.

It's like having two boxes full of raffle tickets. Each contains 2 winning tickets. If you find a winning ticket in one box, it doesn't help you (or hurt you) to continue drawing from that box. Nor is it more "efficient" in any way.
sr. member
Activity: 1344
Merit: 264
January 31, 2011, 10:38:29 PM
#24

Quote
I have those numbers, but I'm not interested in making a fancy GUI to provide this. I can publish a database dump if you're interested.

I would love to do some stats on a DB dump. PM me the link (or post it). Thank you :)

Quote
Btw I think we're slightly offtopic here.

Only slightly.
legendary
Activity: 1386
Merit: 1097
January 31, 2011, 10:22:08 PM
#23
Quote
Already working on a mod to check my local bitcoind between "work" (32 of them) in the python code for the current block.

Yes, this will work.

Quote
More blocks == more pay for everyone

Irrelevant in this discussion. You are skipping some nonces, but you are crunching other nonces instead. No blocks are lost.

Quote
In your server stats, I want you to list:
1) The number of get requests for the CURRENT round

As the pool hashrate is more or less constant within one round, you can take getwork/s * round time to get this.

Quote
2) The number of submitted hashes (both ACCEPTED and INVALID/STALE listed separately) for the CURRENT round.

I don't calculate it now, because I simplified the code with the last update, but I have those numbers for ~5 million shares. Stale shares were somewhere around 2%.

Quote
If you wanted to increase the accuracy of this, separate the INVALID/STALE hashes based on the reason they were rejected, ie (WRONG BLOCK) or (INVALID/ALREADY SUBMITTED).
Then take (# of getwork this round)/(# of accepted/invalid(already submitted))*100 and publish that number in real time.
That's how you check the efficiency of the pool's ability to search all hashes for each getwork sent out. 
This will show if you really get that 1:1 ratio of getwork/solved hashes.

I have those numbers, but I'm not interested in making a fancy GUI to provide this. I can publish a database dump if you're interested.

I'm not interested because the pool does not earn bitcoins from getwork/share efficiency. Getting to a 1:1 ratio is mostly irrelevant; it's only a game with numbers. I think you still don't understand this :). What about slow CPU miners, which crunch the whole space for 2-3 minutes? Should they crunch the whole nonce range just to have a fancy 1:1 ratio on the pool page?

Of course, efficient network transfers are nice. You can buy a stronger GPU and your getwork/submit efficiency will be higher. But that is not the point. The point is to consistently crunch valid blocks. That's all.
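That point can be put in one line of arithmetic: expected blocks depend only on total hashes performed against the difficulty, not on how those hashes are grouped into getworks. A sketch, assuming a block difficulty of about 22,000 (an illustrative early-2011 figure, not a number from this thread):

Code:
# Every hash is a lottery ticket: at difficulty D, it wins a block with
# probability 1 / (D * 2**32), no matter which getwork it belongs to.
def expected_blocks(total_hashes, difficulty):
    return total_hashes / (difficulty * 2 ** 32)

# One hour of the 24 GH/s pool quoted earlier, at difficulty ~22,000:
print(expected_blocks(24e9 * 3600, 22_000))   # ~0.9 blocks expected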

Btw I think we're slightly offtopic here.
sr. member
Activity: 1344
Merit: 264
January 31, 2011, 10:12:15 PM
#22
Quote
Say I get 50 million hashes a second on my GPU.
2^32 / 50,000,000 = 86 seconds to process an entire keyspace.
If my askrate is set to 5 seconds, I'm only checking 5.82% of each keyspace before moving on and assuming the getwork holds no answers.

You're potentially ignoring 94.18% of answers. Numbers obviously vary based on the speed of the GPU, but for a 5-second askrate to be effective, you would need 859 million hashes/s to process the keyspace of a single getwork, and even then, the way m0mchil's code is written, once it finds the first answer, it moves on to the next getwork anyway. This is flawed.


Exactly. The slower the GPU and the lower the askrate, the worse your efficiency will be, because more possible hashes are being ignored.
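The coverage figures quoted above follow from one ratio (a small Python sketch; only the 2^32 keyspace size is taken as given):

Code:
KEYSPACE = 2 ** 32

def keyspace_coverage(hashrate, askrate_s):
    # fraction of one getwork's nonce range scanned before the askrate fires
    return min(1.0, hashrate * askrate_s / KEYSPACE)

print(keyspace_coverage(50e6, 5))   # ~0.0582 -> the 5.82% quoted above
print(KEYSPACE / 5)                 # ~859 million hashes/s to finish in 5 s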
sr. member
Activity: 1344
Merit: 264
January 31, 2011, 10:10:31 PM
#21
Quote
Quote
I wouldn't call it "more" hashing overhead, since it's the same number of kHash/s regardless of *what getwork* it's on. My kHash/s doesn't change just because I'm on a different getwork.

You can call it whatever, but with a long getwork period, you are hashing shit for a large % of the time :-).

No, I get the same number of accepted shares as I do with the normal miner. :)

Quote
Quote
You can't ever expect to see (or find) the entire puzzle (the block) when you are choosing to ignore any part (the skipped hashes in a getwork) of that puzzle.

Well, getwork is not a puzzle. It is a random walk where you hit a valid share from time to time. Nonces are just numbers. It's irrelevant whether you are hashing 0xaaaa or 0xffff; the probability that you hit a valid share is still the same.

But if I get to 0xcccc, find an answer and stop looking, I *could be missing* more answers.

Quote
Quote
1) Not ignoring nonces of the getwork when a hash is found

Well, this is the only point which makes sense. Diablo already implemented this, and if it isn't in m0mchil's, it would be nice to implement it too. But it's definitely m0mchil's decision, not ours.

That's why I posted in his thread.

Quote
Also, sorry for some impatient responses, but I'm answering these questions for pool users almost every day and it becomes a little boring ;). It isn't anything personal. To be honest, it wasn't long ago that I had very similar questions to the ones you have right now. But thanks to m0mchil, Diablo, and a few other people on IRC, I now know how wrong I was ;).

Maybe I'm totally wrong in thinking that ignored POSSIBLE answers COULD BE *THE* ANSWER for the block....since I've already found 10 blocks for the pool. ;)
If Diablo Miner does look through the entire 2^32 possible answers, then it is being 100% efficient. I'd like to see the same with m0mchil's miner, so I made the changes I wanted to see; it bothered me when I realized it was ignoring possible answers.
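For reference, the change being argued for amounts to this loop shape (a minimal hypothetical sketch, not m0mchil's actual code; fetch_work, block_hash, and submit stand in for the real miner plumbing):

Code:
# Hypothetical scanhash loop: walk the ENTIRE 2^32 nonce range and submit
# every share found, instead of returning after the first hit or an askrate.
def mine_full_range(fetch_work, block_hash, submit, target):
    work = fetch_work()
    for nonce in range(2 ** 32):
        if block_hash(work, nonce) <= target:
            submit(work, nonce)   # submit, then KEEP scanning
    # only after exhausting the keyspace do we ask for fresh work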
sr. member
Activity: 1344
Merit: 264
January 31, 2011, 10:06:29 PM
#20
Quote
Quote
If 4 getworks are requested without any valid answers to submit back, and then the 5th finds one answer, submits it, and moves on without checking the remaining keyspace for more answers, you have 20% efficiency.

Maybe I'm wrong, but you are not paid for higher getwork/submit efficiency; you are paid for finding valid blocks. So you are optimizing the wrong thing ;). Maybe you can get a 100% getwork/submit ratio, but you are crunching old jobs. But it is your choice and your hashing power.


Yes, I will be crunching old jobs for about 30 seconds.  Already working on a mod to check my local bitcoind between "work" (32 of them) in the python code for the current block.  You and I both know what our bitcoinds can do. ;)  This way we can stop within 1 second of a block update and get a new getwork.  So quit thinking in terms of OLD JOBS.  Whether it's old or new, I'm talking about the ability to search all 2^32 hashes.

Your server just happens to be the only server running a public pool; hence, it might feel like I'm picking on it... but I'm not.  All these changes help increase the probability of the pool as a whole finding the block in the getwork, instead of ignoring most of the getwork when just a single answer is found.  Maybe Diablo is doing it differently (I hate Java personally, so I haven't even looked at the code), but m0mchil's is ignoring part of the 2^32 POSSIBLE answers after finding just 1.

More blocks == more pay for everyone

OK Slush, do this. 

In your server stats, I want you to list:
1) The number of get requests for the CURRENT round
2) The number of submitted hashes (both ACCEPTED and INVALID/STALE listed separately) for the CURRENT round.

If you wanted to increase the accuracy of this, separate the INVALID/STALE hashes based on the reason they were rejected, ie (WRONG BLOCK) or (INVALID/ALREADY SUBMITTED).

Then take (# of getwork this round)/(# of accepted/invalid(already submitted))*100 and publish that number in real time.
That's how you check the efficiency of the pool's ability to search all hashes for each getwork sent out. 

This will show if you really get that 1:1 ratio of getwork/solved hashes.

Can you do that?  Publish those numbers on the server stats page? 
Then we all can see what the efficiency of getwork/solved hashes is.

Also, you can't make up for the inefficiency of quitting the search after 1 submitted answer, or after the askrate triggers, by increasing the speed at which you get work.
That only increases the speed of your inefficiency; it doesn't solve the problem of not looking for more answers.
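The stat being requested reduces to one function (a sketch with made-up sample numbers, following the formula given above):

Code:
# Efficiency as defined above:
# (# getworks this round) / (# accepted + # already-submitted) * 100
def getwork_efficiency(getworks, accepted, already_submitted):
    return getworks / (accepted + already_submitted) * 100

print(getwork_efficiency(getworks=1000, accepted=850, already_submitted=50))
# -> ~111, i.e. more getworks handed out than shares returned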

legendary
Activity: 1386
Merit: 1097
January 31, 2011, 09:55:28 PM
#19
Quote
I wouldn't call it "more" hashing overhead, since it's the same number of kHash/s regardless of *what getwork* it's on. My kHash/s doesn't change just because I'm on a different getwork.

You can call it whatever, but with a long getwork period, you are hashing shit for a large % of the time :-).

Quote
You can't ever expect to see (or find) the entire puzzle (the block) when you are choosing to ignore any part (the skipped hashes in a getwork) of that puzzle.

Well, getwork is not a puzzle. It is a random walk where you hit a valid share from time to time. Nonces are just numbers. It's irrelevant whether you are hashing 0xaaaa or 0xffff; the probability that you hit a valid share is still the same.

Quote
1) Not ignoring nonces of the getwork when a hash is found

Well, this is the only point which makes sense. Diablo already implemented this, and if it isn't in m0mchil's, it would be nice to implement it too. But it's definitely m0mchil's decision, not ours.

Also, sorry for some impatient responses, but I'm answering these questions for pool users almost every day and it becomes a little boring ;). It isn't anything personal. To be honest, it wasn't long ago that I had very similar questions to the ones you have right now. But thanks to m0mchil, Diablo, and a few other people on IRC, I now know how wrong I was ;).
sr. member
Activity: 258
Merit: 250
January 31, 2011, 09:48:45 PM
#18
Quote
Quote
If 4 getworks are requested without any valid answers to submit back, and then the 5th finds one answer, submits it, and moves on without checking the remaining keyspace for more answers, you have 20% efficiency.

Maybe I'm wrong, but you are not paid for higher getwork/submit efficiency; you are paid for finding valid blocks. So you are optimizing the wrong thing ;). Maybe you can get a 100% getwork/submit ratio, but you are crunching old jobs. But it is your choice and your hashing power.


It's not crunching old jobs; it's crunching the same job all the way through instead of moving on and potentially missing other valid answers.

It's more effective to crunch through the entire keyspace in (2^32)/MyHashRate seconds than it is to grab new getworks every 5 seconds and potentially not find an answer at all because I'm only checking a very small portion of the keyspace before moving on to the next getwork.

Say I get 50 million hashes a second on my GPU.
2^32 / 50,000,000 = 86 seconds to process an entire keyspace.
If my askrate is set to 5 seconds, I'm only checking 5.82% of each keyspace before moving on and assuming the getwork holds no answers.

You're potentially ignoring 94.18% of answers. Numbers obviously vary based on the speed of the GPU, but for a 5-second askrate to be effective, you would need 859 million hashes/s to process the keyspace of a single getwork, and even then, the way m0mchil's code is written, once it finds the first answer, it moves on to the next getwork anyway. This is flawed.


sr. member
Activity: 1344
Merit: 264
January 31, 2011, 09:46:30 PM
#17
Quote
Quote
This brings up the question: if some getwork()s simply do not have answers, is this because a 2^32 keyspace is not an actual equal portion of the block, or because of overlapping 2^32 getworks?

There is no reason why there should be a valid share in every getwork.

OK.

Quote
Quote
Do we share an overlap of 2^16 (arbitrary figure for the sake of example) in our respective keyspaces?

No overlapping; every getwork is unique. Read more about how getwork() works, especially the extranonce part.

That's what I thought.  Just wanted to make sure.

Quote
Quote
Meaning, am I getting invalid or stale because there are multiple people working on the same exact portions of keyspace? If so, isn't that an issue with the getwork patch?

No. It may be because another bitcoin block was introduced in the meantime, between getwork() and submit. A share from the old block cannot be a candidate for the new bitcoin block. Read my last posts in the pool thread. By the way, this is not pool/m0mchil miner related; it is how bitcoin works.

Not true.  Look again.
31/01/2011 06:46:34, Getting new work.. [GW:291]
31/01/2011 06:46:38, 41b15022, accepted at 6.25% of getwork()
31/01/2011 06:46:53, c597d2b5, accepted at 31.25% of getwork()
31/01/2011 06:46:59, 9babdadf, accepted at 50.0% of getwork()
31/01/2011 06:47:07, 41b15022, invalid or stale at 75.0% of getwork()
31/01/2011 06:47:11, 1ba08127, accepted at 87.5% of getwork()

3 were accepted, then the 4th was invalid.  So if this was invalid because the block count went up by one (and work from old blocks is now discarded as invalid), why was the 5th answer from this now *old* work accepted?  Your logic doesn't work here BECAUSE the 5th answer was accepted!  And the miner doesn't know that the block count has increased until it makes the next getwork request. ;)
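Duplicate submissions of this kind can be spotted mechanically. A small parser sketch in Python, assuming log lines in exactly the format shown above:

Code:
# Flag nonces submitted twice within one getwork, given lines like
# "31/01/2011 06:47:07, 41b15022, invalid or stale at 75.0% of getwork()"
def find_duplicate_nonces(lines):
    seen, dups = set(), []
    for line in lines:
        if "Getting new work" in line:
            seen.clear()              # a fresh getwork resets the check
            continue
        parts = [p.strip() for p in line.split(",")]
        if len(parts) < 3:
            continue                  # skip anything that isn't a share line
        nonce = parts[1]
        if nonce in seen:
            dups.append(nonce)
        seen.add(nonce)
    return dups                       # -> ['41b15022'] for the log above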


Quote
Quote
I've also asked m0mchil about the askrate, and it seems his answer to why the client fixes the askrate is basically a "fuck it if it doesn't find it quick enough", although he speaks more eloquently than that.

And he was right. Longer askrate, more hashing overhead.

I wouldn't call it "more" hashing overhead, since it's the same number of kHash/s regardless of *what getwork* it's on.  My kHash/s doesn't change just because I'm on a different getwork.


Quote
Quote
He has also stated that, yes, we are ignoring large portions of the keyspace because we submit the first hash and ignore anything else in the keyspace, whether it's found in the first 10% of the keyspace or the last 10%. He believes this is trivial, though, since you are moving on to another keyspace quickly enough.

By skipping some nonce space, you don't cut your probability of finding a valid share/block. There is the same probability of finding a share/block when crunching any nonce.

I agree that the probability of finding the share/block is the same for every getwork... IF YOU LOOK AT THE ENTIRE GETWORK.  Skipping over half the possibilities when just the first hash is found screws with your idea of a perfect 1:1 ratio for getwork/found hashes.  You can't ever expect to see (or find) the entire puzzle (the block) when you are choosing to ignore any part of that puzzle (the skipped hashes in a getwork).

Quote
Quote
So, we're not searching the entire keyspace in a provided getwork. What if one of those possible answers you're ignoring is the answer to the block? You just fucked the pool out of a block.

Definitely not. Continuing to hash the current job is only a nice optimization for the pool; it saves some network round trips, but it basically does not affect the pool's success rate.

That is true! 
I'm seeing the same number of accepted hashes in a given hour with both miners.  The only thing this does is increase efficiency by:
1) Not ignoring nonces of the getwork when a hash is found
2) Not stopping partway through the getwork because the askrate was triggered.

legendary
Activity: 1386
Merit: 1097
January 31, 2011, 09:44:52 PM
#16
Quote
This sample shows that 3 answers were accepted, 1 invalid, then 1 more accepted.  Notice the Invalid answer is the same as the 1st accepted answer.
31/01/2011 06:46:34, Getting new work.. [GW:291]
31/01/2011 06:46:38, 41b15022, accepted at 6.25% of getwork()
31/01/2011 06:46:53, c597d2b5, accepted at 31.25% of getwork()
31/01/2011 06:46:59, 9babdadf, accepted at 50.0% of getwork()
31/01/2011 06:47:07, 41b15022, invalid or stale at 75.0% of getwork()
31/01/2011 06:47:11, 1ba08127, accepted at 87.5% of getwork()

Did you notice that it is the same nonce? It is absolutely fine that the second attempt was rejected. As I wrote before, I don't know the reason *why* there was a second attempt. Afaik, nonces should be sorted from 0 to ffffffff, but maybe there is some simple explanation behind the mixing of nonces (multiple solving threads or whatever). Maybe it was only a re-upload after a lost connection. Maybe m0mchil will respond to this; I don't know the miner internals.

Quote
why was the 5th hash accepted?

Because, as I already wrote, a share can be rejected for several reasons. The reason for this rejection is that the miner uploaded the same nonce twice. It is not related to any bug in getwork, to skipping some nonce ranges, or to any other weird stuff you are arguing about here.

Quote
Likewise, can you explain to me exactly how ignoring a potentially large number of hashes that could be the answer to the block doesn't affect the pool solving the block?

Because nothing happens when you skip some nonce range. In fact, you are already skipping zillions of existing nonces. How can you live with that? :-)

Quote
I think FairUser has shown quite plainly that multiple valid answers can be found within the same getwork.

And I said it is a nice, but not necessary, optimization of the miner. It only optimizes network latency, because the miner asks less often, but it does not improve the probability that a share will be found.
sr. member
Activity: 258
Merit: 250
January 31, 2011, 09:32:23 PM
#15
Quote
No. It may be because another bitcoin block was introduced in meantime between getwork() and submit. Then share from old block cannot be candidate for new bitcoin block. Read my last posts in pool thread. By the way, this is not pool/m0mchil miner related, it is how bitcoin works.

If this is the case, and the assumption is also true that we are "statistically unlikely" to find the same hash twice inside of the same getwork, can you explain to me what FairUser is seeing here:

Quote
This sample shows that 3 answers were accepted, 1 invalid, then 1 more accepted.  Notice the Invalid answer is the same as the 1st accepted answer.
31/01/2011 06:46:34, Getting new work.. [GW:291]
31/01/2011 06:46:38, 41b15022, accepted at 6.25% of getwork()
31/01/2011 06:46:53, c597d2b5, accepted at 31.25% of getwork()
31/01/2011 06:46:59, 9babdadf, accepted at 50.0% of getwork()
31/01/2011 06:47:07, 41b15022, invalid or stale at 75.0% of getwork()
31/01/2011 06:47:11, 1ba08127, accepted at 87.5% of getwork()

Off of a SINGLE getwork, he received 3 valid answers, then an invalid, followed by another valid. If this only happens when a new block is introduced between getwork and submission, why was the 5th hash accepted? Wouldn't it have been for the previous block? Shouldn't it have been rejected, since the block hash on the answer didn't match the current block? Or is it possible that it's merely a collision, with SHA finding two identical (or nearly identical, taking difficulty into consideration) answers for the same getwork?

Likewise, can you explain to me exactly how ignoring a potentially large number of hashes that could be the answer to the block doesn't affect the pool solving the block?

Your statistical analysis of the block data is accurate only if you assume that each getwork can have just one answer, and that the one answer submitted is the only one that could be the correct answer for the block. I think FairUser has shown quite plainly that multiple valid answers can be found within the same getwork.

I'll be happy to have it explained to me why I'm wrong to assume this though.
legendary
Activity: 1386
Merit: 1097
January 31, 2011, 09:22:52 PM
#14
Quote
If 4 getworks are requested without any valid answers to submit back, and then the 5th finds one answer, submits it, and moves on without checking the remaining keyspace for more answers, you have 20% efficiency.

Maybe I'm wrong, but you are not paid for higher getwork/submit efficiency; you are paid for finding valid blocks. So you are optimizing the wrong thing ;). Maybe you can get a 100% getwork/submit ratio, but you are crunching old jobs. But it is your choice and your hashing power.
sr. member
Activity: 258
Merit: 250
January 31, 2011, 09:13:47 PM
#13
Quote
How did you calculate 20-30%? It isn't correct.

Yes, less frequent getwork updates save some resources, but they will make mining much less effective.

I think he's basing the efficiency on the ratio of the number of submitted results to the number of getworks requested.

If 4 getworks are requested without any valid answers to submit back, and then the 5th finds one answer, submits it, and moves on without checking the remaining keyspace for more answers, you have 20% efficiency.

However, if you request 4 getworks that don't result in answers, and then the 5th results in 5 answers, you have 100% efficiency.

So, over time (I'm sure it was looked at for more than just a few minutes), the averages from m0mchil's code would be ~20-30%, whereas FairUser's fork would be closer to a 1:1 ratio, or 100% efficiency.

FairUser, feel free to correct me if my assumption is wrong here, but that seems the most logical way for me to break down what you're trying to say.
legendary
Activity: 1386
Merit: 1097
January 31, 2011, 09:10:55 PM
#12
Quote
This brings up the question: if some getwork()s simply do not have answers, is this because a 2^32 keyspace is not an actual equal portion of the block, or because of overlapping 2^32 getworks?

There is no reason why there should be a valid share in every getwork.

Quote
Do we share an overlap of 2^16 (arbitrary figure for the sake of example) in our respective keyspaces?

No overlapping; every getwork is unique. Read more about how getwork() works, especially the extranonce part.
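Schematically, the extranonce is why two getworks never overlap even though both span nonces 0..2^32-1: it sits in the coinbase transaction, so changing it changes the merkle root and therefore everything being hashed. A simplified Python illustration (not the actual getwork code; the coinbase construction here is a stand-in):

Code:
import hashlib

def merkle_root(extranonce):
    # stand-in for the real coinbase/merkle construction
    coinbase = b"coinbase-with-extranonce-%d" % extranonce
    return hashlib.sha256(hashlib.sha256(coinbase).digest()).digest()

def header(extranonce, nonce):
    return merkle_root(extranonce) + nonce.to_bytes(4, "little")

# Same nonce, different getwork (extranonce) -> different data under SHA-256:
assert header(1, 0xDEADBEEF) != header(2, 0xDEADBEEF)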

Quote
Meaning, am I getting invalid or stale because there are multiple people working on the same exact portions of keyspace? If so, isn't that an issue with the getwork patch?

No. It may be because another bitcoin block was introduced in the meantime, between getwork() and submit. A share from the old block cannot be a candidate for the new bitcoin block. Read my last posts in the pool thread. By the way, this is not pool/m0mchil miner related; it is how bitcoin works.

Quote
I've also asked m0mchil about the askrate, and it seems his answer to why the client fixes the askrate is basically a "fuck it if it doesn't find it quick enough", although he speaks more eloquently than that.

And he was right. Longer askrate, more hashing overhead.

Quote
He has also stated that, yes, we are ignoring large portions of the keyspace because we submit the first hash and ignore anything else in the keyspace, whether it's found in the first 10% of the keyspace or the last 10%. He believes this is trivial, though, since you are moving on to another keyspace quickly enough.

By skipping some nonce space, you don't cut your probability of finding a valid share/block. There is the same probability of finding a share/block when crunching any nonce.

Quote
So, we're not searching the entire keyspace in a provided getwork. What if one of those possible answers you're ignoring is the answer to the block? You just fucked the pool out of a block.

Definitely not. Continuing to hash the current job is only a nice optimization for the pool; it saves some network round trips, but it basically does not affect the pool's success rate.