This brings up the question: if some getwork() results simply do not have answers, is this because a 2^32 keyspace is not an actual equal portion of the block, or is it because of overlapping 2^32 getworks?
There is no reason why there should be a valid share in every getwork.
OK.
Do we share an overlap of 2^16 (arbitrary figure for the sake of example) in our respective keyspaces?
No overlapping; every getwork is unique. Read more about how getwork() works, especially the extranonce part.
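To illustrate the extranonce point, here is a minimal sketch (not actual pool or getwork code; the helper names and the simplified "header" are made up for illustration) of why two getworks never overlap: the server bumps an extranonce inside the coinbase transaction, which changes the merkle root and therefore the block header, so every getwork gets its own independent 2^32 nonce range.

import hashlib
import struct

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def fake_coinbase(extranonce: int) -> bytes:
    # Hypothetical stand-in for a real coinbase transaction; only the
    # embedded extranonce matters for this illustration.
    return b"coinbase-tx" + struct.pack("<I", extranonce)

def build_header(prev_hash: bytes, extranonce: int, nonce: int) -> bytes:
    # Grossly simplified header: version | prev hash | merkle root | nonce.
    # The merkle root here is just the hash of the coinbase, which is enough
    # to show that a new extranonce yields a new header.
    merkle_root = dsha256(fake_coinbase(extranonce))
    return (struct.pack("<I", 1) + prev_hash + merkle_root +
            struct.pack("<I", nonce))

prev = b"\x00" * 32
work_a = build_header(prev, extranonce=1, nonce=0)
work_b = build_header(prev, extranonce=2, nonce=0)
assert work_a != work_b  # different headers -> disjoint 2^32 search spaces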
That's what I thought. Just wanted to make sure.
Meaning, am I getting invalid or stale shares because there are multiple people working on the exact same portions of the keyspace? If so, isn't that an issue with the getwork patch?
No. It may be because another bitcoin block was introduced in the meantime, between getwork() and submit. A share from the old block cannot be a candidate for the new bitcoin block. Read my last posts in the pool thread. By the way, this is not pool/m0mchil miner related; it is how bitcoin works.
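A rough sketch of that staleness check (hypothetical names, not the pool's actual code): the work the miner is hashing commits to a particular previous-block hash, and if the network finds a new block between getwork() and submit, the share no longer builds on the current chain tip.

def is_stale(work_prev_hash: str, current_tip_hash: str) -> bool:
    """A submitted share is stale if the work it came from no longer
    builds on the pool's current chain tip."""
    return work_prev_hash != current_tip_hash

# Timeline of the scenario described above (placeholder hashes):
tip_at_getwork = "000000...aaa"   # tip when the miner called getwork()
tip_at_submit = "000000...bbb"    # a new block arrived before submit
print(is_stale(tip_at_getwork, tip_at_submit))  # True -> "invalid or stale"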
Not true. Look again.
31/01/2011 06:46:34, Getting new work.. [GW:291]
31/01/2011 06:46:38, 41b15022, accepted at 6.25% of getwork()
31/01/2011 06:46:53, c597d2b5, accepted at 31.25% of getwork()
31/01/2011 06:46:59, 9babdadf, accepted at 50.0% of getwork()
31/01/2011 06:47:07, 41b15022, invalid or stale at 75.0% of getwork()
31/01/2011 06:47:11, 1ba08127, accepted at 87.5% of getwork()
3 were accepted, then the 4th was invalid. So if this was invalid because the block count went up by one (and work from old blocks is now discarded as invalid), how was the 5th answer, from this now *old* work, accepted? Your logic doesn't work here BECAUSE the 5th answer was accepted! And the miner doesn't know that the block count has increased until it makes the next getwork request.
I've also asked m0mchil about the askrate, and it seems his answer to why the client fixes the askrate is basically "fuck it if it doesn't find it quick enough". Although he puts it more eloquently than that.
And he was right. The longer the askrate, the more hashing overhead.
I wouldn't call it "more" hashing overhead, since it's the same number of kHash/s regardless of *which getwork* it's on. My kHash/s doesn't change just because I'm on a different getwork.
He has also stated that, yes, we are ignoring large portions of the keyspace, because we submit the first hash and ignore anything else in the keyspace, whether it's found in the first 10% of the keyspace or the last 10%. He believes this is trivial though, since you are moving on to another keyspace quickly enough.
By skipping some of the nonce space, you don't reduce your probability of finding a valid share/block. There is the same probability of finding a share/block by crunching any nonce.
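A back-of-the-envelope check of that claim (illustrative numbers, not anyone's real hashrate): each hash is an independent trial with the same probability of meeting the share target, so the expected number of shares depends only on how many hashes you compute, not on which nonces or which getworks they come from.

p_share = 1 / 2**32          # difficulty-1 share: roughly 1 in 2^32 hashes
hashes_per_second = 100e6    # assumed 100 Mhash/s miner, for illustration only
seconds = 3600

expected_shares = hashes_per_second * seconds * p_share
print(expected_shares)  # ~83.8 shares/hour, whichever nonces were tried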
I agree that the probability of finding the share/block is the same for every getwork... IF YOU LOOK AT THE ENTIRE GETWORK. Skipping over half the possibilities when just the first hash is found screws with your idea of a perfect 1:1 ratio of getworks to found hashes. You can't ever expect to see (or find) the entire puzzle (the block) when you are choosing to ignore part of that puzzle (the skipped hashes in a getwork).
So, we're not searching the entire keyspace in a provided getwork. What if one of those possible answers you're ignoring is the answer to the block? You just fucked the pool out of a block.
Definitely not. It is only a nice optimization for the pool to continue hashing the current job; it saves some network roundtrips, but it basically does not affect the pool's success rate.
That is true!
I'm seeing the same number of accepted hashes in a given hour with both miners. The only thing this does is increase efficiency by:
1) Not ignoring the rest of the getwork's nonces when a hash is found
2) Not stopping partway through the getwork because the askrate was triggered (see the sketch after this list).
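Here is an illustrative sketch of that difference (not m0mchil's actual kernel loop; a toy 2^16 range and made-up "share nonces" stand in for the real 2^32 space so the example actually runs): the old behaviour abandons the rest of the getwork after the first share, the improved behaviour scans the whole nonce range and submits every share it finds.

TOY_RANGE = 2**16
SHARE_NONCES = {137, 9000, 51234}   # pretend these nonces meet the target

def scan_first_only(share_nonces):
    """Old style: return after the first share, skipping the rest."""
    for nonce in range(TOY_RANGE):
        if nonce in share_nonces:
            return [nonce]
    return []

def scan_full_range(share_nonces):
    """New style: hash every nonce in the getwork, keep all shares."""
    return [nonce for nonce in range(TOY_RANGE) if nonce in share_nonces]

print(scan_first_only(SHARE_NONCES))   # [137]
print(scan_full_range(SHARE_NONCES))   # [137, 9000, 51234]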