Author

Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 753. (Read 5805728 times)

sr. member
Activity: 266
Merit: 254
ok so presumably a GPU has space for N hashes to be done concurrently in any one iteration.  How many iterations are done before results are returned?  If it's only one, which equates to N hashes, the time is minimal and the loss, if any, is probably below the threshold of reasonable measurement.  If you're saying it does many iterations of these N hashes in one batch, then presumably there is a way to cancel them part way through if new data needs to be worked on?  If there's a way to cancel them there should be a way to interrupt them and get whatever results have been accumulated in that time.  If there's no way to interrupt and get results-so-far then I'd respectfully suggest the code is broken.
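i.e. something like this is what I'm imagining the host side doing (made-up names, purely to frame the question):

Code:
/* Made-up names -- just framing the question.  One launch covers
 * 'count' nonces; results only come back once the launch finishes. */
#include <stdint.h>

struct work;

extern void launch_kernel(struct work *w, uint32_t start, uint32_t count);
extern void wait_for_kernel(void);        /* can this be cut short? */
extern int  collect_results(uint32_t *found, int max);
extern void submit(struct work *w, uint32_t nonce);
extern int  new_data_pending(void);

void host_loop(struct work *w, uint32_t n_per_launch)
{
    uint32_t found[16];

    for (uint32_t nonce = 0; !new_data_pending(); nonce += n_per_launch) {
        launch_kernel(w, nonce, n_per_launch);  /* N hashes in flight */
        wait_for_kernel();                      /* blocks until done */
        int n = collect_results(found, 16);     /* shares, if any */
        for (int i = 0; i < n; i++)
            submit(w, found[i]);
    }
}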
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
However, the actual issue is what happens to any incomplete (but started) work occurring at the time of the LP?
Discarding it is actually discarding work that is valid in all cases except on a Bitcoin new block LP.
Submitting it and then getting a 'stale' response is just as bad.
This is work you have done that would be valid if not for merged mining.
i.e. back to what I said about it earlier on ...

There is no such thing as incomplete but started work.  You don't make progress in mining.

Either you have a valid share or you don't.
If you don't then nothing has been lost.
If you do then submit it.

There is no concept of progress.  Each hash is completely independent and on average takes 1/300,000,000th of a second to complete.
Again as I have ALREADY said above, each hash is NOT independent.
The GPU does a set of hashes and returns the results for that set.
That set can contain one or more shares, and those shares could be deemed invalid/stale by a pool under the circumstances I have described above, yet not invalid/stale by the same pool if it were not merged mining.
You seem to have missed the point of how a GPU miner program actually does work.
donator
Activity: 1218
Merit: 1079
Gerald Davis
However, the actual issue is what happens to any incomplete (but started) work occurring at the time of the LP?
Discarding it is actually discarding work that is valid in all cases except on a Bitcoin new block LP.
Submitting it and then getting a 'stale' response is just as bad.
This is work you have done that would be valid if not for merged mining.
i.e. back to what I said about it earlier on ...

There is no such thing as incomplete but started work.  You don't make progress in mining.

Either you have a valid share or you don't.
If you don't then nothing has been lost.
If you do then submit it.

There is no concept of progress.  Each hash is completely independent and on average takes 1/300,000,000th of a second to complete.
hero member
Activity: 807
Merit: 500
I think responding to LongPolls makes sense given most everything I've read about pools on this thread and others, especially shads' point about the alternative (non-new-block, non-merged-mining) reasons for a new LongPoll.  That said, I am curious about Kano's last point, mixed with something I believe I know about cgminer, and DeathAndTaxes' point that stales don't hurt while good shares help.

I believe cgminer already discards work without being told to once it reaches a certain age, in order to help lower stales, and I believe pools re-request unsolved work after some time period (which is why cgminer discards the aged work).  Assuming GPU mining works the way Kano described, wouldn't it be possible to submit that last round of work from the GPU even while discarding everything else in the queue for that pool?  And assuming the point about stale shares not mattering is valid, wouldn't it make sense to do the same thing with the last solution from work being discarded for age, if that isn't being done already?

I mean, sure, the stale % would go up, but would the number of good shares go down?  And would the GPU cycle and packet transfer matter when the CPU presumably has cycles to spare waiting on the GPU, and the effect of that additional packet transfer on any modern miner's communication network is presumably minuscule?  I had wondered this before, but never seen anything posted to make me think it was worth asking until this discussion.
legendary
Activity: 2576
Merit: 1186
Waiting for work DOES hurt the miner.
I can't think of a scenario where a longpoll should make a miner wait for work.
Are you telling me a pool can universally throw out enough work for every single miner working for it in the microsecond after sending out a longpoll at a rate fast enough to guarantee their GPUs don't go idle? Let's close the discussion before I get more annoyed.
Well, I think pools should continue to accept the old work until they've finished sending out the new, whenever possible... Eligius continues to accept shares against old work so long as they're possible to submit to the Bitcoin chain (i.e., same prevblock)-- but the missing transactions that were in the longpolled work will get delayed by miners who don't update in a timely manner.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
I'm pretty sure the GPU does not report back as soon as it finds a share ...
If it did that then it would have to restart from that point each time it finds a share.

From my understanding of the process, a nonce range is set up and run, and then the GPU CL code returns the list of shares found.
(re: the --worksize parameter)

When that job is complete, the miner program then runs another nonce range, etc., until the full nonce range is complete.
Then it will process the next queued work in the same way.

So while the CPU is waiting for the result of its nonce range request, it can receive an LP.
After this happens, the current nonce range process will return its result.
(of course this will happen after almost every LP, since there will always be a nonce range in the middle of being processed)
This result (if it has shares) would be invalid/stale for a normal Bitcoin LP.
However, with merged mining it will effectively be deemed invalid/stale whenever an NMC LP appears as well, even though it would be valid without the extra requirement of merged mining.
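In rough C, the loop I'm describing looks something like this (a simplified sketch, not cgminer's actual code -- scan_batch(), lp_flag and friends are invented names):

Code:
/* Simplified sketch of a host-side GPU scan loop -- NOT cgminer's
 * actual code.  scan_batch(), lp_flag and struct work are invented
 * names for illustration only. */
#include <stdint.h>
#include <stdbool.h>

struct work;

#define BATCH_SIZE (1u << 22)           /* nonces per kernel launch */

extern volatile bool lp_flag;           /* set by the longpoll thread */
extern int  scan_batch(struct work *w, uint32_t start, uint32_t count,
                       uint32_t *found, int max_found);
extern void submit_share(struct work *w, uint32_t nonce);

void scan_work(struct work *w)
{
    uint32_t nonce = 0;

    do {
        uint32_t found[16];
        /* The kernel runs to completion; results are only visible
         * once the whole batch returns.  There is no mid-batch
         * interrupt that harvests partial results. */
        int n = scan_batch(w, nonce, BATCH_SIZE, found, 16);

        for (int i = 0; i < n; i++)
            submit_share(w, found[i]);  /* may come back stale */

        nonce += BATCH_SIZE;
        /* The earliest point the LP can take effect is here,
         * between batches: the in-flight batch always finishes. */
    } while (nonce != 0 && !lp_flag);   /* nonce wraps to 0 at 2^32 */
}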
sr. member
Activity: 266
Merit: 254
*scratches head*

you're not discarding anything you've done.  If you've found a solution previously with the same work, you've already submitted it.  You have an exactly equal chance of finding a new solution with either the new or the old work, but with the new work there's less chance it's stale.

Simply replacing one work with another is a nanosecond operation.  Nothing is being discarded except some data which has less inherent value than what it's being replaced with.

Quote
However, the actual issue is what happens to any incomplete (but started) work occurring at the time of the LP?

You do understand there's no such thing as 'partially complete' work?  One work is good for about 4 billion hashes, but just because you've done 2 billion doesn't mean you're any closer to finding a share than if you're starting fresh.  Any given piece of work may have 0 solutions or it may have many.  There's not a fixed number that you get when you've finished running through the nonce range.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
Therefore, throwing out work blindly with a longpoll that doesn't have a corresponding detected block change, without adding any more command line options, is the most unobtrusive change that I can think of and I will do so in the next version.

For those who are concerned this will create some impact, think of it this way: you're throwing out one work or the other anyway.  The current implementation is to throw out the new one; the suggested change is to throw out the old one.  Unless you believe for some odd reason that the new one is somehow worse than the old one, it's a no-brainer.  Fresh is best.
Except that if there were no merged mining, the old one would be OK except on a Bitcoin new block LP.

However, the actual issue is what happens to any incomplete (but started) work occurring at the time of the LP?
Discarding it is actually discarding work that is valid in all cases except on a Bitcoin new block LP.
Submitting it and then getting a 'stale' response is just as bad.
This is work you have done that would be valid if not for merged mining.
i.e. back to what I said about it earlier on ...
sr. member
Activity: 266
Merit: 254
Couldn't the workflow be something like this:

1) cgminer detects LP
2) cgminer continues to work on existing block header if it determines block hasn't changed (you already have this) *
3) cgminer issues new getwork()
4) cgminer continues to work on existing block header until pool provides new work. *
5) cgminer begins working on new work.


maybe all other pools do it differently or something, coz I can't see why this is so hard.  When PSJ sends a longpoll it includes work.  I thought all LP implementations did this.  There's no delay before you can start on the new work after a longpoll because you've already been given it.  The only reason you'd need to issue a getwork is to refill your queue.

If you get an LP response and you've detected a new block from another pool, then behave as you normally would and mine the other pool until the current pool catches up.
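In other words the handler can be as simple as this (hypothetical names, just to show the flow; the assumption is that the LP response body is itself a getwork result, as PSJ sends it):

Code:
/* Hypothetical LP handler.  parse_getwork(), block_changed(),
 * discard_queued_work() and replace_current_work() are invented
 * names, not cgminer's or PSJ's actual API. */
#include <stdbool.h>

struct pool;
struct work;

extern struct work *parse_getwork(struct pool *p, const char *json);
extern bool block_changed(const struct work *w);
extern void discard_queued_work(struct pool *p);
extern void replace_current_work(struct pool *p, struct work *w);

void on_longpoll_response(struct pool *p, const char *json)
{
    struct work *w = parse_getwork(p, json);  /* the LP carries work */

    if (!w)
        return;              /* malformed LP: keep mining old work */

    if (block_changed(w))
        discard_queued_work(p);    /* old work is now truly stale */

    /* Either way, start on the fresh work immediately: no extra
     * getwork round trip is needed because the LP delivered it. */
    replace_current_work(p, w);
}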

edit:
p.s.  I see that conman's mood hasn't improved any since last I spoke to him, so I'm just gonna leave this quote here again:

yes it will... I would personally suggest that miners who want this fixed pool together and put up a bounty.  I've looked through cgminer code and I'd guess the work involved in building it is in the same order as poolserverj (i.e. months).  I enjoyed writing poolserverj but one of the least fun things about it is supporting new systems like merged-mining which you don't have any interest in.  I'm fairly sure I wouldn't have bothered supporting MM if it weren't for tempting bounties waved in front of me.
sr. member
Activity: 266
Merit: 254
Therefore, throwing out work blindly with a longpoll that doesn't have a corresponding detected block change, without adding any more command line options, is the most unobtrusive change that I can think of and I will do so in the next version.

For those who are concerned this will create some impact, think of it this way: you're throwing out one work or the other anyway.  The current implementation is to throw out the new one; the suggested change is to throw out the old one.  Unless you believe for some odd reason that the new one is somehow worse than the old one, it's a no-brainer.  Fresh is best.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
The code commit to add merged mining also allows for EXACTLY what you are saying is bad. EXACTLY this.

Negative, Ghost Rider.

Merged mining adds a _single hash_ to the coinbase transaction and that single hash binds an infinite amount of whatever completely external stuff the miner wants.  This is not the same as people shoving unbounded amounts of crap in the blockchain.   Go look at the blockchain— bitcoin miners have been adding various garbage in the same location for a long time now.

Meanwhile any user, not just miners, could and have been adding crap to the blockchain already.  Where was your bleeding heart when people added several megabytes of transactions to the blockchain in order to 'register' a bunch of firstbits 'names'? 

Even if you don't buy all the arguments I gave about how merged mining is seriously beneficial to the long term stability and security of bitcoin, you should at least realize that the mechanism channels a bunch of different bad uses into a least harmful form.

And really— asking for the feature to be removed? That's— kinda nuts.  Anyone running a pool is already carrying a patched bitcoind for various reasons, so it wouldn't stop them.  It's already a problem that pool operators are slow to upgrade bitcoin.  Making them carry patches for this just means more they would have to port forward, and more reason for them to stay on old bitcoin code.


Sigh
Reread what I said.
I'm talking about the change to bitcoind that was to allow for merged mining.
However, it allows the complete block to be generated, not just to specify the coinbase.
It DIDN'T just allow for a change to the coinbase; it is way more expansive than that.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
Wrong. On LP, synchronously flush all work used on that pool. If you maintain a queue of work, it should be emptied and discarded.

Yes, that leads to stalled threads. Better than wasting time and electricity generating stales.

As I pointed out upthread there are many reasons to LP other than a new prev. LP does not mean that the pool _will_ reject work from before the LP. It might... it might not. It probably will _soon_ but that isn't a reason to stall.

Even in the case of a new prev, I don't know what the pools do— but on my solo mining, if I get a solution from my miners I reorganize and try to extend that, even if it comes late.  I'll only switch to mining an externally headed chain when it's at least one longer.

(This provides the highest expected return:  If you _do_ find a solution you'll get two blocks and probably win even though you were late to the game.  If you don't find a solution before the network gets further then it doesn't matter).

Though we're really debating the angels dancing on the head of a pin.  Even if you stall for half a second every single LP, with one LP per block you're only talking about losing 0.08% hashpower / energy at most (0.5 s out of a ~600 s average block time ≈ 0.08%).

See what I wrote above. We need asynchronous LPs to solve the issue correctly.
staff
Activity: 4284
Merit: 8808
Wrong. On LP, synchronously flush all work used on that pool. If you maintain a queue of work, it should be emptied and discarded.

Yes, that leads to stalled threads. Better than wasting time and electricity generating stales.

As I pointed out upthread there are many reasons to LP other than a new prev. LP does not mean that the pool _will_ reject work from before the LP. It might... it might not. It probably will _soon_ but that isn't a reason to stall.

Even in the case of a new prev, I don't know what the pools do— but on my solo mining, if I get a solution from my miners I reorganize and try to extend that, even if it comes late.  I'll only switch to mining an externally headed chain when it's at least one longer.

(This provides the highest expected return:  If you _do_ find a solution you'll get two blocks and probably win even though you were late to the game.  If you don't find a solution before the network gets further then it doesn't matter).

Though we're really debating the angels dancing on the head of a pin.  Even if you stall for half a second every single LP, with one LP per block you're only talking about losing 0.08% hashpower / energy at most (0.5 s out of a ~600 s average block time ≈ 0.08%).
donator
Activity: 1218
Merit: 1079
Gerald Davis
Merged mining shouldn't generate an LP, as LPs are implicitly synchronous.

Can you elaborate?  I am not sure what you are trying to say.  The optimum solution when a pool changes NMC blocks is to change the block the miners are working on as quickly as possible.  That doesn't mean existing work will be rejected, just that new work is potentially more profitable.  The same situation can exist with a high-fee transaction.  Existing work is valid but new work has a higher EV.  LP advises miners to get new work; once they get it they start working on it.  No reason to idle.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
Wrong. On LP, synchronously flush all work used on that pool. If you maintain a queue of work, it should be emptied and discarded.

Yes, that leads to stalled threads. Better than wasting time and electricity generating stales.

That is a bad assumption:
If an LP is issued on a new NMC block, then the BTC block is still valid.  The miner should continue until it has new work.
Potentially, in future, if an LP is issued because the pool has detected a new high-fee transaction, the miner should continue until it has new work.
There may be unnamed reasons why a pool could, now or in future, wish the miner to change to a new block header that don't require flushing the queue.

The goal should be to maximize good shares, not reduce stale shares.

As an example
501 good shares & 1 stale share is worth more than 500 good shares & 0 stale shares.
All that really matters is the number of good shares.

No, you're still approaching it wrong. Merged mining shouldn't generate an LP, as LPs are implicitly synchronous. What we need is an asynchronous LP of some kind. Trying to hackjob it into the miner isn't correct.
donator
Activity: 1218
Merit: 1079
Gerald Davis
Wrong. On LP, synchronously flush all work used on that pool. If you maintain a queue of work, it should be emptied and discarded.

Yes, that leads to stalled threads. Better than wasting time and electricity generating stales.

That is a bad assumption:
If an LP is issued on a new NMC block, then the BTC block is still valid.  The miner should continue until it has new work.
Potentially, in future, if an LP is issued because the pool has detected a new high-fee transaction, the miner should continue until it has new work.
There may be unnamed reasons why a pool could, now or in future, wish the miner to change to a new block header that don't require flushing the queue.

The goal should be to maximize good shares, not reduce stale shares.

As an example
501 good shares & 1 stale share is worth more than 500 good shares & 0 stale shares.
All that really matters is the number of good shares.
donator
Activity: 1218
Merit: 1079
Gerald Davis
Waiting for work DOES hurt the miner.
I can't think of a scenario where a longpoll should make a miner wait for work.
Are you telling me a pool can universally throw out enough work for every single miner working for it in the microsecond after sending out a longpoll at a rate fast enough to guarantee their GPUs don't go idle? Let's close the discussion before I get more annoyed.

Couldn't the workflow be something like this:

1) cgminer detects LP
2) cgminer continues to work on existing block header if it determines block hasn't changed (you already have this) *
3) cgminer issues new getwork()
4) cgminer continues to work on existing block header until pool provides new work. *
5) cgminer begins working on new work.

In steps 2 & 4, if a share is found prior to step 5 then submit it anyway.  It may result in a stale, but stales don't hurt the miner.  They don't reduce the # of good shares.

i.e. say a miner has 500 shares and finds a share in step 2 or 4:
if the share is good, the miner now has 501 shares = 1 share gained.
if the share is bad, the miner still has 500 shares and 1 stale = nothing lost.
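A sketch of that flow in C (invented helper names; the point is just that the GPU never idles and every share found gets submitted):

Code:
/* Sketch of steps 1-5 above, with invented helper names.  Shares
 * found on the old header are submitted anyway: worst case they
 * come back stale, which costs the miner nothing. */
#include <stdint.h>
#include <stdbool.h>

struct pool;

extern void request_new_work(struct pool *p);    /* async getwork */
extern bool new_work_ready(struct pool *p);
extern bool scan_current_work(struct pool *p, uint32_t *nonce);
extern void submit_share_async(struct pool *p, uint32_t nonce);
extern void switch_to_new_work(struct pool *p);

void handle_longpoll(struct pool *p)             /* step 1: LP seen */
{
    request_new_work(p);                         /* step 3 */

    while (!new_work_ready(p)) {                 /* steps 2 & 4 */
        uint32_t nonce;
        if (scan_current_work(p, &nonce))        /* GPU stays busy */
            submit_share_async(p, nonce);        /* stale at worst */
    }

    switch_to_new_work(p);                       /* step 5 */
}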

legendary
Activity: 1162
Merit: 1000
DiabloMiner author
Waiting for work DOES hurt the miner.
I can't think of a scenario where a longpoll should make a miner wait for work.
Are you telling me a pool can universally throw out enough work for every single miner working for it in the microsecond after sending out a longpoll at a rate fast enough to guarantee their GPUs don't go idle? Let's close the discussion before I get more annoyed.

"Dr! Dr! It hurts when I do _this_."
"Then don't do that!"

Don't throw out work until you have replacement work.

Maintain a priority queue, work against 'new' blocks is highest priority.... If there isn't any of that, take what you have.  As has been mentioned, new prev is not the only reason for new work to be issued.

Wrong. On LP, synchronously flush all work used on that pool. If you maintain a queue of work, it should be emptied and discarded.

Yes, that leads to stalled threads. Better than wasting time and electricity generating stales.
staff
Activity: 4284
Merit: 8808
Waiting for work DOES hurt the miner.
I can't think of a scenario where a longpoll should make a miner wait for work.
Are you telling me a pool can universally throw out enough work for every single miner working for it in the microsecond after sending out a longpoll at a rate fast enough to guarantee their GPUs don't go idle? Let's close the discussion before I get more annoyed.

"Dr! Dr! It hurts when I do _this_."
"Then don't do that!"

Don't throw out work until you have replacement work.

Maintain a priority queue, work against 'new' blocks is highest priority.... If there isn't any of that, take what you have.  As has been mentioned, new prev is not the only reason for new work to be issued.
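Schematically, something like this (invented names; only the shape of the queue logic matters):

Code:
/* Shape of the priority-queue idea, with invented names: work on
 * the newest prevblock always wins, but older work is still used
 * rather than letting the GPU idle. */
struct work;
struct workq;                    /* some FIFO of work, details elided */

struct pool {
    struct workq *fresh_queue;   /* built on the newest prev */
    struct workq *old_queue;     /* pre-LP work, possibly stale */
};

extern struct work *dequeue(struct workq *q);    /* NULL if empty */
extern void move_all(struct workq *from, struct workq *to);

struct work *get_next_work(struct pool *p)
{
    struct work *w = dequeue(p->fresh_queue);
    if (!w)
        w = dequeue(p->old_queue);  /* better than stalling */
    return w;                       /* NULL only if both are empty */
}

void on_longpoll(struct pool *p)
{
    /* Don't discard, demote: old work keeps the GPU fed until
     * fresh work arrives, and any share it finds may still pay. */
    move_all(p->fresh_queue, p->old_queue);
}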
staff
Activity: 4284
Merit: 8808
The code commit to add merged mining also allows for EXACTLY this you are saying is bad. EXACTLY this.

Negative, Ghost Rider.

Merged mining adds a _single hash_ to the coinbase transaction and that single hash binds an infinite amount of whatever completely external stuff the miner wants.  This is not the same as people shoving unbounded amounts of crap in the blockchain.   Go look at the blockchain— bitcoin miners have been adding various garbage in the same location for a long time now.

Meanwhile any user, not just miners, could and have been adding crap to the blockchain already.  Where was your bleeding heart when people added several megabytes of transactions to the blockchain in order to 'register' a bunch of firstbits 'names'? 

Even if you don't buy all the arguments I gave about how merged mining is seriously beneficial to the long term stability and security of bitcoin, you should at least realize that the mechanism channels a bunch of different bad uses into a least harmful form.

And really— asking for the feature to be removed? That's— kinda nuts.  Anyone running a pool is already carrying a patched bitcoind for various reasons, so it wouldn't stop them.  It's already a problem that pool operators are slow to upgrade bitcoin.  Making them carry patches for this just means more they would have to port forward, and more reason for them to stay on old bitcoin code.
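For reference, the single-hash commitment is tiny -- as I understand the merged-mining spec (check the Namecoin documentation for the normative layout), it's a 44-byte blob in the coinbase scriptSig:

Code:
/* Layout of the merged-mining commitment in the coinbase scriptSig,
 * to the best of my understanding of the spec -- verify against the
 * Namecoin merged-mining documentation before relying on it. */
#include <stdint.h>

struct mm_commitment {
    uint8_t  magic[4];       /* 0xfa 0xbe 'm' 'm' */
    uint8_t  aux_root[32];   /* one hash binding every aux chain */
    uint32_t merkle_size;    /* leaves in the aux tree (power of 2) */
    uint32_t merkle_nonce;   /* picks each chain's slot in the tree */
};
/* The integers are serialized little-endian in the scriptSig; none
 * of the aux chains' actual data ever enters the Bitcoin chain. */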
