
Topic: [ANN] Stratum mining protocol - ASIC ready

sr. member
Activity: 406
Merit: 250
LTC
Over time, the ratio of getworks with a solution to getworks without one at diff > 1 (diff 8) is 5% lower than the same ratio at diff 1. I have 3 workers: 2 at diff 1 with the same ratio, and one through the proxy which is moved incrementally to diff 8 after connecting; its ratio is 5% lower than at diff 1. I want to use Stratum to save pool resources, and I assume the ratio difference comes from the vardiff algorithm. It may also be caused by network latency, since the diff-1 workers are connected to the DE server (Europe). Still, I think vardiff is the most likely cause, so I would like at least to test the same worker at diff 1 through Stratum.
legendary
Activity: 1386
Merit: 1097
Guys, please, how do I set the difficulty client-side without hacking the proxy? A higher diff set pool-side eats 5% of my hashing power.

If the pool supports it, there should be a setting for the required difficulty on the pool profile. But I don't have such an option (yet) on my profile, and afaik btcguild is the same.

Btw, can you elaborate on the "eats 5% of my hashing power" sentence? How is that possible?

Higher difficulty means the share submission rate is lower, but the weight of every submitted share is proportionally higher. With difficulty 5, you'll submit 5x fewer shares, but you'll be credited with 5 shares for every one submitted. Apart from slightly higher variance in share submission (which depends on the particular pool's implementation), there's no reason to think that "higher difficulty is eating my hashpower".
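To put rough numbers on that (assuming Bitcoin's expected 2^32 hashes per difficulty-1 share; the function name is just for illustration):
Code:
def expected_share_rate(hashrate_hps, difficulty):
    # On average, one difficulty-1 share per 2^32 hashes:
    return hashrate_hps / (2**32 * difficulty)

# At 1 GH/s: ~0.233 shares/s at diff 1 vs ~0.047 shares/s at diff 5.
# Each diff-5 share is credited as 5 shares, so the expected credit per
# second is identical (0.047 * 5 == 0.233); only the variance differs.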
sr. member
Activity: 406
Merit: 250
LTC
Guys, please, how do I set the difficulty client-side without hacking the proxy? A higher diff set pool-side eats 5% of my hashing power.
legendary
Activity: 1386
Merit: 1097
kano, how many pool packages have you implemented? How many of them have thousands of concurrently connected clients?

I'm quite bored with arguing with you, because every time you use some new weird argument. Most of the comments in your last message are irrelevant in some way. I'm really not going to count CPU cycles for you; it is simply out of scope of this discussion.

Edit:
However, if you're really interested, here's pseudocode for both solutions:
Code:
for diff in waiting_notifications: # Iterate only over connections that need a new difficulty
    diff.send()

job = prepare_job()
job.broadcast() # Send the same packet to all connected clients

Code:
for client in all_clients: # Iterate over every connected client, every time
    job = prepare_job(client.difficulty) # Custom serialization per client
    job.send() # Send one packet per client

Do you see the difference?
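For the first approach, a minimal Python sketch (assuming `connections` is simply a list of connected sockets) shows why it scales: the job is serialized once and the identical bytes are written to every subscribed connection:
Code:
import json

def broadcast_job(connections, job_params):
    # Serialize once; every subscribed miner receives the same bytes.
    payload = (json.dumps({"id": None, "method": "mining.notify",
                           "params": job_params}) + "\n").encode()
    for conn in connections:
        conn.sendall(payload)  # plain socket write, no per-client serialization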

http://www.forbes.com/sites/chrisbarth/2011/12/29/want-to-make-good-decisions-avoid-mount-stupid/
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
1. There's a major problem with bundling difficulty with the job definition. Stratum is designed to have the same job message for every subscribed miner. Putting the difficulty, which is connection-specific, directly into this message would break the whole design. This is not some minor advantage; it is like having job broadcasts 10000x faster for 10000 connected miners, just by decoupling the job definition and difficulty, because the server doesn't need to serialize the same message again and again with just one changed number. 10000x faster broadcasts potentially means a 10000x lower stale ratio for miners.
Sigh ...
You already need to process each user separately since you need to know each user before you can send them work and each has a separate id.
You already know for each user the difficulty you want them to process - since you said you send it before anyway.
You must keep independent difficulty per user in the pool.

... so why is adding that difficulty string (in your words - two changed numbers - instead of one) to the end of the string being sent to the miner 10000x slower?
Bullshit.

Quote
2. Previously I described some edge cases where sending just the difficulty alone (without a job) can potentially improve things. Maybe I wasn't absolutely clear, but this is not supposed to happen normally, only in extreme corner cases. When I have to choose between losing a few seconds of work from a flooding miner (like 500+ shares per second) and overall pool stability, I'll choose pool stability. Again, almost all miners gain from this decision, because a quick and reliable pool is in the interest of all participants.
So your pool is unable to generate a new notify quickly, for one person, in less than a fraction of a second? ... that's a worry ...
I was hoping that generating notify data was pretty quick ... since it's almost exactly the same as what pools already do for each user every time they need work ...
Your 'special' case is when a user first connects and has the wrong difficulty - so the pool has to do the same old work it used to do ... once for each user when they first connect ...
Or you could even be smart and remember each worker's difficulty from the last time they connected; then only a minority of users would connect at a vastly different hash rate than last time ...

Quote
3. As I stated many times, it is *expected* that mining.set_difficulty and mining.notify are sent together in the standard case. And because mining.set_difficulty doesn't restart jobs AND the pool sends set_difficulty immediately before notify, losing even a millisecond of miner work is impossible.
Unless you happen to write crappy miner software ... the miner is always mining.
So it will always be mining when you send a "set_difficulty".
Since you are changing the rules while it is mining a valid share, the miner will lose work if that is a standalone "set_difficulty".
So ... either you don't send them standalone ... in which case why have them separate ...
... or you do send them standalone and the miner has an expected % chance of losing work.

Quote
So for standard case, all this discussion is about these two options:

Current Stratum protocol:
Code:
{"id": null, "method": "mining.set_difficulty", "params": [NEW_DIFFICULTY]}
{"id": null, "method": "mining.notify", "params": [JOB_DEFINITION]}

Alternative proposal:
Code:
{"id": null, "method": "mining.notify", "params": [JOB_DEFINITION, NEW_DIFFICULTY]}

As you can see, we're making a mountain out of a molehill, because in the standard case there's no difference, except that the current proposal is more flexible and much faster in the real world.
How is it much faster?
Why is it 10000x faster to send a string than to add 20 bytes to the end of the string and send it?
Do you manually add those 20 bytes each time using a keyboard, or does a 3GHz processor do it?

Your 'flexibility' loses shares for miners.
Yes, it doesn't lose anything for the pool - just the miners ... so I guess that doesn't matter to you.
legendary
Activity: 1386
Merit: 1097
You're right. My biggest mistake is trying to explain everything in depth, from all angles, and to defend every corner case. That's definitely a problem, because the most important message gets lost in my over-complicated explanations. I'll try to summarize it once again:

1. There's a major problem with bundling difficulty with the job definition. Stratum is designed to have the same job message for every subscribed miner. Putting the difficulty, which is connection-specific, directly into this message would break the whole design. This is not some minor advantage; it is like having job broadcasts 10000x faster for 10000 connected miners, just by decoupling the job definition and difficulty, because the server doesn't need to serialize the same message again and again with just one changed number. 10000x faster broadcasts potentially means a 10000x lower stale ratio for miners.

2. Previously I described some edge cases where sending just the difficulty alone (without a job) can potentially improve things. Maybe I wasn't absolutely clear, but this is not supposed to happen normally, only in extreme corner cases. When I have to choose between losing a few seconds of work from a flooding miner (like 500+ shares per second) and overall pool stability, I'll choose pool stability. Again, almost all miners gain from this decision, because a quick and reliable pool is in the interest of all participants.

3. As I stated many times, it is *expected* that mining.set_difficulty and mining.notify are sent together in the standard case. And because mining.set_difficulty doesn't restart jobs AND the pool sends set_difficulty immediately before notify, losing even a millisecond of miner work is impossible.

So for standard case, all this discussion is about these two options:

Current Stratum protocol (Edit: both messages are sent together):
Code:
{"id": null, "method": "mining.set_difficulty", "params": [NEW_DIFFICULTY]}
{"id": null, "method": "mining.notify", "params": [JOB_DEFINITION]}

Alternative proposal:
Code:
{"id": null, "method": "mining.notify", "params": [JOB_DEFINITION, NEW_DIFFICULTY]}

As you can see, we're making a mountain out of a molehill, because in the standard case there's no difference, except that the current proposal is more flexible and much faster in the real world.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Slush, the miners are concerned about the potential losses from moving to stratum as well as the gains. The miners know about the possible gains and are querying the possible losses. While I will do whatever I can within the mining software to minimise these, you need to explain why this compromise exists, as you are ultimately trying to convince miners to move to this protocol.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Yes. That's the problem. Suppose that the pool sends out a difficulty change, and then - at the same time - the miner submits a share, so that the two messages cross on the wire.  What difficulty applies to that share?

The share will be accepted according to the difficulty on the pool side. If a submitted share has a difficulty higher than the "new difficulty just sent by the pool", it will still be accepted, even if it was generated by a miner who knew only the old difficulty. Difficulty is not part of the job source data, so no work is lost on roundtrips, because a share submitted at the previous difficulty can still be accepted!

...
You conveniently ignored half the answer :P
Seriously slush - why would you do that?
(which is why I'd given up responding above)

But since you mentioned half the answer - I'll tell you what you ignored:

If the submitted share was sent to the pool because it was above the difficulty at the time it was generated, but its difficulty is lower than the "new difficulty just sent by the pool", it will be rejected.
Since the difficulty is not an attribute of the work.

Thus the pool throws away that piece of work the miner did ... so the miner can lose work on a difficulty change ... and as I already pointed out - there's an expected % that's easy to calculate.

Edit: there's also the reverse situation:
When the difficulty drops, the miner will NOT send a share with a lower difficulty until it knows about the lower difficulty - which will be some time AFTER the pool has decided to allow the lower difficulty.
Thus again some work is lost here too.
legendary
Activity: 1386
Merit: 1097
Yes. That's the problem. Suppose that the pool sends out a difficulty change, and then - at the same time - the miner submits a share, so that the two messages cross on the wire.  What difficulty applies to that share?

The share will be accepted according to the difficulty on the pool side. If a submitted share has a difficulty higher than the "new difficulty just sent by the pool", it will still be accepted, even if it was generated by a miner who knew only the old difficulty. Difficulty is not part of the job source data, so no work is lost on roundtrips, because a share submitted at the previous difficulty can still be accepted!
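In pool-side pseudocode, the check is roughly this (an illustrative sketch, not my exact implementation): the share passes if its actual difficulty meets the connection's current target, whatever difficulty the miner believed at the time:
Code:
def accept_share(share_difficulty, connection_difficulty):
    # A share generated while the miner still knew the old difficulty passes
    # as long as it meets the connection's current target.
    return share_difficulty >= connection_difficulty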

Quote
(The whole "every miner gets the same message" thing is unfortunate too. From what I can tell it makes it impossible to create a Stratum proxy that distributes work to several different upstream servers.)

"every miner gets the same message" is *major* improvement in that whole thing. Losing this feature will kill the whole concept. Can you please describe your feature request (?) a bit more? How it is supposed to work? If miner want to work for more pools, he can use more stratum connections and use some round robin strategy for generating jobs locally. Maybe I don't see your point.
hero member
Activity: 686
Merit: 564
1. The major architectural reason for not including the share target in the job broadcast message is: the share target is NOT part of the job definition ;-). The target value is not related to the job itself; it is bound to the connection/worker and its hashrate. Also, the difficulty can technically change at any time, not necessarily during a new job broadcast. For example, my retargeting algorithm (not on the production server yet) sends set_difficulty immediately when the pool detects dangerous overflooding by a miner, just to stop the flood without sending him a new job, because jobs are generated pool-wide at some predefined interval.

Yes. That's the problem. Suppose that the pool sends out a difficulty change, and then - at the same time - the miner submits a share, so that the two messages cross on the wire. What difficulty applies to that share? The pool and the miner have completely different views on this. In fact, pretty much the only safe time to change difficulty is when you're sending out a new item of work, because otherwise there's no way for the miner and the pool to prove to each other that a given share was meant to have a particular difficulty associated with it.

(The whole "every miner gets the same message" thing is unfortunate too. From what I can tell it makes it impossible to create a Stratum proxy that distributes work to several different upstream servers.)
legendary
Activity: 1386
Merit: 1097
I just released version 0.9.0, introducing experimental support for socks5 proxies and native support for Tor. If you don't need these features, you don't need to update from 0.8.6.

For mining over Tor, you need a Tor client running locally; then just run "mining_proxy.exe --tor". The proxy configures the socks5 settings, pool URL and port by itself. All these settings can also be set manually; this is just a shortcut for my pool.
legendary
Activity: 1386
Merit: 1097
I think you meant 0.8.6.

I found one bug almost instantly after release, so I removed the 0.8.5 EXE from github and have just released 0.8.6.

If you already downloaded 0.8.5 (four people did) and you don't see any issues after starting your miners, then you don't need to update. The bug was related to the situation where a miner doesn't propagate x-miner-extensions in the HTTP header.
hero member
Activity: 497
Merit: 500
I think you meant 0.8.6.
legendary
Activity: 1386
Merit: 1097
I just released proxy version 0.8.5. It fixes some "unhandled errors" reported in the console and also implements the "midstate" getwork extension, which speeds up getwork when requested by a modern getwork miner.
legendary
Activity: 1386
Merit: 1097
So you are now saying that they should be joined to stop losing work? :)

Yes, I'm saying that it is possible. I'm not hiding the fact that in some cases work can be lost. But it is not inevitable. It depends on how aggressive the pool is and how much share flooding it accepts.

As I described in one of the first posts about this, my pool implementation has two modes: minor difficulty changes are propagated together with the new job definition, so no work is lost for 99.9% of users. But when some really, really strong miner joins at diff 1 (not using mining.suggest_difficulty on startup), the pool will send him a new difficulty immediately, without waiting for the new job definition (which is, as I already stated, generated periodically for all connected miners). In this case a few seconds of work really are lost, but it protects the pool from DoS. And all this happens only in an edge case (a really, really strong miner not kind enough to suggest a higher difficulty himself).
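In pseudocode, the two modes look roughly like this (the thresholds and the connection attributes are made up for illustration):
Code:
FLOOD_RATE = 500      # shares/s treated as dangerous flooding
TARGET_RATE = 0.2     # desired shares/s per connection

def retarget(connection, observed_rate):
    new_diff = max(1, connection.difficulty * observed_rate / TARGET_RATE)
    if observed_rate >= FLOOD_RATE:
        # Edge case: stop the flood now, standalone mining.set_difficulty.
        connection.send_set_difficulty(new_diff)
    else:
        # Normal case: deliver it together with the next mining.notify.
        connection.pending_difficulty = new_diff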

Quote
However, if you can't understand those simple calculations then I guess there's not much point in this discussion.
Either tell me what is 'wrong' with them, or try to understand them.

Come on, I understand these calculations. I just stated that they're irrelevant, not wrong. Please don't become another troll on this forum :-P.

Quote
Quote
Difficulty is not an attribute of the job definition, which is clearly true (what logical relationship is there between job and difficulty?), but set_difficulty can be sent together with the new job definition, just in a separate message.
Well ... you have forced them to be two separate messages.

I "forced" them to be separate messages because of arguments which I already gave you.

Quote
The problem is the protocol means throwing away shares - and your client implementations will be throwing away these shares.

Not necessarily, as I've described many times already. set_difficulty CAN be sent together with mining.notify.

Quote
Just because you ignore that they are throwing them away (or don't see them - more likely) doesn't mean it doesn't happen.

Can you please summarize the reasons for having them together in one message? I'm serious. I've probably lost track of all the reasons during this discussion.

Quote
Miners do not go idle, doing nothing, ever, unless they are programmed badly - every time a difficulty increase appears the miner will be working on something at the old difficulty

Please correct me if I'm wrong, but that worker is not working on the difficulty, it is working on the job. That job would be generated regardless of the current difficulty. A share can then be discarded by "output control" without losing any real work. I don't see any issue with this. The mining "core" doesn't need to know the current difficulty. Generating a share below the current target is not lost work, it is just low-difficulty work.
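A sketch of that "output control" (assuming the header hash is given as a big-endian byte string; DIFF1_TARGET is Bitcoin's standard difficulty-1 target):
Code:
DIFF1_TARGET = 0xFFFF * 2**208  # Bitcoin's difficulty-1 target

def should_submit(header_hash, current_difficulty):
    # The hashing core works on the job without knowing the difficulty;
    # only this final filter consults the (possibly just-updated) value.
    share_difficulty = DIFF1_TARGET / int.from_bytes(header_hash, "big")
    return share_difficulty >= current_difficulty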

Quote
... oh and as I've implied the obvious already: network transfers take a VERY long time when compared to hashing devices or even CPUs ...

I already asked you about this, but where's the relation between latency and difficulty? The worst case is that you'll see a "low difficulty" response from the pool when a miner submits a share at exactly the time the difficulty changes. But what is the *real* problem with this? Btw, this can only happen when the difficulty is changed at a different time than the new job broadcast. If the difficulty is corrected with a clean_jobs=True job, it won't even happen, because that share would be stale anyway (regardless of the difficulty change).
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
kano, your calculations are irrelevant. It depends on the implementation on the pool side, and there are algorithms that can change the difficulty without losing any work (i.e., sending it together with a new job that already has clean_jobs=True).
So you are now saying that they should be joined to stop losing work? :)

However, if you can't understand those simple calculations then I guess there's not much point in this discussion.
Either tell me what is 'wrong' with them, or try to understand them.
Quote
Quote
Since, as you say, difficulty is NOT an attribute of work

Difficulty is not an attribute of the job definition, which is clearly true (what logical relationship is there between job and difficulty?), but set_difficulty can be sent together with the new job definition, just in a separate message.
Well ... you have forced them to be two separate messages.
Quote
Can we please dig into the trouble you're having implementing set_difficulty as a separate call? Is it something with threading? What exactly?
As I have already stated, it's not an implementation difficulty - I'm not even implementing it.
It's a design issue that, in my opinion, drives a wedge between the expected results and a well-designed implementation.
Quote
This high-level discussion is going nowhere, because I'm stating that two separate stratum client implementations don't have any problem with it, and you're stating that you have serious issues with it but don't think it is an implementation-specific issue. The only obvious way to settle this is to dig into the implementation details you're having trouble with. We should be more constructive.
The problem is the protocol means throwing away shares - and your client implementations will be throwing away these shares.

Just because you ignore that they are throwing them away (or don't see them - more likely) doesn't mean it doesn't happen.
It's quite clear why it happens, as I have already stated.

Any work in progress when a difficulty increase appears has a % chance of being discarded due to that increase.
However, if difficulty were an attribute of the work, these shares would not show up as rejected or discarded for being of too low difficulty (if your miner doesn't hide that), or would not disappear (if your miner hides that)

Miners do not go idle, doing nothing, ever, unless they are programmed badly - every time a difficulty increase appears, the miner will be working on something at the old difficulty ... oh, and as I've already implied the obvious: network transfers take a VERY long time compared to hashing devices or even CPUs ...
legendary
Activity: 1386
Merit: 1097
kano, your calculations are irrelevant. It depends on the implementation on the pool side, and there are algorithms that can change the difficulty without losing any work (i.e., sending it together with a new job that already has clean_jobs=True).

Quote
Since, as you say, difficulty is NOT an attribute of work

Difficulty is not an attribute of the job definition, which is clearly true (what logical relationship is there between job and difficulty?), but set_difficulty can be sent together with the new job definition, just in a separate message.

Can we please dig into the trouble you're having implementing set_difficulty as a separate call? Is it something with threading? What exactly?

This high-level discussion is going nowhere, because I'm stating that two separate stratum client implementations don't have any problem with it, and you're stating that you have serious issues with it but don't think it is an implementation-specific issue. The only obvious way to settle this is to dig into the implementation details you're having trouble with. We should be more constructive.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
...
Quote
at the expense of the miner losing shares unnecessarily sometimes.

This may happen only with some pool implementations and only in some edge cases. Losing jobs is definitely not "by design" or expected.
It is expected with every difficulty increase, with a % based on the old and new difficulty values.
Since, as you say, difficulty is NOT an attribute of work, when you start a piece of work the validity of the result can change at any time, until you test the difficulty of the work after completing it.

The reality of any nonce found in a piece of work is that it is ALWAYS valid work at difficulty 1.
Then of course the obvious follows from that: at difficulty 2, on average every 2nd nonce found will be valid work ... and so on for each difficulty level.

However, if the pool increases the difficulty, there is a % chance, related to the change in difficulty, that you will have to throw away work that was valid when you started it.

To work out that %:
A difficulty increase from A to B means a ((1/A) - (1/B)) * 100% chance of losing work, due to the difficulty change, that would have been valid if the difficulty were instead an attribute of the work.

e.g. a difficulty change from 1 to 2 (obviously) means each nonce found in the work currently being processed has a 50% chance of being discarded instead of accepted - even though it was valid when you started on it;
a change from 1.8 to 2.2 means a 10% chance ...
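Or as a few lines of Python:
Code:
def loss_chance(old_diff, new_diff):
    # Chance that a nonce valid at old_diff is discarded at new_diff
    return (1.0 / old_diff - 1.0 / new_diff) * 100

print(loss_chance(1, 2))      # 50.0
print(loss_chance(1.8, 2.2))  # ~10.1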

It's nothing like "some edge cases" - guessing doesn't work too well ...
legendary
Activity: 1386
Merit: 1097
The fact that it isn't part of the work definition in your protocol is what creates the issues.

I can agree that it creates the issue in your implementation. It doesn't create any issue in other miners or in my proxy, which supports the same protocol.

Quote
It's a separate, global per worker, independent piece of information according to your protocol.

Yes, it is information independent of the job definition. As I said previously, there's no need to bundle it with the job definition; it can be used independently.

Quote
Basically you are defining work that you will reject - and that you must reject, since the work returned cannot prove the difficulty that it was processed at - work difficulty is not encoded anywhere in the hash either (you left it out of the hash to gain performance ...)

Basically, you're right. In some edge cases, like connecting a 2 THash miner to the pool at difficulty 1, the miner may waste the first few seconds of work. However, I'm thinking about a "mining.propose_difficulty" command, with which the miner will be able to propose some minimum difficulty for the connection instead of the "default" 1, so with a proper implementation no waste will happen.
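Following the JSON-RPC convention used by the other methods, such a request might look like this (the method name and parameter are only a proposal at this point, not part of the spec):
Code:
{"id": 1, "method": "mining.propose_difficulty", "params": [MINIMUM_DIFFICULTY]}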

Quote
but has connectivity problems, and during that time they were sent a difficulty increase, they will lose work that was valid at the lower difficulty but not at the new difficulty. Late submission of work is not handled by the protocol in this case.

This is TCP, not HTTP. You cannot "lose" part of the data transmitted over the socket. You'll receive retransmissions of TCP packets (so you'll receive the difficulty adjustment), or the socket will be dropped and the "late submission" won't be accepted at all. So there's no argument here.

Quote
A difficulty change does indeed mean throwing away work that was valid prior to receiving the difficulty change ...

Every sane pool implementation will send the standard retargeting command together with a new job (and clean_jobs=True). The miner has to drop previous jobs anyway, so no work is lost here.
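Expanding the JOB_DEFINITION placeholder from the earlier examples, such a retarget-plus-job pair looks roughly like this; clean_jobs is the final notify parameter (field names abbreviated for illustration):
Code:
{"id": null, "method": "mining.set_difficulty", "params": [NEW_DIFFICULTY]}
{"id": null, "method": "mining.notify", "params": [JOB_ID, PREVHASH, COINB1, COINB2, MERKLE_BRANCHES, VERSION, NBITS, NTIME, true]}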

Quote
since the work is missing the difficulty information at both the pool and the miner.

We're repeating ourselves here. You don't need to know the target difficulty when you're creating work in the miner, right?

Quote
The time from starting work, to it being confirmed, by the pool is quite long ... it includes the network delay from the miner to the pool and back ... which when hashing at 54GH/s using an ASIC device, is certainly the slowest part of the whole process, not the mining.

No, I probably don't understand your point (language barrier). How is network latency during share submission related to a difficulty change?

Quote
This also means that even during normal connectivity, work will often be in transit already when a difficulty change is received

In the most common case, yes. But not necessarily; there are use cases where sending it separately has an advantage, as I described above.

Quote
You must already keep information valid per worker: the difficulty ... as well as a bunch more: like who they are and where they are, that must be looked at in order to send the work out.
You simply add the work difficulty to the information sent - rather than send it separately.
Your code MUST already process through levels to get from the job definition to sending it to a worker.

Why MUST? You haven't told me the real reason why it MUST be known at the beginning of the job, except for that threading issue, which is implementation-specific and can be solved easily.

Why per worker? Difficulty is tied to the connection, not the worker. Btw, bundling the difficulty into the job definition doesn't change that either.

As far as I can tell, m0mchil and I have already implemented the same thing (me in the proxy, m0mchil in poclbm) and we don't have such issues at all. All this sounds to me like your complaints are related to the miner implementation.

Quote
... and suggesting that a software 'change' is a reason not to implement something is really bad form :P

From an architectural point of view, keeping them separate is much cleaner. Maybe some things lead to ugly hacks in a particular implementation, but optimizing the *protocol* for one implementation *is* a bad example of software architecture.


Quote
Adding a small amount of information per worker is a negligible hit on the pool software since the pool must already have that information per worker and it is simply added to the message, not a regeneration of the message.

Again, I don't understand your point here. Of course the pool knows the difficulty per connection. But how is this related to your issue with retargeting? I was talking about the fact that creating *jobs* is pool-wide, which is a different story.

Quote
I was looking for reasons and stating why I wanted them - I had heard nothing reasonable up to that point :)

I think I'm giving you a lot of reasonable points. I'm just aware that you are too focused on the implementation in cgminer.

Quote
No it's not trickier in cgminer.
It's a performance hit due to making something global for all workers' work, yet the value can change at any time; it's not an attribute of the work according to the pool, yet in reality it is.

How the hell can a lookup of a single value (read-only!) become a "performance bottleneck"?

Quote
Basic database 101 - 3rd normal form - 2 phase commit - and all that jazz :)

Don't lecture me about database design, please ;-).

I'm fine with all of this. Just do additional difficulty filtering when you're sending shares over the network and I'll be happy :-).

Quote
It's simply that any miner which isn't brain dead and uses threading properly (like any good software has for a VERY long time :)) has a locking issue when dealing with work: the test definition for a share's validity can change unknowingly before the test starts (the pool sends the difficulty change), or the change can arrive during the test, before it completes, so the result is no longer true (which will not be rare when a difficulty change arrives).
It forces a global thread lock on access to the work difficulty information - since it is global information - and you can't put it in the work details since the pool doesn't do that either.

Do you know the term "over-optimization"? This seems to be such a case. You can safely ignore the race condition where the miner receives a new difficulty *exactly* when some thread is checking share validity. Nothing serious happens if you send one, two or ten low-diff shares in that case.

Quote
It forces a global thread lock on access to the work difficulty information

This caught my eye. You don't have write-only locks in C/C++? :-P
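The point in code form - a minimal sketch where only the writer takes a lock and readers tolerate a momentarily stale value (at worst, a few low-diff submissions slip through):
Code:
import threading

class ConnectionState:
    def __init__(self):
        self._lock = threading.Lock()
        self.difficulty = 1          # readers access this without locking

    def set_difficulty(self, new_diff):
        with self._lock:             # only writers serialize
            self.difficulty = new_diff

    def share_ok(self, share_difficulty):
        # A stale read here just means one low-diff share gets submitted.
        return share_difficulty >= self.difficulty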

Quote
It's discussing why the protocol should or shouldn't include the difficulty as part of the work information.

I hope I've explained this clearly already. The job definition and difficulty are logically two separate things (although maybe it doesn't look that way in your implementation). Short summary: a job may change without a difficulty change, and the difficulty may change without a job change. And the job is the same for all miners on the pool, but the difficulty is defined per connection. Is this enough reason not to bundle them together?

Quote
However, I will also add that this part of the protocol definition seems to be directly aimed at helping the pool (but with very little performance gain in reality)

Yeah, this is a valid point! Of course it helps the pool, I'm not hiding it! But I'm strongly against saying "this feature helps the pool" and "this other feature helps miners". Both the pool and the miners have the same goal: the highest real hashrate, the lowest resource consumption and the highest block reward. Period. There's no real reason to pit miners against pool ops. When the protocol gives pool ops tools to manage resource consumption in some nice way, miners benefit too - faster server replies, a lower stale rate and potentially lower fees, because the pool doesn't need to handle ugly peaks in load like it does nowadays.

Quote
at the expense of the miner losing shares unnecessarily sometimes.

This may happen only with some pool implementations and only in some edge cases. Losing jobs is definitely not "by design" or expected.
full member
Activity: 171
Merit: 127
How about making the difficulty retarget an optional parameter of mining.notify?