
Topic: [ANN] Stratum mining protocol - ASIC ready - page 29. (Read 146166 times)

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
September 30, 2012, 09:35:42 PM
However, I will also add that this part of the protocol definition seems to be aimed directly at helping the pool (with very little performance gain in reality) at the expense of the miner sometimes losing shares unnecessarily.
This may well be the most important part of the argument. If it's of no detriment to the pool, only costs the effort required to implement it, and is of benefit to the miner, then I can only see an advantage. No need to set the protocol in stone at this early a stage. I'll implement whatever it is, but as I said earlier, I saw it kano's way.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
September 30, 2012, 07:14:18 PM
I think there is a change needed to the protocol ... interested in your opinion.
Suggestion: you should completely remove "mining.set_difficulty" and put difficulty in "mining.notify"

I was thinking about this quite a lot when I designed calls and parameters and I think I can defend current protocol "as is":

1. The major architectural reason for not including the share target in the job broadcast message is: the share target is NOT part of the job definition ;-). The value of the target is not related to the job itself; it is bound to the connection/worker and its hashrate. Also, the difficulty can technically change at any time, not necessarily during a new job broadcast. For example, my retargeting algorithm (not on the production server yet) sends set_difficulty immediately when the pool detects dangerous flooding by the miner, just to stop the flood without sending him a new job, because jobs are generated pool-wide at some predefined interval.
The fact that it isn't part of the work definition in your protocol is what creates the issues.
It's a separate, global per worker, independent piece of information according to your protocol.

Basically you are defining work that you will reject - and that you must reject, since the work returned cannot prove the difficulty that it was processed at - work difficulty is not encoded anywhere in the hash either (you left it out of the hash to gain performance ...)

This means that if anyone generates a set of shares but has connectivity problems, and during that time they were sent a difficulty increase, they will lose work that was valid at the lower difficulty but not at the new difficulty. Late submission of work is not handled by the protocol in this case.

A difficulty change does indeed mean throwing away work that was valid prior to receiving the difficulty change ... since the work is missing the difficulty information at both the pool and the miner.

The time from starting work to it being confirmed by the pool is quite long ... it includes the network delay from the miner to the pool and back ... which, when hashing at 54GH/s using an ASIC device, is certainly the slowest part of the whole process, not the mining.
This also means that even during normal connectivity, work will often already be in transit when a difficulty change is received.
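For a sense of scale, here is a rough back-of-the-envelope sketch (the 200 ms round trip is an assumed figure, not a measurement) of how many difficulty-1 shares a 54GH/s device finds while one submission is in transit:
Code:
# Assumed numbers, for illustration only: how much work can be "in
# transit" between miner and pool when a set_difficulty arrives.
hashrate = 54e9                    # hashes per second (54 GH/s ASIC)
round_trip = 0.2                   # assumed pool <-> miner round trip, seconds
hashes_per_diff1_share = 2 ** 32

shares_per_second = hashrate / hashes_per_diff1_share   # ~12.6
shares_in_flight = shares_per_second * round_trip       # ~2.5

print(f"diff-1 shares per second: {shares_per_second:.1f}")
print(f"diff-1 shares found during one round trip: {shares_in_flight:.1f}")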

Quote
2. The job definition in the broadcast is the same for everybody. Maybe this is not so obvious, so I repeat it :-) : that message is composed once, but broadcast to everybody. Including a single connection-specific variable would break the design completely, because pools would need to compose the message per-connection, which is a major performance downside.
No - not at all.
You must already keep per-worker information: the difficulty ... as well as a bunch more, like who they are and where they are, which must be looked at in order to send the work out.
You simply add the work difficulty to the information sent - rather than sending it separately.
Your code MUST already step through several levels to get from the job definition to sending it to a worker.
... and suggesting that a software 'change' is a reason not to implement something is really bad form :P
Adding a small amount of information per worker is a negligible hit on the pool software, since the pool must already have that information per worker and it is simply appended to the message, not a regeneration of the message.
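A minimal sketch of what that per-worker addition could look like on the pool side (this illustrates the suggested change, not the current protocol, and the function and field names are made up for illustration): the common job is still serialised once, and only the difficulty value the pool already stores for each worker is appended per connection.
Code:
import json

def notify_with_difficulty(job_params, workers):
    """job_params: job fields shared by every worker (built once).
    workers: mapping of connection -> per-worker state the pool already keeps."""
    shared = json.dumps(job_params)[1:-1]      # serialise the common part once
    for conn, state in workers.items():
        # per-connection cost: one string format and one send,
        # using the difficulty the pool already stores for this worker
        msg = '{"id":null,"method":"mining.notify","params":[%s,%s]}\n' % (
            shared, state["difficulty"])
        conn.send(msg.encode())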

Quote
Under the current protocol design the miner will receive all connection-specific values via other channels (at this time they're "coinbase1" in mining.subscribe and "difficulty" in mining.set_difficulty).

Quote
The only argument I've heard so far for having it separate is that it saves some bytes per "notify"

Who told you so? Not me, correct? :-)
Yep - not you.
I was looking for reasons and stating why I wanted them - I had heard no reasonable ones up to that point :)

Quote
Quote
"set_difficulty" seems to represent a work restart in exactly the same way as a "notify" does.

I understand from your description that you need to know the target difficulty when you're creating the job. Well, this depends on the implementation of the miner; you can start a job without knowing the "target" difficulty. Technically there's no reason to "restart" anything, you can just filter out low-diff shares on the output (some miners are doing it this way). I understand this may be tricky in cgminer, because of your heavy threading architecture and locking issues, but this definitely isn't a reason in itself why the protocol should be changed.
No it's not trickier in cgminer.
It's a performance hit due to making something global across all of a worker's work; the value can change at any time, and it's not an attribute of the work according to the pool, yet in reality it is.

Basic database 101 - 3rd normal form - 2 phase commit - and all that jazz :)

It's simply a case that any miner that isn't brain dead and does use threading properly (like any good software has for a VERY long time :)) to deal with work has a locking issue: when testing the validity of a share, the test definition can change unknowingly before the test starts (the pool sends the difficulty change), or the change can become known during the test but before the test completes (the difficulty change arrives), and thus the result is no longer true (which will also not be rare when a difficulty change arrives).
It forces a global thread lock on access to the work difficulty information - since it is global information - you can't put it in the work details because the pool doesn't do that either.
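A sketch of the locking issue being described (illustrative Python, not cgminer code): with a global per-connection difficulty, every share test reads shared state under a lock, whereas a difficulty carried in the work item could be snapshotted at creation and read lock-free.
Code:
import threading

class StratumState:
    """Difficulty as a global per-connection value (current protocol)."""
    def __init__(self, difficulty):
        self.lock = threading.Lock()
        self.difficulty = difficulty

    def set_difficulty(self, new_diff):
        # runs whenever mining.set_difficulty arrives
        with self.lock:
            self.difficulty = new_diff

    def share_meets_target(self, share_diff):
        # every single share test takes the global lock, and the value
        # may have changed between starting the hash and testing its result
        with self.lock:
            return share_diff >= self.difficulty

class Work:
    """Difficulty carried in the work item (the suggested change)."""
    def __init__(self, job, difficulty):
        self.job = job
        self.difficulty = difficulty   # immutable snapshot, lock-free to read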

Quote
Quote
Also, how is this handled at the pool?

There are many ways to handle difficulty on the pool and there's no recommended solution so far.
Just thought I'd leave that one as it stands :)

-------

... and just in case it wasn't obvious about the point of this discussion ...
The point of my discussion is not to say that the current protocol cannot be implemented - it will be - and it will include these issues if they are not changed.
It's discussing why the protocol should or shouldn't include the difficulty as part of the work information.

-------

However, I will also add that this part of the protocol definition seems to be aimed directly at helping the pool (with very little performance gain in reality) at the expense of the miner sometimes losing shares unnecessarily.
legendary
Activity: 1386
Merit: 1097
September 30, 2012, 03:19:12 PM
I think there is a change needed to the protocol ... interested in your opinion.
Suggestion: you should completely remove "mining.set_difficulty" and put difficulty in "mining.notify"

I was thinking about this quite a lot when I designed calls and parameters and I think I can defend current protocol "as is":

1. The major architectural reason for not including the share target in the job broadcast message is: the share target is NOT part of the job definition ;-). The value of the target is not related to the job itself; it is bound to the connection/worker and its hashrate. Also, the difficulty can technically change at any time, not necessarily during a new job broadcast. For example, my retargeting algorithm (not on the production server yet) sends set_difficulty immediately when the pool detects dangerous flooding by the miner, just to stop the flood without sending him a new job, because jobs are generated pool-wide at some predefined interval.

2. The job definition in the broadcast is the same for everybody. Maybe this is not so obvious, so I repeat it :-) : that message is composed once, but broadcast to everybody. Including a single connection-specific variable would break the design completely, because pools would need to compose the message per-connection, which is a major performance downside.

Under the current protocol design the miner will receive all connection-specific values via other channels (at this time they're "coinbase1" in mining.subscribe and "difficulty" in mining.set_difficulty).
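For readers following along, a minimal sketch of how those channels look on the miner side under the current design (message layouts simplified, and the per-connection subscribe values are omitted): difficulty arrives in its own message, while notify carries only the job that is broadcast to everyone.
Code:
import json

def handle_line(state, line):
    """state: per-connection dict kept by the miner; line: one JSON-RPC message."""
    msg = json.loads(line)
    if msg.get("method") == "mining.set_difficulty":
        state["difficulty"] = msg["params"][0]   # connection-specific value
    elif msg.get("method") == "mining.notify":
        state["job"] = msg["params"]             # broadcast job, same for everyone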

Quote
The only argument I've heard so far for having it separate is that it saves some bytes per "notify"

Who told you so? Not me, correct? :-)

Quote
"set_difficulty" seems to represent a work restart in exactly the same way as a "notify" does.

I understand from your description that you need to know the target difficulty when you're creating the job. Well, this depends on the implementation of the miner; you can start a job without knowing the "target" difficulty. Technically there's no reason to "restart" anything, you can just filter out low-diff shares on the output (some miners are doing it this way). I understand this may be tricky in cgminer, because of your heavy threading architecture and locking issues, but this definitely isn't a reason in itself why the protocol should be changed.
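A minimal sketch of the "filter on the output" approach described above (the share object and its difficulty attribute are assumed names): the miner keeps hashing regardless and only checks shares against the latest difficulty just before submitting.
Code:
def submit_if_good(found_shares, current_difficulty, submit):
    for share in found_shares:
        # share.difficulty: the difficulty this particular hash actually achieved
        if share.difficulty >= current_difficulty:
            submit(share)        # meets the pool's current target
        # otherwise drop it silently; no work restart required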

Quote
Also, how is this handled at the pool?

There are many ways to handle difficulty on the pool and there's no recommended solution so far.
420
hero member
Activity: 756
Merit: 500
September 29, 2012, 03:37:38 PM
What are the technicals? What does this mean though, difficulty-1?

https://en.bitcoin.it/wiki/Difficulty


Well, if I make difficulty-5000000000, wouldn't I make much more bitcoins than everyone?
legendary
Activity: 1596
Merit: 1100
September 29, 2012, 03:11:02 PM
What are the technicals? What does this mean though, difficulty-1?

https://en.bitcoin.it/wiki/Difficulty
420
hero member
Activity: 756
Merit: 500
September 29, 2012, 02:59:06 PM
Surprising that mining has not yet moved away from difficulty-1 mining.


what is the significance?

Same payout, less server <-> miner traffic.

What are the technicals? What does this mean though, difficulty-1?
legendary
Activity: 1596
Merit: 1100
September 29, 2012, 02:39:36 PM
Surprising that mining has not yet moved away from difficulty-1 mining.


what is the significance?

Same payout, less server <-> miner traffic.
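A tiny illustration of "same payout" (assuming the pool credits a share of difficulty d the same as d difficulty-1 shares, as PPS/proportional schemes do): raising the share difficulty divides the number of submissions but multiplies the credit per share by the same factor, so the expected total is unchanged - only the per-worker variance goes up.
Code:
hashrate = 54e9                    # hashes per second, example figure
for diff in (1, 16, 256):
    shares_per_hour = hashrate / (diff * 2 ** 32) * 3600
    credit_per_hour = shares_per_hour * diff     # in difficulty-1 equivalents
    print(diff, round(shares_per_hour), round(credit_per_hour))
# the credit column comes out the same (~45k) for every difficulty; only
# the number of messages sent to the pool changes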

420
hero member
Activity: 756
Merit: 500
September 29, 2012, 02:18:25 PM
Surprising that mining has not yet moved away from difficulty-1 mining.


what is the significance?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
September 29, 2012, 05:51:14 AM
I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.

Does that indicate you won't be copying ("merging") ckolivas' stratum code into your spork?
or
Will you be developing your own code, or waiting for someone else to copy ("merge") from?
I was responding to Kano's request to change the Stratum protocol in a stupid way because of some implementation problem he imagines exists.

As I said before, if someone else writes the code, I will (probably - ie, if the code is reasonable) merge it into BFGMiner.
Con is capable of producing reasonable code, so I expect that when he finishes it, I will accept it into BFGMiner.
No, the word is not 'accept', it's 'copy' - reality please - you seriously have some problem with reality ...

The code will be more than reasonable; the problem for you will be how 'unreasonable' you make it after you copy it, and whether you can even implement it in your cgminer clone.

Since you've already made it pretty clear that it's too difficult for you to do - it's been obvious for a while that you'll just copy it from cgminer.

As with most of the internals of cgminer, your clone is certainly no better, since you copy the code directly from cgminer, but it is often worse, due to the changes you've made without any guidance or feedback ... which you seriously need ... (you also clearly have issues when it comes to software performance)

Even your statement above, about my comment regarding the Stratum protocol, proves you can't see past your own shallow understanding of something and realise that you do not know everything; your know-it-all attitude is your own downfall, since you clearly do NOT know it all.
It is not a stupid change suggestion, and it is quite easy to see that it produces an unnecessary implementation issue - a clear timing issue due to the fact that the information has been separated. This yet again shows how severely unable you are to test code properly, since the problem is quite apparent as soon as you understand that no code takes 'zero' time to execute (as you once argued with me about code that you said does take zero time to execute :P), and adding a network delay makes the issue many times worse.
legendary
Activity: 2576
Merit: 1186
September 29, 2012, 05:15:38 AM
I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.

Does that indicate you won't be copying ("merging") ckolivas' stratum code into your spork?
or
Will you be developing your own code, or waiting for someone else to copy ("merge") from?
I was responding to Kano's request to change the Stratum protocol in a stupid way because of some implementation problem he imagines exists.

As I said before, if someone else writes the code, I will (probably - ie, if the code is reasonable) merge it into BFGMiner.
Con is capable of producing reasonable code, so I expect that when he finishes it, I will accept it into BFGMiner.

It will be interesting to see how you alter his code when you copy it into your cgmerge branch to suit your needs.
I agree.
hero member
Activity: 988
Merit: 1000
September 29, 2012, 05:11:52 AM
I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.

Does that indicate you won't be copying ("merging") ckolivas' stratum code into your spork?
or
Will you be developing your own code, or waiting for someone else to copy ("merge") from?
I was responding to Kano's request to change the Stratum protocol in a stupid way because of some implementation problem he imagines exists.

As I said before, if someone else writes the code, I will (probably - ie, if the code is reasonable) merge it into BFGMiner.
Con is capable of producing reasonable code, so I expect that when he finishes it, I will accept it into BFGMiner.

It will be interesting to see how you alter his code when you copy it into your cgmerge branch to suit your needs.
legendary
Activity: 2576
Merit: 1186
September 29, 2012, 05:04:15 AM
I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.

Does that indicate you won't be copying ("merging") ckolivas' stratum code into your spork?
or
Will you be developing your own code, or waiting for someone else to copy ("merge") from?
I was responding to Kano's request to change the Stratum protocol in a stupid way because of some implementation problem he imagines exists.

As I said before, if someone else writes the code, I will (probably - ie, if the code is reasonable) merge it into BFGMiner.
Con is capable of producing reasonable code, so I expect that when he finishes it, I will accept it into BFGMiner.
hero member
Activity: 988
Merit: 1000
September 29, 2012, 04:59:46 AM
I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.

Does that indicate you won't be copying ("merging") ckolivas' stratum code into your spork?
or
Will you be developing your own code, or waiting for someone else to copy ("merge") from?
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
September 29, 2012, 01:58:56 AM
I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.
Well - that immediately means it must be defcon 4 critical if you don't think it's necessary :)
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
September 29, 2012, 01:47:58 AM
P.S. ckolivas doesn't care about this - he ignored me :)
To be fair, I had exactly the same discussion with Eleuthria but I had bigger fish to fry and stopped caring about it. He or slush can come and defend why they think it's ok. I'll work around the protocol whatever it is.
legendary
Activity: 2576
Merit: 1186
September 29, 2012, 01:36:40 AM
I wouldn't care about it either. If one is implementing Stratum in the first place, including difficulty in each job really doesn't help any.
legendary
Activity: 4634
Merit: 1851
Linux since 1997 RedHat 4
September 29, 2012, 01:20:36 AM
Meanwhile ... :)
I think there is a change needed to the protocol ... interested in your opinion.

Suggestion: you should completely remove "mining.set_difficulty" and put difficulty in "mining.notify"

The main issue I see is that it overly complicates difficulty handling by creating issues that have to be dealt with due to having the information in 2 messages.

The only argument I've heard so far for having it separate is that it saves some bytes per "notify" - but the whole argument for stratum is that you don't need to send data very often - so who cares if you save ~1kbyte an hour per worker? ...
(less than 20 bytes x 1 "notify" per minute)
... and difficulty changes (requiring a whole new "notify") would not be an overhead, since you would simply send the new difficulty as part of the next "notify" as per normal (I can think of no urgency to send a difficulty change before the next notify)
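The ~1kbyte figure is just the arithmetic from the parenthesis above:
Code:
bytes_per_notify = 20       # upper bound on the extra difficulty field
notifies_per_hour = 60      # one "notify" per minute
print(bytes_per_notify * notifies_per_hour)   # 1200 bytes, roughly 1 kbyte/hour/worker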

"set_difficulty" seems to represent a work restart in exactly the same way as a "notify" does.
In fact it seems to be at a similar (but not the same) level as a "notify" with "Clean Jobs"=true.
Any work you are working on is no longer based on the definition that was in effect when the work was started.
If the difficulty actually went down, you may end up throwing away work that is now valid, since the pool will accept the lower difficulty work from the time it sends out the "set_difficulty", but the miner has to deal with receiving and processing that message before it will allow the lower difficulty work.
If the difficulty went up, then work that was valid at the time it was started may no longer be valid, even though only a "Clean Jobs"=true (i.e. an LP) should make the work invalid.

This also raises an issue in the miner that may have been overlooked so far:
The code that deals with checking the difficulty must all be exclusively thread locked, since the difficulty is not at the work level but at a global level (with the current "getwork", difficulty is a direct attribute of the work).

Also, how is this handled at the pool?
It certainly represents a loss of any lower difficulty work that was sent to the pool after the pool has sent the "set_difficulty".
(remember, pool<->miner is not instant ... in fact it is quite a long time when counting hashes ...)
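One possible pool-side approach (purely a sketch; the protocol doesn't prescribe this, and the grace period is an assumed value): keep the previous difficulty for a short window after sending "set_difficulty", so shares that were already on the wire at the old target are not lost.
Code:
import time

class WorkerDiff:
    """Tracks a worker's current and just-replaced share difficulty."""
    GRACE = 5.0                        # seconds; assumed value

    def __init__(self, difficulty):
        self.difficulty = difficulty
        self.previous = None
        self.changed_at = 0.0

    def retarget(self, new_difficulty):
        # called when the pool sends set_difficulty to this worker
        self.previous = self.difficulty
        self.difficulty = new_difficulty
        self.changed_at = time.time()

    def accepts(self, share_difficulty):
        if share_difficulty >= self.difficulty:
            return True
        # accept old-target shares for a short while after a change,
        # to cover submissions that were already in transit
        in_grace = time.time() - self.changed_at < self.GRACE
        return in_grace and self.previous is not None \
            and share_difficulty >= self.previous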

The problem I see is that having them as 2 different messages means what will actually happen when a "set_difficulty" is sent depends on the pool implementation as well as the miner implementation. It also complicates that process unnecessarily for no real gain.

P.S. ckolivas doesn't care about this - he ignored me :)
legendary
Activity: 1386
Merit: 1097
September 27, 2012, 09:41:24 AM
It doesn't mean stratum is a bad thing, it's just a shame that I put a lot of work into making cgminer scale to massive workloads pool overloads and I have to implement something else from scratch again.

Let me fix it for you ;-P.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
September 25, 2012, 11:00:23 PM
Quote
[02:36] <@conman> stratum is basically orthogonal to all the hard work I put into network scheduling D:
It doesn't mean stratum is a bad thing, it's just a shame that I put a lot of work into making cgminer scale to massive workloads and I have to implement something else from scratch again.