
Topic: Variable difficulty shares, can efficiency be improved for fast miners? (Read 6090 times)

legendary
Activity: 1512
Merit: 1036
The biggest load spike for a pool is pushing new work to everybody at the start of a new block. This can take several seconds of 100% CPU or network usage. If you've built a pool server that can do this quickly and is strong enough not to be brought down by bad-share flood attacks, everything else is random background CPU blips.
full member
Activity: 215
Merit: 100
Doesn't P2pool already support what you are discussing here?
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
Maybe a pool op can correct me, but there is no reason one couldn't use LP combined with a very long ntime roll (say 60 sec) to reduce the number of getworks to 1 every minute per GPU/CPU, regardless of how fast they are.

This is perfectly correct. There are two forms of ntime rolling support, with different messages from the server:

X-Roll-NTime: Y
X-Roll-NTime: expire=N

The second form is more flexible: it allows the server to tell the client how many seconds (N) forward it is allowed to increment the timestamp (ntime). The first form is the original roll-ntime, which gives the client permission to increment ntime without limit. I recently looked at the source code of the latest versions of a few miners, and only DiabloMiner supported the "expire=N" form. Most miners will interpret the second form to mean they can roll ntime without limit, ignoring the N value.
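For illustration, a minimal Python sketch of how a miner might interpret the two header forms (the function name is hypothetical; None stands for unlimited rolling):

Code:
def parse_roll_ntime(value):
    """Interpret an X-Roll-NTime header value.

    'Y'        -> unlimited rolling (original form), returned as None
    'expire=N' -> the client may roll ntime at most N seconds forward
    otherwise  -> 0, no rolling allowed
    """
    value = value.strip().lower()
    if value == 'y':
        return None
    if value.startswith('expire='):
        try:
            return int(value.split('=', 1)[1])
        except ValueError:
            return 0
    return 0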

Thank you for the detailed explanation. I see now why my idea is not useful in general, and how it would only have changed how often a worker submits to a pool.

Don't dismiss >1 difficulty; it is clearly a useful concept. Also, both fetching new work and delivering the proofs of work you find go through the same "getwork" JSON-RPC call. It's bad API design, but "getwork" handles both directions. So yes, higher difficulty does mean fewer "getwork" requests, unless miners ignore the target they are given.

Yes, using >1 difficulty WILL reduce bandwidth and CPU load on the server. Coupled with ntime rolling you can reduce the load coming from the fastest miners by a lot.

It is true that getting new work takes a few bytes more bandwidth and a few more CPU cycles than delivering work results, but that doesn't mean the latter is insignificant. Take a look at this thread for examples of both types of request and response: https://bitcointalksearch.org/topic/getwork-protocol-what-are-the-rules-examples-51281

Luke-jr. and I set up a wiki page showing what the different miners and servers support: https://en.bitcoin.it/wiki/Getwork_support. Hassle your favorite developer to get better support. I added "expire=N" (2nd generation roll ntime) and ">1 difficulty" columns just now.

So yes, you can optimize the hell out of fast miners. The problem is slow miners. The noncerange extension could at least help a bit with the server CPU usage they cause, but it is not widely supported, as you can see in the feature tables on the wiki page.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Thank you for the detailed explanation. I see now why my idea is not useful in general, and how it would only have changed how often a worker submits to a pool. However, perhaps those with 50 GH/s might want to submit less often, and this is where the difference would be made. But on the other hand, share submissions need next to zero bandwidth, so it is not a problem for either the miner or the pool.

And, as posted above, it sounds as though some miners submit all shares of difficulty 1 and higher regardless of the target difficulty. So no gain here either.

Again, thanks for the clear explanations; I am learning more about the Bitcoin protocol than ever before. The more I learn, the more I am able to boil it down into explainable chunks, while still being able to delve into detail when asked. The last time I presented Bitcoin to a tech-savvy person, it took close to 2 hours to run through the basics of the system, with occasional deviations into details, even with my overly terse style of presentation. I hope to be able to present Bitcoin in less time in the future, while still keeping it highly understandable to the "explainee".
legendary
Activity: 1750
Merit: 1007
Just to build on DeathAndTaxes' response regarding the storm of getworks after an LP: not only does the pool have to generate a getwork for every miner (an LP is delivered by pushing a fresh getwork over the open connection), it also generates significant extra work because almost all miners request 1 getwork for local queuing in addition to the active getwork.  Some miners keep an even larger queue, so when an LP hits, a pool may be preparing many thousands of getworks all at once for both the LPs and the subsequent refill requests.
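A back-of-envelope illustration of the burst size (numbers hypothetical; queue depth varies by miner software):

Code:
# Rough size of the getwork burst at a long poll.
miners = 5000        # open LP connections
queue_depth = 1      # spare getworks most miners keep queued locally
burst = miners * (1 + queue_depth)  # LP pushes plus queue refills
print(burst)         # -> 10000 headers generated nearly at once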
donator
Activity: 1218
Merit: 1079
Gerald Davis
DeathAndTaxes, thank you, that was the key to my puzzle. I assumed that raising the difficulty of a share would lead to more time spent hashing for each getwork, causing fewer requests, whereas you say that the time spent is not related to difficulty? Correct me if I am wrong, please...

Correct.

How it works is: the miner gets a block header (minus the nonce).

The miner adds a nonce of 0, hashes the header, and checks whether the result meets the pool's difficulty target (not the block difficulty).
If it does, the miner submits it.  If it doesn't, the miner discards it.

THEN, regardless of the outcome, the miner increments the nonce to 1 and does the same thing
...
increment, hash, check (and possibly submit)
increment, hash, check (and possibly submit)
increment, hash, check (and possibly submit)
increment, hash, check (and possibly submit)
increment, hash, check (and possibly submit)
....
4 billion iterations

nonce range is exhausted.  The miner requests new work via getwork.  At that point the miner starts all over w/ a nonce of 0.
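A rough Python sketch of that loop (illustrative only; real miners run this on the GPU, and the 76-byte header prefix and pool target here are assumed inputs):

Code:
import hashlib
import struct

def double_sha256(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def scan_nonce_range(header76, target):
    """Hash every nonce in the 32-bit range against a fixed header prefix,
    yielding each nonce whose hash meets the pool target."""
    for nonce in range(2**32):
        header80 = header76 + struct.pack('<I', nonce)
        # Header hashes are compared as 256-bit little-endian integers.
        h = int.from_bytes(double_sha256(header80), 'little')
        if h <= target:
            yield nonce  # share found: submit it, then keep scanning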

With a difficulty of 1 the miner submits 1 share per getwork (on average) w/ perfect efficiency.
With a difficulty of 200 (current p2pool difficulty) the miner finds and submits 1 share per 200 getwork requests (on average).

Still, the rate of getwork requests remains the same.  If the miner is perfectly efficient it is roughly one every (2^32)/(hashrate in hashes per second) seconds.

It is actually slow miners which are hard on the server.  100 GH made up of 100 1 GH GPUs is a pretty easy load, but 100 GH made up of 5000 CPUs is pretty rough.  The reason is that slow miners are inefficient due to the low likelihood of finding a share before the work goes stale, which means they make lots of getwork requests for each share submitted.
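To put rough numbers on that (the 60-second staleness window is an assumption for illustration):

Code:
# Expected difficulty-1 shares per getwork = hashes completed before
# the work goes stale / 2^32.
STALE_AFTER = 60  # assume work is useless after ~60 s (new block / LP)

for rate in (1e6, 1e9):  # 1 MH/s CPU vs 1 GH/s GPU
    hashes = min(rate * STALE_AFTER, 2**32)
    print("%10.0f H/s -> %.3f shares per getwork" % (rate, hashes / 2**32))

At 1 MH/s that works out to roughly 70 getworks per share, versus one share per getwork for the 1 GH/s GPU.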

Ntime rolling can be used to reduce the number of getworks by allowing the miner to increment the timestamp locally.
A hybrid (aka split) pool can reduce the getwork load on the server to zero by having miners generate block headers locally.  While p2pool does this, it could also be used by a "traditional" pool.

Quote
In addition, each LP results in a storm of getworks from miners that are discarding the current work and starting fresh, is this correct? Or have I misunderstood yet another important thing about how this works? Wink

Correct.  All work issued to miners is now worthless, so the pool server will issue an LP w/ new work.  The miner locally discards any queued-up work and begins to process the new work.  Obviously the pool will need to recalculate block headers for every miner in the pool, which can be computationally intensive.  Some pools improve efficiency by issuing LPs to the fastest miners first.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
I understand, thanks for the responses. The main reason I considered this was for overclocked 7970s and other such fast things. I think the benefit might only become apparent when a single device clocks over 1 GH/s (doesn't happen yet), which may not be too far down the road given the various bits of technology being worked on. Another way such a scheme could be utilized is if cgminer would split nonce ranges across all the devices that it controls - every now and then I hear pool operators moaning about cgminer being a getwork hog.

Changing difficulty doesn't make a miner any less of a getwork hog.

The problem is that the nonce is only a 32-bit number, so there are only 4 billion hashes per nonce range.  A 1 GH miner will need a new getwork every ~4 seconds.  A 10 GH miner will need a new getwork every ~0.4 seconds.  If only Satoshi had made the nonce range 64-bit.

Still, ntime rolling can be used to significantly reduce the number of getworks for fast miners.  If the pool allows ntime rolling of 5 seconds, then no matter how fast a GPU becomes it only needs a new getwork every 5 seconds.  When a GPU finishes a nonce range it simply increments the timestamp and hashes it again.  When the ntime roll expires it gets new work.

Maybe a pool op can correct me, but there is no reason one couldn't use LP combined with a very long ntime roll (say 60 sec) to reduce the number of getworks to 1 every minute per GPU/CPU.


DeathAndTaxes, thank you, that was the key to my puzzle. I assumed that raising the difficulty of a share would lead to more time spent hashing for each getwork, causing fewer requests, whereas you say that the time spent is not related to difficulty? Correct me if I am wrong, please...

In addition, each LP results in a storm of getworks from miners that are discarding the current work and starting fresh, is this correct? Or have I misunderstood yet another important thing about how this works? Wink
donator
Activity: 1218
Merit: 1079
Gerald Davis
I understand, thanks for the responses. The main reason I considered this was for overclocked 7970s and other such fast things. I think the benefit might only become apparent when a single device clocks over 1 GH/s (doesn't happen yet), which may not be too far down the road given the various bits of technology being worked on. Another way such a scheme could be utilized is if cgminer would split nonce ranges across all the devices that it controls - every now and then I hear pool operators moaning about cgminer being a getwork hog.

Changing difficulty doesn't make a miner any less of a getwork hog, just less of a share submitter.

How pool mining works is (and this is a simplified version; modern miners take extra measures to improve efficiency):
1) Miner issues a getwork request
2) Pool provides the miner with a block header (minus the nonce)
3) The miner starts w/ a nonce of 0, adds it to the rest of the block header, and hashes it. It then increments the nonce and hashes again.
4) The miner returns any shares found.
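A minimal sketch of both halves of that exchange (Python; the pool URL is a placeholder, credentials and error handling are omitted, and the function name is hypothetical):

Code:
import json
import urllib.request

def getwork(url, data_hex=None):
    """One 'getwork' JSON-RPC call: empty params fetch new work,
    passing solved block data back submits a share."""
    payload = {"method": "getwork",
               "params": [data_hex] if data_hex else [],
               "id": 1}
    req = urllib.request.Request(url, json.dumps(payload).encode(),
                                 {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# work = getwork("http://pool.example.com:8332")   # placeholder URL
# ... hash work["data"], then submit: getwork(url, solved_data_hex)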

The problem is that the nonce is only a 32-bit number, so there are only 4 billion hashes per nonce range.  A 1 GH miner will need a new getwork every ~4 seconds.  A 10 GH miner will need a new getwork every ~0.4 seconds.  If only Satoshi had made the nonce range 64-bit.

You could make the difficulty 1.25 million (solo mining) and a 1 GH miner would still need 1 getwork every 4 seconds.

Still, ntime rolling can be used to significantly reduce the number of getworks for fast miners.  If the pool allows ntime rolling of 5 seconds, then no matter how fast a GPU becomes it only needs a new getwork every 5 seconds.  When a GPU finishes a nonce range it simply increments the timestamp and hashes it again.  When the ntime roll expires it gets new work.

https://en.bitcoin.it/wiki/Getwork#rollntime
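A sketch of the rolling step (assuming the raw serialized 76-byte header prefix, where the timestamp is a little-endian 32-bit field at byte offset 68; the function name is hypothetical):

Code:
import struct

NTIME_OFFSET = 68  # byte offset of the timestamp in the serialized header

def roll_ntime(header76, seconds_allowed):
    """Yield fresh header prefixes by bumping ntime one second at a time,
    rehashing the full nonce range for each, until the roll expires."""
    ntime = struct.unpack_from('<I', header76, NTIME_OFFSET)[0]
    for step in range(seconds_allowed + 1):
        rolled = bytearray(header76)
        struct.pack_into('<I', rolled, NTIME_OFFSET,
                         (ntime + step) & 0xffffffff)
        yield bytes(rolled)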

Maybe a pool op can correct me, but there is no reason one couldn't use LP combined with a very long ntime roll (say 60 sec) to reduce the number of getworks to 1 every minute per GPU/CPU, regardless of how fast they are.

legendary
Activity: 1750
Merit: 1007
Well... ok.  Are you seeing problems with the number of submitted shares vs the number of getworks?  The load submitted shares put on the server is pretty minimal for my servers; it's the getwork requests that clog the bandwidth.
This is what I was hoping variable difficulty could fix, but from what you are saying it might not make any difference at all. I don't really understand why not, since it appears to me that it would result in fewer getwork requests. But perhaps I am missing something obvious?

The idea was that someone with either a) a smart client that splits the nonce range, or b) an extremely fast single miner (rig box? Tongue) would be able to adjust such a setting himself, possibly increasing efficiency for high-speed devices on flaky network connections, because less time would be spent getting work over said flaky connection.

I dunno, I tend to come up with lots of ideas that sound good on the surface, but end up not being practical for one reason or another.

P.S., to combat a botnet, could you force-feed them shares with diff=999999999999999? And disable LP? Grin

A getwork request itself is unrelated to difficulty.  It is a blob of data your miner needs to construct a valid hash for the pool.  This means that hashing higher-difficulty shares will not change the getwork load.  It simply means your miner won't submit work to the server unless hashing that getwork produces a difficulty of X or greater.  Since a single getwork can produce multiple shares of difficulty X, your miner will continue hashing that getwork until it exhausts the entire nonce range (or at least MOST mining software will).
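For reference, the usual convention for turning a pool difficulty X into the target a share's hash must fall below (a sketch, assuming integer difficulties):

Code:
# Difficulty 1 corresponds to the standard pool target; difficulty X
# divides it, so X-difficulty shares are X times rarer.
DIFF1_TARGET = 0x00000000ffff0000000000000000000000000000000000000000000000000000

def target_for_difficulty(difficulty):
    return DIFF1_TARGET // difficulty

print(hex(target_for_difficulty(200)))  # e.g. p2pool-style difficulty 200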
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Well... ok.  Are you seeing problems with the number of submitted shares vs the number of getworks?  The load submitted shares put on the server is pretty minimal for my servers; it's the getwork requests that clog the bandwidth.
This is what I was hoping variable difficulty could fix, but from what you are saying it might not make any difference at all. I don't really understand why not, since it appears to me that it would result in fewer getwork requests. But perhaps I am missing something obvious?

The idea was that someone with either a) a smart client that splits the nonce range, or b) an extremely fast single miner (rig box? Tongue) would be able to adjust such a setting himself, possibly increasing efficiency for high-speed devices on flaky network connections, because less time would be spent getting work over said flaky connection.

I dunno, I tend to come up with lots of ideas that sound good on the surface, but end up not being practical for one reason or another.

P.S., to combat a botnet, could you force-feed them shares with diff=999999999999999? And disable LP? Grin
legendary
Activity: 1260
Merit: 1000
Well... ok.  Are you seeing problems with the number of submitted shares vs the number of getworks?  The load submitted shares put on the server is pretty minimal for my servers; it's the getwork requests that clog the bandwidth.
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
Err, did you read the whole thread, DrHaribo?

Yes. And I think it's obvious that >1 difficulty can reduce the number of requests delivering proofs of work from fast miners. To reduce the number of requests fetching new work for fast miners, you can use X-Roll-NTime.

Dealing with slow workers (CPU miners) is much harder. Perhaps noncerange could help, but I suspect it won't do that much. The noncerange extension is also supported by only one miner, which no one is using.
legendary
Activity: 1260
Merit: 1000
Err, did you read the whole thread, DrHaribo?
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
This is certainly a good idea. I think the reason it has only been talked about in the past and never actually done is lack of software support. I think most pool software always sends a difficulty-1 target, and some/most miners ignore the target and pretend it is always difficulty 1?

Perhaps we can add a new column "variable difficulty" to this wiki page https://en.bitcoin.it/wiki/Getwork_support

Does anyone know which miners and pool programs support this?
member
Activity: 84
Merit: 11
We've solved this issue at BitPenny (website) by providing an open-source client and setting the difficulty to 8.  This keeps the number of submitted shares manageable while allowing users to get latency-free work locally as often as they wish, even with an array of fast GPUs.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
I understand, thanks for the responses. The main reason I considered this was for overclocked 7970s and other such fast things. I think the benefit might only become apparent when a single device clocks over 1 GH/s (doesn't happen yet), which may not be too far down the road given the various bits of technology being worked on. Another way such a scheme could be utilized is if cgminer would split nonce ranges across all the devices that it controls - every now and then I hear pool operators moaning about cgminer being a getwork hog.
legendary
Activity: 1260
Merit: 1000
Yeah, database load is not an issue.

That's right... I remember now why increasing the difficulty doesn't help: it just reduces the number of shares returned.  It's the outbound traffic that's the bottleneck.

legendary
Activity: 1750
Merit: 1007
A few problems with varying difficulty:

The load only drops on the pool side for verifying shares (higher difficulty = fewer shares returned).  The rate at which a client asks for work stays the same, since many miners now exhaust a full getwork before asking for more, whereas a few months ago they would submit a share and move to new work even though it's possible to hash multiple valid shares from one getwork.

This means that the pool is still doing just as much work software-side for work requests, which is where most of the load on a pool server comes from (verifying shares is VERY easy/low load).  The outbound traffic is also unaffected for the same reason.

One issue with implementing it is that some miners submit all diff=1 hashes regardless of the difficulty target (I believe cgminer still does this; not sure about others).

The only real advantage of a higher difficulty is the lower load on the database, since valid shares will be submitted less often (and the reduced size of the database).  I'm not sure about other pools, but I know BTC Guild hasn't had any issues with database load when it comes to logging shares in the last few months.


As far as I'm aware, all miners queue at least 1 getwork beyond what is actively being worked on.  Some cache more, or can be configured to (Phoenix and cgminer, I know, have this).  This can help you if there is an unstable network connection between you and the pool.  But since most miners exhaust a full getwork nonce range before moving on to a different getwork, you would have to be running an EXTREMELY fast miner (or have some kind of cluster setup where multiple cards split the nonce range) for this to matter.
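For what it's worth, the nonce-range split mentioned above is simple to express (a sketch; how the mining software feeds the slices to each device is left out):

Code:
def split_nonce_range(num_devices):
    """Divide the 32-bit nonce space into one contiguous slice per device,
    so a multi-GPU rig can share a single getwork."""
    step = 2**32 // num_devices
    return [(i * step, 2**32 if i == num_devices - 1 else (i + 1) * step)
            for i in range(num_devices)]

print(split_nonce_range(4))  # four cards, one quarter of the range each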

rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Hmm... configurable.  Not a bad idea...

I've been toying with the idea of increasing the difficulty to lessen the load, though I think this has been tried in the past and it didn't work out so well, but I can't remember why.

I even thought of another idea: perhaps the pool could automatically vary the difficulty based on ask rate (obviously this would be an option that is disabled by default) or some other statistic.
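Something like this rough server-side sketch could do the retargeting (all names and constants here are hypothetical, not from any existing pool):

Code:
import time

class VardiffTracker:
    """Retarget one worker's share difficulty toward a desired share rate."""
    TARGET_SECONDS = 10   # aim for one share every 10 s (assumed)
    RETARGET_SHARES = 30  # retarget after this many shares (assumed)

    def __init__(self, difficulty=1.0):
        self.difficulty = difficulty
        self.shares = 0
        self.window_start = time.time()

    def on_share(self):
        self.shares += 1
        if self.shares < self.RETARGET_SHARES:
            return self.difficulty
        seconds_per_share = (time.time() - self.window_start) / self.shares
        # Scale difficulty toward the target rate, at most a 4x move
        # per retarget to avoid oscillation.
        factor = min(max(self.TARGET_SECONDS / seconds_per_share, 0.25), 4.0)
        self.difficulty = max(1.0, self.difficulty * factor)
        self.shares = 0
        self.window_start = time.time()
        return self.difficulty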
legendary
Activity: 1260
Merit: 1000
Hmm... configurable.  Not a bad idea...

I've been toying with the idea of increasing the difficulty to lessen the load, though I think this has been tried in the past and it didn't work out so well, but I can't remember why.