
Topic: Difficulty > 1 share adoption suggestion for pool operators. (Read 3735 times)

legendary
Activity: 2576
Merit: 1186
This increase in difficulty seems interesting to me. I have a line of thought that may lead to a solution that vastly reduces the network and processing requirements of the pool servers. I'll try to lay it out here. I'm sure there are holes and/or inaccuracies in this, so please try not to dismiss it because of a trivial problem.

As I understand it, the standard way that pooled mining works is this: the pool server looks at the present block chain, pending transactions, and a generation transaction that sends 50 coins to the server, and puts together the input to the bitcoin hash function. Then, miners request ranges of nonces to check. The miners compute those hashes, and respond with the ones that have a hash below a certain value (much larger than whatever it takes to mine a block). The server then looks at the shares claimed and easily computes the few hashes for the claimed shares to give credit for the work. The pool knows the miner is working honestly because the shares check out OK.

What I propose is that the pool gives each miner an address to send the 50 coins to. Then, each miner makes the input to the hash function be the block chain, existing transactions, a generation transaction sending 50 to the MINER, and a transaction sending 50 to the pool. The pool can still check that the block is valid, and only gives credit to the miner if it is playing by those rules. With this setup, each miner has a different generation transaction, so there is no reason to partition off nonces through a centralized server. It may cause larger network demands since the full block must be communicated, not just the nonce, but I feel like it could also reduce stale shares. It also requires that the miner has access to the full block chain, which not all do.

Let me know what you all think. Sorry if it's a bit wordy.
BIP 0022
member
Activity: 75
Merit: 10
This increase in difficulty seems interesting to me. I have a line of thought that may lead to a solution that vastly reduces the network and processing requirements of the pool servers. I'll try to lay it out here. I'm sure there are holes and/or inaccuracies in this, so please try not to dismiss it because of a trivial problem.

As I understand it, the standard way that pooled mining works is this: the pool server looks at the present block chain, pending transactions, and a generation transaction that sends 50 coins to the server, and puts together the input to the bitcoin hash function. Then, miners request ranges of nonces to check. The miners compute those hashes, and respond with the ones that have a hash below a certain value (much larger than whatever it takes to mine a block). The server then looks at the shares claimed and easily computes the few hashes for the claimed shares to give credit for the work. The pool knows the miner is working honestly because the shares check out OK.

What I propose is that the pool gives each miner an address to send the 50 coins to. Then, each miner makes the input to the hash function be the block chain, existing transactions, a generation transaction sending 50 to the MINER, and a transaction sending 50 to the pool. The pool can still check that the block is valid, and only gives credit to the miner if it is playing by those rules. With this setup, each miner has a different generation transaction, so there is no reason to partition off nonces through a centralized server. It may cause larger network demands since the full block must be communicated, not just the nonce, but I feel like it could also reduce stale shares. It also requires that the miner has access to the full block chain, which not all do.

Let me know what you all think. Sorry if it's a bit wordy.
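For illustration, here is a very rough sketch of the pool-side check this scheme implies, using simplified, hypothetical structures rather than real Bitcoin serialization (the pool address constant, share target, and 50 BTC threshold are all assumptions drawn from the description above):
Code:
from dataclasses import dataclass
from typing import List

# Hypothetical, simplified stand-ins for real Bitcoin structures.
@dataclass
class TxOut:
    address: str
    value: int                       # satoshis

@dataclass
class Tx:
    outputs: List[TxOut]

@dataclass
class Block:
    prev_hash: str
    header_hash: int                 # block header hash as an integer
    transactions: List[Tx]

POOL_ADDRESS = "POOL_PAYOUT_ADDR"    # assumed pool-controlled address
SHARE_TARGET = 2 ** 224              # roughly a difficulty-1 share target
POOL_REWARD = 50 * 100_000_000       # the 50 BTC the proposal sends to the pool

def pool_accepts_share(block: Block, chain_tip: str) -> bool:
    """Credit the miner only if the self-built block plays by the pool's rules."""
    if block.header_hash > SHARE_TARGET:     # proof of work too weak for a share
        return False
    if block.prev_hash != chain_tip:         # built on the wrong tip (stale)
        return False
    # The block must include a transaction paying the full reward to the pool.
    return any(out.address == POOL_ADDRESS and out.value >= POOL_REWARD
               for tx in block.transactions for out in tx.outputs)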
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Would be nice if more miners implemented at least the X-Mining-Hashrate header...
Is that even required? If the pool detects a high hashrate itself, wouldn't that suffice?
With load balancing miners, the pool might detect a much slower hashrate than work is being requested for. Also, not all pools (if any) have a means for the high-level statistics like hashrate to be communicated back to the core poolserver.
But of course all pools do.
Quite simply, the overall share submission rate.
Since this is a rather large statistical sample, it would also be way more accurate than the numbers provided by the miner software.

The miner program attempting to get the correct value is prone to all sorts of issues:
How long can the pool assume the miner is mining at the rate specified - that is specifically a guess.
What percentage of the hash rate is the miner providing to the pool - that is specifically a guess.
How accurate is the miner in determining the hash rate - another guess.
Overall - inaccurate.

How accurate is the share submission rate - 100%
How accurate is converting that to a Hash rate for anything but a tiny pool - extremely.
Even for a single miner, over a period of a day, converting 'U:' is very accurate.
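For reference, converting a share submission rate to a hashrate is simple arithmetic, since a difficulty-D share represents about D x 2^32 hashes on average. A minimal sketch (the 14 shares per minute sample, i.e. a cgminer 'U:' value, is made up):
Code:
HASHES_PER_D1_SHARE = 2 ** 32   # ~4.29 billion hashes per difficulty-1 share on average

def hashrate_from_share_rate(shares_per_minute: float, difficulty: float = 1.0) -> float:
    """Estimate hashrate in hashes/second from an observed share submission rate."""
    return (shares_per_minute / 60.0) * difficulty * HASHES_PER_D1_SHARE

# Example: ~14 difficulty-1 shares per minute works out to roughly 1 GH/s.
print(hashrate_from_share_rate(14) / 1e9, "GH/s")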
legendary
Activity: 2576
Merit: 1186
Would be nice if more miners implemented at least the X-Mining-Hashrate header...
Is that even required? If the pool detects a high hashrate itself, wouldn't that suffice?
With load balancing miners, the pool might detect a much slower hashrate than work is being requested for. Also, not all pools (if any) have a means for the high-level statistics like hashrate to be communicated back to the core poolserver.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Would be nice if more miners implemented at least the X-Mining-Hashrate header...
Is that even required? If the pool detects a high hashrate itself, wouldn't that suffice?
legendary
Activity: 2576
Merit: 1186
Would be nice if more miners implemented at least the X-Mining-Hashrate header...
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Well I just committed a busload of changes to cgminer to help further reduce getwork load.
legendary
Activity: 1750
Merit: 1007
Processing a proof of work server-side is just like mining, isn't it? With regard to the hashing operation that takes place - it just processes specified nonces rather than random ones. I haven't really thought about it until now, but for a 1 TH/s pool, about how many nonces does it need to validate per second? I can see how a slow CPU would be a bottleneck, and I wonder if pool software could offload that processing to a video card - otherwise faster devices will trounce 'most any pool.

I haven't run the numbers, but if we assume that a dedicated server might be able to process perhaps 10 MH/s in a single thread (and that might be a bit high), is that enough to deal with (perhaps several) 1 TH/s miners flooding it with nonces to process?

Yes.  Because a difficulty=1 share represents approximately 2^32 hashes of work, but only takes a single hash for the pool to verify.  Pool software likely isn't using anywhere near the optimizations that mining software uses to evaluate hashes, so let's say that it can only verify ~10 KH/s worth of hashes.  That means it can verify 10,000 shares per second.  This is a very rough number, since no pool has ever had to verify even close to that.
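For context, checking a submitted share amounts to one double-SHA256 over the 80-byte block header plus a target comparison, which is why the per-share cost is so low. A minimal sketch (the all-zero header is just a placeholder; a real pool would rebuild the header from the work it handed out plus the returned nonce/ntime):
Code:
import hashlib

# Difficulty-1 target as derived from the compact bits value 0x1d00ffff.
DIFF1_TARGET = 0x00000000FFFF0000000000000000000000000000000000000000000000000000

def share_meets_target(header_80_bytes: bytes, difficulty: int = 1) -> bool:
    """One double-SHA256 over the 80-byte header, then a single comparison."""
    digest = hashlib.sha256(hashlib.sha256(header_80_bytes).digest()).digest()
    hash_value = int.from_bytes(digest, "little")   # block hashes compare little-endian
    return hash_value <= DIFF1_TARGET // difficulty

print(share_meets_target(b"\x00" * 80))             # placeholder header, almost certainly False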

BTC Guild currently processes ~380,000 shares in 20 minutes, or 316 shares per second.  If my 10,000/second number is accurate, that means BTC Guild could support ~31x the hash power it currently has, which is about 35 TH/s.

Now in this scenario, I'm starting to think sending out higher difficulty shares would be beneficial.  The likely bottleneck would be DB access time (inserting 100k rows every 10 seconds in batches with my current software setup).

Historically, the biggest problem BTC Guild has had is pushing out longpoll connections quickly.  When you start having 4,000+ longpoll connections, it starts to bog things down on the server.  Normally the bulk of those connections are to inefficient miners, so it's very difficult to say just where the bottleneck will lie when dealing with highly efficient and very fast mining hardware.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Processing a proof of work server-side is just like mining, isn't it? With regard to the hashing operation that takes place - it just processes specified nonces rather than random ones. I haven't really thought about it until now, but for a 1 TH/s pool, about how many nonces does it need to validate per second? I can see how a slow CPU would be a bottleneck, and I wonder if pool software could offload that processing to a video card - otherwise faster devices will trounce 'most any pool.

I haven't run the numbers, but if we assume that a dedicated server might be able to process perhaps 10 MH/s in a single thread (and that might be a bit high), is that enough to deal with (perhaps several) 1 TH/s miners flooding it with nonces to process?
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
If the ASICs are real (I still highly doubt BFL will produce anything remotely close to their claims), the entire mining protocol will need an overhaul.  Clients will either have to generate work locally and send it to the pool (similar to p2pool), or use a method of getwork where a miner can get a packet of many getworks at once in a condensed format.

That already exists. It's called rollntime. Sadly miner support isn't very good. To make good use of rollntime the miner should 1: make the best use of the roll range it is given, and 2: never roll further than the server allows ("expire" support).
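As a rough illustration of points 1 and 2, a miner with proper rollntime support can expand a single getwork into many work units by incrementing ntime, but never past the limit the server advertised (the expire value and the header handling below are simplified assumptions):
Code:
import struct

def rolled_headers(header_80_bytes: bytes, expire_seconds: int):
    """Yield extra work units from one getwork by rolling ntime, never rolling
    further than the server allows (e.g. an 'X-Roll-NTime: expire=60' reply)."""
    base_ntime = struct.unpack_from("<I", header_80_bytes, 68)[0]   # ntime sits at byte 68
    for offset in range(expire_seconds + 1):                        # offset 0 = original work
        rolled = bytearray(header_80_bytes)
        struct.pack_into("<I", rolled, 68, base_ntime + offset)
        yield bytes(rolled)

# One getwork rolled into 61 work units instead of 61 separate requests.
work = b"\x00" * 80                              # placeholder header from a getwork reply
print(len(list(rolled_headers(work, 60))))       # 61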

In my pool I whitelist miners with proper rollntime support. Others don't get rollable work. Currently that's only DiabloMiner and my own miner (although I'm only now working on actual support in the miner). I will have a look at MPBM and possibly add that. I hope other miners will improve support and I'll whitelist them as they come out. I think ASIC mining without this could be a very bad idea.

Does proof of work on all those shares coming in require that little cpu? I've never run a pool so maybe I'm barking up the wrong tree entirely.

If someone gets (through rollntime) 100 work units (nonce ranges) from my pool in 1 request and then sends in 100 proofs of work, then processing the proofs of work is what causes server load. Many seem to believe that processing proofs of work is free, but I think we'll see this proven wrong in October.  Cheesy

Indeed. There's nothing like a change to drive development, is there?

Yeah. The bitcoin world is in constant change, and it's always do or die for developers. Wink
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Just to repeat what was stated in the last thread about this:

Changing the difficulty does not change the frequency your miner will request work.  It will only reduce the frequency you send work back.  For pools, the difference in load here is minimal; it's much harder to send you work than it is to verify work that was sent back.


If the ASICs are real (I still highly doubt BFL will produce anything remotely close to their claims), the entire mining protocol will need an overhaul.  Clients will either have to generate work locally and send it to the pool (similar to p2pool), or use a method of getwork where a miner can get a packet of many getworks at once in a condensed format.
Does proof of work on all those shares coming in require that little cpu? I've never run a pool so maybe I'm barking up the wrong tree entirely.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Miner support for >1 difficulty isn't looking so good: https://en.bitcoin.it/wiki/Getwork_support

But maybe it just needs updating?
Indeed. There's nothing like a change to drive development, is there?

edit: I updated the entries for cgminer and bfgminer since they support it
legendary
Activity: 1750
Merit: 1007
Just to repeat what was stated in the last thread about this:

Changing the difficulty does not change the frequency your miner will request work.  It will only reduce the frequency you send work back.  For pools, the difference in load here is minimal; it's much harder to send you work than it is to verify work that was sent back.


If the ASICs are real (I still highly doubt BFL will produce anything remotely close to their claims), the entire mining protocol will need an overhaul.  Clients will either have to generate work locally and send it to the pool (similar to p2pool), or use a method of getwork where a miner can get a packet of many getworks at once in a condensed format.
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
Miner support for >1 difficulty isn't looking so good: https://en.bitcoin.it/wiki/Getwork_support

But maybe it just needs updating?
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Gentlemen, I salute thee o\
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Increasing D for pooled share submissions would increase variance for miners.

The variance in time between share submissions at a constant hashrate will increase by the square of the ratio of the greater difficulty to the lesser one. Increase D from 1 to 10 and the variance of the time in between share submissions increases one hundred fold.


If true then the automatic scaling should be logarithmic rather than linear.

The expected number of hashes to find a D 1 share is 2^48 / as.numeric(0xffff), or approximately 2^32. Each hash has the same probability of producing a share, so the number of hashes needed follows a geometric distribution. With p = 1/(D * 2^32) as the per-hash probability of finding a difficulty D share, the variance of the geometric distribution is:

Code:
(1-p)/p^2

which is approximately (D * 2^32)^2, i.e. proportional to D^2. In the case of D 1, the variance is 1.844731e+19. For D 10, it's 1.844731e+21. The ratio (D 10)/(D 1) ~ 100.
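A quick numeric check of those figures, reading p as the per-hash probability 1/(D * 2^32):
Code:
def geometric_variance(difficulty):
    """Variance of the number of hashes needed to find a difficulty-D share."""
    p = 1.0 / (difficulty * 2 ** 32)   # per-hash probability of success
    return (1 - p) / p ** 2            # variance of a geometric distribution

v1, v10 = geometric_variance(1), geometric_variance(10)
print(v1)         # ~1.8447e+19 for D 1
print(v10)        # ~1.8447e+21 for D 10
print(v10 / v1)   # ~100, i.e. variance scales with D^2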

... and then Meni got there first. What he said about square roots rather than logs.


This bit was all wrong. The CDF of a share passing a particular difficulty is derived from 1/(uniform distribution CDF).
donator
Activity: 2058
Merit: 1054
Increasing D for pooled share submissions would increase variance for miners.

The variance in time between share submissions at a constant hashrate will increase by the square of the ratio of the greater difficulty to the lesser one. Increase D from 1 to 10 and the variance of the time in between share submissions increases one hundred fold.
If true then the automatic scaling should be logarithmic rather than linear.
If anything it should be square-root, definitely not logarithmic.

If we assume the miner has a target function with linear factors for expectation and variance, his optimal difficulty doesn't depend on his hashrate, only on his net worth. If we assume his net worth is proportional to his hashrate, then the optimal difficulty grows by the square root of the hashrate.
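Purely as an illustration of that square-root scaling (the reference hashrate and rounding are arbitrary assumptions, not anything a pool implements):
Code:
import math

def suggested_difficulty(hashrate_ghs, reference_ghs=1.0):
    """Scale share difficulty with the square root of hashrate, not linearly."""
    return max(1, round(math.sqrt(hashrate_ghs / reference_ghs)))

for ghs in (1, 4, 100, 10000):                   # 1 GH/s up to 10 TH/s
    print(ghs, "GH/s ->", suggested_difficulty(ghs))
# 1 -> 1, 4 -> 2, 100 -> 10, 10000 -> 100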
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Increasing D for pooled share submissions would increase variance for miners.

The variance in time between share submissions at a constant hashrate will increase by the square of the ratio of the greater difficulty to the lesser one. Increase D from 1 to 10 and the variance of the time in between share submissions increases one hundred fold.


If true then the automatic scaling should be logarithmic rather than linear.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Perhaps this is a good opportunity to review sections 7.5 and 7.6 of AoBPMRS. Having different difficulty shares for different miners is fairly straightforward from the reward method perspective.

Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before.
e.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
Stale rate is measured as a percentage, and as long as the percentage remains the same it doesn't matter what difficulty the shares are.

And, the share difficulty should have absolutely no effect on the stale rate.
Except that with higher difficulty it takes longer to find a share, thus the amount of work lost due to a stale increases ......
So no.

mathfail
Yep - OK he said % - I was thinking number of shares Tongue
I guess my wording was really bad Smiley

Edit: as long as the number of stales drops by the same ratio as the difficulty increases, then it will be ok Smiley
My reason for bringing this up is simply that different devices have different characteristics, so it will be interesting to see if the differences come into play with LPs and higher difficulty.

Edit2: cgminer reports numbers not % (and I don't look at the pool %) so I guess that's why I messed up.
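A worked example of why the percentage is the number that matters (hypothetical 2% stale rate, same hashrate in both cases):
Code:
def expected_stale_loss(d1_shares_worth_of_work, difficulty, stale_rate):
    """Work lost to stales, in difficulty-1 share equivalents."""
    shares_found = d1_shares_worth_of_work / difficulty   # fewer shares at higher difficulty
    return shares_found * stale_rate * difficulty         # but each stale costs 'difficulty'

print(expected_stale_loss(1000, 1, 0.02))    # 20.0 lost at D 1
print(expected_stale_loss(1000, 10, 0.02))   # 20.0 lost at D 10 - identical in expectation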
sr. member
Activity: 270
Merit: 250
Perhaps this is a good opportunity to review sections 7.5 and 7.6 of AoBPMRS. Having different difficulty shares for different miners is fairly straightforward from the reward method perspective.

Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before.
e.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
Stale rate is measured as a percentage, and as long as the percentage remains the same it doesn't matter what difficulty the shares are.

And, the share difficulty should have absolutely no effect on the stale rate.
Except that with higher difficulty it takes longer to find a share, thus the amount of work lost due to a stale increases ......
So no.

mathfail
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Perhaps this is a good opportunity to review sections 7.5 and 7.6 of AoBPMRS. Having different difficulty shares for different miners is fairly straightforward from the reward method perspective.

Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before.
e.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
Stale rate is measured as a percentage, and as long as the percentage remains the same it doesn't matter what difficulty the shares are.

And, the share difficulty should have absolutely no effect on the stale rate.
Except that with higher difficulty it takes longer to find a share, thus the amount of work lost due to a stale increases ......
So no.
donator
Activity: 2058
Merit: 1007
Poor impulse control.
Increasing D for pooled share submissions would increase variance for miners.

The variance in time between share submissions at a constant hashrate will increase by the square of the ratio of the greater difficulty to the lesser one. Increase D from 1 to 10 and the variance of the time between share submissions increases one hundredfold.

donator
Activity: 2058
Merit: 1054
Perhaps this is a good opportunity to review sections 7.5 and 7.6 of AoBPMRS. Having different difficulty shares for different miners is fairly straightforward from the reward method perspective.

Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before.
e.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
Stale rate is measured as a percentage, and as long as the percentage remains the same it doesn't matter what difficulty the shares are.

And, the share difficulty should have absolutely no effect on the stale rate.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
It's also worth mentioning this would make things very interesting for the proxy pools out there if they start passing work to pools set up for higher difficulty shares  Wink
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before.
e.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
Indeed, back to that chance debate I've seen many times before in different forms: whether the random nature of shares, scattered about and falling where they do relative to block changes, will even out compared to many, many small shares. Mathematically, to me it would seem to be exactly the same as current shares based on chance: it will fluctuate visibly more but even out long term to be identical.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Will be interesting to see the effect on stale shares - since each stale is worth n times what it was before.
e.g. if your difficulty goes up 10 times but your stale rate doesn't go down 10 times, you lose by having a higher difficulty.
vip
Activity: 980
Merit: 1001
We have been discussing higher diff shares going forward.
Initially we will offer miners a choice of 2 difficulties.
Over time we will code up some way to do dynamic difficulty as suggested, and ensure payouts work correctly with dynamic share difficulties.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
With the upcoming HUGE hashing hardware starting to hit, now would be a good time to consider supporting higher than 1 difficulty shares for bigger miners, which would allow pools to scale without as much increase in bandwidth and server resource requirements as the increase in hashrates. I'd suggest initially making an optional difficulty multiplier switch for workers on the website, which would scale with the miner's hashrate. Enabling it by default would surprise and confuse many miners, and some mining software may not support it, so they'd just get high rejects unexpectedly. As a rough guess, I'd recommend increasing difficulty by 1 for every 1 GH of hashing power. This would not dramatically change getwork rates, but it would change the share submission rate and the processing of those shares, which is bandwidth and CPU intensive. There would be issues with fluctuating hashrates and difficulty targets for miners sitting precisely on the 1 GH boundaries; this could be worked around by the user setting their own target, or by using some hysteresis for the change up and down of targets to avoid frequently flicking between difficulties.
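One way the hysteresis part could look, purely as a sketch (the 1 difficulty per 1 GH rule is from the suggestion above; the 20% dead band is an arbitrary assumption):
Code:
def next_difficulty(current_diff, hashrate_ghs, band=0.20):
    """Target ~1 difficulty per GH/s, but only move once the implied target is
    clearly outside a dead band, to avoid flicking between difficulties when a
    miner's hashrate hovers around a 1 GH boundary."""
    target = max(1, round(hashrate_ghs))
    if target > current_diff * (1 + band) or target < current_diff * (1 - band):
        return target
    return current_diff

diff = 1
for observed_ghs in (0.9, 1.1, 1.3, 2.4, 2.1, 1.9):   # hashrate samples over time
    diff = next_difficulty(diff, observed_ghs)
    print(observed_ghs, "GH/s ->", diff)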