Author

Topic: can pools actually handle the increased shares from ASICs? (Read 1280 times)

sr. member
Activity: 420
Merit: 250
But, if the people above me are referring to an automatic difficulty adjustment for the shares themselves within a pool, I would assume you can just set a target of 1000 shares per minute or something and it will raise the difficulty 10x if it has to in order to keep the share rate down.
Right now there are some pools getting ready for stratum (which is basically a better protocol than getwork).

There are also some pools where you can adjust the difficulty manually.
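
For concreteness, here is a rough sketch of how a pool can push a new difficulty to a miner over stratum: the protocol has a "mining.set_difficulty" notification, and every share submitted afterwards has to meet that difficulty. The JSON below is illustrative only, not a capture from any particular pool, and the value 16 is an arbitrary example.

Code:
import json

# Stratum pools adjust a miner's share difficulty by pushing a
# "mining.set_difficulty" notification; the miner applies it to the
# work it receives after the next "mining.notify".
set_difficulty = {"id": None, "method": "mining.set_difficulty", "params": [16]}
print(json.dumps(set_difficulty))
# -> {"id": null, "method": "mining.set_difficulty", "params": [16]}
# Every share submitted after this must meet difficulty 16 instead of 1,
# cutting the submission rate (and the pool's traffic) by roughly 16x.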

sr. member
Activity: 392
Merit: 250
I don't know what half the technology you're talking about is, probably because I don't run a pool :P but it sounds like you're saying the pools have their own independent difficulty adjuster to keep the share rate pretty static, and the only problem is new miners starting at difficulty 1 instead of something like 100,000.  Sounds stable enough.  Do all pools use such load-balancing measures?

Btw yeah, I would assume all pools have to verify a submitted share so they can accept or reject it as a correct answer.  So if they send out a block calculation with, say, 1000x easier difficulty than the real block but with the same base data, once someone's mining client does 10 billion calculations and finds a sufficiently low hash, the pool server won't just take their word for it.  It has to re-run that single hash to verify that the result is low enough.  So my GPU might do 10 billion hashes in a minute or whatever, and then the server only has to verify the one I said was correct.  A Pentium 2 could do that in 1 ms :P but the problem is, what if it's one server and a million shares are coming in at a time?  Then you're back up to somewhat big numbers.  A million verified shares per second is only 1 MH/s, which any server chip can manage, but this isn't a bitstream.  It's a million individual requests coming over the internet that the NIC has to translate, load into memory, etc., so you're not going to see the same performance as a desktop running the operation solo in a repetitive fashion.
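
For what it's worth, the per-share check being described really is just one double-SHA256. Below is a minimal sketch under that assumption; the helper names are made up, and the coinbase/merkle bookkeeping a real pool would also do is omitted.

Code:
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's hash function: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def share_is_valid(header_without_nonce: bytes, nonce: int, share_target: int) -> bool:
    """Rebuild the 80-byte block header with the miner's nonce and check it
    against the (easy) share target. One hash, microseconds of CPU time."""
    header = header_without_nonce + nonce.to_bytes(4, "little")
    # The digest is compared to the target as a 256-bit little-endian integer.
    hash_value = int.from_bytes(double_sha256(header), "little")
    return hash_value <= share_target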

But, if the people above me are referring to an automatic difficulty adjustment for the shares themselves within a pool, I would assume you can just set a target of 1000 shares per minute or something and it will raise the difficulty 10x if it has to in order to keep the share rate down.
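
A rough sketch of what that automatic adjustment ("vardiff") could look like, assuming a made-up target of 20 shares per minute per miner and a 5-minute retarget window; none of these names or numbers come from a real pool's code.

Code:
TARGET_SHARES_PER_MIN = 20    # desired share rate per miner (assumed)
RETARGET_WINDOW_MIN = 5       # how often the pool re-evaluates (assumed)
MIN_DIFF, MAX_DIFF = 1, 2**32

def retarget(current_diff, shares_in_window):
    """Scale a miner's share difficulty so its observed share rate
    moves toward the target rate."""
    observed_rate = shares_in_window / RETARGET_WINDOW_MIN
    if observed_rate == 0:
        return max(MIN_DIFF, current_diff // 2)   # miner too slow: ease off
    new_diff = current_diff * observed_rate / TARGET_SHARES_PER_MIN
    return int(min(MAX_DIFF, max(MIN_DIFF, new_diff)))

# Example: an ASIC joins at diff 1 and dumps 20,000 shares in 5 minutes,
# so its difficulty jumps to 200 and the flood stops.
print(retarget(1, 20_000))   # -> 200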
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Moving pools to stratum with variable difficulty will use less bandwidth and CPU for the pool, no matter what the hashrate is, compared to current hardware and protocols. In fact, with stratum and a properly tuned variable-difficulty server, the amount of traffic is static regardless of hashrate. The only real issue is what happens at the start of mining, as miners currently start at diff 1 and then increase. I doubt that will be the case long term, as the software will report its expected hashrate and/or a desired starting difficulty.
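
A quick back-of-the-envelope illustration of why traffic stays flat under vardiff: the expected share rate is hashrate divided by (difficulty x 2^32), so if the pool scales difficulty roughly with hashrate, the share rate, and hence the traffic, does not change. The hashrates and difficulties below are made-up examples.

Code:
def expected_shares_per_sec(hashrate_hps: float, share_diff: float) -> float:
    # A difficulty-1 share takes ~2^32 hashes on average; a diff-D share ~D * 2^32.
    return hashrate_hps / (share_diff * 2**32)

# GPU, small ASIC, large ASIC, with difficulty scaled roughly alongside hashrate
for hashrate, diff in [(400e6, 1), (60e9, 150), (1e12, 2500)]:
    print(f"{hashrate:.0f} H/s at diff {diff}: "
          f"{expected_shares_per_sec(hashrate, diff):.3f} shares/s")
# All three print ~0.093 shares/s: same traffic per miner regardless of hashrate.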
hero member
Activity: 518
Merit: 500
Manateeeeeeees
DrHaribo at Bitminter is implementing stratum, and I have a feeling it will be done before ASICs hit.  There will be some rather large players (fefox, avidreader, gigavps) using the pool (unless they move or go solo), so it will have to stand up to at least 8-10 TH/s almost immediately.
legendary
Activity: 952
Merit: 1000
IIRC, both the stratum and GBT protocols were designed to cut a pool's bandwidth so it doesn't have to deal with 100x the number of getworks and submitted shares.

As far as actually verifying a submitted share, IDK. Are you saying a pool rehashes every submitted share? Because that doesn't sound right... ???
sr. member
Activity: 392
Merit: 250
It'd suck, and only be slightly funny, if half the pools out there went offline after ASICs launched because everyone collectively sent 10x the shares all of a sudden :P The way I understand it, a pool sends out a proof-of-work job that's the same as the real block but with a much lower difficulty.  The client solves it and submits the "answer", and the server re-runs that one calculation to make sure the answer is correct, then accepts it.  If the hash also happened to be low enough for the block itself, the pool submits it and gets the 50 BTC reward.  Yay!
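
That two-level check can be written down in a few lines. A sketch with made-up target values: every valid share beats the easy share target, and occasionally one also beats the much harder network target, in which case the pool submits it as a real block.

Code:
def classify(hash_value: int, share_target: int, network_target: int) -> str:
    """share_target is the pool's easy target; network_target is the real
    block target (network_target < share_target, lower = harder)."""
    if hash_value > share_target:
        return "reject"           # not even a valid share
    if hash_value <= network_target:
        return "block found"      # pool submits it and collects the block reward
    return "share accepted"       # counts toward the miner's payout only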

Except what's the CPU and RAM usage going to look like at 10x the shares?  I assume if my i5 can do something like 12 MH/s, then doing one verification hash wouldn't take long at all, but some pools get many thousands of shares a minute, and I'm more worried about the overhead and network traffic.  Are the pools in any trouble if they're running on crappy servers, or is the verification so lightweight that nobody needs to be concerned?
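
For a sense of scale, using the (guessed) numbers from the post: even many thousands of shares a minute is only a few hundred verification hashes per second, a vanishing fraction of what one CPU can hash, so the real load is in handling the requests, not the hashing.

Code:
shares_per_minute = 10_000               # a busy pool, assumed
hashes_needed_per_sec = shares_per_minute / 60
cpu_hashrate = 12e6                      # ~12 MH/s, the i5 figure from the post
print(f"{hashes_needed_per_sec:.0f} verification hashes/s = "
      f"{hashes_needed_per_sec / cpu_hashrate:.6%} of one CPU's hashing capacity")
# -> 167 verification hashes/s = 0.001389% of one CPU's hashing capacity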

Or is there some manual or automatic adjustment to share difficulty that will make shares themselves harder to solve and keep the communication and verification from getting too intense?