Finished a major overhaul of the getwork module to combat all the outages the pool has been experiencing. Getworks are now completely de-threaded and parallelized: the queue gets filled continuously by all the pools, in order of preference (no getworks are wasted, but more are obtained from higher-preference pools whenever possible). The quality-of-service checks have been changed accordingly, so Multipool should no longer manage to ban all the pools it mines from for insufficient responsiveness. As a final fallback, the pool now generates its own work à la solo mining. I haven't yet decided which reward system to use for the solo shares.
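For the curious, here is roughly the shape of the new module. This is only a minimal sketch under my own naming: fetch_getwork, make_local_work, the preference numbers and the queue size are all illustrative stand-ins, not the actual Multipool code.

```python
import asyncio
import itertools

_tiebreak = itertools.count()  # keeps queue entries comparable when preferences tie

async def fetch_getwork(pool):
    """Stand-in for one asynchronous getwork request to an upstream pool (hypothetical)."""
    raise NotImplementedError

def make_local_work():
    """Stand-in for building our own work à la solo mining (hypothetical)."""
    raise NotImplementedError

async def feeder(pool, preference, work_queue):
    """Keep the shared queue topped up from one upstream pool.

    Every pool feeds the same priority queue, so no getwork is wasted, but the
    preference key means work from higher-preference pools is handed out to
    miners first whenever it is available.
    """
    while True:
        work = await fetch_getwork(pool)
        await work_queue.put((preference, next(_tiebreak), work))

async def local_fallback(work_queue):
    """Final fallback: push locally generated work whenever the queue runs dry."""
    while True:
        if work_queue.empty():
            await work_queue.put((999, next(_tiebreak), make_local_work()))
        await asyncio.sleep(0.1)

async def run(pools):
    # Lower preference number = higher-priority pool.
    work_queue = asyncio.PriorityQueue(maxsize=500)
    await asyncio.gather(
        *(feeder(pool, pref, work_queue) for pref, pool in enumerate(pools)),
        local_fallback(work_queue),
    )
```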
All this effort spent managing workloads has unfortunately distracted me from the more profitable pool additions: more pool scraping. As it is, Multipool is being squeezed tight by the need to avoid overzealous automatic DoS defenses, and is really only mining at full strength from half of the pools in the current rotation. More pools in rotation will certainly help things move along. Efficiency has been somewhat lacking lately compared to what could be achieved.
You should target Continuum pool if all other pools are dry. It always has 100% efficiency like solo, without all the variance. It will also help reduce the variance of those who mine on it normally, further promoting fair scoring methods.
Continuum pool would definitely be a great addition!
Does the pool also implement the Lie-in-Wait attack, discussed for example here?
That's even too devious for my tastes. I suspect, though, that the window of opportunity is narrower than one might think. The shares go stale pretty quickly; I wouldn't want to hold on to one for longer than a minute.
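To put a rough number on that window, treating network block arrivals as a Poisson process with a ten-minute average interval (purely illustrative arithmetic, nothing to do with Multipool's code):

```python
import math

def stale_probability(hold_seconds, mean_block_interval=600):
    """Probability the withheld winning share goes stale while being held,
    assuming blocks arrive as a Poisson process with the given mean interval."""
    return 1 - math.exp(-hold_seconds / mean_block_interval)

print(f"{stale_probability(60):.1%}")   # ~9.5% risk of staleness after one minute
print(f"{stale_probability(300):.1%}")  # ~39% risk after five minutes
```

So even a one-minute hold carries a nearly one-in-ten chance of losing the block entirely.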
Sure enough, that eligius-eu round ended 40 minutes after it started, giving a 4.498 efficiency. I only had 36 shares sent into that eligius round, while in the same time span I sent 98 to btcmine. I'm not sure this works as well as it is supposed to. Or am I getting it totally wrong?
No, you are right. The reason is that Multipool has been constrained by the rate of getwork requests it can wrest from the pools. At times, some of the pools are under a lot of load and have latencies above 0.5 s, ten times the normal latency. With the new parallelized getwork module, the pool should now be able to grab all the shares possible.
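To give a feel for why the latency matters, here is some back-of-the-envelope arithmetic; the numbers are illustrative, not measurements from the backend:

```python
def max_share_rate(getwork_latency_s, parallel_requests=1):
    """Rough upper bound on difficulty-1 shares per minute toward one pool.

    One getwork covers a 2^32 nonce range and a difficulty-1 share turns up
    about once per 2^32 hashes, so each getwork yields roughly one share on
    average; a serial requester is limited to about one share per round-trip.
    """
    return 60.0 / getwork_latency_s * parallel_requests

print(max_share_rate(0.5))     # serial requests at 0.5 s latency: ~120 shares/min ceiling
print(max_share_rate(0.5, 8))  # eight requests in flight: ~960 shares/min ceiling
```

Parallelizing the requests is what lifts that ceiling.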
As for the fees, honestly, they should really be calculated based on the total efficiency, not on individual rounds; I don't want to just skim off natural variance. The difficulty with that was that since total efficiency varies, the total collected fee could go up or down at any time, and I didn't want to deal with that. Once I have the time, I'll whip up a better fee calculator.
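To illustrate the difference between the two schemes, here is a toy comparison. The 5% rate, the per-round mechanism and the round numbers are assumptions made up for illustration, not how the fee code actually works:

```python
FEE_RATE = 0.05  # hypothetical cut of the excess efficiency

def per_round_fee(rounds):
    """Per-round scheme: take a cut of every round that beat expectation.

    Losing rounds are ignored, so this also skims plain variance."""
    return sum(FEE_RATE * (r["earned"] - r["expected"])
               for r in rounds if r["earned"] > r["expected"])

def total_efficiency_fee(rounds):
    """Total-efficiency scheme: fee on the overall excess only, so natural
    variance cancels out across rounds, but the total can shrink over time."""
    earned = sum(r["earned"] for r in rounds)
    expected = sum(r["expected"] for r in rounds)
    return max(0.0, FEE_RATE * (earned - expected))

rounds = [
    {"earned": 1.2, "expected": 1.0},  # lucky round
    {"earned": 0.7, "expected": 1.0},  # unlucky round
]
print(round(per_round_fee(rounds), 4))         # 0.01: skims the lucky round only
print(round(total_efficiency_fee(rounds), 4))  # 0.0: the variance washes out overall
```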