BTC Guild is PPLNS. If I mined there for as many days as I've mined here, those 10 shifts would have been full. They would have rolled down maybe 4 shifts on THEIR pool, and I still would have been paid for the 6 other shifts below them. Also, shifts get longer (in wall-clock time) as fewer people mine on their pool, and shorter as more people mine on it.
I see no shifts here. Please explain.
PPLNS in its purest form pays a specific number of shares. p2pool, for example, uses share-based PPLNS, where a fixed number of shares is paid for each block: when a new share is submitted on p2pool, the oldest share is pushed off. Shifts are used to dramatically reduce the amount of data you need to store when using a very large N value. p2pool gets around that by adjusting the difficulty required to be part of the sharechain.
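The share-based FIFO window described above can be sketched roughly like this (an illustration of the idea only, not p2pool's actual code; the equal split assumes all shares in the window are the same difficulty):

```python
from collections import deque

def make_window(n):
    """A FIFO PPLNS window holding the last n shares."""
    return deque(maxlen=n)

def submit(window, miner):
    # Appending to a full deque pushes the oldest share off the other end,
    # exactly the "oldest share is pushed off" behaviour described above.
    window.append(miner)

def payout(window, reward):
    """Split `reward` equally over the shares currently in the window."""
    per_share = reward / len(window)
    totals = {}
    for miner in window:
        totals[miner] = totals.get(miner, 0.0) + per_share
    return totals
```

With a window of N=3 and submissions A, A, B, C, the first A has been pushed off, so a payout is split one share each to A, B and C.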
Would love to hear from kano exactly how he manages the accounting of a block payout with a 30-billion-entry list of shares. I would assume that in the background there's some kind of grouping of shares every minute [which in effect would be shifts, just not visible as such on the front end], rather than a list of 30 billion entries that is constantly being churned.
EDIT: It wouldn't actually be 30 billion entries, since there's no need to store [for example] a diff=128 submission 128 times. But it would still be a huge amount of churn.
EDIT2: As for your comparison: BTC Guild uses shifts to reduce database churn (no need to store a huge FIFO list of shares), and also to give clean auditing of how any specific block was paid, which users can verify.
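The point in the first EDIT above - that a diff=128 submission is stored once with a weight, not as 128 unit entries - can be sketched as (a minimal illustration, not BTC Guild's or anyone's actual schema):

```python
# One row per submitted share, weighted by its difficulty,
# instead of `diff` copies of a unit share.
shares = []

def record_share(miner, diff):
    shares.append((miner, diff))

def payout(reward):
    """Pay each miner in proportion to total recorded difficulty."""
    total = sum(d for _, d in shares)
    out = {}
    for miner, d in shares:
        out[miner] = out.get(miner, 0.0) + reward * d / total
    return out
```

A diff=128 share simply counts 128 times as much as a diff=1 share at payout time, so the stored list stays far smaller than the raw share count.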
Internally ckpool has workinfos - each with a workinfoid.
A workinfo is the template received from bitcoind.
ckpool gets a new template every 30s and, of course, on each network block change.
So most workinfos last 30s.
This effectively produces a shift of 30s - since a payout includes the full workinfos at the start and finish, it covers a small % more than the N used.
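The "full workinfo at the start and finish" point can be sketched as follows - a hypothetical helper, not ckpool code, assuming each workinfo carries the total difficulty submitted against it:

```python
def payout_range(workinfos, n):
    """workinfos: list of (workinfoid, total_diff), oldest to newest.
    Walk back from the newest workinfo until at least N difficulty is
    covered. Whole workinfos are included at the boundary, so the range
    covers a small percentage more than N."""
    covered = 0
    start = len(workinfos)
    while start > 0 and covered < n:
        start -= 1
        covered += workinfos[start][1]
    return workinfos[start][0], workinfos[-1][0], covered
```

Because the oldest workinfo in the range is included in full rather than split, `covered` ends up slightly above N.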
In ckdb I keep share summaries at this first level, in the PostgreSQL DB (and in RAM).
A share summary is per worker, per workinfoid.
A day's worth at the moment is somewhere around 2.5 million records.
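That first level of summarisation - one row per (worker, workinfoid) - amounts to something like the following (field names are illustrative, not ckdb's actual schema):

```python
from collections import defaultdict

def summarise(shares):
    """shares: iterable of (workinfoid, worker, diff) raw submissions.
    Collapse them into one summary per (worker, workinfoid),
    accumulating a share count and total difficulty."""
    summaries = defaultdict(lambda: [0, 0])
    for wid, worker, diff in shares:
        s = summaries[(worker, wid)]
        s[0] += 1     # number of shares
        s[1] += diff  # total difficulty
    return summaries
```

However many raw shares a worker submits against one workinfo, only one summary row per worker per workinfoid survives.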
I've let it run like that for quite a few months on another small pool and on this one.
However, the original database design includes the next level above that, which I called WorkMarkers.
Basically, a marker gives a begin+end range of workinfoids - more 'like' a shift.
That original design was to summarise share summaries so that there were only as many WorkMarkers as needed for payouts.
i.e. each block's (finish) workinfoid and each payout's (start) workinfoid would decide the markers.
This divides the number of records by about ... 1500 - i.e. less than 2000 a day, with a worker load of about 2000, on a pool that finds a block around every day or two.
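Deciding the markers from block and payout workinfoids, as described, could look roughly like this (a hypothetical sketch of the idea, not ckdb code):

```python
def make_markers(boundaries, first_wid, last_wid):
    """boundaries: workinfoids where a block was found or a payout
    window started. Each adjacent pair of boundary points becomes one
    WorkMarker: a begin..end range of workinfoids whose share
    summaries can be rolled up together."""
    points = sorted({first_wid, *boundaries, last_wid})
    return [(points[i], points[i + 1]) for i in range(len(points) - 1)]
```

With only block/payout boundaries as split points, a pool finding a block every day or two produces very few markers, which is why the record count drops by a factor of roughly 1500.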
I've still not completely implemented it - creating the WorkMarkers is a manual step at the moment - though it will be finished very soon, in a slightly different design:
However, given that I can use up to 40GB of RAM without too much trouble, and no doubt at some point the pool will have some blocks in the 800% region, it's too long to wait for a block or a payout before doing a summarisation.
So instead I've decided to extend the WorkMarker usage to also do shifts - still not decided yet, but either 50 or 100 workinfoids per shift.
i.e. ~25 minutes or ~50 minutes
Thus the daily storage would be around 100k or 50k records, independent of actual pool hash rate, but of course dependent upon the worker count.
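The rough arithmetic behind those figures, assuming ~30s per workinfo and the ~2000 worker load mentioned above:

```python
SECONDS_PER_WORKINFO = 30   # template refresh interval
WORKERS = 2000              # approximate worker load mentioned above

def daily_records(workinfos_per_shift):
    """Return (shift length in minutes, summary records per day)."""
    shift_seconds = workinfos_per_shift * SECONDS_PER_WORKINFO
    shifts_per_day = 86400 / shift_seconds
    # One summary record per worker per shift.
    return shift_seconds / 60, shifts_per_day * WORKERS

# 50 workinfos  -> 25-minute shifts, ~115k records/day (the "100k" figure)
# 100 workinfos -> 50-minute shifts, ~58k records/day (the "50k" figure)
```

Since shifts are counted in workinfoids rather than shares, the record count depends only on shift length and worker count, not on pool hash rate.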
If the pool starts to outgrow these sizes, I'll do one or both of two things:
Increase the shift size and/or summarise up to user level.
No doubt some time in the future this too will end up being too much data; however, of course, I don't need the full history stored in RAM forever, and probably not in the DB either - archiving will solve that.
... and having just written all this, I realised I had already written a lot of it before:
https://bitcointalksearch.org/topic/m.9417792