Payout 391442 sent
7ccc651f79b018634b04831f9a0e46bf23ef67dc4634f080caa6f29e9f824cb0
and confirmed
--
Restart itself takes just seconds, but ckdb reloads everything and processes all the data for a few minutes after starting, before the data becomes available on the website for users.
Ahh I see. So the 20 minutes Kano talks about is the time it took for the system to get back up to speed as far as the web front end goes. But all the while it's processing shares regardless of how the front end looks, question marks and all. Thanks again for clearing that up. I'm going to attempt an install of his open source pool software for my own solo pool. I currently run MPOS with node.js for stratum.
Best Regards
d57heinz
tl;dr follows
This is commented in various places in ckdb.c
The PostgreSQL database is a permanent store of everything; however, share information is only updated there after the end of each shift.
ckdb doesn't read PostgreSQL at any time except during a restart, since ckdb itself is "the database"
ckpool sends everything to ckdb (of course) but also logs everything it does to hourly log files
There's also a guaranteed-unique set of 2 sequence numbers and 2 id numbers on every message; these are verified and checked for missing numbers ... ckdb reports it 'very loudly' if any go missing, and there's a ckdb command to check the status of that at any time as well.
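Just to illustrate the idea (this is not the actual ckdb code - the names and message layout here are made up), a missing-number check on a single sequence number can be as simple as remembering the last one seen and shouting when an incoming message skips ahead:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

struct seqtrack {
	uint64_t last_seq;	/* last sequence number processed */
	bool seen_any;		/* false until the first message arrives */
};

/* Returns false (and complains loudly) if numbers were skipped */
static bool seq_check(struct seqtrack *st, uint64_t seq)
{
	bool ok = true;

	if (st->seen_any && seq > st->last_seq + 1) {
		ok = false;
		fprintf(stderr, "SEQ ERROR: %llu number(s) missing before %llu\n",
			(unsigned long long)(seq - st->last_seq - 1),
			(unsigned long long)seq);
		/* the real code also keeps a record so the status can be
		 * queried later via a ckdb command */
	}
	st->last_seq = seq;
	st->seen_any = true;
	return ok;
}

int main(void)
{
	struct seqtrack st = { 0, false };
	uint64_t incoming[] = { 1, 2, 3, 6 };	/* 4 and 5 missing */

	for (size_t i = 0; i < sizeof(incoming) / sizeof(incoming[0]); i++)
		seq_check(&st, incoming[i]);
	return 0;
}

ckdb tracks two sequence numbers and two id numbers per message rather than one, but the gap-detection principle is the same.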
When I stop ckdb, it simply closes down and exits (which actually takes a few seconds, 6s this last time)
ckpool then starts complaining about "Where's ckdb?!"
Restarting ckdb opens its connection to ckpool and starts queueing all incoming messages (ckpool is happy again), then reloads the whole contents of the database into RAM (which in this last case took 6m 34.569s to load everything)
Next it redoes what it did from that reloaded db point onwards, by finding the appropriate ckpool log file and reading from that one forward to the end of the log files (which are of course also growing)
During this process it also truncates the queued data if it starts to overlap.
Once it's finished the log file reload (took 13m 19.957s to process almost 3hrs of log files), it then processes the queue.
This is one bit I need to look into more, because the first file took the expected 3.5min, but the 2nd file blew out to 7.5min, and the 3rd, 33%-full, file took 2.5min (i.e. the same ratio as the 2nd one)
The queue took 0m 32.041s to get back down to zero, and then the restart was complete.
This process simply puts ckdb into the state it should be 'now'
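To summarise the order of events, here's a rough, stripped-down sketch of that restart sequence - the function names, the stub bodies and the sequence-number handling are illustrative only, not what ckdb.c actually does:

#include <stdio.h>
#include <stdint.h>

/* 1) open the ckpool connection and queue everything that arrives */
static void start_queueing(void)
{
	puts("queueing incoming ckpool messages");
}

/* 2) reload the permanent PostgreSQL store into RAM; return the point
 *    (here pretended to be a sequence number) the db was up to */
static uint64_t reload_database(void)
{
	puts("reloading the database into ram");
	return 1000;
}

/* 3) replay the hourly ckpool log files from that point forward to the
 *    end of the (still growing) log files */
static uint64_t replay_logs(uint64_t from)
{
	printf("replaying log files from %llu onwards\n",
	       (unsigned long long)from);
	return from + 5000;	/* pretend the logs carried us this far */
}

/* 4) drain the queue, truncating anything the log replay already covered */
static void drain_queue(uint64_t already_done)
{
	printf("draining queue, dropping anything <= %llu\n",
	       (unsigned long long)already_done);
}

int main(void)
{
	start_queueing();
	uint64_t db_point  = reload_database();
	uint64_t log_point = replay_logs(db_point);
	drain_queue(log_point);
	puts("restart complete - ckdb is 'now'");
	return 0;
}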
One point I will need to be more diligent about when I do restarts is to time them to just after the shift database update.
It looks like I restarted just before a shift end this time, so it had to reload a full extra file (the first 3.5min), and that seems to have caused the slower reload of the 2nd and 3rd files (it's usually only 2 files)
The shift end point is somewhat variable, and it's another 13-15min after that before it updates the database.
So making sure the restart is just after a db update will of course mean the reload log file start point is a full shift further forward.