For my Sanctuary, I added this cron job to stop and reindex every 3 hours:
crontab -e
0 */3 * * * /home/biblepay/src/biblepay-cli stop; sleep 30; /home/biblepay/src/biblepayd -reindex > /dev/null 2>&1
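One caveat with chaining the stop and the restart on a single cron line: `biblepay-cli stop` returns before the daemon has actually shut down, so the `-reindex` start can fail on the data-directory lock, and a fixed sleep may not always be long enough. A sketch of a wrapper that polls until the old process is really gone (paths taken from the cron line above; the script name and the 120-second timeout are my assumptions):

```shell
#!/bin/sh
# reindex.sh -- stop the daemon, wait for it to actually exit, then restart
# with -reindex. Paths follow the cron line above; adjust for your install.
CLI=/home/biblepay/src/biblepay-cli
DAEMON=/home/biblepay/src/biblepayd

# Poll until process $1 is gone, for at most $2 seconds.
# Succeeds only if the process has exited within the timeout.
wait_for_exit() {
    pid=$1
    timeout=$2
    while [ "$timeout" -gt 0 ] && kill -0 "$pid" 2>/dev/null; do
        sleep 1
        timeout=$((timeout - 1))
    done
    ! kill -0 "$pid" 2>/dev/null
}

main() {
    pid=$(pidof biblepayd)
    "$CLI" stop
    if [ -n "$pid" ]; then
        wait_for_exit "$pid" 120 || { echo "biblepayd did not stop in time" >&2; exit 1; }
    fi
    "$DAEMON" -reindex > /dev/null 2>&1
}

# Only run when invoked with "run", so the functions can be sourced on their own.
if [ "${1:-}" = "run" ]; then
    main
fi
```

The cron entry would then just be `0 */3 * * * /home/biblepay/src/reindex.sh run`.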
And I enabled shrinkdebuglog in the config:
cd ~/.biblepaycore && vi biblepay.conf
shrinkdebuglog=1
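For context on what this setting does (an assumption based on Bitcoin-derived clients generally; worth verifying against BiblePay's source): shrinkdebuglog trims debug.log when the daemon starts, keeping the most recent portion rather than wiping the whole file, so fresh bug traces usually survive a restart.

```ini
# ~/.biblepaycore/biblepay.conf
shrinkdebuglog=1   # trim debug.log at daemon startup, keeping the most recent entries
```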
I'm not sure reindexing every few hours is the best strategy, but I think it will work. Is it bad to be reindexing all the time?
I also feel a little bad about limiting the log file, because any bugs recorded in the log may get erased.
Curious what you guys think, and what strategies you use to make sure your sanctuary is always running.
Sounds like overkill for something that happens once every few months.
It's not that reindexing is bad for the disk; it's something you frankly should never need to do unless you were forked after a mandatory.
In my case I have a reboot.sh file that reboots all my sancs, and an upgrade.sh (that does the git pull origin master && make). I just run the script once every few months,
if more than 10% of my sancs die. In general I only lose about one node every 30 days, so that has worked for me, and I don't see a need to change it. I don't even restart the individual nodes; they keep running without a monitor script. I'm using the $5 Vultr Ubuntu 64-bit instance.
Maybe you are reacting to the high-logging issue, which happens about once per month, and treating it as a regular occurrence. On those days, I kill my debug.log while the node is hot, and it keeps running without writing to debug.log.
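One detail worth knowing when killing a live log: removing an open debug.log with rm doesn't free the disk space until the daemon closes the file (which matches the node continuing to run without writing to it). Truncating the file in place frees the space immediately, and since Bitcoin-derived daemons generally open debug.log in append mode, they keep logging into the now-empty file. A minimal sketch (the function name is mine):

```shell
# Zero out a live log file in place. Unlike rm, this reclaims the disk
# space immediately, and a daemon that opened the log in append mode
# simply keeps logging into the now-empty file.
truncate_log() {
    : > "$1"    # POSIX-portable truncate-to-zero
}

# Example: truncate_log ~/.biblepaycore/debug.log
```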
EDIT: BTW, I run the script from my first sanc, and it SSHes the commands to the other nodes (I don't log in to my other sancs at all for upgrades or reboots).
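The run-everything-from-one-sanc setup described above can be sketched roughly like this. The host names, the biblepay user, and the script name are placeholders, and key-based SSH auth is assumed so no passwords are typed:

```shell
#!/bin/sh
# broadcast.sh -- run one command on every other sanctuary over SSH.
# Usage: ./broadcast.sh 'cd ~/biblepay/src && git pull origin master && make'
# Hypothetical host list; replace with your own nodes.
NODES="sanc2.example.com sanc3.example.com sanc4.example.com"

broadcast() {
    cmd=$1
    for host in $NODES; do
        echo "== $host =="
        # BatchMode=yes fails fast instead of prompting for a password,
        # so a dead key or host doesn't hang the whole run.
        ssh -o BatchMode=yes "biblepay@$host" "$cmd" || echo "failed on $host" >&2
    done
}

if [ $# -gt 0 ]; then
    broadcast "$*"
fi
```

The same loop works for a reboot.sh (`broadcast 'sudo reboot'`) or an upgrade pass, so one terminal session drives every node.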
Woah your setup is awesome! That is really cool!
I am such a newbie in comparison:
I manually connect to each sanctuary with PuTTY, using a username/password,
and type or copy-paste the commands.
I was unlucky 1-2 days ago; my sanctuaries went EXPIRED.
The daemons were running, but they were all stuck at the same block
while the latest block was further ahead.
It sounded like others experienced it too; not sure what happened.
But yeah, typically some of my sanctuaries have issues every now and then.
I like your SSH scripting; I am going to try to use that now!