Hey fullzero, I have a question:
Without a doubt my biggest problem right now is that when my miner crashes, it takes the whole rig down with it: everything gets stuck, SSH barely works, the average system load jumps to 14.5(!!), and Xorg takes up 100% of the CPU. It's so bad that none of the standard reboot commands work - they just do nothing. The only thing that actually reboots the rig in this state is "echo b > /proc/sysrq-trigger", so I've set up a script that checks the average system load and, if it's over 2, uses that command to reboot. It works, but I don't like this "solution": yesterday, after a reboot, nvOC got corrupted somehow, I lost my customized oneBash, and the whole file system became read-only (thankfully I had a oneBash backup that was only a few days behind).
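For reference, here is roughly what that load-check script looks like - a minimal sketch only (the 30-second interval and the log path are just examples; it must run as root to be able to write to /proc/sysrq-trigger):
#!/bin/bash
# Sketch of the load watchdog described above: if the 1-minute load
# average goes over 2, force an immediate reboot via SysRq.
# Must run as root to write to /proc/sysrq-trigger.
while sleep 30
do
if awk '{exit !($1 > 2)}' /proc/loadavg
then
echo "$(date) - load too high, forcing reboot" >> /home/m1/loadwatch.log
echo b > /proc/sysrq-trigger
fi
done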
So the question is: what can I do to relieve this Xorg error? I run a 7-card rig and never plan on going for a higher number. What can I do with Xorg that would fix this?
Thanks.
@ tempgoga
It seems that whenever a soft crash occurs, most of the cards drop to zero, so while the display/keyboard is unresponsive you can catch the soft crash from nvidia-smi. The script below checks card utilization; if it drops below 90%, it counts down for a minute, and if mining hasn't resumed, it reboots the system.
This seems to have worked at least once in my case (I only got one soft crash this weekend) and the system recovered as expected.
The threshold values work for my setup, but others may find different values optimal.
Also, if anyone knows a way to iterate the "if &&" statements: we can get the card count from "cards=$(nvidia-smi -L | wc -l); echo $cards", but the way below also works, with manual editing to adjust the watchdog for the number of cards in your individual system. (An untested loop-based sketch follows the script below.)
___________
#!/bin/bash
#m1
threshold=90
i=12
while sleep 5
do number=$(nvidia-smi |grep % |awk '{print $13}' |tr -d %)
set -- $number
echo -e "$@"
# The "if and" statements below need to be manually adjusted to match the number of cards in your system
# If you have 5 cards, leave as is; for a different number of cards, remove or add && statements as needed, as in the commented example below
if [[ "$1" -gt "$threshold" ]] && \
[[ "$2" -gt "$threshold" ]] && \
[[ "$3" -gt "$threshold" ]] && \
[[ "$4" -gt "$threshold" ]] && \
[[ "$5" -gt "$threshold" ]]
# && \
# [[ "$6" -gt "$threshold" ]]
then i=12
echo OK
else echo $((i--))
fi
if [ $i -le 0 ]
then echo "$(date) REBOOT due to soft crash" >>~/watchdog.log
sleep 5
sudo shutdown -r now
fi
done
___________
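Edit: re my own question about iterating the "if &&" statements - here is an untested sketch that loops over however many utilization readings nvidia-smi returns, so the chain no longer needs manual editing for the card count:
___________
#!/bin/bash
#m1
# Untested sketch: same watchdog logic as above, but the per-card
# check is a loop, so it adapts to any number of cards.
threshold=90
i=12
while sleep 5
do number=$(nvidia-smi |grep % |awk '{print $13}' |tr -d %)
echo -e $number
all_ok=1
for util in $number
do
if [ "$util" -le "$threshold" ]
then all_ok=0
fi
done
if [ "$all_ok" -eq 1 ]
then i=12
echo OK
else echo $((i--))
fi
if [ $i -le 0 ]
then echo "$(date) REBOOT due to soft crash" >>~/watchdog.log
sudo shutdown -r now
fi
done
___________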
Hey, that's funny - I just made a script doing something similar, although it checks the power draw.
Here it is:
#!/bin/bash
# Miner restart script V001
# By Maxximus007
# for nvOC by fullzero
#
# POWERLIMIT MUST BE SET IN oneBash
#########################
### BELOW CODE, NO NEED FOR EDITING
#########################
echo "$(date) - Starting miner restart script." | tee -a ${LOG_FILE}
# Creating a log file to record restarts
LOG_FILE="/home/m1/restartlog.txt"
if [ ! -e "$LOG_FILE" ] ; then
touch "$LOG_FILE"
fi
while true
do
sleep 60
GPUS=$(nvidia-smi --query-gpu=count --format=csv,noheader,nounits | tail -1)
gpu=0
COUNT_LOW_POWER=0
while [ $gpu -lt $GPUS ]
do
{ IFS=', ' read POWERDRAW POWERLIMIT; } < <( nvidia-smi -i $gpu --query-gpu=power.draw,power.limit --format=csv,noheader,nounits)
let POWER_DIFF=$( printf "%.0f" $POWERLIMIT )-$( printf "%.0f" $POWERDRAW )
# If the current draw is more than 30 watts below the limit, count the card:
if [ "$POWER_DIFF" -gt "30" ]
then
let COUNT_LOW_POWER=COUNT_LOW_POWER+1
fi
let gpu=gpu+1
done
if [ $COUNT_LOW_POWER -eq $GPUS ]
then
echo "$(date) - Power draw is too low: kill miner and oneBash" | tee -a ${LOG_FILE}
# If miner runs in screen 'miner' kill the screen
screen -X -S miner kill
# Best to restart oneBash - settings might be adjusted already
kill $(ps -ef | awk '$NF~"oneBash" {print $2}')
else
echo "$(date) - All good! Will check again in 60 seconds"
fi
done
You can combine the above with your code, and find the utilization like this:
nvidia-smi -i 1 --query-gpu=utilization.gpu --format=csv,noheader,nounits
You have to iterate the GPU index, starting at 0, to get them all
Okay I've combined the two, perhaps this will work for most of us:
#!/bin/bash
# Miner restart script V002
# By Maxximus007 && IAmNotAJeep
# for nvOC by fullzero
#
#########################
### BELOW CODE, NO NEED FOR EDITING
#########################
echo "$(date) - Starting miner restart script." | tee -a ${LOG_FILE}
# Creating a log file to record restarts
LOG_FILE="/home/m1/restartlog.txt"
if [ ! -e "$LOG_FILE" ] ; then
touch "$LOG_FILE"
fi
MIN_UTIL=90
RESTART=0
while true
do
sleep 60
GPUS=$(nvidia-smi --query-gpu=count --format=csv,noheader,nounits | tail -1)
gpu=0
COUNT=0
while [ $gpu -lt $GPUS ]
do
{ IFS=', ' read UTIL; } < <( nvidia-smi -i $gpu --query-gpu=utilization.gpu --format=csv,noheader,nounits)
let UTILIZATION=$( printf "%.0f" $UTIL )
# If the current utilization is lower than the limit, count the card:
if [ $UTILIZATION -lt $MIN_UTIL ]
then
let COUNT=COUNT+1
fi
let gpu=gpu+1
done
if [ $COUNT -eq $GPUS ]
then
if [ $RESTART -gt 1 ]
then
echo "$(date) - Utilization is too low: reviving did not work so restarting system" | tee -a ${LOG_FILE}
sudo shutdown -r now
fi
echo "$(date) - Utilization is too low: kill miner and oneBash" | tee -a ${LOG_FILE}
# If miner runs in screen 'miner' kill the screen
screen -X -S miner kill
# Best to restart oneBash - settings might be adjusted already
kill $(ps -ef | awk '$NF~"oneBash" {print $2}')
let RESTART=RESTART+1
else
echo "$(date) - All good! Will check again in 60 seconds"
fi
done
Pretty cool! I'll try it tonight; let's hope this puts the soft-crash issues behind us.
I will try this out as well; good work.
@ Maxximus007
Thanks for putting these together, great collab!
I'm not a bash expert, so maybe I'm reading this wrong, but here are some thoughts.
The combined code seems to evaluate each GPU individually for the fault condition to be met, which means that if one card fails and you have, say, 5 other cards working, it keeps going until all the cards give reduced output, since every one of them has to fail individually to increment the counter. So if 5/6 fail we keep going? (Again, I'm just tracing it in my head, so maybe I'm reading it wrong.)
The way I was thinking about it is that I wanted all the cards to work above 90% efficiency and to reboot as soon as any card strays beyond the threshold - this is why I did the "if and" statement and didn't iterate through "if" statements alone (I didn't know how to iterate "if and" based on an unknown number of cards lol). I had a version giving 6xOK and such, but I think it's more efficient to just get 1xOK if ALL cards meet the 90% criteria, start the countdown as soon as anything is out of norm, and flush the counter if the miner recovers. I observed a number of these conditions with Claymore where it recovers half the time, but then eventually craps out and the script kicks in. I haven't seen it on my Genoil rig yet, since my other script has kept it in check without any soft crash, going on day 3 now.
A thought about using power draw as the threshold measure - it is power-limit/card specific, and I guess people would need to tune their power threshold to their power limit, so I agree it's best to use GPU util. (My cards are at an 82W limit, for example.)
Thoughts?
The code checks each card individually; at times (with Claymore, not Genoil) I've seen util (or power draw) drop, maybe even below 90, for a few seconds. In order not to generate too many restarts, I check all cards. We can lower this, or make it so that each of us can decide when it should reboot.
I've combined the restart/reboot so that the first attempt is to restart the miner. If that doesn't work, we reboot the machine. We might want to reset the reboot counter after a while, so we don't lose time with a full reboot.
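For example, something like this in the all-good branch of V002 (untested) would clear the counter once a restart has worked:
else
echo "$(date) - All good! Will check again in 60 seconds"
# Untested: a clean check means the last miner restart worked,
# so clear the strike counter
RESTART=0
fi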
In the first code I checked power draw -> if it is 30 watts less than the power limit, there might be something wrong. Idling cards draw around 10 watts, so I think that works for everyone. We can combine this with util if that helps.
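Combining the two inside the per-GPU loop could look something like this (untested sketch, using the same query-gpu flags as above):
# Untested sketch: read draw, limit and utilization in one call and
# flag the card if either the power gap or the utilization looks wrong
{ IFS=', ' read POWERDRAW POWERLIMIT UTIL; } < <( nvidia-smi -i $gpu --query-gpu=power.draw,power.limit,utilization.gpu --format=csv,noheader,nounits)
let POWER_DIFF=$( printf "%.0f" $POWERLIMIT )-$( printf "%.0f" $POWERDRAW )
if [ "$POWER_DIFF" -gt "30" ] || [ "$( printf "%.0f" $UTIL )" -lt "$MIN_UTIL" ]
then
let COUNT=COUNT+1
fi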
So sure, we can make it more advanced; we just have to determine the right parameters. I hope others can let us know under what circumstances they see hanging miners. Just one card, or more, or everything? Is util back to zero, or hanging at 100%?
OK, thanks for the clarification - it's really neat and rewarding to see different approaches to this problem.
Here is why I coded it to test that all the cards meet the threshold as one with "if &&". As an example I'll use an event from my test rig overnight: one card dropped, the "if &&" script waited one minute for Claymore to recover, then booted the system, and that was that.
Total downtime: 2 minutes; if you add the 1 minute of reduced capacity while waiting for the miner to right itself, 3 minutes of impact.
The "if &&" code does test for a graceful miner recovery - by continuing to test the cards for above-threshold utilization for 60 seconds after it detects a fault.
If the miner recovers but just sits there (I saw both Claymore and Genoil do exactly that a number of times), that's not good enough and the system gets a boot.
My other miner restart script did not handle this exact case, and once every few days I would find the miner sitting pretty and blowing bubbles, mining on only one or two cards, until I noticed. It did not "see" all the cards anymore, but it did see some, so it thought it had "recovered".
If the miner recovers properly, all cards need to hit above threshold and we can flush the counter and life goes on.
On my test rig, graceful miner recovery occurred 5-6 times in the past 24 hours without prompting a restart - which is preferable to either running at reduced capacity or 5-6 reboots (IMHO).
In contrast - if we test each card independently and increment the error counter one by one until it reaches the number of GPUs, then, depending on the number of cards in the system, it could take a long time for all of them to fail - the more cards, the more time to fail (right? am I misunderstanding anything?). So the same event would unfold differently: the test rig would continue at reduced capacity until COUNT reaches the number of GPUs - but since the counter resets at the next check, we could hobble on with 5, 4, 3, 2, 1 cards until they all die and the script kicks in, or we freeze and require manual intervention. That could be hours of impact (again, if I'm reading this wrong, my apologies, but this is what I'm getting out of looking at it).
So IMHO, by testing that all the cards meet the 90% utilization threshold (as one, all or nothing = "if &&"), we avoid hours of impact/decreased capacity. My other concern is that as soon as cards start dropping off one at a time, the system gets unstable, increasing the risk of a hang or a corrupted file system due to a hard crash.
My view is that the rig should be cycled for a graceful restart while it is still at maximum stability.
Maybe there is a third approach not considered yet. Thoughts?
... edit:
Actually, one more thought - I have not tested for this yet, so I don't know the answer - but in the case where the miner no longer sees all the cards, does that mean nvidia-smi ALSO no longer sees them? If so, and if we get the number of cards from nvidia-smi, wouldn't the script assume the rig has the right number of cards every time nvidia-smi stops seeing one? I do recall cards disappearing even from nvidia-smi, but I never kept track of it, so I don't know how often this condition actually occurs.
Thanks for explaining, and you do have valid points here - I like your thinking. I will rework it with this in mind.
Just wondering: your script reboots the rig if the miner itself does not recover. Instead, we could introduce reloading the miner as the first step here. In my experience that resolves the issue almost every time. It will only save 1-2 minutes, so it's not a big deal to just reboot (I still had the boot time of V0014 in mind).
I have not seen nvidia-smi lose a card while it's still there, but I can imagine that happens with faulty risers. Perhaps we can run the nvidia-smi card count only at startup (saves a call as well) and keep that number for the duration of the watchdog process. If we lose a card we have to reboot anyway.
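One way to read that idea (untested sketch bolted onto V002; the EXPECTED_GPUS name is made up):
# Remember how many cards we booted with...
EXPECTED_GPUS=$(nvidia-smi --query-gpu=count --format=csv,noheader,nounits | tail -1)
# ...and inside the main loop, reboot if one has disappeared
GPUS=$(nvidia-smi --query-gpu=count --format=csv,noheader,nounits | tail -1)
if [ "$GPUS" -lt "$EXPECTED_GPUS" ]
then
echo "$(date) - Card count dropped from $EXPECTED_GPUS to $GPUS: rebooting" | tee -a ${LOG_FILE}
sudo shutdown -r now
fi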
One other thought: perhaps it would be an idea to echo the log output to a screen (tail -f), so that earlier reboots are shown as well?
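Something like this, for instance (the screen name 'watchdog' is just an example):
# Follow the restart log in a detached screen session; attach with: screen -r watchdog
screen -dmS watchdog tail -f /home/m1/restartlog.txt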