While we are on the programming topic... I really can't stand the misuse of logs to pass information/variables between scripts. Not only do the logs slow everything down, they are killing the USB sticks and causing all kinds of problems, hangs, freezes, etc., and I see more and more people recommending the use of screenlog.
How about using named pipes (FIFOs) to pass info between the scripts? Nothing is saved/logged when using a named pipe and it is much faster. Leave the logs for what they were really meant for: recording problems.
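For anyone who hasn't played with them, here is a minimal sketch of the mechanism; the pipe path and the values are only placeholders:

#!/bin/bash
PIPE=/tmp/nvoc_stats.pipe # Example path, pick whatever fits nvOC
[[ -p "$PIPE" ]] || mkfifo "$PIPE" # Create the FIFO once; it lives in the filesystem but the data never touches the disk
echo "GPU0 65C fan 80%" > "$PIPE" & # Writer side: blocks until a reader opens the pipe, then hands the line over
read -r STATS < "$PIPE" # Reader side: gets the line straight from the writer, nothing is stored anywhere
echo "got: $STATS"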
Example: we want to get the fan speeds and temps from the temp control so we can send them through Telegram. The easy solution is to write all those values from the temp control script into a log, then read from the log and send the Telegram message. The problem is that, to satisfy Telegram, the temp script has to keep writing that info for each GPU into a log, line by line, 24/7, so that the info is there whenever Telegram needs it.
If we rewrite the temp control to keep sending all that info to a named pipe instead, the info will be there when Telegram needs it without writing to a log over and over, 24/7, and we write to a log only the critical info/errors that would be needed for troubleshooting.
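A rough sketch of how the two sides could look; the pipe path, the nvidia-smi query and the 5 second interval are just assumptions to illustrate the pattern, not the actual nvOC code:

# --- temp control side (writer) ---
PIPE=/tmp/nvoc_gpu_stats.pipe # Hypothetical path shared by both scripts
[[ -p "$PIPE" ]] || mkfifo "$PIPE"
while true; do
  # Opening the FIFO for writing blocks until a reader (the telegram script) opens it,
  # so nothing is produced or stored while nobody is asking
  nvidia-smi --query-gpu=index,temperature.gpu,fan.speed --format=csv,noheader > "$PIPE"
  sleep 5
done

# --- telegram side (reader) ---
PIPE=/tmp/nvoc_gpu_stats.pipe
STATS=$(cat "$PIPE") # Blocks until the temp control writes the next batch
echo "$STATS" # Here it would be handed to whatever sends the Telegram message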
I started rewriting the temp control and I intend to incorporate the named pipes, but that will break compatibility with the Telegram scripts, web stats, and whatever else is fetching the info from logs. I just can't find enough free time to rewrite all the scripts in nvOC.
If you are interested in the development and optimization of nvOC, please google "bash named pipe" and do some research, then share your thoughts; maybe we can split the workload.
My 2cents:
While the use of named pipes should certainly reduce disk IO and save folks' USB sticks, the burden would be shifted to memory. That is not a problem for me, since all of my rigs happen to have at least 16G of RAM, but I know of several nvOC users who go minimalist with both CPU and memory. I think the current memory usage is around 2G for nvOC, so these folks build rigs with only 4G of RAM. So, when you consider the price of RAM versus the price of a small SSD, disk is cheaper: I can buy a $33 60G SSD, but 16G of RAM will cost me $130.
Agreed. I would like to add that (as someone said a few days ago) we spend thousands of dollars on a rig but go minimal on the data storage unit.
About log files: as I already said before, there should not be a problem with logging as long as at least an HDD is being used.
Here is some example code (from kk003_telegram) to eliminate the "-L" flag when running scripts from screen, so maybe some scripts that don't need a full log file can benefit from it.
Screen has other ways to do this too, like hardcopy (see the sketch after the code below).
SLEEP_TIME_MODIFICATION_FILE=12 # Hope 12 seconds is enough for screen to write enough new data and save the file

# Start the log file for a few seconds and collect the miner's stats
if [[ -f ~/kk003_telegram_data/kk003_screenlog.log ]]; then
  > ~/kk003_telegram_data/kk003_screenlog.log # Clear my log file
fi

# Make sure the miner screen session exists before sending it any commands
if ! screen -ls | grep -q "\.miner"; then
  echo "miner screen session not found, skipping stats collection"
  exit 1
fi

screen -dr miner -X logfile ~/kk003_telegram_data/kk003_screenlog.log # Don't want to overwrite the user's own screen log file (usually screenlog.0)
screen -dr miner -X log # Start logging the miner screen
echo "Collecting stats from $ETHMINER_or_GENOIL_or_CLAYMORE for $SLEEP_TIME_MODIFICATION_FILE seconds..."
sleep $SLEEP_TIME_MODIFICATION_FILE # Sleep while collecting data from the miner
screen -dr miner -X log # Stop logging the miner screen
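And for reference, the hardcopy approach mentioned above would look something like this; it dumps the current screen contents once instead of logging continuously (the file name is just an example):

screen -dr miner -X hardcopy ~/kk003_telegram_data/kk003_hardcopy.txt # One-shot dump of the miner window; use "hardcopy -h <file>" to include the scrollback buffer too
cat ~/kk003_telegram_data/kk003_hardcopy.txt # The snapshot can then be parsed and sent via Telegram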