I've never used MSI Afterburner, but I've overclocked 1060s to a +1500 memory clock offset (9500 MHz total) using the nvidia-settings tool from NVIDIA on Linux.
Can you share how you did this with nvidia-settings? Did you use the GUI or the command line? Do you have examples of command-line invocations or xorg.conf entries? I've had limited luck with this on Linux. I assume you're using the latest NVIDIA drivers, or an older version?
I'm using the latest nvidia drivers (381.22 I think).
I use the command line but I believe there is a GUI.
For some reason nvidia-settings needs an X server running, so I use the following command to generate an xorg.conf with "dummy" screens (found it somewhere online):
nvidia-xconfig -a --allow-empty-initial-configuration --cool-bits=28 --use-display-device="DFP-0" --connected-monitor="DFP-0"
You need to run this again whenever you add a new GPU. I think there are higher CoolBits values, but I'm not sure what they are needed for.
Now you need to start X with startx. If you are doing this over SSH, you need to add an option to the X configuration so that X can be started from a non-console session, and you can then check that X is running on each card (a rough sketch of how I set this up is below).
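In case it helps, here's a minimal sketch of the headless part, assuming a Debian/Ubuntu-style install; the exact file and values may differ on your distro:

# allow X to be started from an SSH (non-console) session:
# add these lines to /etc/X11/Xwrapper.config
allowed_users=anybody
needs_root_rights=yes

# start X in the background, then check that an Xorg process shows up on every GPU
sudo startx &
nvidia-smi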
You can now use nvidia-settings to over/underclock your GPU.
For example, the following would overclock my first GPU's [gpu:0] memory by a 1500 MHz offset.
export DISPLAY=:0
nvidia-settings -c $DISPLAY -a [gpu:0]/GPUMemoryTransferRateOffset[3]=1500
I export DISPLAY and also provide the -c option to nvidia-settings as it sometimes seems to fail without both.
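A couple more examples of the same idea, from memory, so double-check the attribute names against nvidia-settings -q all before relying on them:

export DISPLAY=:0
# read back the current memory offset on the first GPU
nvidia-settings -c $DISPLAY -q '[gpu:0]/GPUMemoryTransferRateOffset[3]'
# apply the same offset to a second GPU
nvidia-settings -c $DISPLAY -a '[gpu:1]/GPUMemoryTransferRateOffset[3]=1500'
# core clock offsets work the same way
nvidia-settings -c $DISPLAY -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'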
There are a bunch of other attributes you can change. I change my power limit using nvidia-smi but perhaps you can do it with nvidia-settings as well.
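Roughly what I do for the power limit (the wattage below is just an example value, so pick one for your own cards):

# keep the driver loaded so the limit sticks
sudo nvidia-smi -pm 1
# cap the power limit (in watts) on the first GPU
sudo nvidia-smi -i 0 -pl 90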
The Linux tools report double the offset compared to Afterburner, so that's +750 on Windows.
That is very confusing. Which tool gives the "real" value, afterburner or the nvidia tools?
Thanks!
Sorry for quoting the whole thing, but I wanted to comment on several areas. The CoolBits option opens up certain settings to be modified. The 28 refers to activating 3 different bits (16+8+4). I think they are for voltage mods, clock mods, and something else. There are also a couple other bits (2 and 1) so you could do CoolBits up to at least 31. I think those may be for features on older cards. In any case, they're not important and 28 should be fine. 24 might even work...I forget what that one bit was for, but 28 certainly won't hurt anything. You can google the NVIDIA CoolBits option to see exactly what each bit does.
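If I'm remembering the driver README right (worth verifying rather than taking my word for it), the breakdown for 28 is roughly:
4 -> manual fan control
8 -> clock (frequency) offsets
16 -> overvoltage
4 + 8 + 16 = 28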
nvidia-settings was giving me some grief when I was trying to do some of this before. I found equivalent settings in nvidia-smi and played with the max power level, but Claymore didn't like what I was doing and had problems. I think default power was 120W and even lowering that to 110W caused problems. It could be that nvidia-smi is just outdated and no longer really supported. I'll take another look at the nvidia-settings command line arguments. Assuming I get them working correctly on the command line, I'll try to get them integrated into the xorg.conf file and hopefully that will get everything configured automatically at startup.
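The command I was using to see the allowed power range, in case someone wants to compare, was something like:

# show default, min, and max enforceable power limits per GPU
nvidia-smi -q -d POWER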
BTW, on the "GPUMemoryTransferRateOffset[3]" setting, is it correct to assume that the "3" refers to the performance level (which ranges from 0 to 3) and that the offset only has an effect when the card is in that level? If so, it SHOULD be in that top level whenever you're mining, but I did notice in the GUI that the cards tended to stay in level 2 for some reason. I'll play with it and report what I find.
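What I'm planning to try for the performance-level issue (untested on my rigs, so treat the attribute names with a grain of salt):

export DISPLAY=:0
# list the performance levels and their clock ranges
nvidia-settings -c $DISPLAY -q '[gpu:0]/GPUPerfModes'
# 1 should mean "Prefer Maximum Performance", keeping the card in the top level
nvidia-settings -c $DISPLAY -a '[gpu:0]/GPUPowerMizerMode=1'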
If anyone knows the proper syntax to put this in xorg.conf, please post. I believe it goes in the "Device" section (there's one Device section for each GPU) and begins with "Option", then the setting name, followed by the desired value. I currently have a line for each device that sets CoolBits to 28. I think it looks like:
"Option" "CoolBits" "28"
...or something pretty close to that as I recall.
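For what it's worth, the Device section nvidia-xconfig generated for me looks roughly like the sketch below; the Identifier and BusID are per-card placeholders here, and the exact option spelling is worth verifying against your own file:

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:1:0:0"
    Option         "Coolbits" "28"
    Option         "AllowEmptyInitialConfiguration"
EndSection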