Author

Topic: [ mining os ] nvoc - page 131. (Read 418546 times)

full member
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
November 23, 2017, 10:59:09 AM
Hello, have some newbie questions.

1) Is ZM's miner not working in nv0019-1.3? EWBF works great but every time I attempt to use the dstm it doesn't work. Either it says "this screen is terminating" and then it eventually reboots from low gpu utilization or it says "no miner has been attached to this screen."

2) Is there a way to execute setting changes done in 1bash without rebooting?

Thank you

Regarding 1: as best I recall, there was a problem with ZM in 19-1.3 where either the location or the name of the ZM miner in /home/m1/zec/zm was out of sync with what 3main calls. I don't have that version handy or I would check for you; if you go back several pages in this thread, I am sure it is discussed in detail. As for 2, the way I do it is to kill the miner screen and wait for the watchdog to notice and kill/restart 3main. However, that can take about as long as rebooting, depending on whether you are on USB or SSD.

Hope this helps.

I'm not sure if I'm doing it right, but the fastest way I know is to just close the console and start a new one. Not the green Guake terminal, the black standard terminal. That kills the miner and restarts mining.

If local easiest way to restart miner and 3main is what Rumo said, if remote and with ssh:

Code:
pkill -e screen
pkill -f 3main
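For reference, pkill -e echoes what it kills, and pkill -f matches against the full command line rather than just the process name, which is why it can catch the 3main bash script. A minimal sketch of that kill-by-full-command-line pattern, with a dummy sleep standing in for the miner (the dummy process and its timeout are made up for this demo, not part of nvOC):

```shell
# Dummy long-running process standing in for the miner screen
secs=300
sleep "$secs" &
DUMMY_PID=$!
sleep 1   # give the child a moment to exec so its cmdline is "sleep 300"

# -f matches the full command line, the same way "pkill -f 3main" does
pkill -f "sleep $secs"

# wait reports the child's exit status: 128 + 15 (SIGTERM) = 143
wait "$DUMMY_PID"
echo "exit status: $?"
```

The watchdog then sees the dead screen and relaunches mining, so no reboot is needed.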

newbie
Activity: 41
Merit: 0
November 23, 2017, 05:33:58 AM
Hello, have some newbie questions.

1) Is ZM's miner not working in nv0019-1.3? EWBF works great but every time I attempt to use the dstm it doesn't work. Either it says "this screen is terminating" and then it eventually reboots from low gpu utilization or it says "no miner has been attached to this screen."

2) Is there a way to execute setting changes done in 1bash without rebooting?

Thank you

Regarding 1: as best I recall, there was a problem with ZM in 19-1.3 where either the location or the name of the ZM miner in /home/m1/zec/zm was out of sync with what 3main calls. I don't have that version handy or I would check for you; if you go back several pages in this thread, I am sure it is discussed in detail. As for 2, the way I do it is to kill the miner screen and wait for the watchdog to notice and kill/restart 3main. However, that can take about as long as rebooting, depending on whether you are on USB or SSD.

Hope this helps.

I'm not sure if I'm doing it right, but the fastest way I know is to just close the console and start a new one. Not the green Guake terminal, the black standard terminal. That kills the miner and restarts mining.
member
Activity: 224
Merit: 13
November 23, 2017, 05:02:13 AM
Hello, have some newbie questions.

1) Is ZM's miner not working in nv0019-1.3? EWBF works great but every time I attempt to use the dstm it doesn't work. Either it says "this screen is terminating" and then it eventually reboots from low gpu utilization or it says "no miner has been attached to this screen."

2) Is there a way to execute setting changes done in 1bash without rebooting?

Thank you

Regarding 1: as best I recall, there was a problem with ZM in 19-1.3 where either the location or the name of the ZM miner in /home/m1/zec/zm was out of sync with what 3main calls. I don't have that version handy or I would check for you; if you go back several pages in this thread, I am sure it is discussed in detail. As for 2, the way I do it is to kill the miner screen and wait for the watchdog to notice and kill/restart 3main. However, that can take about as long as rebooting, depending on whether you are on USB or SSD.

Hope this helps.
member
Activity: 224
Merit: 13
November 23, 2017, 04:50:49 AM
I need help. I imaged 19-1.4 to an SSD, but when I boot, the 1bash file in /media/ isn't there, and when I edit the one in /home/ my changes get overwritten when I launch 1bash. Otherwise, Ubuntu boots and runs just fine. Thanks!

Unfortunately, the 1bash in /media does not work. What I do with a fresh 19-1.4 image is this sequence of steps:

1) Boot for the first time and wait for the first reboot. Then you should see it come up and start mining ZEC (although I don't know for whom) on the attached display.
2) Fix the hostname being 19_1_4 by editing /etc/hostname (e.g., sudo vim /etc/hostname, change 19_1_4 to m1-desktop).
3) Deploy an edited 1bash via WinSCP, or edit the existing one locally, to set my coin, pools, etc. Reboot.

This is the minimum I do with new rigs to get them up and running.
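Step 2 matters because sudo prints "unable to resolve host" errors until /etc/hostname matches the entry already in /etc/hosts. A sketch of that edit, run here against scratch copies rather than the live files (the /tmp paths are just for the demo; on a rig you would edit /etc/hostname and /etc/hosts with sudo and then reboot):

```shell
# Scratch copies standing in for /etc/hostname and /etc/hosts
printf '19_1_4\n' > /tmp/hostname.demo
printf '127.0.1.1\tm1-desktop\n' > /tmp/hosts.demo

# Replace the image's default hostname with the one /etc/hosts expects
# (sed -i is GNU sed's in-place edit)
sed -i 's/19_1_4/m1-desktop/' /tmp/hostname.demo

cat /tmp/hostname.demo   # now reads: m1-desktop
```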

Hope this helps.
full member
Activity: 210
Merit: 100
November 22, 2017, 10:32:49 PM
Hello, have some newbie questions.

1) Is ZM's miner not working in nv0019-1.3? EWBF works great but every time I attempt to use the dstm it doesn't work. Either it says "this screen is terminating" and then it eventually reboots from low gpu utilization or it says "no miner has been attached to this screen."

2) Is there a way to execute setting changes done in 1bash without rebooting?

Thank you
newbie
Activity: 39
Merit: 0
November 22, 2017, 10:29:51 PM
I need help. I imaged 19-1.4 to an SSD, but when I boot, the 1bash file in /media/ isn't there, and when I edit the one in /home/ my changes get overwritten when I launch 1bash. Otherwise, Ubuntu boots and runs just fine. Thanks!
full member
Activity: 132
Merit: 100
November 22, 2017, 09:12:41 PM
This is by no means a request to speed things up, but I was curious when the next version might be released?

I have eight rigs and six of them are running on older versions and two on the 19-1.4.

I was thinking of updating them all to 19-1.4 but if a newer version is coming soon I might just wait and update them all to that.

newbie
Activity: 10
Merit: 0
November 22, 2017, 07:01:46 PM
Hi papampi,

This issue shows up with a mix of cards, like P106s and 1070s in one frame: setting the OC speeds requires that the nvidia-settings command use performance level [2] for some cards and [3] for others. The loop looks like it was designed to try both [2] and [3] for each GPU when NORMAL=YES, but the gpu variable was incremented in the inner loop, so you would get [2] on gpu[0], then [3] on gpu[1], [2] on gpu[2], and so on. By moving the gpu increment out of the for loop and into the outer while loop, I believe it works as intended. I added the two echos here to show the behavior.

Original loop:


TI is 2 3, J is 2 and GPU is 0
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:0]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:0]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 1
nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[3]=1300




TI is 2 3, J is 2 and GPU is 2
nvidia-settings -a [gpu:2]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:2]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:2]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:2]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 3
nvidia-settings -a [gpu:3]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:3]/GPUMemoryTransferRateOffset[3]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:3]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:3]) assigned value 1500.

TI is 2 3, J is 2 and GPU is 4
nvidia-settings -a [gpu:4]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:4]/GPUMemoryTransferRateOffset[2]=1300

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:4]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:4]) assigned value 1300.

TI is 2 3, J is 3 and GPU is 5
nvidia-settings -a [gpu:5]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:5]/GPUMemoryTransferRateOffset[3]=1300


===============================================================
By moving the gpu increment to the while loop the execution looks like:


TI is 2 3, J is 2 and GPU is 0
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:0]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:0]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 0
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[3]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:0]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:0]) assigned value 1500.

TI is 2 3, J is 2 and GPU is 1
nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[2]=1300

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:1]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:1]) assigned value 1300.

TI is 2 3, J is 3 and GPU is 1
nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[3]=1300




TI is 2 3, J is 2 and GPU is 2
nvidia-settings -a [gpu:2]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:2]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:2]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:2]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 2
nvidia-settings -a [gpu:2]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:2]/GPUMemoryTransferRateOffset[3]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:2]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:2]) assigned value 1500.







if [ "$P106_100_FULL_HEADLESS_MODE" == "NO" ]
then
  gpu=0
  while [ $gpu -lt $GPUS ]
  do
    for j in $TI
    do
      CORE=${__CORE_OVERCLOCK[${gpu}]}
      MEM=${MEMORY_OVERCLOCK[${gpu}]}
      echo "TI is $TI, J is $j and GPU is $gpu"
      echo "${NVD} -a [gpu:$gpu]/GPUGraphicsClockOffset[${j}]=$CORE"
      echo "${NVD} -a [gpu:$gpu]/GPUMemoryTransferRateOffset[${j}]=$MEM"
      ${NVD} -a [gpu:$gpu]/GPUGraphicsClockOffset[${j}]=$CORE
      ${NVD} -a [gpu:$gpu]/GPUMemoryTransferRateOffset[${j}]=$MEM
    done
    gpu=$((gpu+1))
  done
fi
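The difference is easy to see with the nvidia-settings calls stubbed out by echo. This is a standalone sketch with made-up values (GPUS=2, TI="2 3"), not the real 3main code:

```shell
GPUS=2; TI="2 3"

echo "-- buggy: gpu incremented inside the for loop --"
gpu=0
while [ $gpu -lt $GPUS ]; do
  for j in $TI; do
    echo "would set gpu=$gpu at perf level [$j]"
    gpu=$((gpu+1))   # wrong place: advances gpu once per perf level,
                     # so each GPU gets only one of [2]/[3]
  done
done
# prints only: gpu=0 at [2], gpu=1 at [3]

echo "-- fixed: gpu incremented after the for loop --"
gpu=0
while [ $gpu -lt $GPUS ]; do
  for j in $TI; do
    echo "would set gpu=$gpu at perf level [$j]"
  done
  gpu=$((gpu+1))     # right place: both perf levels tried before the next gpu
done
# prints all four: gpu=0 at [2] and [3], gpu=1 at [2] and [3]
```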







Thanks a lot for the detailed explanation.
Added your fix for the next release.

If you find any more fixes, please let us know so we can add them in future releases.

Would this be why my 2 1050 Tis show as ERR! in the power chart when it first loads, and EWBF reports them as 0 Hash/watt? They're running alongside 6 1060 6GB cards, and they all seem to be working correctly (appropriate hash rates per card). Just the error bothered me, was all. Thanks!
full member
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
November 22, 2017, 10:22:27 AM
Hi papampi,

This issue shows up with a mix of cards, like P106s and 1070s in one frame: setting the OC speeds requires that the nvidia-settings command use performance level [2] for some cards and [3] for others. The loop looks like it was designed to try both [2] and [3] for each GPU when NORMAL=YES, but the gpu variable was incremented in the inner loop, so you would get [2] on gpu[0], then [3] on gpu[1], [2] on gpu[2], and so on. By moving the gpu increment out of the for loop and into the outer while loop, I believe it works as intended. I added the two echos here to show the behavior.

Original loop:


TI is 2 3, J is 2 and GPU is 0
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:0]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:0]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 1
nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[3]=1300




TI is 2 3, J is 2 and GPU is 2
nvidia-settings -a [gpu:2]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:2]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:2]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:2]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 3
nvidia-settings -a [gpu:3]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:3]/GPUMemoryTransferRateOffset[3]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:3]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:3]) assigned value 1500.

TI is 2 3, J is 2 and GPU is 4
nvidia-settings -a [gpu:4]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:4]/GPUMemoryTransferRateOffset[2]=1300

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:4]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:4]) assigned value 1300.

TI is 2 3, J is 3 and GPU is 5
nvidia-settings -a [gpu:5]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:5]/GPUMemoryTransferRateOffset[3]=1300


===============================================================
By moving the gpu increment to the while loop the execution looks like:


TI is 2 3, J is 2 and GPU is 0
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:0]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:0]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 0
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[3]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:0]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:0]) assigned value 1500.

TI is 2 3, J is 2 and GPU is 1
nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[2]=1300

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:1]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:1]) assigned value 1300.

TI is 2 3, J is 3 and GPU is 1
nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[3]=1300




TI is 2 3, J is 2 and GPU is 2
nvidia-settings -a [gpu:2]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:2]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:2]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:2]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 2
nvidia-settings -a [gpu:2]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:2]/GPUMemoryTransferRateOffset[3]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:2]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:2]) assigned value 1500.







if [ "$P106_100_FULL_HEADLESS_MODE" == "NO" ]
then
  gpu=0
  while [ $gpu -lt $GPUS ]
  do
    for j in $TI
    do
      CORE=${__CORE_OVERCLOCK[${gpu}]}
      MEM=${MEMORY_OVERCLOCK[${gpu}]}
      echo "TI is $TI, J is $j and GPU is $gpu"
      echo "${NVD} -a [gpu:$gpu]/GPUGraphicsClockOffset[${j}]=$CORE"
      echo "${NVD} -a [gpu:$gpu]/GPUMemoryTransferRateOffset[${j}]=$MEM"
      ${NVD} -a [gpu:$gpu]/GPUGraphicsClockOffset[${j}]=$CORE
      ${NVD} -a [gpu:$gpu]/GPUMemoryTransferRateOffset[${j}]=$MEM
    done
    gpu=$((gpu+1))
  done
fi







Thanks a lot for the detailed explanation.
Added your fix for the next release.

If you find any more fixes, please let us know so we can add them in future releases.
newbie
Activity: 7
Merit: 0
November 22, 2017, 10:05:08 AM
Hi papampi,

This issue shows up with a mix of cards, like P106s and 1070s in one frame: setting the OC speeds requires that the nvidia-settings command use performance level [2] for some cards and [3] for others. The loop looks like it was designed to try both [2] and [3] for each GPU when NORMAL=YES, but the gpu variable was incremented in the inner loop, so you would get [2] on gpu[0], then [3] on gpu[1], [2] on gpu[2], and so on. By moving the gpu increment out of the for loop and into the outer while loop, I believe it works as intended. I added the two echos here to show the behavior.

Original loop:


TI is 2 3, J is 2 and GPU is 0
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:0]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:0]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 1
nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[3]=1300




TI is 2 3, J is 2 and GPU is 2
nvidia-settings -a [gpu:2]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:2]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:2]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:2]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 3
nvidia-settings -a [gpu:3]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:3]/GPUMemoryTransferRateOffset[3]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:3]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:3]) assigned value 1500.

TI is 2 3, J is 2 and GPU is 4
nvidia-settings -a [gpu:4]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:4]/GPUMemoryTransferRateOffset[2]=1300

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:4]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:4]) assigned value 1300.

TI is 2 3, J is 3 and GPU is 5
nvidia-settings -a [gpu:5]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:5]/GPUMemoryTransferRateOffset[3]=1300


===============================================================
By moving the gpu increment to the while loop the execution looks like:


TI is 2 3, J is 2 and GPU is 0
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:0]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:0]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 0
nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[3]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:0]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:0]) assigned value 1500.

TI is 2 3, J is 2 and GPU is 1
nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[2]=1300

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:1]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:1]) assigned value 1300.

TI is 2 3, J is 3 and GPU is 1
nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:1]/GPUMemoryTransferRateOffset[3]=1300




TI is 2 3, J is 2 and GPU is 2
nvidia-settings -a [gpu:2]/GPUGraphicsClockOffset[2]=-200
nvidia-settings -a [gpu:2]/GPUMemoryTransferRateOffset[2]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:2]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:2]) assigned value 1500.

TI is 2 3, J is 3 and GPU is 2
nvidia-settings -a [gpu:2]/GPUGraphicsClockOffset[3]=-200
nvidia-settings -a [gpu:2]/GPUMemoryTransferRateOffset[3]=1500

  Attribute 'GPUGraphicsClockOffset' (19_1_4:0[gpu:2]) assigned value -200.


  Attribute 'GPUMemoryTransferRateOffset' (19_1_4:0[gpu:2]) assigned value 1500.







if [ "$P106_100_FULL_HEADLESS_MODE" == "NO" ]
then
  gpu=0
  while [ $gpu -lt $GPUS ]
  do
    for j in $TI
    do
      CORE=${__CORE_OVERCLOCK[${gpu}]}
      MEM=${MEMORY_OVERCLOCK[${gpu}]}
      echo "TI is $TI, J is $j and GPU is $gpu"
      echo "${NVD} -a [gpu:$gpu]/GPUGraphicsClockOffset[${j}]=$CORE"
      echo "${NVD} -a [gpu:$gpu]/GPUMemoryTransferRateOffset[${j}]=$MEM"
      ${NVD} -a [gpu:$gpu]/GPUGraphicsClockOffset[${j}]=$CORE
      ${NVD} -a [gpu:$gpu]/GPUMemoryTransferRateOffset[${j}]=$MEM
    done
    gpu=$((gpu+1))
  done
fi



full member
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
November 22, 2017, 05:08:22 AM
Hi,

I was looking at the loop code for setting individual clock limits and I may have found a bug; I have a mix of 1070s and P106 boards in this mining frame.

When I looked at the code in 3main, the gpu=$(($gpu+1)) was up one line, inside the for loop. If I am reading the loop correctly, you want it to iterate from 2 to 3 for each GPU, so I moved the increment down one line, as seen below:

Code:
gpu=0
while [ $gpu -lt $GPUS ]
do
  for j in $TI
  do
    CORE=${__CORE_OVERCLOCK[${gpu}]}
    MEM=${MEMORY_OVERCLOCK[${gpu}]}
    ${NVD} -a [gpu:$gpu]/GPUGraphicsClockOffset[${j}]=$CORE
    ${NVD} -a [gpu:$gpu]/GPUMemoryTransferRateOffset[${j}]=$MEM
  done
  gpu=$((gpu+1))
done
Don





Thanks for the info on the 3main problem.
Can you please explain more so we can include your fix in the next release?

What problem does the current code cause, and what does your suggestion do?
full member
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
November 22, 2017, 03:33:50 AM
Hi,

I was looking at the maxximus007_auto_temperature_control script, and I think you may want to make the following change to allow for 3-digit power levels: it was truncating a board at 110 W to 10, then trying to reset it to 110 when it was already there.

Don


Code:
POWERLIMIT=$PWRLIMIT
POWERLIMIT=$(echo -n $PWRLIMIT | tail -c -6 | head -c -3 ) # changed tail -c -5 to -c -6


Yup, it was mentioned before.
Thanks anyway.

It will be fixed in next release.
newbie
Activity: 96
Merit: 0
November 22, 2017, 12:01:06 AM



My power limit is set to 151. I am using a Corsair 1500 W PSU, and it's only drawing 600 watts.
newbie
Activity: 7
Merit: 0
November 21, 2017, 11:29:43 PM
Hi,

I was looking at the maxximus007_auto_temperature_control script, and I think you may want to make the following change to allow for 3-digit power levels: it was truncating a board at 110 W to 10, then trying to reset it to 110 when it was already there.

Don


Code:
POWERLIMIT=$PWRLIMIT
POWERLIMIT=$(echo -n $PWRLIMIT | tail -c -6 | head -c -3 ) # changed tail -c -5 to -c -6
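The truncation is plain byte arithmetic on the reading. Assuming the value looks like "110.00" (that exact format is my assumption, not from the script), tail -c -5 keeps only the last 5 bytes before head strips the decimals, so a 3-digit limit loses its leading digit:

```shell
PWRLIMIT="110.00"   # hypothetical 3-digit power limit reading

# old: last 5 bytes = "10.00", then drop the last 3 (".00") -> "10"
echo -n "$PWRLIMIT" | tail -c -5 | head -c -3; echo

# fixed: last 6 bytes = "110.00", then drop ".00" -> "110"
echo -n "$PWRLIMIT" | tail -c -6 | head -c -3; echo
```

Note that head -c with a negative count ("all but the last N bytes") is GNU coreutils behavior, which is what nvOC's Ubuntu base provides.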
newbie
Activity: 7
Merit: 0
November 21, 2017, 10:08:24 PM
Hi All,

I absolutely love nvOC.


I was looking at the loop code for setting individual clock limits and I may have found a bug; I have a mix of 1070s and P106 boards in this mining frame.

When I looked at the code in 3main, the gpu=$(($gpu+1)) was up one line, inside the for loop. If I am reading the loop correctly, you want it to iterate from 2 to 3 for each GPU, so I moved the increment down one line, as seen below:

Code:
gpu=0
while [ $gpu -lt $GPUS ]
do
  for j in $TI
  do
    CORE=${__CORE_OVERCLOCK[${gpu}]}
    MEM=${MEMORY_OVERCLOCK[${gpu}]}
    ${NVD} -a [gpu:$gpu]/GPUGraphicsClockOffset[${j}]=$CORE
    ${NVD} -a [gpu:$gpu]/GPUMemoryTransferRateOffset[${j}]=$MEM
  done
  gpu=$((gpu+1))
done

Really like the Telegram functions in this release. Setting the alerts to a couple of intervals daily after getting a mining frame stable is a nice production feature.

Don



newbie
Activity: 32
Merit: 0
November 21, 2017, 09:24:30 PM
Hi. I'm having some trouble getting the clock and mem offsets to stick from the 1bash file. The script reports that those attributes are "read only." Hopefully I'm missing something simple, but it's had me stumped for a few days.
newbie
Activity: 66
Merit: 0
November 21, 2017, 03:41:02 PM
Hi, are we using the best Zcoin lyra2z miner in v0019-1.4? Many people say they can get 3000 kH/s per 1080 Ti, but I'm only getting 2300-2500.
full member
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
November 21, 2017, 01:12:02 PM


Screen resolution is messed up and mining never begins


I had the resolution messed up once, and it was a faulty GPU.
But since you said SMOS works fine, that can't be the case.

Are you sure you did not connect the onboard GPU to the monitor at first boot?
And are you sure the image is clean?

What I do for setting up rigs: I set up nvOC on one SSD, make all the changes I need, make sure everything is OK, then keep it as a source image and never use it in a rig again.
Then I clone all the other rigs' SSDs with one of those offline HDD clone docking stations. It's the easiest and fastest way, especially for you with all the rigs you build for your mining YouTube channel.

member
Activity: 224
Merit: 13
November 21, 2017, 12:59:39 PM
What's the deal? I have this issue with every rig that's more than 3 cards on nvOC.

Gigabyte 270 D3 mobo, G4400 or G4400T CPU, 4 GB DDR4 RAM, risers, 6x 1080 Ti, dual PSU.

I have had this problem on four 6-card builds and a 12-card with the H110. SMOS works perfectly on the same rigs.

Screen resolution is messed up and mining never begins.

Hi VoskCoin,
This is strange, as I have nvOC v0019-1.4 running on an H110 mobo with 12 GTX 1060s flawlessly.
Your monitor output should be connected to the GPU in the main PCIe x16 slot (the larger one).
Did you edit the 3main file?

Must be nice T_T
No, just the 1bash

Are you editing your 1bash on your PC or on the actual rig?

Let's start by fixing those "unable to resolve host" errors. The easiest way is to change the hostname to match what is already in /etc/hosts: edit /etc/hostname and change 19_1_4 to m1-desktop. Then reboot and we will see how far that gets us.

Thanks.
sr. member
Activity: 1414
Merit: 487
YouTube.com/VoskCoin
November 21, 2017, 11:38:26 AM
What's the deal? I have this issue with every rig that's more than 3 cards on nvOC.

Gigabyte 270 D3 mobo, G4400 or G4400T CPU, 4 GB DDR4 RAM, risers, 6x 1080 Ti, dual PSU.

I have had this problem on four 6-card builds and a 12-card with the H110. SMOS works perfectly on the same rigs.

Screen resolution is messed up and mining never begins.

Hi VoskCoin,
This is strange, as I have nvOC v0019-1.4 running on an H110 mobo with 12 GTX 1060s flawlessly.
Your monitor output should be connected to the GPU in the main PCIe x16 slot (the larger one).
Did you edit the 3main file?
Must be nice T_T
No, just the 1bash

Are you editing your 1bash on your PC or on the actual rig?