
Topic: [ mining os ] nvoc - page 153. (Read 418546 times)

hero member
Activity: 1092
Merit: 552
Retired IRCX God
November 05, 2017, 02:30:28 PM
Since when can a 1070 hit 58 MH/s?
I don't do ETH (nor do I understand doing it with NV cards), but: https://www.youtube.com/watch?v=cdeA7s9SmRY
newbie
Activity: 36
Merit: 0
November 05, 2017, 02:04:27 PM
...Been mining ETH and my hash rates have been 30-31 MH/s per 1070. I never thought you could have a PL "too low" if it's within the supported wattage of the card. Of course, some GPUs, like the MSI 1070 Gaming X, require a minimum of 115 watts, but some other 1070s can go as low as 90, so that's why I say the average is roughly 100W per card. It's been stable like this for nearly 2 weeks now, so I don't see why it's a big issue if it's stable?
I'll get my "DOH!", for mining ETH with that many NVIDIA cards, out of the way right off the bat.

That being out of the way:

Granted, in the real world, the loss isn't 1:1; however, for ease of math, we'll pretend it is.
If you have a 150W TDP card and you cut its power by a third, then you have taken a 1500W set of cards and lowered it to 1000W. That 500W reduction is the same as the total power needed to run 3.33 cards at full power (for ease of math, we'll call it 3 cards). So you have the effective output of 7 cards while 10 cards sit on the rack. To what end?

Yes, it's at the lower end of stable, but what is the point?

Not counting the 1060s and your other rig(s) that make up your other 8 cards....
Even if my numbers are off by half, and we pretend you paid wholesale ($375) prices for those cards, you have $624 worth of cards sitting idle to save $438 per year in power consumption, while giving up as much as 49% of your potential earnings (by running cards at hash rates as low as 30 MH/s when they can hit as high as 58 MH/s).

It's something that makes less and less sense the more and more cards you run.

Since when can a 1070 hit 58 MH/s?
member
Activity: 104
Merit: 10
November 05, 2017, 01:41:26 PM
Been busy lately; I will try to respond to the PMs I haven't gotten to and the posts in the thread either tonight or tomorrow.

I will explain how the execution logic works in nvOC.

There are some problems with the newest Nvidia driver, so I will roll it back for the next update.



Hi, my idol. If you have time, can you help me add more algos to the NiceHash auto-switch? I just want to add Cryptonight, but I get errors when I try to add it to the code in 3main alongside the other algos. Thanks, more power!
member
Activity: 224
Merit: 13
November 05, 2017, 01:39:58 PM
I installed nvOC 19-1.4 and it works fine except for auto temp control.
I'm constantly getting this message:

sudo: unable to resolve host 19_1_4
Power limit for GPU 00000000:0E:00.0 was set to 150.00 W from 150.00 W.

Warning: persistence mode is disabled on this device. This settings will go back to default as soon as driver unloads (e.g. last application like nvidia-smi or cuda application terminates). Run with [--help | -h] switch to get more information on how to enable persistence mode.

All done.
GPU 12, Target temp: 70, Current: 58, Diff: 12, Fan: 30, Power: 50

I've set the PL to 150W but somehow it shows Power: 50.
Can you help me please?
 

I don't know if this will totally fix the issue, but the sudo error re: host resolution can be corrected by fixing the hostname. In 19-1.4, there is an issue where the hostnames in /etc/hosts and /etc/hostname do not match. IIRC, /etc/hostname has 19_1_4 and /etc/hosts has m1-desktop. Edit one or the other (or both) so that they match. If you edit /etc/hostname, you will have to reboot.

Hope this helps.
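If it helps, a minimal way to check for and fix the mismatch from a terminal (19_1_4 and m1-desktop are just the stock values mentioned above; either name works as long as both files agree):

Code:
cat /etc/hostname              # e.g. 19_1_4
grep 127.0.1.1 /etc/hosts      # e.g. 127.0.1.1   m1-desktop
sudo nano /etc/hostname        # make the two names match
sudo nano /etc/hosts
sudo reboot                    # needed if you changed /etc/hostname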
newbie
Activity: 52
Merit: 0
November 05, 2017, 01:26:18 PM
Let's suppose that I have several rigs running nvOC behind a router with one public IP and a NAT server (the rigs have static private IPs).
I would like to SSH into the rigs individually from a remote location with a different IP address.

I was thinking about setting SSH on a different port on each rig, for example:
rig1 SSH on port 1024
rig2 SSH on port 1025
rig3 SSH on port 1026
And so on...

On the router I would set up virtual servers to redirect traffic on port 1024 to rig 1, 1025 to rig 2, and so on.

Do you think it's a good idea or are there better ways to do this?

just redirect a different port for each rig so when you connect:

XXX.XXX.XXX.XXX port 10001 for rig1 redirect to 192.168.1.11 port 22 for rig1
XXX.XXX.XXX.XXX port 10002 for rig2 redirect to 192.168.1.12 port 22 for rig2
etc...

If you are using PuTTY, just create a new shortcut with -P 1000x for each rig.



I did the same and it works great
full member
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
November 05, 2017, 01:22:53 PM
I installed nvOC 19-1.4 and it works fine except for auto temp control.
I'm constantly getting this message:

sudo: unable to resolve host 19_1_4
Power limit for GPU 00000000:0E:00.0 was set to 150.00 W from 150.00 W.

Warning: persistence mode is disabled on this device. This settings will go back to default as soon as driver unloads (e.g. last application like nvidia-smi or cuda application terminates). Run with [--help | -h] switch to get more information on how to enable persistence mode.

All done.
GPU 12, Target temp: 70, Current: 58, Diff: 12, Fan: 30, Power: 50

I've set the PL to 150W but somehow it shows Power: 50.
Can you help me please?
 

Open Maxximus007_AUTO_TEMPERATURE_CONTROL

find this line :

Code:
POWERLIMIT=$(echo -n $PWRLIMIT | tail -c -5 | head -c -3 )

and change it to:

Code:
POWERLIMIT=$(echo -n $PWRLIMIT | tail -c -6 | head -c -3 )
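The extra character matters because the script slices the wattage out of the power-limit string by byte count, and with a three-digit limit the five-byte window drops the leading digit, which is why 150W shows up as Power: 50. A quick illustration, assuming the parsed value ends in something like 150.00 (the exact string format may differ):

Code:
echo -n "150.00" | tail -c -5 | head -c -3   # -> 50   (leading digit lost)
echo -n "150.00" | tail -c -6 | head -c -3   # -> 150  (full value kept)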

You can also edit your hostname with:

Code:
sudo nano /etc/hosts
sudo nano /etc/hostname
full member
Activity: 224
Merit: 100
November 05, 2017, 01:10:26 PM
Let's suppose that I have several rigs running nvOC behind a router with one public IP and a NAT server (the rigs have static private IPs).
I would like to SSH into the rigs individually from a remote location with a different IP address.

I was thinking about setting SSH on a different port on each rig, for example:
rig1 SSH on port 1024
rig2 SSH on port 1025
rig3 SSH on port 1026
And so on...

On the router I would set up virtual servers to redirect traffic on port 1024 to rig 1, 1025 to rig 2, and so on.

Do you think it's a good idea or are there better ways to do this?

just redirect a different port for each rig so when you connect:

XXX.XXX.XXX.XXX port 10001 for rig1 redirect to 192.168.1.11 port 22 for rig1
XXX.XXX.XXX.XXX port 10002 for rig2 redirect to 192.168.1.12 port 22 for rig2
etc...

If you are using PuTTY, just create a new shortcut with -P 1000x for each rig.
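For completeness, connecting through such forwards from the command line looks like this (the public IP, ports, and user are placeholders matching the example above; m1 is the stock nvOC user):

Code:
ssh -p 10001 m1@XXX.XXX.XXX.XXX    # router forwards this to 192.168.1.11:22 (rig1)
ssh -p 10002 m1@XXX.XXX.XXX.XXX    # forwarded to 192.168.1.12:22 (rig2)
# PuTTY equivalent for a Windows shortcut:
#   putty.exe -ssh m1@XXX.XXX.XXX.XXX -P 10001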

member
Activity: 126
Merit: 10
November 05, 2017, 12:55:27 PM
Let's suppose that I have several rigs running nvOC behind a router with one public IP and a NAT server (the rigs have static private IPs).
I would like to SSH into the rigs individually from a remote location with a different IP address.

I was thinking about setting SSH on a different port on each rig, for example:
rig1 SSH on port 1024
rig2 SSH on port 1025
rig3 SSH on port 1026
And so on...

On the router I would set up virtual servers to redirect traffic on port 1024 to rig 1, 1025 to rig 2, and so on.

Do you think it's a good idea or are there better ways to do this?
member
Activity: 118
Merit: 10
November 05, 2017, 12:40:56 PM
I installed nvOC 19-1.4 and it works fine except for auto temp control.
I'm constantly getting this message:

sudo: unable to resolve host 19_1_4
Power limit for GPU 00000000:0E:00.0 was set to 150.00 W from 150.00 W.

Warning: persistence mode is disabled on this device. This settings will go back to default as soon as driver unloads (e.g. last application like nvidia-smi or cuda application terminates). Run with [--help | -h] switch to get more information on how to enable persistence mode.

All done.
GPU 12, Target temp: 70, Current: 58, Diff: 12, Fan: 30, Power: 50

I've set the PL to 150W but somehow it shows Power: 50.
Can you help me please?
 
hero member
Activity: 1092
Merit: 552
Retired IRCX God
November 05, 2017, 08:18:02 AM
...Been mining ETH and my hash rates have been 30-31 MH/s per 1070. I never thought you could have a PL "too low" if it's within the supported wattage of the card. Of course, some GPUs, like the MSI 1070 Gaming X, require a minimum of 115 watts, but some other 1070s can go as low as 90, so that's why I say the average is roughly 100W per card. It's been stable like this for nearly 2 weeks now, so I don't see why it's a big issue if it's stable?
I'll get my "DOH!", for mining ETH with that many NVIDIA cards, out of the way right off the bat.

That being out of the way:

Granted, in the real world, the loss isn't 1:1; however, for ease of math, we'll pretend it is.
If you have a 150W TDP card and you cut its power by a third, then you have taken a 1500W set of cards and lowered it to 1000W. That 500W reduction is the same as the total power needed to run 3.33 cards at full power (for ease of math, we'll call it 3 cards). So you have the effective output of 7 cards while 10 cards sit on the rack. To what end?

Yes, it's at the lower end of stable, but what is the point?

Not counting the 1060s and your other rig(s) that make up your other 8 cards....
Even if my numbers are off by half, and we pretend you paid wholesale ($375) prices for those cards, you have $624 worth of cards sitting idle to save $438 per year in power consumption, while giving up as much as 49% of your potential earnings (by running cards at hash rates as low as 30 MH/s when they can hit as high as 58 MH/s).

It's something that makes less and less sense the more and more cards you run.
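For anyone who wants to sanity-check that math, a rough shell sketch of where the $438/year figure comes from (the $0.10/kWh electricity price is my assumption to make the numbers line up; plug in your own rate):

Code:
# 10 cards x 150W TDP = 1500W at full power; capped at ~100W each = 1000W
watts_saved=500
kwh_per_year=$(( watts_saved * 24 * 365 / 1000 ))                      # 4380 kWh
echo "~\$$(echo "$kwh_per_year * 0.10" | bc) per year at \$0.10/kWh"   # ~ $438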
fk1
full member
Activity: 216
Merit: 100
November 05, 2017, 07:15:13 AM
tyvm! Smiley
full member
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
November 05, 2017, 07:11:51 AM
Hi! I am currently using nvOC 1.3 and the NiceHash salfter script, which is great. I also use Telegram, and sometimes I see two Telegram messages: one saying utilization is 0, and another one two minutes later with utilization 100%. I guess the rig is restarting, but I am not sure why. Is there any logfile you can suggest I take a look at? tyvm

Edit: found 5_restartlog but it's empty

check this in 1bash

Code:
CLEAR_LOGS_ON_BOOT="NO"        	# YES NO
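With log clearing disabled, the restart log should survive across boots, so you can check it after the next unexpected restart; the path below is only a guess based on nvOC keeping its scripts and logs in m1's home directory:

Code:
find ~ -name "5_restartlog" 2>/dev/null   # locate it first if it lives elsewhere
cat ~/5_restartlog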
fk1
full member
Activity: 216
Merit: 100
November 05, 2017, 07:02:19 AM
Hi! I am currently using nvOC 1.3 and the NiceHash salfter script, which is great. I also use Telegram, and sometimes I see two Telegram messages: one saying utilization is 0, and another one two minutes later with utilization 100%. I guess the rig is restarting, but I am not sure why. Is there any logfile you can suggest I take a look at? tyvm

Edit: found 5_restartlog but it's empty
newbie
Activity: 36
Merit: 0
November 05, 2017, 05:43:51 AM
By the way, a little off topic, but do you guys think it's OK to run 12 GPUs (all 1070s except for 2, which are 1060s) on two EVGA G3 850W PSUs? I undervolted all of the cards, and each 1070 is right around 100W apiece while the 1060s are roughly 80W. It's been stable now for over a week; just looking for some feedback regarding this setup.

There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7  Roll Eyes

I am with @ComputerGenie. Is there a reason you are not running individual power limits and clocks? Worst case, I would remove the 1060s and just run the 1070s correctly.

Agreed.
80W for a 1060 is not that low, but 100W for a 1070 is too low.
What are your hash rates with the 1070s?
I run my 1070 rig at 125W, oc 125, cc 600, getting 460-470 sol/s,
and my 1060 rig at 85W, oc 125, cc 600, getting 300 sol/s.

Been mining ETH and my hash rates have been 30-31 MH/s per 1070. I never thought you could have a PL "too low" if it's within the supported wattage of the card. Of course, some GPUs, like the MSI 1070 Gaming X, require a minimum of 115 watts, but some other 1070s can go as low as 90, so that's why I say the average is roughly 100W per card. It's been stable like this for nearly 2 weeks now, so I don't see why it's a big issue if it's stable?
member
Activity: 224
Merit: 13
November 05, 2017, 05:41:49 AM
Installed v19-1.4 yesterday on a new rig to test a new card.

sudo: unable to resolve host gtx1080ti-r1

Check your /etc/hosts file. It would appear that you are missing the entry for your miner, gtx1080ti-r1.

m1@Miner2:~$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       Miner2

# The following lines are desirable for IPv6 capable hosts
...
In the example, my host is named Miner2 and I have the necessary entry for it in my hosts file. Hope this helps.
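If the 127.0.1.1 line is missing entirely, one quick way to add it (this simply appends whatever name /etc/hostname already holds, e.g. gtx1080ti-r1 from the error above):

Code:
echo "127.0.1.1       $(cat /etc/hostname)" | sudo tee -a /etc/hosts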

member
Activity: 104
Merit: 10
November 05, 2017, 05:41:13 AM
Hi guys, can you help me add more algos to SALTER_NICEHASH? I want to add Cryptonight so that it can automatically switch to that algo. Pls, tnx tnx tnx Grin

Hi Fullzero (or anyone here), have you figured out how? I tried copying the settings of another algo and changing them, but it gets buggy and gives an insane income result lol
full member
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
November 05, 2017, 02:55:03 AM
By the way, a little off topic, but do you guys think it's OK to run 12 GPUs (all 1070s except for 2, which are 1060s) on two EVGA G3 850W PSUs? I undervolted all of the cards, and each 1070 is right around 100W apiece while the 1060s are roughly 80W. It's been stable now for over a week; just looking for some feedback regarding this setup.

There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7  Roll Eyes

I am with @ComputerGenie. Is there a reason you are not running individual power limits and clocks? Worst case, I would remove the 1060s and just run the 1070s correctly.

Agreed.
80W for a 1060 is not that low, but 100W for a 1070 is too low.
What are your hash rates with the 1070s?
I run my 1070 rig at 125W, oc 125, cc 600, getting 460-470 sol/s,
and my 1060 rig at 85W, oc 125, cc 600, getting 300 sol/s.
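If you do want to dial each card in separately outside of 1bash, the underlying commands are plain nvidia-smi / nvidia-settings calls; a hand-run sketch for GPU 0 (the 125W / +125 / +600 figures are only illustrative values in the same ballpark as those above, and nvidia-settings needs a running X session with Coolbits enabled, which the nvOC image normally provides):

Code:
sudo nvidia-smi -pm 1                      # enable persistence mode
sudo nvidia-smi -i 0 -pl 125               # per-card power limit: GPU 0 -> 125W
nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=125" \
                -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=600"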
member
Activity: 224
Merit: 13
November 05, 2017, 02:41:02 AM
By the way, a little off topic, but do you guys think it's OK to run 12 GPUs (all 1070s except for 2, which are 1060s) on two EVGA G3 850W PSUs? I undervolted all of the cards, and each 1070 is right around 100W apiece while the 1060s are roughly 80W. It's been stable now for over a week; just looking for some feedback regarding this setup.

There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7  Roll Eyes

I am with @ComputerGenie. Is there a reason you are not running individual power limits and clocks? Worst case, I would remove the 1060s and just run the 1070s correctly.
newbie
Activity: 36
Merit: 0
November 05, 2017, 02:38:08 AM
By the way, a little off topic, but do you guys think it's OK to run 12 GPUs (all 1070s except for 2, which are 1060s) on two EVGA G3 850W PSUs? I undervolted all of the cards, and each 1070 is right around 100W apiece while the 1060s are roughly 80W. It's been stable now for over a week; just looking for some feedback regarding this setup.

There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7  Roll Eyes

What do you mean exactly? I have a pretty small setup (~20 GPUs), so I just try to consolidate whenever possible.
hero member
Activity: 1092
Merit: 552
Retired IRCX God
November 05, 2017, 02:23:29 AM
By the way, a little off topic, but do you guys think it's OK to run 12 GPUs (all 1070s except for 2, which are 1060s) on two EVGA G3 850W PSUs? I undervolted all of the cards, and each 1070 is right around 100W apiece while the 1060s are roughly 80W. It's been stable now for over a week; just looking for some feedback regarding this setup.

There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7  Roll Eyes