
Topic: [OS] nvOC easy-to-use Linux Nvidia Mining - page 295. (Read 418257 times)

legendary
Activity: 1260
Merit: 1009
Is it safe to change the password of m1?
I need a stronger password before putting my rigs into an IDC.

Yes, I do it as soon as it boots the first time. No issues.

Remember to change the root password as well; it is also miner1 by default.
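For anyone following along, a minimal sketch of that change (run as m1 in a local terminal or over SSH; both accounts default to miner1):

```shell
# Change the m1 user password, then root's
passwd                # prompts for the current and new m1 password
sudo passwd root      # sets a new root password
```

Remember to update any saved SSH credentials afterwards.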
legendary
Activity: 1260
Merit: 1009
Hey Guys,

Have a problem loading the OS: it tells me "xorg PROBLEM DETECTED", then reboots and shows:
error: unknown filesystem
grub rescue>


What can it be, and how can I solve it? I used the flashing tools as described and tried at least twice. I am using an ASRock H110 and, for now, just one Manli P106-100 card so I can check that the OS installs before adding all 13 cards.



I need one or two of the:

P106-100

to test and ensure nvOC properly supports these GPUs.  A number of members have had problems using them.  If someone is willing to sell me one (preferably two), please PM me.
legendary
Activity: 1260
Merit: 1009
First of all, thanks for the awesome system.
It would be nice if you integrated Scott Alfter's Mining Pool Hub switcher (https://gitlab.com/salfter/mph_switch) into 1bash, the same way as his NiceHash switcher.

Thanks again.


I will integrate Scott's MPH switch.
legendary
Activity: 1260
Merit: 1009
Hi Fullzero

I want to share a GPU failure that the watchdog is not able to detect.

wdog screen:

GPU UTILIZATION:  Unable to determine the device handle for GPU 0000:09:00.0: GPU is lost. Reboot the system to recover this GPU

/home/m1/IAmNotAJeep_and_Maxximus007_WATCHDOG: line 44: [: Unable: integer expression expected
(…the same "integer expression expected" error repeats once for every remaining word of the nvidia-smi message…)
Tue Jul 25 16:57:01 CEST 2017 - All good! Will check again in 60 seconds


GPU UTILIZATION:  Unable to determine the device handle for GPU 0000:09:00.0: GPU is lost. Reboot the system to recover this GPU

/home/m1/IAmNotAJeep_and_Maxximus007_WATCHDOG: line 44: [: Unable: integer expression expected
(…the same "integer expression expected" error repeats once for every remaining word of the nvidia-smi message…)
Tue Jul 25 16:58:01 CEST 2017 - All good! Will check again in 60 seconds


the miner shows/detects only 6 GPUs out of 7

nvidia-smi doesn't work
$ nvidia-smi
Unable to determine the device handle for GPU 0000:09:00.0: GPU is lost.  Reboot the system to recover this GPU

temp screen:
Provided power limit 75.00 W is not a valid power limit which should be between 115.00 W and 291.00 W for GPU 00000000:0A:00.0
Terminating early due to previous errors.
Tue Jul 25 17:01:07 CEST 2017 - All good, will check again soon

GPU 0, Target temp: 61, Current: 60, Diff: 1, Fan: 75, Power: 123.46

GPU 1, Target temp: 61, Current: 60, Diff: 1, Fan: 63, Power: 124.62

GPU 2, Target temp: 61, Current: 59, Diff: 2, Fan: 77, Power: 119.23

GPU 3, Target temp: 61, Current: 60, Diff: 1, Fan: 68, Power: 120.72

GPU 4, Target temp: 61, Current: 59, Diff: 2, Fan: 57, Power: 124.26

GPU 5, Target temp: 61, Current: Unable, Diff: 61, Fan: to, Power: determine

/home/m1/Maxximus007_AUTO_TEMPERATURE_CONTROL: line 125: [: Unable: integer expression expected
/home/m1/Maxximus007_AUTO_TEMPERATURE_CONTROL: line 158: [: the: integer expression expected
/home/m1/Maxximus007_AUTO_TEMPERATURE_CONTROL: line 171: [: to: integer expression expected
GPU 6, Target temp: 61, Current: 55, Diff: 6, Fan: 50, Power: 126.76

Tue Jul 25 17:01:37 CEST 2017 - Restoring Power limit for gpu:6. Old limit: 125 New limit: 75 Fan speed: 50

Provided power limit 75.00 W is not a valid power limit which should be between 115.00 W and 291.00 W for GPU 00000000:0A:00.0
Terminating early due to previous errors.
Tue Jul 25 17:01:37 CEST 2017 - All good, will check again soon


I believe this is the exact problem that Maxximus007 recently made a new code block to resolve.

Fullzero,

I'm getting this error as well, and it looks like the watchdog is not rebooting the system.
I believe I have the latest bash files.
Are Maxximus007's changes to resolve this issue in the current bash files?
Thank you.



GPU UTILIZATION:  Unable to determine the device handle for GPU 0000:01:00.0: GPU is lost. Reboot the system to recover this GPU

/home/m1/IAmNotAJeep_and_Maxximus007_WATCHDOG: line 44: [: Unable: integer expression expected
(…the same "integer expression expected" error repeats once for every remaining word of the nvidia-smi message…)
Sat Jul 29 21:07:09 PDT 2017 - All good! Will check again in 60 seconds



The newest watchdog download link is at the top of the OP in purple.  It resolves this problem, and is more effective; it should not have a false positive reboot at all.
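The word-splitting in those logs is the telltale symptom: the script runs an integer test on whatever nvidia-smi printed, so an error sentence explodes into one `[` test per word and the check silently passes. A minimal sketch of the kind of guard the fixed watchdog needs (assumed logic with a hypothetical threshold, not the actual nvOC source):

```shell
#!/usr/bin/env bash
# Only do the integer comparison when nvidia-smi actually returned a
# number; anything else means the GPU dropped off the bus and the rig
# should be rebooted rather than reported as "All good".
UTIL_ALERT=10   # hypothetical "utilization too low" threshold

check_util() {
  local util="$1"
  if [[ "$util" =~ ^[0-9]+$ ]]; then
    if [ "$util" -lt "$UTIL_ALERT" ]; then
      echo "low"        # miner likely dead: restart the miner
    else
      echo "ok"
    fi
  else
    echo "gpu-lost"     # e.g. "Unable to determine the device handle..."
  fi
}

check_util 85   # -> ok
check_util "Unable to determine the device handle for GPU 0000:09:00.0"   # -> gpu-lost
```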
legendary
Activity: 4326
Merit: 8914
'The right to privacy matters'
Hi,

I got a Gainward 1060 today and it is the first card out of 8 (Gigabyte, ASUS) that ignores the power limit, whether I set it via 1bash or nvidia-smi; i.e. it draws ~100 W instead of 80 W. Is the card damaged?

still using v0017

btw: any planned date for v0019? I much appreciate this OS!

best regards



I am not familiar with Gainward.  The GPU may not be properly recognized by X, which in turn causes OC and PL not to work on it.  I usually test a problem GPU by moving it directly into the mobo's 16x slot (as the only GPU on the mobo) and checking whether the same problems manifest.

There are a lot of changes for v0019; I want to test them before releasing.  

skunk on zpool please  thanks phil
legendary
Activity: 1260
Merit: 1009
Hello guys,

Are any of you mining Pascal Lite? If so, can you please share the PASL coin part of your oneBash (I'd prefer the DUAL version)? I'd like to create a PASL account.

I tried accounts.pascallite.com, but I don't have any PASL on Cryptopia to buy (it only costs 0.05 PASL), so I want to try pasl.fairpool.xyz, as they said they will give me an account if I mine 12.25 PASL with my public wallet key.

I've tried to edit and add this to my oneBash, but it does nothing except crash/hang my rig.

Thanks in advance.

&&

Would also like to share my stable OC settings for the ASUS DUAL 1060 6GB:

cc: -100; mc: +1100; pl: 90 W

giving me 170 MH/s for Ethash and 200 MH/s for LBRY (dcri 40)

Hope it helps some new users looking for a stable OC; I'm also happy to get advice on increasing the yield.
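For reference, those settings written as 1bash-style assignments (variable names assumed from v0017-era 1bash; check your own file before pasting):

```shell
# ASUS DUAL 1060 6GB -- stable for the poster above (your cards may differ)
POWERLIMIT_WATTS=90
__CORE_OVERCLOCK=-100     # cc
MEMORY_OVERCLOCK=1100     # mc
```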

Thanks and good luck with our mining.

Hail fullZero for creating and expanding this OS with amazing support.

Long live CRYPTO & fullZero...

I have tried adding the following code to my oneBash per the instructions in earlier replies, but my rig still crashes when I try to mine PASL. Though I want DUAL_ETC_PASL, I've tried plain PASL using this code.

Code:
COIN="PASL"

Code:
PASL_WORKER="$IP_AS_WORKER"
PASL_ADDRESS="xxxxxxxxxxxxxx"
PASL_POOL="stratum+tcp://mine.pasl.fairpool.xyz:4009"

Code:
if [ "$COIN" == "PASL" ]
then
HCD='/home/m1/pasc/sgminer'
ADDR="$PASL_ADDRESS..$PASL_WORKER"

screen -dmS miner $HCD -k pascal -o "$PASL_POOL" -u "$ADDR" -p x -I 21 -w 64 -g 2

if [ "$LOCALorREMOTE" == "LOCAL" ]
then
screen -r miner
fi

# keep 1bash alive while the miner runs in its screen
BITCOIN="theGROUND"

while [ "$BITCOIN" == "theGROUND" ]
do
sleep 60
done
fi

sgminer starts but the rig hangs; can someone please help me with this?

I have figured it out: there's no need to add extra code. Just use the current PASC code as-is and change the pool and account details from PASC to PASL.

Thanks; knowing this will make adding PASL faster.
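To spell the fix out: keep the PASC code path and only swap in the PASL details. A sketch of the relevant 1bash lines (account value is a placeholder; the pool is the one named above):

```shell
# Reuse the existing PASC block, pointed at the PASL pool
COIN="PASC"                                            # keep the PASC code path
PASC_WORKER="$IP_AS_WORKER"
PASC_ADDRESS="yourPaslAccount"                         # your PASL account here
PASC_POOL="stratum+tcp://mine.pasl.fairpool.xyz:4009"  # PASL pool here
```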
legendary
Activity: 1260
Merit: 1009
Question for fullzero: is there a reason you don't include failover pool addresses in 1bash? I was watching my rig (for a completely separate issue I may need help with later) when it lost connection to the mining pool I use (nanopool west); the mining process tried to restart twice, and then the whole rig shut down. I tested my internet connection while the rig was restarting and trying to reconnect, and it was fine. Wouldn't it be better to connect to a failover pool address and, if that succeeds, try to re-establish the original pool an hour later? Or something along those lines, perhaps incorporated into the watchdog? It's cool that it shut itself down instead of wasting power, but I would prefer the rig to try another pool if it can.

I haven't tested it, but for Claymore, can't you just put all the failover addresses into /home/m1/eth/9_7/epools.txt like you do on Windows?

You should be able to use the Claymore failover.

I have a general client implementation of failover planned (it will work with any mining client).  This may or may not be in v0019, but will be included in time.  And yes this will be added in via the watchdog.
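For Claymore specifically, the failover file takes one pool per line. A sketch of what /home/m1/eth/9_7/epools.txt could look like (hosts and wallet are placeholders; check the epools.txt that ships with Claymore for the exact syntax of your version):

```text
POOL: eth-us-west1.nanopool.org:9999, WALLET: 0xYourWallet.rigName, PSW: x
POOL: eth-us-east1.nanopool.org:9999, WALLET: 0xYourWallet.rigName, PSW: x
```

Claymore works down the list in order when the current pool drops.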
legendary
Activity: 1260
Merit: 1009
After reading the ccminer readme.txt, I noticed that DMD and ZCOIN have the wrong algo flags in oneBash:
DMD = dmd-gr
ZCOIN = lyra2z

Not that anyone is mining these anymore Smiley

I will change these for the next 1bash.  Thanks jlbaseball11
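With the corrected flags, the invocations would look roughly like this (pool URL, port, and wallet are placeholders; only the -a values come from the ccminer README):

```shell
# Diamond uses the Groestl variant; Zcoin uses Lyra2Z
ccminer -a dmd-gr -o stratum+tcp://pool.example.com:1234 -u WALLET -p x
ccminer -a lyra2z -o stratum+tcp://pool.example.com:1234 -u WALLET -p x
```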
legendary
Activity: 1260
Merit: 1009
fullzero thanks for such a great and handy os for mining, real timesaver.

Currently mining SIGT with 1070 and having around 19-19.75 MH/s per card (100 COC, 1300 MOC, 125 PW, intensity 20). Tried different settings, tweaked to 20MH/s, but it wasn't stable, so this is where I stopped.

One question: sometimes the miner still crashes and I don't see why; the screen just gets terminated and I see the miner restart. Is there a log file where I could find the reason for the last termination? Thanks.

BTW, yesterday I also contacted sp about a Signatum ccminer mod; I was ready to pay him 0.05 BTC for it, but unfortunately he said he only makes mods for Windows. Maybe someone can convince him there is quite a big audience for a Linux version.


Click the Ubuntu button at the top left and type "s", then click on System Log.

Look through the logs for error messages.
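The same logs can be read from a terminal. A sketch that filters them down to the usual suspects (log paths are the Ubuntu defaults; the NVRM/Xid patterns are the NVIDIA kernel driver's error prefixes):

```shell
#!/usr/bin/env bash
# Filter a log stream down to lines that typically explain a GPU/miner crash
scan_log() { grep -iE 'NVRM|Xid|cuda|segfault|error'; }

# On the rig, point it at the Ubuntu system logs
for logfile in /var/log/syslog /var/log/kern.log; do
  [ -r "$logfile" ] && scan_log < "$logfile" | tail -n 20
done
true  # don't fail when no log file is readable
```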
newbie
Activity: 15
Merit: 0
Hi,
I have an ASRock H110 Pro BTC+ and 13x Mining P106-100 (1060 6GB) GPUs. Everything works fine and stable with other OSes, but with your OS it keeps restarting because it cannot see/find the xorg file.

We managed to start the OS with a GTX 1060 card installed and then added the Mining P106-100 cards, but it will not boot with only the P106-100 cards. Can you look into this? I would like to use your OS, but I don't want to keep a normal gaming card in a rig that is otherwise all mining cards.

Can you also write an explanation of overclocking? We tried via the file but failed; we only managed to overclock individually via the NVIDIA X Server settings.

Thank you.
BigSmurf
member
Activity: 119
Merit: 10
When I start the miner in single process:

1) breakdown of the CPU usage after about 20 minutes of uptime:
      1x process miner used 3:51
      14x process kworker (they used from 0:40 to 1:20)
      9x irq nvidia (they used from 0:40 to 1:03)

2) in the top twenty processes are miner, kworker, irq nvidia (nothing else)

3) if I type "ps aux | grep irqbalance" in Guake terminal i get two processes: one is root the other is m1.
What parameters can I check?

Thanks for helping dbolivar. I am stuck, because I have no experience with linux.

It's OK to see high CPU time* for kworker and irq/nvidia; these account for internal kernel worker threads and GPU I/O interrupts (expected on a multi-GPU rig that is mining). What I'm really looking for is how much CPU goes to user processes, system processes, and I/O wait, which is why I asked for those values. Try this: run the "top" utility (it updates constantly) and type "1"; that expands the third line (the CPU usage summary) to show each CPU. Then paste the values here, like this:

%Cpu0  :  0.3 us,  1.0 sy,  0.0 ni, 98.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :  0.3 us,  1.3 sy,  0.0 ni, 98.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
...
%CpuN  : ....

Regarding irqbalance, running the command I suggested, you should get one line like this:

root       874  0.0  0.0  19536  2232 ?        Ss   Aug06   0:12 /usr/sbin/irqbalance --pid=/var/run/irqbalance.pid

* EDIT: high CPU usage TIME (which accumulates until the next reboot), not a constantly high CPU usage percentage.
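If "top" is awkward to paste from, the same per-core breakdown can be computed directly from /proc/stat. A sketch (field meanings per the proc(5) man page: user, nice, system, idle, iowait, irq, softirq, steal):

```shell
# Per-core cumulative CPU usage since boot, as percentages
awk '/^cpu[0-9]/ {
    total = $2 + $3 + $4 + $5 + $6 + $7 + $8
    printf "%s user=%.1f%% sys=%.1f%% idle=%.1f%%\n",
           $1, 100 * $2 / total, 100 * $4 / total, 100 * $5 / total
}' /proc/stat
```

These are averages since boot, so run it twice a minute apart and compare if you want the current load.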
newbie
Activity: 11
Merit: 0
Hey there --

I'm running v0018 on a single rig with 6x 1070 GPUs (1x EVGA & 5x ASUS).  I have all the GPUs overclocked to where my total ETH hashrate is ~180 MH/s, and I'm quite content to just let it run.

That being said, my rig crashed today.  The error on the screen was:

CUDA error in function 'search' at line 346 : the launch timed out and was terminated

This error was repeated for all 6x cudaminers (cudaminer0 thru cudaminer5) at the same system clock time, and then the rig halted.

Can anyone possibly tell me what caused this issue and what I can do to avoid it happening again?

Thanks in advance!

Fogcity




newbie
Activity: 35
Merit: 0
I tried opening two miners in two screens with some change in 1bash:

screen -dmS minerX1 $HCD --eexit 3 --fee $EWBF_PERCENT --cuda_devices 0 1 2 3 4 --pec --server $ZEC_POOL --user $ZECADDR --pass z --port $ZEC_PORT;
screen -dmS minerX2 $HCD --eexit 3 --fee $EWBF_PERCENT --cuda_devices 5 6 7 8 9 --pec --server $ZEC_POOL --user $ZECADDR --pass z --port $ZEC_PORT;

After the change there are two processes in System monitor, but CPU usage is still the same (100% one core, 20% second core).

Is it possible that System monitor doesn't show the correct CPU utilization?
Will it help if I change the CPU from G3900 to i3 or perhaps i5?

I don't have experience with your particular hardware, but from the specs of your CPU, it supports up to 16 PCIe lanes, so at least I/O shouldn't be a problem (https://ark.intel.com/products/90741/Intel-Celeron-Processor-G3900-2M-Cache-2_80-GHz), as long as you use all your cards with risers, so they all run at PCIe 1x.

You can check the following in your Linux installation:

1) In "top" or "system monitor", what's the breakdown of the CPU usage for USER, SYSTEM, NICE and WAIT? This will help identify where the bottleneck could be.

2) Which are the top 3 processes using more CPU?

3) Check if you have the process "irqbalance" running with the correct parameters: "ps aux | grep irqbalance".

When I start the miner in single process:

1) breakdown of the CPU usage after about 20 minutes of uptime:
      1x process miner used 3:51
      14x process kworker (they used from 0:40 to 1:20)
      9x irq nvidia (they used from 0:40 to 1:03)

2) in the top twenty processes are miner, kworker, irq nvidia (nothing else)

3) if I type "ps aux | grep irqbalance" in Guake terminal i get two processes: one is root the other is m1.
What parameters can I check?

Thanks for helping dbolivar. I am stuck, because I have no experience with linux.
hero member
Activity: 651
Merit: 501
My PGP Key: 92C7689C
Sorry, but that's not really related to my question. It's no problem how loud the graphics cards are; they should just run at 100%. The cooler the card, the longer the lifespan, and at the moment, in this room, I really need them to run at 100%. They reach 70 degrees at 50%, which is too much.

I thought I read somewhere that running the fans much past 85% won't do much in the way of additional cooling, but it will add wear on the motors.
member
Activity: 119
Merit: 10
I tried opening two miners in two screens with some change in 1bash:

screen -dmS minerX1 $HCD --eexit 3 --fee $EWBF_PERCENT --cuda_devices 0 1 2 3 4 --pec --server $ZEC_POOL --user $ZECADDR --pass z --port $ZEC_PORT;
screen -dmS minerX2 $HCD --eexit 3 --fee $EWBF_PERCENT --cuda_devices 5 6 7 8 9 --pec --server $ZEC_POOL --user $ZECADDR --pass z --port $ZEC_PORT;

After the change there are two processes in System monitor, but CPU usage is still the same (100% one core, 20% second core).

Is it possible that System monitor doesn't show the correct CPU utilization?
Will it help if I change the CPU from G3900 to i3 or perhaps i5?

I don't have experience with your particular hardware, but from the specs of your CPU, it supports up to 16 PCIe lanes, so at least I/O shouldn't be a problem (https://ark.intel.com/products/90741/Intel-Celeron-Processor-G3900-2M-Cache-2_80-GHz), as long as you use all your cards with risers, so they all run at PCIe 1x.

You can check the following in your Linux installation:

1) In "top" or "system monitor", what's the breakdown of the CPU usage for USER, SYSTEM, NICE and WAIT? This will help identify where the bottleneck could be.

2) Which are the top 3 processes using more CPU?

3) Check if you have the process "irqbalance" running with the correct parameters: "ps aux | grep irqbalance".
hero member
Activity: 651
Merit: 501
My PGP Key: 92C7689C
@salfter :

I'd like to suggest an idea to your switcher: when the most profitable algorithm is Ethash, give the option to dual-mine automatically with the second most profitable algorithm if it's able to do so.

Of course I'm oversimplifying, as the switching itself will have to consider a higher power limit, more complex profit calculation and perform a "switch inside a switch" (Ethash fixed + switching second algo). But I think we can get a few more bucks this way, at least from my short mining experience it's usually profitable to dual-mine when Ethash is the most profitable.

My somewhat limited experience is that it only squeezes out a few more cents, not dollars.  I'd also need to redo miner benchmarks, as I've been using Genoil's ethminer.
newbie
Activity: 35
Merit: 0
Hi!
I'm about to build a rig with a Pentium G4400, but after reading about the Skylake hyper-threading issues, I don't know whether I should use it or return and exchange it for an i3.

I have a rig running 8 cards with a G3900, no problem.

I have a rig with a G3900 and 10 cards, but it seems the G3900 is bottlenecking the system: the first core runs at 100%, the others at around 20%.

Is there a way to split the workload across both cores (e.g. could I run the EWBF miner in two terminals, or is there some other way)?

Can somebody please help me? I'm a noob in Linux.


nvOC is a great system. Thanks for all the work.

I tried opening two miners in two screens with some change in 1bash:

screen -dmS minerX1 $HCD --eexit 3 --fee $EWBF_PERCENT --cuda_devices 0 1 2 3 4 --pec --server $ZEC_POOL --user $ZECADDR --pass z --port $ZEC_PORT;
screen -dmS minerX2 $HCD --eexit 3 --fee $EWBF_PERCENT --cuda_devices 5 6 7 8 9 --pec --server $ZEC_POOL --user $ZECADDR --pass z --port $ZEC_PORT;

After the change there are two processes in System monitor, but CPU usage is still the same (100% one core, 20% second core).

Is it possible that System monitor doesn't show the correct CPU utilization?
Will it help if I change the CPU from G3900 to i3 or perhaps i5?

Fullzero, I see you have 13 GPUs on an ASRock H110 Pro BTC (I am using the same motherboard). What CPU do you have so that everything runs smoothly?
