
Topic: Mining Rig Down - The Frustration is Real (Read 1106 times)

member
Activity: 210
Merit: 10
November 27, 2017, 05:48:49 PM
#12
Because I just had this issue with my mining rig last week, my money is on a bad riser.

I used MintCell risers from Amazon, powered them with a PCIe cable from the PSU, and that fixed all my issues.

I'm going to try turning on another GPU tonight and see if the issue happens.  Then when the next 2 arrive I'll swap out what I can.

It seems unlikely that both are bad, so what would you guys check next if swapping out the risers doesn't work?
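If swapping risers doesn't settle it, one cheap next step is to log per-GPU power and temperature while adding cards back one at a time, so the last readings before a lockup point at the card (or riser) that misbehaved. Below is a minimal sketch in Python, assuming the pynvml bindings (pip install nvidia-ml-py) and an installed NVIDIA driver; the log file name is just an example.

Code:
# gpu_watch.py - append per-GPU power and temperature readings to a log file.
# Assumes the pynvml bindings (pip install nvidia-ml-py) and an installed NVIDIA driver.
import time
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    with open("gpu_watch.log", "a") as log:
        while True:
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            for i in range(count):
                handle = pynvml.nvmlDeviceGetHandleByIndex(i)
                watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # NVML reports milliwatts
                temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
                log.write(f"{stamp}  GPU{i}  {watts:6.1f} W  {temp} C\n")
            log.flush()
            time.sleep(5)
finally:
    pynvml.nvmlShutdown()

After a freeze, the tail of the log shows which card was drawing unusual power, running hot, or missing entirely just before everything stopped.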
newbie
Activity: 34
Merit: 0
November 27, 2017, 05:02:57 PM
#11
Because I just had this issue with my mining rig last week, my money is on a bad riser.

I used MintCell risers from Amazon, powered them with a PCIe cable from the PSU, and that fixed all my issues.
full member
Activity: 224
Merit: 100
CryptoLearner
November 27, 2017, 04:48:53 PM
#10
Windows 10 Pro, Version 1703

Negative; I have a single PCIe cable coming from the PSU that branches into the 6-pin and the 8-pin.  Is that incorrect?

OK, well, I prefer to use 2 cables rather than one split cable for more stable power, but it should be OK since you powered the risers and the motherboard. Also, you don't get past login in Windows, and the cards don't pull any power at that point. Well, maybe a bad riser then.
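For a rough sense of the margins here: a back-of-the-envelope check (Python; the card's board power is an assumption based on the reference GTX 1070's 150 W rating, and the connector figures are nominal ratings) suggests a single split cable isn't automatically overloaded at a 72% power limit, although two separate cables do reduce voltage drop on that run.

Code:
# Rough budget for one GTX 1070 fed by a single PSU cable that splits into 6-pin + 8-pin.
# Board power and power limit are assumptions taken from the thread, not measurements.

SLOT_W      = 75    # PCIe slot budget (delivered through the powered riser here), nominal
SIX_PIN_W   = 75    # 6-pin PCIe connector, nominal rating
EIGHT_PIN_W = 150   # 8-pin PCIe connector, nominal rating

board_power_w = 150.0   # assumed reference GTX 1070 board power
power_limit   = 0.72    # Afterburner power limit mentioned in the thread

card_draw = board_power_w * power_limit
cable_budget = SIX_PIN_W + EIGHT_PIN_W   # both plugs hang off the same split cable

print(f"Estimated card draw at {power_limit:.0%}: {card_draw:.0f} W")
print(f"Nominal budget of the split cable:    {cable_budget} W")
print(f"Riser/slot budget handled separately: {SLOT_W} W")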
member
Activity: 210
Merit: 10
November 27, 2017, 04:45:36 PM
#9
Answered in order:

My PSU has enough individual outputs on it that I run a separate cable from the PSU to each GPU.  I felt this would be the safest way to avoid voltage drop, despite the tangle of cables.  I'm getting tempted to check what voltage each one is actually seeing.

At this point, I can't even get past the Windows login screen with 4 GPUs on, let alone change my undervoltage levels.

I sure hope so!  I picked the 1000W unit as I thought it would be enough to max out the MB's PCIe slots.  The highest draw I've seen at 72% power is roughly 575W at the wall using a Kill A Watt meter.

I don't believe so, but I'm sure I can check.  The MB comes with 1 x PCIe 2.0 x16 and 5 x PCIe 2.0 x1, so I'd have to assume it's running at stock 2.0.  Hell, I didn't even know that was a selectable thing; I figured it was a hardware protocol.


Well, if it locks up even before you enter Windows, that's odd indeed. Which version are you running? W10 Enterprise with the Anniversary patch?
I don't think Gen1 would help, but it doesn't hurt since it saves a few watts (5 to 10).

Also, the 1070s you use have a 6-pin + 8-pin power input; you powered them both with different cables, right?

Windows 10 Pro, Version 1703

Negative; I have a single PCIe cable coming from the PSU that branches into the 6-pin and the 8-pin.  Is that incorrect?
full member
Activity: 224
Merit: 100
CryptoLearner
November 27, 2017, 04:22:32 PM
#8
Answered in order:

My PSU has enough individual outputs on it that I run a separate cable from the PSU to each GPU.  I felt this would be the safest way to avoid voltage drop, despite the tangle of cables.  I'm getting tempted to check what voltage each one is actually seeing.

At this point, I can't even get past the Windows login screen with 4 GPUs on, let alone change my undervoltage levels.

I sure hope so!  I picked the 1000W unit as I thought it would be enough to max out the MB's PCIe slots.  The highest draw I've seen at 72% power is roughly 575W at the wall using a Kill A Watt meter.

I don't believe so, but I'm sure I can check.  The MB comes with 1 x PCIe 2.0 x16 and 5 x PCIe 2.0 x1, so I'd have to assume it's running at stock 2.0.  Hell, I didn't even know that was a selectable thing; I figured it was a hardware protocol.


Well, if it locks up even before you enter Windows, that's odd indeed. Which version are you running? W10 Enterprise with the Anniversary patch?
I don't think Gen1 would help, but it doesn't hurt since it saves a few watts (5 to 10).

Also, the 1070s you use have a 6-pin + 8-pin power input; you powered them both with different cables, right?
member
Activity: 210
Merit: 10
November 27, 2017, 04:09:55 PM
#7
OK, sounds like your setup is OK.

How do you power the 8-pin + 6-pin GPU power inputs? One cable with a Y split? Two cables?

Have you tried using, for example, a 60% power level? Does the issue happen before mining or when mining starts?

Your PSU should be enough; with this configuration you should reach about 650W at the wall.

Maybe a bad riser, but it seems odd that it would crash the rig completely. Are you using Gen1 PCIe in the motherboard BIOS?
Answered in order:


My PSU has enough individual outputs on it that I run a separate cable from the PSU to each GPU.  I felt this would be the safest way to avoid voltage drop, despite the tangle of cables.  I'm getting tempted to check what voltage each one is actually seeing.

At this point, I can't even get past the Windows login screen with 4 GPUs on, let alone change my undervoltage levels.

I sure hope so!  I picked the 1000W unit as I thought it would be enough to max out the MB's PCIe slots.  The highest draw I've seen at 72% power is roughly 575W at the wall using a Kill A Watt meter.

I don't believe so, but I'm sure I can check.  The MB comes with 1 x PCIe 2.0 x16 and 5 x PCIe 2.0 x1, so I'd have to assume it's running at stock 2.0.  Hell, I didn't even know that was a selectable thing; I figured it was a hardware protocol.
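As a sanity check on the 1000 W choice, a rough budget (Python; every per-component figure below is an assumed typical value, not a measurement from this rig) lands in the same neighbourhood as the ~575 W reading from the Kill A Watt and the ~650 W estimate quoted above.

Code:
# Rough PSU budget for 4x GTX 1070 at a 72% power limit plus platform overhead.
# All per-component numbers are assumptions (typical values), not measurements.

gpu_count      = 4
gpu_board_w    = 150.0   # assumed reference board power per GTX 1070
power_limit    = 0.72    # Afterburner setting reported in the thread
platform_w     = 60.0    # assumed CPU + motherboard + RAM + risers + fans
psu_efficiency = 0.90    # assumed efficiency for the HX1000i at this load

dc_load_w   = gpu_count * gpu_board_w * power_limit + platform_w
wall_draw_w = dc_load_w / psu_efficiency

print(f"Estimated DC load:   {dc_load_w:.0f} W of a 1000 W PSU")
print(f"Estimated wall draw: {wall_draw_w:.0f} W (measured ~575 W at the wall)")

Even at a 100% power limit the estimated DC load only climbs to roughly 660 W, so the supply itself has headroom; that makes a riser or slot-power problem more plausible than an overloaded PSU.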



full member
Activity: 224
Merit: 100
CryptoLearner
November 27, 2017, 02:21:47 PM
#6
OK, sounds like your setup is OK.

How do you power the 8-pin + 6-pin GPU power inputs? One cable with a Y split? Two cables?

Have you tried using, for example, a 60% power level? Does the issue happen before mining or when mining starts?

Your PSU should be enough; with this configuration you should reach about 650W at the wall.

Maybe a bad riser, but it seems odd that it would crash the rig completely. Are you using Gen1 PCIe in the motherboard BIOS?
member
Activity: 210
Merit: 10
November 27, 2017, 02:17:44 PM
#5
Do you use self-powered risers? Did you connect the additional Molex power connectors on each side of the PCI Express slots on the motherboard? What power level do you use?

I'm using these risers: https://www.amazon.com/gp/product/B06XGTM694/ref=oh_aui_detailpage_o05_s00?ie=UTF8&psc=1

I'm using the Molex connectors that came with the PSU; each cable feeds 2 risers.  Then each GPU has its own dedicated cable back to the PSU.  I also have both Molex connectors attached to the MB to stabilize the PCIe slot voltage, per ASRock instructions.  I normally sit around 72% power in Afterburner.

I've got 2 more risers coming that I'll try swapping in for the non-functional GPUs tomorrow.
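If it's useful, the 72% setting can also be verified per card outside Afterburner. A small sketch, assuming the pynvml bindings (pip install nvidia-ml-py) expose the enforced and default power limits on this driver:

Code:
# Print each GPU's enforced power limit as a fraction of its default limit,
# to confirm the Afterburner 72% setting actually took effect on every card.
# Assumes the pynvml bindings (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        enforced_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0
        default_w = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle) / 1000.0
        print(f"GPU{i}: {enforced_w:.0f} W enforced / {default_w:.0f} W default "
              f"({enforced_w / default_w:.0%})")
finally:
    pynvml.nvmlShutdown()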
member
Activity: 210
Merit: 10
November 27, 2017, 02:13:22 PM
#4
Hey all,

Last night my screen began blinking on and off with some red pixelation.  It became so bad that eventually there was more screen off time than on.
What was going on before that? Has this rig been happily mining for days/weeks/months, or is it a new system you've just built?

I started back in late August with just (2) 1070s.  At that point, it ran well.  When I added the 3rd, I did notice some minor blinking on occasion when I probed the overclock limits of each board, but it went away after a restart and toning back Afterburner.  After the 4th GPU came on board it started happening more frequently, but again it was fixed with a restart.  As of last night, the restart no longer worked, which is when I started going over everything, cleared the drivers, and went to NVIDIA.
full member
Activity: 224
Merit: 100
CryptoLearner
November 27, 2017, 02:07:31 PM
#3
Do you use self-powered risers? Did you connect the additional Molex power connectors on each side of the PCI Express slots on the motherboard? What power level do you use? This sounds like a power issue to me; feel free to share more info.
legendary
Activity: 1106
Merit: 1014
November 27, 2017, 01:31:54 PM
#2
Hey all,

Last night my screen began blinking on and off with some red pixelation.  It became so bad that eventually there was more screen off time than on.
What was going on before that? Has this rig been happily mining for days/weeks/months, or is it a new system you've just built?
member
Activity: 210
Merit: 10
November 27, 2017, 12:41:30 PM
#1
Hey all,

Last night my screen began blinking on and off with some red pixelation.  It became so bad that eventually there was more screen off time than on.  I contacted NVIDIA tech support thinking it was a GPU issue, and after 3 hours of chat support with them from my laptop, I gave up and went to bed.  Here's what I know in a nutshell:

1) I have a Corsair HX1000i PSU, ASRock BTC H81 MB, 8 GB of RAM, (4) MSI GTX 1070 8 GB GPUs, and a Celeron CPU.
2) NVIDIA tech support and I played musical chairs with the riser seating (BTW, they said USB risers are NOT supported by these boards) to no avail.
3) I have flashed my BIOS and updated the GPU drivers to the latest version.
4) Each GPU works individually.
5) No issues found in msinfo according to NVIDIA.
6) Currently I have (2) cards up and running.  As soon as I add the additional cards, the problem comes back.

This leads me to think there's either an MB issue (I have a help request in to ASRock), a power issue, or a riser issue.  I really despise not knowing where to take it from here and having essentially $1K sitting around doing nothing.  Anyone who can help me get up and running, I'll even send a token of my appreciation.  If this is in the wrong topic area, I apologize in advance.

Thanks
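One quick check before blaming the motherboard: confirm the driver actually enumerates all four cards and note their PCI bus IDs, so a disappearing card can be matched to a physical slot/riser. A minimal sketch, assuming the pynvml bindings (pip install nvidia-ml-py); newer releases return strings where older ones returned bytes, hence the defensive decode.

Code:
# List every GPU the NVIDIA driver can see, with its PCI bus ID.
# Assumes the pynvml bindings (pip install nvidia-ml-py).
import pynvml

def as_text(value):
    # Older pynvml releases return bytes, newer ones return str.
    return value.decode() if isinstance(value, bytes) else value

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    print(f"Driver sees {count} GPU(s)")
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = as_text(pynvml.nvmlDeviceGetName(handle))
        bus_id = as_text(pynvml.nvmlDeviceGetPciInfo(handle).busId)
        print(f"  GPU{i}: {name} @ {bus_id}")
finally:
    pynvml.nvmlShutdown()

If a card that should be present never shows up, or its bus ID changes between boots, that points at the slot or riser rather than the card itself.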