
Topic: SECOND GPU ONWARDS NOT WORKING - UVD NOT RESPONDING!?

full member
Activity: 1275
Merit: 141
You two should check the dates of the last post and the first post… this thread is 10 months old!
jr. member
Activity: 115
Merit: 4
did you try not using the x16 slot altogether?
newbie
Activity: 17
Merit: 0
Sorry for the off-topic question, but I would really appreciate your answer/advice. So, what kind of batteries did you purchase for your motherboard? I realized I need to change mine as well because the old ones are dead. I started googling and found these LR1130. Will they fit? I'm just a noob with computers...
newbie
Activity: 20
Merit: 1
Update

I am really bugged by this rig. I found out that the problem is not exactly with plugging in the second GPU onwards; it is actually with 7 of the 8 GPUs that came with the rig. I plugged in two other GPUs that I had and they worked fine alongside the only one of the 8 that works.

What is bugging me is that when I got the rig, there was only one GPU plugged directly into the mobo and the other 7 GPUs were on risers. And these 7 GPUs are the ones that are not working.

I'm still trying to understand what is happening and trying different arrangements. But I'm guessing there is a good chance that I'll end up returning the rig to my friend...
full member
Activity: 1275
Merit: 141
This error is very common and it's said to be an issue with the graphics drivers. Use DDU to uninstall the graphics drivers, download the latest AMD driver, and see if it works. Also, don't forget to activate compute mode in the driver settings.

I don't think HiveOS, being built on Linux, uses DDU... but thanks for the Windows advice.
And that compute mode setting is, again, a Windows-only thing.
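On Linux the closest thing is just checking which kernel driver is actually bound to the cards from the shell. A minimal check (plain lspci, nothing HiveOS-specific):

    # list display adapters and the kernel driver each one is using
    lspci -k | grep -EA3 'VGA|Display'
    # an RX 580 should normally show "Kernel driver in use: amdgpu"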
member
Activity: 210
Merit: 14
This error is very common and it's said to be an issue with the graphics drivers. Use DDU to uninstall the graphics drivers, download the latest AMD driver, and see if it works. Also, don't forget to activate compute mode in the driver settings.
newbie
Activity: 20
Merit: 1
I had the issue on my B250 mobo; Hive doesn't like the GPUs being in the x16 PCIe port.
All the others worked well.


Good to know. I'll try using a riser on GPU0 as well...
jr. member
Activity: 115
Merit: 4
I had the issue on my B250 mobo; Hive doesn't like the GPUs being in the x16 PCIe port.
All the others worked well.
newbie
Activity: 20
Merit: 1
Not all boards have all those options.

If you try putting GPU1, 2, or 3 in the place of GPU0, does only the original GPU0 still mine?

I would test each GPU as GPU0 to make sure they all work.

Then let's talk about how they are connected.

Risers, I assume?

You can also test each riser with GPU0 and a known-good GPU.

Let me know... if the GPUs work, it's just a matter of figuring out why it doesn't see the others.

Oh, and make sure not to power a GPU/riser from 2 different PSUs. I see you have 2. So all power connections to the motherboard come off PSU 1... and a GPU also connected to PSU 1 will need its riser powered by PSU 1. GPUs on risers are OK to power off PSU 2... but be sure the riser gets its power from PSU 2 as well. I assume there's a Y-cable to sync the PSUs so they start together.



In my tests, GPU0 was always the same card. But for GPU1 and GPU2 I tried different GPUs, always with the same result.

About risers... GPU0 is directly on the mobo; GPU1 onward are on risers. I tried new risers with the same result.

About the power supply... this rig has two PSUs (750 W each), but for now only one is plugged into the mobo, with a total of 3 GPUs. So... no mixing of PSU cables.

What I haven't tried yet is changing GPU0. I'll try that and report back.
full member
Activity: 1275
Merit: 141
Not all boards have all those options.

If you try putting GPU1, 2, or 3 in the place of GPU0, does only the original GPU0 still mine?

I would test each GPU as GPU0 to make sure they all work.

Then let's talk about how they are connected.

Risers, I assume?

You can also test each riser with GPU0 and a known-good GPU.

Let me know... if the GPUs work, it's just a matter of figuring out why it doesn't see the others.

Oh, and make sure not to power a GPU/riser from 2 different PSUs. I see you have 2. So all power connections to the motherboard come off PSU 1... and a GPU also connected to PSU 1 will need its riser powered by PSU 1. GPUs on risers are OK to power off PSU 2... but be sure the riser gets its power from PSU 2 as well. I assume there's a Y-cable to sync the PSUs so they start together.

newbie
Activity: 20
Merit: 1
Make sure you have these set in the BIOS:


    set VTd INTEL VIRTUALIZATION to DISABLE.
    set ONBOARD AUDIO/SOUND (AZALIA) to DISABLE.
    set IEEE1394 to DISABLE.
    set PCI-E SUBSYSTEM/LANES to x8/x4/x4
    set ONBOARD GRAPHICS to DISABLE.  (optional depending on OS you running)
    set PCI-E GENERATION to GEN2.
    set CPU FREQUENCY SCALING to DISABLE.
    set CPU PERFORMANCE MODE to ENABLE.
    set FAST BOOT to DISABLE.
    set CSM to ENABLE.




I tried almost everything you said. I couldn't find the IEEE1394 option, and also couldn't find the x8/x4/x4 option for the lanes or CPU frequency scaling. The rest is set as you suggested.
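If it helps narrow things down, a quick check I can run from the HiveOS shell after each BIOS change (standard Linux tools, nothing HiveOS-specific):

    # is the kernel still logging the UVD error?
    sudo dmesg | grep -i uvd
    # how many display adapters does the OS actually see?
    lspci | grep -icE 'VGA|Display'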
full member
Activity: 1275
Merit: 141
Make sure you have these set in the BIOS:


    set VTd INTEL VIRTUALIZATION to DISABLE.
    set ONBOARD AUDIO/SOUND (AZALIA) to DISABLE.
    set IEEE1394 to DISABLE.
    set PCI-E SUBSYSTEM/LANES to x8/x4/x4
    set ONBOARD GRAPHICS to DISABLE.  (optional depending on OS you running)
    set PCI-E GENERATION to GEN2.
    set CPU FREQUENCY SCALING to DISABLE.
    set CPU PERFORMANCE MODE to ENABLE.
    set FAST BOOT to DISABLE.
    set CSM to ENABLE.


newbie
Activity: 20
Merit: 1
I just got a used rig from my friend that has been offline for 2-3 years (he said it worked pretty well back then). It consists of:

- 8x RX 580 8GB PowerColor
- Asus Prime Z270-AR
- Intel Pentium® [email protected]
- 4GB of RAM (2133MHz)
- 2x Corsair CX750M power supplies
- SanDisk 16GB flash drive
- Hive OS (latest version)

When I run only one GPU, HiveOS starts fine. When I plug in the second GPU onward, I start getting the error "UVD not responding, trying to restart the VCPU", as can be seen in the image (https://imgur.com/a/JTGAqMw).

I tried changing the PCIe setting between Auto, Gen1, and Gen2 (I just didn't try Gen3). I also tried editing /etc/default/grub to change the line to GRUB_CMDLINE_LINUX="radeon.modeset=nomodeset" (also tried radeon.modeset=0 and just "nomodeset").
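For reference, the rough sequence I used to apply the GRUB change (assuming the stock Ubuntu-style setup HiveOS ships, with update-grub available):

    # edit the kernel command line, e.g. GRUB_CMDLINE_LINUX="radeon.modeset=0"
    sudo nano /etc/default/grub
    # regenerate the GRUB config and reboot so the new parameter takes effect
    sudo update-grub
    sudo reboot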

Not sure if there is another config change that could solve this.

Any clues to help!? Thanks a lot!

USEFUL INFO

 - I'm testing with only a single PSU and at most 2 or 3 GPUs, so the power cables all come from the same PSU.
 - All GPUs have their original factory BIOS.

UPDATE:

Tests already done, still without managing to boot more than two GPUs:

 - Motherboard battery change: done.
 - Change the riser of the second GPU for a new one: done.
 - Change from 4GB of RAM to 8GB of RAM: done.
 - Update the BIOS: done.
 - Config in the BIOS:
     - System Agent (SA) Configuration > VT-d > Disabled: done.
     - System Agent (SA) Configuration > DMI/OPI Configuration > DMI Max Link Speed > Gen2: done.
     - System Agent (SA) Configuration > PEG Port Configuration > PCIEX16_1 > Gen2: done.
     - System Agent (SA) Configuration > PEG Port Configuration > PCIEX16_2 > Gen2: done.
     - System Agent (SA) Configuration > PEG Port Configuration > PCIe Spread Spectrum Clocking > disabled: done.
     - PCH Configuration > PCI Express Configuration > PCI Speed > Gen2: done.
     - Onboard Devices Configuration > HD Audio Controller > disabled: done.
     - Onboard Devices Configuration > M.2_1 Configuration: [Auto][SATA mode][PCIE mode] > PCIE Mode: done.
     - Onboard Devices Configuration > M.2_1 Bandwidth Configuration > X4: done.
     - APM Configuration > Restore AC Power Loss > Power ON: done.
     - Boot > Fastboot > Enable/Disable (tried both): done.
     - Boot > Post Delay Time > 0 sec: done.
     - Boot > Above 4G Decoding > Enabled: done.
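In case it's useful, a quick way to confirm from the shell what the OS actually detects and whether the Gen2 setting took effect (plain lspci output; just a minimal sketch):

    # count the display adapters the kernel enumerated
    lspci -nn | grep -icE 'VGA|Display'
    # check the negotiated PCIe link speed/width per device (Gen2 = 5 GT/s)
    sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'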