
Topic: Any word on AMD Vega hash rates? - page 64. (Read 202725 times)

newbie
Activity: 7
Merit: 0
November 16, 2017, 10:45:22 AM
With Vega 56, what can you do if you are using OverdriveNTool and HWiNFO64 still reports a higher voltage than was set?

Keep in mind that the set voltage is still somewhat dynamic; that's normal and intended.

Depending on how low you want to go with your voltage, you will probably need one or both of: a) editing registry entries, b) flashing the Vega 64 BIOS.

And don't forget to restart your devices.
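For anyone hunting for those registry entries later, here is a minimal read-only sketch (Windows, Python) that just locates the keys where a PP_PhmSoftPowerPlayTable value would live. The GUID is the standard display-adapter device class; the binary table payload itself is card- and mod-specific, so this writes nothing and only tells you where to look:
Code:
# Minimal sketch, read-only: lists the display-adapter registry keys where a
# PP_PhmSoftPowerPlayTable value would live. The GUID is the standard Windows
# display-adapter device class; the binary payload is card- and mod-specific,
# so this only locates the keys and does not write anything.
import winreg

CLASS_KEY = (r"SYSTEM\CurrentControlSet\Control\Class"
             r"\{4d36e968-e325-11ce-bfc1-08002be10318}")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASS_KEY) as cls:
    index = 0
    while True:
        try:
            sub = winreg.EnumKey(cls, index)  # "0000", "0001", ...
        except OSError:                       # no more subkeys
            break
        index += 1
        try:
            with winreg.OpenKey(cls, sub) as key:
                desc, _ = winreg.QueryValueEx(key, "DriverDesc")
                try:
                    winreg.QueryValueEx(key, "PP_PhmSoftPowerPlayTable")
                    has_table = "present"
                except FileNotFoundError:
                    has_table = "absent"
                print(f"{sub}: {desc} (soft PowerPlay table: {has_table})")
        except OSError:
            continue                          # non-adapter subkey or access denied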
hero member
Activity: 935
Merit: 1001
I don't always drink...
November 16, 2017, 10:37:30 AM
With Vega 56, what can you do if you are using OverdriveNTool and HWiNFO64 still reports a higher voltage than was set?
newbie
Activity: 16
Merit: 0
November 16, 2017, 10:30:09 AM
Is anyone here running their Vegas on one 8-pin PCIe cable split into two 6+2 connectors? I know daisy-chaining can be bad, but the cables that came with the PSU (Corsair HX1000) are heavy-duty compared to other PCIe cables that have only one 6+2 connection.

I am running mine on the 2x 8-pin (technically 8-pin + 6+2-pin) cables that came with my Corsairs. Don't you have enough of these? With the RM1000x there are four of them per PSU.

Yeah, I have 4x 8-pin to 2x 6+2-pin. Up until now I've only been running two Vegas from the PSU, using two cables per Vega. I've just read a lot of horror stories about daisy-chaining PCIe cables, although most of those involve people using splitters. I didn't find much info on whether it was safe to use one PCIe cable for each Vega.

I tried running a 56 off a splitter; it seemed to work fine, but it was just one. Next I will try two 56s off the same cable. Corsair CX750M. I am mining CryptoNight though, not Ethash, and CryptoNight is very gentle on power compared to Ethash.

I really don't think trying to run two cards that should be pulling roughly 300 W from one PCIe 8-pin (rated at 150 W, plus the two PCIe slots at 75 W each) is going to net you a good result. I get that the math just barely tracks... it's just a lot of stress, compared to getting two PSUs or one properly sized unit with the right connectors. For me, mining is about uptime, availability, and consistency. If this is just a lab-based test then godspeed... but I'd be careful :)

Sorry if I misunderstood your intent :)
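To make that budget concrete, a quick back-of-envelope sketch. The 150 W and 75 W figures are the PCIe connector and slot spec ratings; the ~150 W per undervolted card on CryptoNight is an assumption:
Code:
# Back-of-envelope PCIe power budget for two cards on one daisy-chained cable.
# Assumptions: 150 W spec rating per 8-pin connector, 75 W per PCIe slot,
# and roughly 150 W draw per undervolted Vega on CryptoNight.
CABLE_8PIN_W = 150
SLOT_W = 75
CARDS = 2
DRAW_PER_CARD_W = 150

supply = CABLE_8PIN_W + CARDS * SLOT_W   # one shared cable + both slots
demand = CARDS * DRAW_PER_CARD_W
print(f"supply {supply} W vs demand {demand} W -> headroom {supply - demand} W")
# supply 300 W vs demand 300 W -> headroom 0 W: the math "tracks" with exactly
# zero margin, which is why two cards off one cable is risky.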
full member
Activity: 675
Merit: 100
November 16, 2017, 09:47:35 AM
Is anyone here running their Vegas on one 8 pin PCIe cable split into 2 6+2 connectors? I know daisy chaining can be bad but the cables that came with the PSU ( Corsair hx1000 ) are heavy duty compared to other PCIe cables with only one 6+2 connection.

I am running mine at the  2x8pin (technically 8 + 6+2pin)  cables that came with my corsairs. don't you have enough of these? with the rm1000x there are 4 of these per psu.  

Yeah I have 4x 8pin to 2x6+2pin. Up till now I've only been running 2 Vegas from the PSU using two cables per Vega. Just read a lot of horror stories regarding daisy chaining PCIe cables, although most of this is people using the splitters. I didn't find much info on wether it was safe to use one PCIe cable for each vega

I tried running a 56 off a splitter, seemed to work fine but it was just one.  Next I will try two 56s off the same cable.  Corsair CX750M.  I am mining CryptoNight though, not EthHash, which is very gentle on power compared to EthHash.
hero member
Activity: 1151
Merit: 528
November 16, 2017, 09:41:02 AM
Has anyone experienced issues with manually resetting their Vega cards in Device Manager?

I cannot re-enable my Vega 56 in Device Manager after disabling it. Toggling HBCC in Wattman causes the driver to crash, and the card goes missing from Wattman. The only way to fix this is to reboot the rig.

I'm using the latest Aug 23 blockchain drivers on Windows 10 x64 with the latest Creators Update (1709); I have tried previous Windows 10 builds with similar results.

The only driver that allows me to reset the cards is the latest 17.11.1 driver, but from what I can tell this is not the optimal driver for mining.

I'm only getting a max hash rate of 1300 H/s from a single Vega 56 using cast-xmr, while others are getting much higher.

I've spent two days on this so far with no luck.

My system specs:
Motherboard: Biostar TB250-BTC
Memory: 1x 8 GB
CPU: Celeron G3950
Video cards: just a single Vega 56 + integrated GPU
Virtual memory: fixed at 64 GB

Any help appreciated.


I am experiencing this exact issue as well with a different board. No resolution so far.
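For what it's worth, here is a minimal sketch of the command-line equivalent of that Device Manager toggle, using Microsoft's devcon utility. The devcon path below is an assumption you would adjust, and it needs an elevated prompt:
Code:
# Minimal sketch: restart the AMD GPU(s) from the command line instead of
# toggling them in Device Manager. Uses Microsoft's devcon.exe (ships with
# the Windows Driver Kit); the path below is an assumption. Run elevated.
import subprocess

DEVCON = r"C:\Tools\devcon.exe"   # assumed install location
AMD_GPUS = r"PCI\VEN_1002*"       # AMD's PCI vendor ID, matches all AMD cards

# Equivalent to disable + re-enable in Device Manager.
subprocess.run([DEVCON, "restart", AMD_GPUS], check=True)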
newbie
Activity: 5
Merit: 0
November 16, 2017, 07:35:36 AM
I'm actually using the iGPU for display instead of the Vega. I have tried an X370 AM4 board as well as an older Z77, with similar results. The X370 had a Ryzen 1700 in it, so no onboard GPU.

I have a feeling it could be a BIOS issue on the TB250. I may also look at picking up an Asus Z270-P. The TB250 has been flawless with 7x Nvidia cards.


Has anyone experienced issues with manually resetting their Vega cards in Device Manager?

I cannot re-enable my Vega 56 in Device Manager after disabling it. Toggling HBCC in Wattman causes the driver to crash, and the card goes missing from Wattman. The only way to fix this is to reboot the rig.

I'm using the latest Aug 23 blockchain drivers on Windows 10 x64 with the latest Creators Update (1709); I have tried previous Windows 10 builds with similar results.

The only driver that allows me to reset the cards is the latest 17.11.1 driver, but from what I can tell this is not the optimal driver for mining.

I'm only getting a max hash rate of 1300 H/s from a single Vega 56 using cast-xmr, while others are getting much higher.

I've spent two days on this so far with no luck.

My system specs:
Motherboard: Biostar TB250-BTC
Memory: 1x 8 GB
CPU: Celeron G3950
Video cards: just a single Vega 56 + integrated GPU
Virtual memory: fixed at 64 GB

Any help appreciated.



I have this problem occasionally now that I have swapped to the same motherboard as you; I never had any issues with the Z270 when switching on HBCC. I found that the problem is switching on HBCC on a card that you're using for display, so whatever card has the HDMI plugged in causes BSODs/freezing. Have you tried resetting the cards while using onboard graphics?
sr. member
Activity: 736
Merit: 262
Me, Myself & I
November 16, 2017, 07:29:01 AM
You can use it ONLY if you can ensure that your Vegas never draw more than 50% of TDP.

EDIT: this post was about using 8-pin to 2x (6+2) PCIe splitters...
hero member
Activity: 1151
Merit: 528
November 16, 2017, 07:25:34 AM

Currently using an MSI Z170A M5 board. Guess it's back to MC to try a Z270 board.

I used the Z170A M5 for my two Linux 470/570 rigs; it works flawlessly. But for Vega I was reading that it's really finicky with boards, so I broke down and bought the slightly more expensive (even more so on Amazon) ASRock H110 Pro+ board. I only need 4 of its 13 slots, but people report good results for Vega (even though reviewers complain about it a lot).
Yes, I have several of the M5s and they work amazingly with 8x 570s.

Thank you for this. It's available with same-day shipping for me; it'll be here this afternoon.

In the meantime I grabbed THREE different boards from MC to try out as well.
newbie
Activity: 40
Merit: 0
November 16, 2017, 07:06:20 AM
Is anyone here running their Vegas on one 8-pin PCIe cable split into two 6+2 connectors? I know daisy-chaining can be bad, but the cables that came with the PSU (Corsair HX1000) are heavy-duty compared to other PCIe cables that have only one 6+2 connection.

I am running mine on the 2x 8-pin (technically 8-pin + 6+2-pin) cables that came with my Corsairs. Don't you have enough of these? With the RM1000x there are four of them per PSU.

Yeah, I have 4x 8-pin to 2x 6+2-pin. Up until now I've only been running two Vegas from the PSU, using two cables per Vega. I've just read a lot of horror stories about daisy-chaining PCIe cables, although most of those involve people using splitters. I didn't find much info on whether it was safe to use one PCIe cable for each Vega.
member
Activity: 115
Merit: 10
November 16, 2017, 06:59:53 AM
Is anyone here running their Vegas on one 8-pin PCIe cable split into two 6+2 connectors? I know daisy-chaining can be bad, but the cables that came with the PSU (Corsair HX1000) are heavy-duty compared to other PCIe cables that have only one 6+2 connection.

I am running mine on the 2x 8-pin (technically 8-pin + 6+2-pin) cables that came with my Corsairs. Don't you have enough of these? With the RM1000x there are four of them per PSU.
full member
Activity: 224
Merit: 105
November 16, 2017, 06:52:31 AM
Same problem. How much RAM do you have in that rig?
newbie
Activity: 40
Merit: 0
November 16, 2017, 05:29:51 AM
Has anyone experienced issues with manually resetting their Vega cards in Device Manager?

I cannot re-enable my Vega 56 in Device Manager after disabling it. Toggling HBCC in Wattman causes the driver to crash, and the card goes missing from Wattman. The only way to fix this is to reboot the rig.

I'm using the latest Aug 23 blockchain drivers on Windows 10 x64 with the latest Creators Update (1709); I have tried previous Windows 10 builds with similar results.

The only driver that allows me to reset the cards is the latest 17.11.1 driver, but from what I can tell this is not the optimal driver for mining.

I'm only getting a max hash rate of 1300 H/s from a single Vega 56 using cast-xmr, while others are getting much higher.

I've spent two days on this so far with no luck.

My system specs:
Motherboard: Biostar TB250-BTC
Memory: 1x 8 GB
CPU: Celeron G3950
Video cards: just a single Vega 56 + integrated GPU
Virtual memory: fixed at 64 GB

Any help appreciated.



I have this problem occasionally now that I have swapped to the same motherboard as you; I never had any issues with the Z270 when switching on HBCC. I found that the problem is switching on HBCC on a card that you're using for display, so whatever card has the HDMI plugged in causes BSODs/freezing. Have you tried resetting the cards while using onboard graphics?
member
Activity: 115
Merit: 10
November 16, 2017, 05:13:14 AM
I gave Hellae's guide and mods a try yesterday. I can verify that it all worked out pretty well (thanks to the good silicon I got, I guess). But something bothers me...

They are 2x Vega 64 + a 56, all at 1408/1100 with his mod's voltages (905 mV, I think).

1) GPU-Z is showing 1 V on all three.
2) The things are damn hot. The other full rig I have, using these settings (but modded to 1220 on the GPU and then set to 1407 with OverdriveNTool), runs at 47-55 °C with up to 2800 RPM. The three fresh cards are at 62-65 °C with 3200 RPM...
3) Power usage is around 670-690 W on the three-card system. I expected a bit better; it's nowhere near the advertised values, it seems.

Any idea what I am doing wrong here? What can be done to fix this? One of the 64s ran alone with the oldcomer's mod and Overdrive tweaks at under 50 °C for a week on top of my desktop... so where did that 15 °C temperature increase come from?

I've had the same bug. Try setting P7 in OverdriveNTool to 910 mV (or 920, 930). This change lowered the voltage and the power consumption for me.

Gotta try that, for sure. I guess it lowered the temps as well?
member
Activity: 115
Merit: 10
November 16, 2017, 05:12:23 AM
Are you guys aware that you can run gatelessgate XMR on Vegas if you add
Code:
"no-adl": true
For me it's actually more stable than the rest.

Can you explain what this is, please? :)
sr. member
Activity: 857
Merit: 262
November 16, 2017, 04:54:32 AM
Are you guys aware that you can run gatelessgate XMR on Vegas if you add
Code:
"no-adl": true
For me it's actually more stable than the rest.
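If it helps, a minimal sketch of dropping that option into an existing gatelessgate JSON config; the filename is an assumption, so point it at wherever your config actually lives:
Code:
# Minimal sketch: add "no-adl": true to an existing gatelessgate JSON config.
# The filename below is an assumption; adjust it to your actual config file.
import json

path = "gatelessgate.conf"   # assumed filename
with open(path) as f:
    conf = json.load(f)

conf["no-adl"] = True        # skip ADL hardware monitoring, per the tip above

with open(path, "w") as f:
    json.dump(conf, f, indent=2)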
newbie
Activity: 84
Merit: 0
November 16, 2017, 04:52:34 AM
I gave Hellae's guide and mods a try yesterday. I can verify that it all worked out pretty well (thanks to the good silicon I got, I guess). But something bothers me...

They are 2x Vega 64 + a 56, all at 1408/1100 with his mod's voltages (905 mV, I think).

1) GPU-Z is showing 1 V on all three.
2) The things are damn hot. The other full rig I have, using these settings (but modded to 1220 on the GPU and then set to 1407 with OverdriveNTool), runs at 47-55 °C with up to 2800 RPM. The three fresh cards are at 62-65 °C with 3200 RPM...
3) Power usage is around 670-690 W on the three-card system. I expected a bit better; it's nowhere near the advertised values, it seems.

Any idea what I am doing wrong here? What can be done to fix this? One of the 64s ran alone with the oldcomer's mod and Overdrive tweaks at under 50 °C for a week on top of my desktop... so where did that 15 °C temperature increase come from?

I've had the same bug. Try setting P7 in OverdriveNTool to 910 mV (or 920, 930). This change lowered the voltage and the power consumption for me.
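A minimal sketch of that tweak as a file edit, assuming OverdriveNTool's usual "clock;millivolt" profile-line format; back up the .ini first and check the format against your own copy, since this is an assumption rather than documented behavior:
Code:
# Minimal sketch: bump the P7 voltage in an OverdriveNTool profile from
# 905 mV to 910 mV. Assumes the "GPU_P7=<clock>;<millivolt>" line format
# in OverdriveNTool.ini; back up the file and verify before running.
import re

path = "OverdriveNTool.ini"  # assumed location, next to the .exe
with open(path) as f:
    text = f.read()

# e.g. "GPU_P7=1408;905" -> "GPU_P7=1408;910"
text = re.sub(r"(GPU_P7=\d+);905\b", r"\1;910", text)

with open(path, "w") as f:
    f.write(text)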
newbie
Activity: 40
Merit: 0
November 16, 2017, 04:50:07 AM
Is anyone here running their Vegas on one 8-pin PCIe cable split into two 6+2 connectors? I know daisy-chaining can be bad, but the cables that came with the PSU (Corsair HX1000) are heavy-duty compared to other PCIe cables that have only one 6+2 connection.
newbie
Activity: 10
Merit: 0
November 16, 2017, 04:37:58 AM
I also tried Hellae's guide and got 1800 H/s with a Vega 56 on the stock BIOS. The PC idles at 70 W and draws 215 W under load, so the GPU should be around ~145 W.
If I try any of the three posted reg files on a flashed Vega 64 BIOS, I get a green screen at Windows 10 startup and can't get into Windows. Any ideas why? Voltage too low, maybe?
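The same load-minus-idle estimate, spelled out with the wall readings from above (this ignores the small difference in PSU efficiency between the two operating points):
Code:
# Estimating GPU draw from wall-power readings: the delta between loaded and
# idle wall power approximates what the card adds (ignoring the small change
# in PSU efficiency between the two operating points).
idle_w = 70    # whole PC, miner stopped
load_w = 215   # whole PC, miner running
gpu_w = load_w - idle_w
print(f"~{gpu_w} W attributable to the Vega 56")  # ~145 W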
member
Activity: 115
Merit: 10
November 16, 2017, 04:24:36 AM
I gave Hellae's guide and mods a try yesterday. I can verify that it all worked out pretty well (thanks to the good silicon I got, I guess). But something bothers me...

They are 2x Vega 64 + a 56, all at 1408/1100 with his mod's voltages (905 mV, I think).

1) GPU-Z is showing 1 V on all three.
2) The things are damn hot. The other full rig I have, using these settings (but modded to 1220 on the GPU and then set to 1407 with OverdriveNTool), runs at 47-55 °C with up to 2800 RPM. The three fresh cards are at 62-65 °C with 3200 RPM...
3) Power usage is around 670-690 W on the three-card system. I expected a bit better; it's nowhere near the advertised values, it seems.

Any idea what I am doing wrong here? What can be done to fix this? One of the 64s ran alone with the oldcomer's mod and Overdrive tweaks at under 50 °C for a week on top of my desktop... so where did that 15 °C temperature increase come from?
newbie
Activity: 84
Merit: 0
November 16, 2017, 04:05:13 AM

My Vega 56s are consistently hitting 37-39 MH/s.
My Vega 64s are consistently hitting around 43.5 MH/s.

Dang, nice. Have you flashed your 56s to the 64 BIOS? I heard they should hash about the same after that. And I'm very curious about your power draw per card. I know HWiNFO/GPU-Z aren't reliable, but what do they say each of your cards is pulling?

Which version of Windows 10 did you go with? I've got four 56s coming tomorrow and am setting up a Windows machine (my current rigs are all Ubuntu for RX 470/570s), so I'm kind of restarting my knowledge base...

I'm using the stock BIOS on all cards. I'm running Windows 10 Pro... not sure which build, I'll have to look.

Power consumption should be around 3000 W :O
You can lower it below 2000 W.