
Topic: Any word on amd vega hash rates? - page 65. (Read 202725 times)

newbie
Activity: 4
Merit: 0
November 16, 2017, 03:17:45 AM

My Vega 56's are consistently hitting 37-39 MH/s
My Vega 64's are consistently hitting around 43.5 MH/s


Dang nice. Have you flashed your 56s to the 64 BIOS? I heard they should hash about the same after that. And I'm very curious about your power draw per card. I know HWinfo/GPUz isn't reliable, but what do they say your cards are each pulling?

Which version of WIN10 did you go with? I've got 4 56s coming tomorrow and I'm setting up a Windows machine (my current rigs are all Ubuntu for RX 470/570s), so I'm kinda restarting with my knowledge base....

I'm using stock BIOS on all cards.  I'm running WIN10 Pro... not sure what build, I'll have to look.
legendary
Activity: 1510
Merit: 1003
November 16, 2017, 02:32:17 AM
Anybody having issues with monitoring the health of their rig ?

I am experiencing continuous lock-ups in cast-xmr (and stak, for that matter) when I'm running hwinfo64 or gpu-z to monitor the health of the rig.

It's imperative to know temperatures on the hbm and monitor fan speeds.

4 GPU + single PSU is OK. On 8 GPU + 2 PSU the lockups are frequent (every hour or two).

Any other options ?
hwinfo64 works fine with vega, no speed decrease or lockups. Just disable GPU I2C Support in settings.

I've been reading through the threads trying to glean a little knowledge and it's things like this that light me up.  

I've got the latest stable hwinfo64 5.60-3280.  When simply disabling GPU I2C support (that single check-box), my hash-rate drops.  Any other options I should be adding/dropping?  ... I think that it might be working now if I keep hwinfo64 open, reset my vegas, and then check temps.  Is that what you mean?  

Also, since we're talking temps: you said you set the target to 50. Is that via Wattman in Radeon Settings? For some reason I was under the impression that the temp targets in Wattman weren't effective and that it all had to be done manually. Yes, I'm talking LC here. I'm also interested in keeping fans down if possible. You said HBM temp 65-67 C. Would you say over 70 is dangerous?

Hi!
I use OverdriveNtool to set target temp.
I currently clock the HBM high - to 1150 MHz - so for that reason I keep the memory temp below 65 with appropriate fan settings.


As for the hwinfo64 issues you have ... well ... I currently use hw64_559_3270 ... haven't tried the most recent yet.
Make sure other stuff isn't responsible for the hashrate drop. It could be Windows power-saving options or other hardware-related utilities (in my case, it was Nvidia utilities in the Task Scheduler).
full member
Activity: 196
Merit: 100
November 16, 2017, 01:13:25 AM

My Vega 56's are consistently hitting 37-39 MH/s
My Vega 64's are consistently hitting around 43.5 MH/s


Dang nice. Have you flashed your 56s to the 64 BIOS? I heard they should hash about the same after that. And I'm very curious about your power draw per card. I know HWinfo/GPUz isn't reliable, but what do they say your cards are each pulling?

Which version of WIN10 did you go with? I've got 4 56s coming tomorrow and I'm setting up a Windows machine (my current rigs are all Ubuntu for RX 470/570s), so I'm kinda restarting with my knowledge base....
full member
Activity: 305
Merit: 148
Theranos Coin - IoT + micro-blood arrays = Moon!
November 15, 2017, 11:44:15 PM
I read something about a maximum of four Vega GPUs per system. Is that correct?
Is it the driver that doesn't support more than four cards?

No, that is not correct.
newbie
Activity: 5
Merit: 0
November 15, 2017, 11:41:06 PM
Anyone experience issues with manually resetting their Vega cards in device manager?

I  cannot re-enable my Vega 56 in device manager after disabling.  Toggling HBCC in Wattman causes the driver to crash and the card is missing from Wattman.  The only way to fix this is to reboot the rig.

Using the latest Aug23 block chain drivers in Windows 10 x64 with latest creators update 1709 (have tried previous Windows 10 builds with similar results)

The only driver that allows me to reset the cards is the latest 17.11.1 driver but from what I can tell this is not the optimal driver for mining.

Only getting a max hash rate of 1300 H/s using cast-xmr from a single Vega 56, when others are getting much higher.

Spent 2 days so far with no luck.

My system specs
Motherboard: Biostar TB250-BTC
Memory: 1x8GB
CPU: Celeron G3950
Video cards: Just a single Vega 56 + integrated GPU
Virtual memory: Fixed at 64GB

Any help appreciated.

newbie
Activity: 4
Merit: 0
November 15, 2017, 11:37:04 PM
Mythic,

I'm very new on the mining scene and by standing on the shoulders of giants, I have just recently constructed the following rig:

B250 Mining Expert MB
8 gigs RAM
1x60 gig SSD
2x EVGA 1600 Watt P2 power supplies
5x Vega 64
5x Vega 56

I just wanted to sincerely thank you for your post -- your method of resetting the cards was a true life saver.
I can confirm I'm running 10 cards on a single board at full mining speeds.

I'm currently mining ETH (I just happened to have a config set up on a thumb drive and it was easy to start testing).

With stock Vega drivers released on 11/13 (not blockchain) and GPUs optimized as suggested in several other videos and posts on the net, here are my current results:

My Vega 56's are consistently hitting 37-39 MH/s
My Vega 64's are consistently hitting around 43.5 MH/s

Total of about 407 MH/s
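For the curious, the quoted total checks out against the per-card numbers, taking ~38 MH/s as the midpoint of the 37-39 range on the 56s:

```python
# Rough sanity check of the rig total quoted above.
# Assumption: ~38 MH/s midpoint per Vega 56, 43.5 MH/s per Vega 64.
vega56_rate = 38.0   # MH/s per Vega 56 (midpoint of 37-39)
vega64_rate = 43.5   # MH/s per Vega 64
total = 5 * vega56_rate + 5 * vega64_rate
print(total)  # 407.5 -- in line with the ~407 MH/s reported
```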

I can't tell you how much power I'm drawing at the wall -- waiting on a kill-a-watt meter to come in

I plan to move over to Monero as soon as tomorrow, but that's another topic entirely -- I suspect I'm going to have to reinstall to use a bigger drive for virtual memory, but anyway.....


Anyway, thanks again!   You are definitely The Man.  I'm broke at the moment, but once I start making a bit o cash, don't be surprised if you find a few funds directed your way.



M.





I hadn't originally planned on putting this out there, but some other asshole decided to charge a fee for this info. Fuck that guy. Here's a fix for reduced hashrate and a method for running 4+ Vega at full speed while mining XMR.

Basic Users:

1. Use the Blockchain drivers.
2. Open up your device manager.
3. Open up the "Display Adapters" dropdown.
4. Right click on one of your Vegas.
5. Disable it.
6. Wait a few seconds.
7. Right click on the Vega you just disabled.
8. Re-enable it.
9. Repeat steps 4-8 for the rest of your cards.
10. Enjoy mining at full speed with as many Vega as you damn well please. Probably.

Advanced Users:

The previously described process is a bit of a pain in the ass. Let's automate it.

New Procedure

1. Get your hands on the Windows Device Console (Devcon.exe). This will let you disable/enable devices from console.
2. Create a batch file in the same folder as devcon.exe with the following lines in it:
Code:
rem Run from the folder this batch file lives in (next to devcon.exe)
cd %~dp0
timeout /t 5
rem This hardware ID matches every Vega 10 card (vendor 1002 = AMD, device 687F)
devcon.exe disable "PCI\VEN_1002&DEV_687F"
timeout /t 5
devcon.exe enable "PCI\VEN_1002&DEV_687F"
3. This will selectively disable/enable all the Vega cards you currently have in your system. Thanks to bytiges for noting that killing all the display adapters might not be a good idea for those using iGPU.
4. Run the batch file as an administrator at login, or whenever you lose hashrate on your cards. Enjoy never having to toggle the HBCC switch again.
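If you'd rather drive devcon from something with logging or retries, the batch file above can be mirrored in Python. This is only a sketch; it assumes devcon.exe is on PATH or sitting next to the script:

```python
import subprocess
import time

# Hardware ID matching every Vega 10 card (vendor 1002 = AMD, device 687F),
# the same pattern the batch file passes to devcon.
VEGA_HWID = r"PCI\VEN_1002&DEV_687F"

def build_commands(devcon="devcon.exe"):
    """The disable/enable command lines, in the order the batch file runs them."""
    return [[devcon, action, VEGA_HWID] for action in ("disable", "enable")]

def toggle_vegas(settle=5):
    """Disable then re-enable all matching Vegas, pausing for the driver to settle."""
    for cmd in build_commands():
        subprocess.run(cmd, check=True)
        time.sleep(settle)

# Call toggle_vegas() at login, or whenever your cards lose hashrate.
```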

Outdated Procedure

1. Get your hands on the Windows Device Console (Devcon.exe). This will let you disable/enable devices from console.
2. Run the command devcon.exe FindAll * to generate a list of all the currently attached devices and their hardware IDs.
3. Search your newly generated list for "Radeon RX Vega." You should find something like this:
Code:
PCI\VEN_1002&DEV_687F&SUBSYS_0B361002&REV_C3\6&3AAC35E3&0&000000E7 : Radeon RX Vega
4. The part to the left of the colon is what you need. Shocking, I know. Put that thing into a batch file that looks something like this:
Code:
@echo on
rem Adjust this path to wherever devcon.exe and OverdriveNTool.exe live
cd C:\Users\Mythic\Desktop\Startup
timeout /t 5
rem Quote the instance ID -- the & characters would otherwise split the command
devcon.exe disable "PCI\VEN_1002&DEV_687F&SUBSYS_0B361002&REV_C3\6&3AAC35E3&0&000000E7"
timeout /t 15
devcon.exe enable "PCI\VEN_1002&DEV_687F&SUBSYS_0B361002&REV_C3\6&3AAC35E3&0&000000E7"
timeout /t 15
rem Apply per-card clocks/voltages, then launch the miner
OverdriveNTool.exe -p0Vega0 -p1Vega1 -p2Vega2 -p3Vega3 -p4Vega4 -p5Vega5 -p6Vega6
timeout /t 10
cd C:\Users\Mythic\Desktop\xmr-stak-amd
xmr-stak-amd.exe
5. Run your batch file at startup as an administrator. Enjoy never having to toggle the HBCC switch again.
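The ID-harvesting in steps 2-4 (grabbing everything left of the colon from the `devcon.exe FindAll *` output) can be sketched like this; the helper name is made up for illustration:

```python
# Pull the device instance IDs (the part left of the colon) out of
# `devcon.exe FindAll *` output, for every line mentioning "Radeon RX Vega".
def vega_instance_ids(findall_output):
    ids = []
    for line in findall_output.splitlines():
        if "Radeon RX Vega" in line and ":" in line:
            ids.append(line.split(":", 1)[0].strip())
    return ids

# Example line in the format shown above:
sample = r"PCI\VEN_1002&DEV_687F&SUBSYS_0B361002&REV_C3\6&3AAC35E3&0&000000E7 : Radeon RX Vega"
print(vega_instance_ids(sample))
```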

FAQ:

Q: So, Mythic, how exactly does this enable HBCC on cards 5-7?

A: It doesn't! HBCC does nothing to improve your hashrate! Turns out the drivers are just shit and are bugged when Windows first boots up. Who would have guessed?

Q: But Mythic, does that mean I don't need 16 GB of RAM to run my cards at full speed?

A: Correct! 4 GB of RAM is completely sufficient!

Q: If I wanted to donate some pizza money to the broke ass college student that typed this up, how would I do so?

A: You ask the best questions! I doubt I'll get anything before more well known people/websites copy everything I just said (with no credit, obviously), but here are some addresses just in case.
          XMR: 42e8AWjcirkBDzCNnSjPxeGJeht71kFfWcoxxCWsxe8HZqP29NruDsxcvVSjbKw17AUDepopK7ZYCUnmRvcGS9kBT5XWhMQ
          ETH: 0x74Ed2CA095Dd3aE98D88d5ca1dDb77E752152938
member
Activity: 75
Merit: 10
November 15, 2017, 11:18:23 PM
Any ideas what might be getting triggered by the disable/enable method?
I'm clearly much less versed on the inner workings than you, but I always assumed it was simply an issue with initializing the driver on Windows boot vs resetting when all the subsystems are fully loaded.

Although just as a random question, is the reset still required when a system does not have an iGPU? In other words, is it an interaction with the multiple display drivers?
newbie
Activity: 49
Merit: 0
November 15, 2017, 10:56:39 PM
So... HBCC isn't what causes the higher hashrate.

Any ideas what might be getting triggered by the disable/enable method?

I'm afraid this may be a bug in the beta driver that will never be ported to the gaming driver.

Disabling the card would clear the frame buffer, but beyond that I’m not sure what else a re-initialization would do…

Anyone up for a brainstorm?

full member
Activity: 675
Merit: 100
November 15, 2017, 10:25:30 PM
So is the August blockchain driver still required? Or has the 17.11.1 update finally fixed all the issues when toggling Compute? Haven't had time to play around with drivers lately :(

I'd like to know that as well. My concern with the new drivers is that I'll have to keep turning Compute mode back on every time Wattman crashes.
newbie
Activity: 26
Merit: 0
November 15, 2017, 08:27:27 PM
Anybody having issues with monitoring the health of their rig ?

I am experiencing continuous lock-ups in cast-xmr (and stak, for that matter) when I'm running hwinfo64 or gpu-z to monitor the health of the rig.

It's imperative to know temperatures on the hbm and monitor fan speeds.

4 GPU + single PSU is OK. On 8 GPU + 2 PSU the lockups are frequent (every hour or two).

Any other options ?
hwinfo64 works fine with vega, no speed decrease or lockups. Just disable GPU I2C Support in settings.

I've been reading through the threads trying to glean a little knowledge and it's things like this that light me up. 

I've got the latest stable hwinfo64 5.60-3280.  When simply disabling GPU I2C support (that single check-box), my hash-rate drops.  Any other options I should be adding/dropping?  ... I think that it might be working now if I keep hwinfo64 open, reset my vegas, and then check temps.  Is that what you mean? 

Also, since we're talking temps: you said you set the target to 50. Is that via Wattman in Radeon Settings? For some reason I was under the impression that the temp targets in Wattman weren't effective and that it all had to be done manually. Yes, I'm talking LC here. I'm also interested in keeping fans down if possible. You said HBM temp 65-67 C. Would you say over 70 is dangerous?
legendary
Activity: 2172
Merit: 1401
November 15, 2017, 05:39:52 PM
So is the August blockchain driver still required? Or has the 17.11.1 update finally fixed all the issues when toggling Compute? Haven't had time to play around with drivers lately :(
newbie
Activity: 84
Merit: 0
November 15, 2017, 05:02:31 PM
I made a script to run devcon, OverdriveNTool and finally cast-xmr. I set the Task Scheduler to run this script at startup whether the user is logged in or not. When booting the PC without logging in, the script runs in the background, without opening a cmd prompt.
This results in a consumption of around 1015 W on a 6x Vega 64 rig.
If I run the script manually, so that it shows the cmd prompt, the power consumption is between 100 and 150 W higher.
I don't understand why this happens. I would like to know if you can replicate these results.
Of course, the downside of this is that you cannot see the hashrate you are getting.

Interesting... Please post it when you figure out what is happening.

mod.: I've tried it, but no difference for me.
full member
Activity: 196
Merit: 100
November 15, 2017, 04:13:05 PM
I bought only one Vega 56 in order to test it; 1940 H/s stable on XMR for 3 days now.
Damn, if only I'd bought more when it was on sale...

Seriously - they are $100 more each now... :(
newbie
Activity: 31
Merit: 0
November 15, 2017, 04:05:52 PM
I bought only one Vega 56 in order to test it; 1940 H/s stable on XMR for 3 days now.
Damn, if only I'd bought more when it was on sale...
newbie
Activity: 6
Merit: 0
November 15, 2017, 03:56:05 PM
I made a script to run devcon, OverdriveNTool and finally cast-xmr. I set the Task Scheduler to run this script at startup whether the user is logged in or not. When booting the PC without logging in, the script runs in the background, without opening a cmd prompt.
This results in a consumption of around 1015 W on a 6x Vega 64 rig.
If I run the script manually, so that it shows the cmd prompt, the power consumption is between 100 and 150 W higher.
I don't understand why this happens. I would like to know if you can replicate these results.
Of course, the downside of this is that you cannot see the hashrate you are getting.


Devcon wasn't running properly.
full member
Activity: 1123
Merit: 136
November 15, 2017, 03:41:15 PM
Okay, surprise surprise.. with two cards, if I toggle HBCC on card #1 it works just fine. If I toggle HBCC on #2 (previously working), card #2 now disappears from Crimson.. it's always the "last" listed card in Crimson. This is quite interesting.
Dude, have you flashed your bios to the latest version?
Otherwise try another mobo.
Or play around with the PCI-E settings in bios. It sounds like there could be some inconsistencies there.
Yep flashed to the latest version. I've tried PCIe Gens 1, 2, and 3.

I'll be grabbing a Z370 mobo from Microcenter this afternoon.
Let us know how that works out. Which mobo did you use so far out of interest?
Oh fucking hell, the Z370s ONLY support 8th gen CPUs even though the socket is the same. Fuck you Intel.

Currently using a Z170A MSI M5 board. Guess it's back to MC to try a Z270 board.

Man, you just can't catch a break with your rig. :'(
full member
Activity: 196
Merit: 100
November 15, 2017, 03:10:54 PM

Currently using a Z170A MSI M5 board. Guess it's back to MC to try a Z270 board.

I used the Z170A M5 for my two Linux/470/570 rigs. Works flawlessly. But for Vega I was reading it's really finicky with boards. So I broke down and bought the slightly more expensive (even more so on Amazon) ASRock H110 Pro+ board. I only need 4 of the 13 slots, but people report good results for Vega (even though in the reviews people complain about it a lot).
hero member
Activity: 687
Merit: 502
November 15, 2017, 03:09:34 PM
I read something about a maximum of four Vega GPUs per system. Is that correct?
Is it the driver that doesn't support more than four cards?

No, building them with 6 all the time
Oh, that's nice to hear!
Do you have any power draw numbers to share?
hero member
Activity: 1151
Merit: 528
November 15, 2017, 02:58:39 PM
Okay, surprise surprise.. with two cards, if I toggle HBCC on card #1 it works just fine. If I toggle HBCC on #2 (previously working), card #2 now disappears from Crimson.. it's always the "last" listed card in Crimson. This is quite interesting.
Dude, have you flashed your bios to the latest version?
Otherwise try another mobo.
Or play around with the PCI-E settings in bios. It sounds like there could be some inconsistencies there.
Yep flashed to the latest version. I've tried PCIe Gens 1, 2, and 3.

I'll be grabbing a Z370 mobo from Microcenter this afternoon.
Let us know how that works out. Which mobo did you use so far out of interest?
Oh fucking hell, the Z370s ONLY support 8th gen CPUs even though the socket is the same. Fuck you Intel.

Currently using a Z170A MSI M5 board. Guess it's back to MC to try a Z270 board.
legendary
Activity: 1025
Merit: 1001
November 15, 2017, 02:58:19 PM
I read something about a maximum of four Vega GPUs per system. Is that correct?
Is it the driver that doesn't support more than four cards?

No, building them with 6 all the time