Topic: 2x6970's Crashing Repeatedly with GUIMiner - page 3. (Read 7735 times)

sr. member
Activity: 1470
Merit: 428
January 17, 2012, 12:42:37 PM
#42
I'm gonna ask again: are you running both GPU-Z and Afterburner? If you are, that would explain the extreme temperatures and crashing. ATI cards have a bug: when two apps are polling for (VRM only?) temps, it can cause vcore to spike to over 1.6 V. I know HD5000 cards are affected by this (I killed one myself because of it); I haven't seen confirmation that the 6000 series is affected, but it wouldn't surprise me.

Why don't you try running none of these monitoring apps, not even CCC, and just use cgminer and see what happens? Or GUIMiner if you must.

I'm sorry for missing your question before. I've been running just MSI Afterburner.

So I reset everything to stock and closed MSI Afterburner. Then I ran the miner with GPU 2 (formerly known as GPU 1, the one with the issues) and the display driver still crashed and restarted. I'm not sure how to avoid running CCC; I just left it in the system tray.
hero member
Activity: 518
Merit: 500
January 17, 2012, 02:44:57 AM
#41
I'm gonna ask again: are you running both GPU-Z and Afterburner? If you are, that would explain the extreme temperatures and crashing. ATI cards have a bug: when two apps are polling for (VRM only?) temps, it can cause vcore to spike to over 1.6 V. I know HD5000 cards are affected by this (I killed one myself because of it); I haven't seen confirmation that the 6000 series is affected, but it wouldn't surprise me.

Why don't you try running none of these monitoring apps, not even CCC, and just use cgminer and see what happens? Or GUIMiner if you must.
sr. member
Activity: 1470
Merit: 428
January 16, 2012, 10:40:04 PM
#40
Ok, so I switched slots, so GPU 1 is now GPU 2 and vice versa. Now, instead of the computer locking up when GPU 2 is running at 99% load at stock clock speeds, the video driver crashes and I get a notification saying that my driver crashed and was restarted. At least it's a step away from my whole computer crashing; now it's just the driver. :/

Any ideas ladies and gentlemen?
sr. member
Activity: 1470
Merit: 428
January 16, 2012, 02:27:47 PM
#39
Alright, I just switched the cards and they both run at stock speeds without crashing. I have no idea anymore. Problem solved so far. I'll post if I have any unforeseen issues later.
donator
Activity: 1218
Merit: 1079
Gerald Davis
January 16, 2012, 01:59:52 PM
#38
Even though on paper your PSU is certainly powerful enough, it still sounds like a power delivery issue to me. Can you try another one?
No, I can't. :(

OP, read this article:
http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story&reid=246

The PSU reviewed there is built on the same platform as yours. They are both manufactured by Andyson to the same spec. The only difference is the stickers, just like with reference GPUs.

Although it might come as a shocker, contrary to what the label might imply, your PSU is NOT capable of delivering 1200W in continuous mode.

Moreover, your PSU has all its 12V power divided into four rails, each of them limited to 40A (480W).
You need to take very good care when loading up these rails.
If you connect your PSU in such a way that one rail becomes overloaded, then as the PSU heats up and loses some efficiency (so-called de-rating), it might shut off on you.

This is a half-decent PSU but the rail distribution makes it quite hard to connect it the right way.
You'll need to read the manual and figure out which connectors to use to distribute the load evenly across the rails.
I'm not familiar with PSUs at all. I have one video card plugged into a PCI-E connector on the PSU, and the other video card plugged into another PCI-E connector on the PSU. Wouldn't that distribute the load?

Any suggestions on how to test this would be appreciated.

It can be more complicated than that.

Each 6970 has two power connectors.

Your PSU has four 12V rails. Each rail is only powerful enough to support one 6970. If you have both 6970s on the same rail, and/or a 6970 and the motherboard on the same rail, it will overload that rail and make the system unstable.

So you want it hooked up like this:

Motherboard & other 12V loads - rail 1
First 6970 - rail 2
Second 6970 - rail 3

Figuring out which connector goes with which rail can be "tricky"; they are often poorly marked. Looking at the wattage sticker on the PSU, any documentation that came with it, and any specs provided online may help.
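As a rough sanity check of that rail layout, you can tally the load per rail. The wattage figures below are illustrative assumptions (a 6970 under mining load and a typical motherboard draw), not measurements from this thread:

```python
# Rough per-rail load check for a multi-rail PSU.
# All load wattages here are illustrative assumptions, not measurements.
RAIL_LIMIT_W = 40 * 12  # each rail: 40 A at 12 V = 480 W

# Hypothetical mapping of 12V loads (watts) to rails, following the
# rail assignment suggested above.
rails = {
    1: [150],  # motherboard + other 12V loads (assumed)
    2: [250],  # first 6970 under mining load (assumed)
    3: [250],  # second 6970 under mining load (assumed)
    4: [],     # unused
}

for rail, loads in rails.items():
    total = sum(loads)
    status = "OK" if total <= RAIL_LIMIT_W else "OVERLOADED"
    print(f"rail {rail}: {total} W / {RAIL_LIMIT_W} W -> {status}")
```

With these assumed numbers, each rail stays under its 480 W limit, but putting both cards on one rail (500 W) would not.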
sr. member
Activity: 1470
Merit: 428
January 16, 2012, 01:56:09 PM
#37
Even though on paper your PSU is certainly powerful enough, it still sounds like a power delivery issue to me. Can you try another one?
No, I can't. :(

OP, read this article:
http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story&reid=246

The PSU reviewed there is built on the same platform as yours. They are both manufactured by Andyson to the same spec. The only difference is the stickers, just like with reference GPUs.

Although it might come as a shocker, contrary to what the label might imply, your PSU is NOT capable of delivering 1200W in continuous mode.

Moreover, your PSU has all its 12V power divided into four rails, each of them limited to 40A (480W).
You need to take very good care when loading up these rails.
If you connect your PSU in such a way that one rail becomes overloaded, then as the PSU heats up and loses some efficiency (so-called de-rating), it might shut off on you.

This is a half-decent PSU but the rail distribution makes it quite hard to connect it the right way.
You'll need to read the manual and figure out which connectors to use to distribute the load evenly across the rails.
I'm not familiar with PSUs at all. I have one video card plugged into a PCI-E connector on the PSU, and the other video card plugged into another PCI-E connector on the PSU. Wouldn't that distribute the load?

Any suggestions on how to test this would be appreciated.
full member
Activity: 210
Merit: 100
January 16, 2012, 05:06:55 AM
#36
OP, read this article:
http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story&reid=246

The PSU reviewed there is built on the same platform as yours. They are both manufactured by Andyson to the same spec. The only difference is the stickers, just like with reference GPUs.

Although it might come as a shocker, contrary to what the label might imply, your PSU is NOT capable of delivering 1200W in continuous mode.

Moreover, your PSU has all its 12V power divided into four rails, each of them limited to 40A (480W).
You need to take very good care when loading up these rails.
If you connect your PSU in such a way that one rail becomes overloaded, then as the PSU heats up and loses some efficiency (so-called de-rating), it might shut off on you.

This is a half-decent PSU but the rail distribution makes it quite hard to connect it the right way.
You'll need to read the manual and figure out which connectors to use to distribute the load evenly across the rails.
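The per-rail figure quoted above falls straight out of the arithmetic. The four-rail nominal sum below is shown only to illustrate why the label's headline wattage can be misleading; check the PSU's own label for its real combined 12V limit:

```python
# Per-rail limit: rail current limit times the 12V rail voltage.
rail_amps = 40
rail_volts = 12
per_rail_w = rail_amps * rail_volts  # 40 A * 12 V = 480 W per rail

# The four rails nominally sum to more than the unit can actually
# deliver continuously (per the review linked above), which is why
# load distribution and de-rating matter.
nominal_sum_w = 4 * per_rail_w  # 1920 W on paper
print(per_rail_w, nominal_sum_w)
```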
hero member
Activity: 518
Merit: 500
January 16, 2012, 02:34:49 AM
#35
Even though on paper your PSU is certainly powerful enough, it still sounds like a power delivery issue to me. Can you try another one?
sr. member
Activity: 1470
Merit: 428
January 15, 2012, 11:24:22 PM
#34
I am also having problems with the 6970. I stopped running three of them and instead left one for mining. It runs way too hot no matter how you undervolt it. I think the 5xxx series is better for mining.
I think you misunderstand; my problem is not a temperature problem. My problem is that when I have two GPUs plugged in, one of them crashes at stock core clock speeds within seconds of usage going above 80%, well before temps get out of the 70s.
hero member
Activity: 756
Merit: 500
January 15, 2012, 11:15:26 PM
#33
I am also having problems with the 6970. I stopped running three of them and instead left one for mining. It runs way too hot no matter how you undervolt it. I think the 5xxx series is better for mining.
sr. member
Activity: 1470
Merit: 428
January 15, 2012, 11:07:17 PM
#32
81C is too high for 24/7 operation.  Are you downclocking the memory?  
Everything is downclocked: core voltage (without crashing), AUX voltage (without crashing), memory frequency (without crashing). I've already come to the conclusion that my case is at fault, causing the extreme GPU temps because of poor airflow. I'm buying a new case ASAP.

But temperatures aren't the issue for me right now, why won't my GPU 1 run at stock settings when GPU 2 is installed?

Installed, or mining?

Does GPU 1 crash when GPU 2 is idle, or under full load?
What is the VRM temp (not core temp) on GPU 1 when it crashes?

- Installed and/or mining. Even when they are CrossFired (I did this to test mining stability), GPU 1 still crashed.
- GPU 2 can be idle or under full load; it does not matter at all, just as long as it is plugged in the issues will occur. I have not tried switching the GPUs around (putting GPU 1 in GPU 2's slot and vice versa), but I have tested them one card at a time in each slot and each worked fine at stock settings.
- How can I determine the VRM temp of GPU 1? And what is a VRM?

I am also using MSI Afterburner to manage/monitor these GPUs. I believe someone asked that earlier and I failed to answer. Crashing occurs whether the cards have different or identical settings.
donator
Activity: 1218
Merit: 1079
Gerald Davis
January 15, 2012, 10:52:03 PM
#31
81C is too high for 24/7 operation.  Are you downclocking the memory? 
Everything is downclocked: core voltage (without crashing), AUX voltage (without crashing), memory frequency (without crashing). I've already come to the conclusion that my case is at fault, causing the extreme GPU temps because of poor airflow. I'm buying a new case ASAP.

But temperatures aren't the issue for me right now, why won't my GPU 1 run at stock settings when GPU 2 is installed?

Installed, or mining?

Does GPU 1 crash when GPU 2 is idle, or under full load?
What is the VRM temp (not core temp) on GPU 1 when it crashes?
sr. member
Activity: 1470
Merit: 428
January 15, 2012, 10:48:47 PM
#30
81C is too high for 24/7 operation.  Are you downclocking the memory?  
Everything is downclocked: core voltage (without crashing), AUX voltage (without crashing), memory frequency (without crashing). I've already come to the conclusion that my case is at fault, causing the extreme GPU temps because of poor airflow. I'm buying a new case ASAP.

But temperatures aren't the issue for me right now, why won't my GPU 1 run at stock settings when GPU 2 is installed?

My miner crashes GPU 1 when the clock is at stock speeds, and my games crash the GPU at stock core clock speeds when its usage goes above 80%, as seen in Skyrim. The GPU runs perfectly at stock when I don't have another GPU plugged in. So what is the issue?
donator
Activity: 1218
Merit: 1079
Gerald Davis
January 15, 2012, 09:54:36 PM
#29
81C is too high for 24/7 operation.  Are you downclocking the memory? 
sr. member
Activity: 1470
Merit: 428
January 15, 2012, 12:35:33 PM
#28
Mobo - http://www.newegg.com/Product/Product.aspx?Item=N82E16813131655
PSU - http://www.newegg.com/Product/Product.aspx?Item=N82E16817152045
OS - Windows 7 64bit
CCC Driver - 11.12

Crossfire Bridge is installed because I will eventually need to crossfire for my games, but crossfire is disabled via CCC.

My temps are GPU 1 at 81C and GPU 2 at 70C; this is with GPU 1's clock lowered from stock settings until it stops crashing.

My PSU has more power than is required; it shouldn't be a problem unless it is faulty. :O

Like I said earlier, both cards run at stock speeds perfectly on their own in either PCI-E slot with either pair of my 8-pin connectors.

I appreciate all of your responses. :) I hope we can figure this out.
full member
Activity: 210
Merit: 100
January 15, 2012, 05:04:11 AM
#27
What you say sounds about right, P4.
If the PSU isn't up to snuff, OP might be getting a lot of power fluctuations, resulting in crashes.

Please post your PSU specs, including the manufacturer and model info.
hero member
Activity: 518
Merit: 500
January 15, 2012, 04:03:01 AM
#26
Sounds like your power supply isn't up to the task.
member
Activity: 106
Merit: 10
January 15, 2012, 03:17:03 AM
#25
So I've come to the conclusion that I can't run both cards at stock speeds while both are plugged in. For some reason GPU 2 can run at stock speeds, but GPU 1 crashes at stock speeds. Each card runs perfectly at stock speeds when I only have one or the other plugged into the mobo. I've tested both PCI-E x16 slots individually and they both work, running each card at stock speeds without crashing, but when both cards are plugged in, GPU 1 CANNOT run at stock speeds for some reason.

Does anyone else think this is a driver issue? Because it doesn't seem to be a hardware issue.

Need more info...
 
Mobo? Have you updated your BIOS or checked for issues with your particular model?

What OS / Driver version are you using?

Crossfire bridge installed? Are you managing both cards with Afterburner and using identical settings?

What are your temps like now (since you had the issue before)?
sr. member
Activity: 1470
Merit: 428
January 15, 2012, 12:16:26 AM
#24
So I've come to the conclusion that I can't run both cards at stock speeds while both are plugged in. For some reason GPU 2 can run at stock speeds, but GPU 1 crashes at stock speeds. Each card runs perfectly at stock speeds when I only have one or the other plugged into the mobo. I've tested both PCI-E x16 slots individually and they both work, running each card at stock speeds without crashing, but when both cards are plugged in, GPU 1 CANNOT run at stock speeds for some reason.

Does anyone else think this is a driver issue? Because it doesn't seem to be a hardware issue.
sr. member
Activity: 1470
Merit: 428
January 02, 2012, 03:29:34 PM
#23
Ok, so I'm tweaking the settings on one 6970. When I run it, after about an hour or so I get a long-poll IO error reported by GUIMiner. Then when I wake up to check on the mining, it turns out it stopped mining several hours earlier.

Any ideas? Normally with a crash, it doesn't show up as a long-poll IO error, right?