
Topic: 4x Radeon HD 6990

full member
Activity: 148
Merit: 102
May 23, 2011, 12:37:00 PM
#30
BTW, EVGA makes an accessory called Power Boost. It just takes a 4-pin Molex and feeds it into an empty PCIe slot, taking some of the load off the 24-pin ATX connector.



http://www.evga.com/products/moreInfo.asp?pn=100-MB-PB01-BR&family=Accessories%20-%20Hardware&sw=4
hero member
Activity: 518
Merit: 500
May 23, 2011, 12:31:38 PM
#29
Has anyone actually done this, and can you confirm that it's feasible?
member
Activity: 112
Merit: 100
"I'm not psychic; I'm just damn good"
May 22, 2011, 09:15:02 PM
#28
Hi, thread starter. I hope you don't mind me linking my thread to yours. I'm consolidating build discussions :)
legendary
Activity: 1260
Merit: 1000
May 20, 2011, 01:22:40 AM
#27
The problem I'm trying to work through now is whether or not a common ground is needed, and if so, the best way to handle grounding two PSUs to a common ground if you want to use two cheaper PSUs to power a mining rig.

http://www.frozencpu.com/products/11742/cpa-167a/ModRight_CableRight_Dual_Power_Supply_Adapter_Cable.html?tl=g2c413s1220&id=qNzMZhNn

This doesn't address the issue we are talking about here. That cable is used to power peripherals, not something directly attached to the motherboard, or at least I don't think it should be. Using it to power an HD array or other power-hungry devices that are NOT connected to the motherboard's ground is perfectly fine... but when you try to connect two power supplies with two different ground potentials through the same card, you can have some serious consequences, up to and including a burnt-out card.

See the other thread where we've been discussing this problem (The PCIe extender sale thread) for more information.
newbie
Activity: 42
Merit: 0
May 20, 2011, 12:41:50 AM
#26
A PCIe slot provides up to 75W and each 8-pin connector up to 150W, for a total of 375W per card. This is one of the reasons the 6990's GPUs have to be underclocked (830MHz, instead of the 880MHz of the 6970 and 6950) for stable performance.
Newer ATX 2.3.1 PSUs have quite marginal 3.3V capacity (usually only a peak rating is listed, and an optimistic one at that), and a 5970/6990 needs about 5.2A per card(?).
And yes, on a cheap motherboard you can blow the circuitry, but not because the GPU over-consumes; it's the inadequate/weak power circuitry on the motherboard.
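
Rough budget math, for anyone who wants to check it (the watt figures are the PCIe spec ceilings; the 5.2A on 3.3V is my estimate, not a measurement):
Code:
# Back-of-the-envelope PCIe power budget for a dual-GPU card like the 6990.
SLOT_W = 75          # max the PCIe x16 slot may supply
EIGHT_PIN_W = 150    # max per 8-pin PCIe power connector
connectors = 2       # the 6990 carries two 8-pin plugs

print(f"Board power budget: {SLOT_W + connectors * EIGHT_PIN_W} W")  # 375 W

# The 3.3V question: at ~5.2A per card (estimate), multiple cards
# quickly exceed what a cheap ATX 2.3.1 unit optimistically rates.
amps_3v3 = 5.2
for cards in (1, 2, 4):
    print(f"{cards} card(s): ~{cards * amps_3v3:.1f} A on the 3.3V rail")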
member
Activity: 112
Merit: 100
"I'm not psychic; I'm just damn good"
May 20, 2011, 12:38:51 AM
#25
Hi, very insightful stuff written here. I'm building my first rig, and this is what I have. I don't have prior experience building computers, so could you guys give me some advice?

GPU: 2x Sapphire ATI HD 5970 2 GB
MOBO: ASRock M3A770DE ($60)
CPU: Sempron 140 ($39)
PSU: http://www.newegg.com/Product/Product.aspx?Item=N82E16817182072 ($100)
RAM: Kingston 1 GB DDR3 1066
HDD: None/USB ($0)
CASE: None/Cheapassjunk (~$20)
OS: Linux Ubuntu ($0)

$982 for ~1.3 Ghash/s [1.324 Mhash/$]
Not sure about the power consumption, but electricity here costs $0.20/kWh.

Will I run into the problem you described? Should I get an 80+ Gold PSU and/or a better MOBO? Any recommendations on how I can drive the cost down further, or anything I'm missing (compatibility or whatever)?
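
Here's how I got my numbers, if anyone wants to check or plug in their own (the wall draw below is a pure guess on my part, not measured):
Code:
# Cost sanity check for the build above.
build_cost_usd = 982
hashrate_mhs = 1300                 # ~1.3 GH/s from two 5970s
print(f"{hashrate_mhs / build_cost_usd:.3f} Mhash/s per $")  # ~1.324

watts_at_wall = 1100                # GUESS for the whole rig, not measured
price_per_kwh = 0.20
daily = watts_at_wall / 1000 * 24 * price_per_kwh
print(f"~${daily:.2f}/day in electricity")  # ~$5.28/day at my rates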
newbie
Activity: 13
Merit: 0
May 20, 2011, 12:37:35 AM
#24
The problem I'm trying to work through now is whether or not a common ground is needed, and if so, the best way to handle grounding two PSUs to a common ground if you want to use two cheaper PSUs to power a mining rig.

http://www.frozencpu.com/products/11742/cpa-167a/ModRight_CableRight_Dual_Power_Supply_Adapter_Cable.html?tl=g2c413s1220&id=qNzMZhNn
legendary
Activity: 1260
Merit: 1000
May 20, 2011, 12:11:12 AM
#23
Ok, there's some misinformation in this thread :)

First off, HDs only draw between 4 and 5 watts. Raptors and SCSI drives might draw a bit more, but it's not even 10W. Moving to an SSD will save only a few watts and is completely ridiculous in this application. If you're going to do that, just make a bootable USB key and boot into Linux that way.

Second, people aren't seeing the issue because most boards designed for 2x, 3x, or 4x SLI or Cross/tri/quadfire are designed with the idea in mind that the current draw on the board is going to be substantial, so they are beefed up. Usually (but not always), anyone running such a beefy system will also have a beefy power supply with large-gauge wires to handle the current draw. In the cases we are seeing here, people are using commodity boards and cheap PSUs, which are explicitly NOT designed to handle these kinds of extreme loads... but since this is a fringe area of computing, the problem doesn't crop up that often.

Most low- and mid-level boards are going to sag power-wise with even 3 cards in the slots. You can probably get away with 5870s, but if you add a 5970 or two to the board and expect it to run, you're going to get surprises or flaky operation. I know this from experience. Using a high-quality gamer board will alleviate this, but at a cost. To solve this, splicing off and using Molex reduces or eliminates the draw on the board. One of the problems I have seen, or at least this is my speculation, is that even if the board works for a while with 3 5970s drawing off the MB, the VRMs will start to heat up and eventually fail if that much current is drawn continuously. Again, taking that power draw off the MB and putting it on the PSU, which is designed to handle far more load than the slot's 75W at 4A+, is going to eliminate those problems, extend the life of your board, and be far more stable.

The problem I'm trying to work through now is whether or not a common ground is needed, and if so, the best way to handle grounding two PSUs to a common ground if you want to use two cheaper PSUs to power a mining rig.
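
To put rough numbers on why the splice helps (the 6A pin rating comes from the ATX spec, as discussed below in this thread; the 4.2A slot draw is an illustrative figure for a 5970-class card, not a measurement of mine):
Code:
# Per-pin load on the 24-pin ATX plug's two 12V pins, with the
# slots fed from the motherboard vs. spliced to the PSU directly.
PIN_RATING_A = 6.0          # rating of each ATX 12V pin
ATX_12V_PINS = 2            # pins feeding PCIe slot power
slot_amps = 4.2             # illustrative 12V slot draw per 5970-class card

for cards in (1, 2, 3, 4):
    per_pin = cards * slot_amps / ATX_12V_PINS
    verdict = "OK" if per_pin <= PIN_RATING_A else "OVER SPEC"
    print(f"{cards} card(s): {per_pin:.1f} A per pin -> {verdict}")

# Spliced Molex/PCIe extenders move that current onto dedicated PSU
# leads, so the 24-pin plug and the board see almost none of it.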
newbie
Activity: 56
Merit: 0
May 19, 2011, 03:44:32 PM
#22
It will work only on Linux, as Windows has a 4-GPU limit. Basically no different from http://blog.zorinaq.com/?e=42

This is not true. There is a 4-GPU limit for Crossfire/SLI only. I was running 5 GPUs last week. You are essentially limited by your motherboard's slots, plus power, room, heat, etc.
mrb
legendary
Activity: 1512
Merit: 1028
May 19, 2011, 11:22:26 AM
#21
Yes, they do indeed draw power from the PCIe connector, even though they have additional power connectors.

He is referring to the ATX power spec. 12V power for the PCIe slots is provided by only two pins in the ATX power connector, rated for 6 amps each. He was pulling 7.4 amps through each pin, which was melting the shroud and likely caused permanent damage to his motherboard. It probably would have started a fire eventually if he hadn't been actively watching it.

His solution was to bypass the ATX plug (and motherboard) entirely by splicing his PCIe extenders to accept power directly from the power supply.

I understand what he did.  What I don't understand is why most people are NOT seeing such issues.

Because few people are running 4x 5970 or 4x 6990 with the PCIe slots' power coming entirely from the 24-pin ATX connector. I spliced my PCIe extenders. ArtForz is doing something similar. foxmulder has extra power connectors on his SR-2. Etc.

You have to understand that the 5970 and 6990 specifically are the cards with the highest current consumption on the slot (I measured 4.1-4.3A at 12V). Most other cards consume two-thirds of that or less (e.g. the 5870 draws only 3.2A at 12V from the slot).
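
In watts, for comparison (the 5.5A / 66W figure is the PCIe spec's 12V share of the 75W slot limit):
Code:
# My measured slot currents, converted to watts.
SLOT_12V_LIMIT_A = 5.5    # PCIe spec: ~5.5A on the slot's 12V rail (66W)

measured = {"5970/6990": 4.3, "5870": 3.2}  # amps at 12V, upper end of my readings
for card, amps in measured.items():
    print(f"{card}: {amps} A = {amps * 12:.0f} W "
          f"({amps / SLOT_12V_LIMIT_A:.0%} of the slot's 12V allowance)")

# Four 5970/6990-class cards through the ATX plug's two 6A 12V pins:
print(f"4 cards: {4 * 4.3 / 2:.1f} A per pin (rated 6 A)")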

member
Activity: 62
Merit: 10
May 19, 2011, 08:09:08 AM
#20
What do you guys think about this kind of board for increasing the number of graphics cards in one system?
newbie
Activity: 11
Merit: 0
May 19, 2011, 06:57:13 AM
#19
A PCIe slot provides up to 75W and each 8-pin connector up to 150W, for a total of 375W per card. This is one of the reasons the 6990's GPUs have to be underclocked (830MHz, instead of the 880MHz of the 6970 and 6950) for stable performance.
hero member
Activity: 602
Merit: 500
May 19, 2011, 06:46:36 AM
#18
I would use a small and cheap SSD. Saves lots of power and it's very fast. Most of it will run in memory anyway. Static files and a small swap (with Linux, perhaps no swap).

Just a side note: you save something like 10 watts going from an HDD to an SSD; it's not some super huge mega power-saving device. The two main power draws in a system are the GPU and the CPU; fans/hard drives/whatever are minor additions.

GPUs definitely do draw power from the PCIe bus, don't kid yourself there. Many newer boards, however, have power options designed to draw additional power from the PSU to the board to offset the increased load. It's still pretty harsh on a system to run that much through it.
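
That ~10W delta, annualized (assuming the $0.20/kWh rate quoted elsewhere in the thread):
Code:
# Yearly savings from swapping an HDD for an SSD.
watts_saved = 10              # rough HDD-vs-SSD delta from above
price_per_kwh = 0.20          # rate another poster quoted
kwh_per_year = watts_saved / 1000 * 24 * 365
print(f"~{kwh_per_year:.0f} kWh/yr, ~${kwh_per_year * price_per_kwh:.0f}/yr")
# ~88 kWh/yr, ~$18/yr -- noise next to a multi-hundred-watt GPU.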
newbie
Activity: 11
Merit: 0
May 19, 2011, 06:32:35 AM
#17
Performance decreases only slightly in standard tests (~10%), and mining needs even less communication with the card. Consider this video:
http://www.linustechtips.com/ltt-videos/radeon-hd-6990-bandwidth-comparison-test-16x-vs-8x-vs-4x-3dmark-11-linus-tech-tips
newbie
Activity: 11
Merit: 0
May 19, 2011, 06:27:27 AM
#16
And please post the software you are using.
member
Activity: 62
Merit: 10
May 19, 2011, 02:03:11 AM
#15
WOWOWOWOW!!!
Please post your pics. It would also be very interesting to know whether PCIe 8x or 4x becomes a bottleneck for Mhash/s.
Please describe your setup in as much detail as you can. Much appreciated.
newbie
Activity: 37
Merit: 0
May 19, 2011, 12:53:38 AM
#14
I'm currently running quad 6990s on an EVGA SR-2. I know it's overkill, lol (my first mining rig mistakes). Two PSUs are used: a Corsair AX1200 for the barebones system plus two of the 6990s, and a Silverstone Strider Gold 1000 for the other two. For the hexa (quad 6990) GPU setup on the SR-2 you need an additional 6-pin connector plugged into the board to power up those PCIe x16 lanes. Believe me, it's not nice to run this monster in a hot tropical place without additional cooling in your room; it puts out around ~7300 BTU of heat.

The setup I use is an open case with my own custom ghetto design ;D: two decks of rack, two PCIe extenders for the dual cards on top. I'll post pics when I have the time.
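
The BTU number is just watts times 3.412 (the wall draw below is my rough estimate for the whole rig, not a meter reading):
Code:
# Heat output sanity check: 1 W dissipated continuously = ~3.412 BTU/h.
watts = 2140                    # rough estimate of total wall draw
print(f"~{watts * 3.412:.0f} BTU/h")   # ~7302 BTU/h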
member
Activity: 98
Merit: 10
May 18, 2011, 08:54:35 PM
#13
I would use a small and cheap SSD. Saves lots of power and it's very fast. Most of it will run in memory anyway. Static files and a small swap (with Linux, perhaps no swap).
sr. member
Activity: 392
Merit: 250
May 18, 2011, 08:32:05 PM
#12
Also they will draw more power than specified via the PCIe slots, so you might kill your hardware this way.

Haha... no. The PCIe bus power limit is about 75W. These have dedicated 8-pin PCIe connectors.

--------------

The problem I see with 4 of these in a system is heat, and the fact that 2000W is more than a 15A breaker will handle. I'd say your best bet would be 2 rigs.

Plus if one goes down, everything isn't down.  Much better for reliability.

Shouldn't four run somewhere around 1600W? Still, you make a valid point. It's like running a hair dryer 24/7 :(

Yes, 4 cards would run at about 1600W, but I would spec another 300W to run the rest of the system, arriving at about 2000 watts. Since the largest power supply I've seen is 1500 watts, it would have to be split between 2 power supplies anyway: 2 x 1000-watt supplies, each running 2 cards; one powers the motherboard and the other powers any other accessories... hard drive, fans, etc.
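
For anyone checking the breaker math (assuming a North American 120V circuit and the usual 80% rule of thumb for continuous loads):
Code:
# What a single 15A household circuit can actually deliver.
VOLTS = 120
BREAKER_A = 15
circuit_w = VOLTS * BREAKER_A
print(f"15A breaker: {circuit_w} W peak, ~{circuit_w * 0.8:.0f} W continuous")

rig_w = 4 * 400 + 300           # four ~400W cards plus ~300W for the rest
print(f"Estimated rig draw: {rig_w} W -> split it across two circuits")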
member
Activity: 98
Merit: 10
May 18, 2011, 08:02:15 PM
#11
Also they will draw more power than specified via the PCIe slots, so you might kill your hardware this way.

Haha... no. The PCIe bus power limit is about 75W. These have dedicated 8-pin PCIe connectors.

--------------

The problem I see with 4 of these in a system is heat, and the fact that 2000W is more than a 15A breaker will handle. I'd say your best bet would be 2 rigs.

Plus if one goes down, everything isn't down.  Much better for reliability.

Shouldn't four run somewhere around 1600W? Still, you make a valid point. It's like running a hair dryer 24/7 :(