
Topic: 4X 6990 on MSI 970A-G45 (Read 2569 times)

newbie
Activity: 42
Merit: 0
September 15, 2011, 01:11:42 PM
#16
Hi,


Quote
Unlikely you need that much power.  375W is the TDP, i.e. the maximum power the card is designed for.  You won't hit 375W in real life even at 100% GPU load 24/7.  On the 5970s I pull about 260W from the wall per card; 3 cards run off a 1250W PSU, and the total draw at the wall is normally around 930W.

Take a look at 6990 benchmarks from various sites to get an idea of real-world power draw:
http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/41404-amd-radeon-hd-6990-4gb-review-21.html

If you want to go with dual power supplies, I would go with 700W - 800W units (1400 - 1600W combined should be more than enough).  Of course, 2kW of power isn't going to hurt anything; it just means you may be overspending a bit.

Thanks for your comment, but I know what TDP is.

As you may have noticed, I used the TDP value to size the PSUs, not as a precise draw figure.
The motherboard, CPU, etc. will also not draw at their maximum possible levels.

Also, almost all common commercial PSUs are at the top of their efficiency curve at around 50% load ... hence the dual 1000W units ... I know, I know ... it is at most a 2% difference on the efficiency curve ... but it matters to me. Over a 5-year period it likely pays back the price difference over a non-optimal configuration ... and after those 5 years I can be sure the PSUs were never pushed to their limits, so longevity is also a factor.
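Just as a rough back-of-envelope (a sketch only; the load, the efficiencies and the electricity price are all assumptions, not measurements):

Code:
# rough sketch only; load, efficiencies and electricity price are assumptions
load_w        = 1650     # assumed DC load of the rig (my TDP-based estimate further down the thread)
eff_at_50pct  = 0.88     # assumed efficiency with the load near 50% of the PSU rating
eff_near_full = 0.86     # assumed efficiency when the PSUs are pushed much harder (~2% worse)
price_kwh     = 0.10     # assumed electricity price, USD per kWh

extra_w = load_w / eff_near_full - load_w / eff_at_50pct   # extra draw at the wall, ~44 W
kwh_5y  = extra_w / 1000.0 * 5 * 365 * 24                  # extra energy over 5 years
print(round(extra_w), "W extra,", round(kwh_5y * price_kwh), "USD over 5 years")   # ~190 USD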

Besides the PSUs being at their peak efficiency at 50% load, there is also the issue of voltage spikes on the electrical installation your equipment is running on.

My gear will not be in my home. It will be running in an industrial-type facility where electricity is much cheaper, there is 24/7 security, no noise for the neighbours, and so on ...

So, apart from having surge-protection gear on my electrical circuit, I must make sure my PSUs are not on the very brink of taking US$3500 of hardware each with them when a possible and uncontrollable spike comes along.
The facility in question does have a lot of heavy machinery ... so every precaution matters.

If anything, I really do not mind paying a bit more for the PSUs. They are one of the components I like to invest in, in terms of proper sizing and quality.

Regards.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
September 14, 2011, 03:08:36 PM
#15
Quote
Your 100W out of 700W supply is completely wrong though.  If the draw at the wall is less than ~50% (depends on model) then you are actually making the power supply LESS efficient and creating more wear, heat, and unnecessary power consumption.  Under-loading a power supply that much is horribly inefficient, and the excess heat is detrimental to the long-term health of a power supply.  A smaller power supply would run cooler, safer, and more efficiently.

So what you're saying is that by letting my home computer run at idle (50W) with my 750W PSU, I'm actually doing more harm than loading it at 80-100%?  I don't buy that.  I agree the efficiency might be lower at low loads, but the longevity of the device isn't going to be negatively affected.

Regardless, I think we're in agreement about high loads--you want to keep the actual power consumption to about 50-70% of the PSU rating, unless you don't mind changing the PSUs every couple of months.  They're not guaranteed to fail, but PSU problems tend to show up as problems with other components, and thus are a complete pain to diagnose.  I prefer to pay 10-20% extra to avoid that entirely.

My strategy has been not to base anything on manufacturer power consumption numbers.  Look at something like the mining hardware comparison, which shows typical ranges based on users measuring actual, from-the-wall power consumption.  I have a bunch of 5850s, 5870s, and 6950s, I've measured all of them myself with a Kill-a-Watt meter, and my numbers have always matched what's on the page.

On that note, that page shows a non-overclocked 6990 using about 350W at full load, and higher once you start cranking up the clock rates.  Unfortunately, I don't actually have any of these GPUs, so I can't measure it myself.  I'm just saying that, in the past, I have found the power consumption numbers there to be reliable.

legendary
Activity: 1428
Merit: 1093
Core Armory Developer
September 14, 2011, 01:11:34 PM
#14
However, I believe it is a poor decision to design to 100% of your max power consumption unless you buy top-of-the-line power supplies.  If you anticipate 1200W of consumption, don't get a 1200W PSU.  It can work for a few weeks, maybe even months, at 100% load, but it's going to wear out eventually.  On the other hand, if you only pull 100W from a 700W PSU, it will probably run forever, no matter how crappy that PSU is.

I try to keep the max power consumption around 60-70% of the max rating of the combined power supplies, and really prefer 50% unless it's a high-end PSU like a Corsair HX.  Perhaps it's paranoia, but a failing power supply can take a lot of other hardware with it, so I usually err on the side of too much power.  It's worth it to me to spend 10% more and sleep well knowing that my hardware won't be on fire when I wake up.
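To put rough numbers on that rule of thumb (a sketch only; the 1650W figure is the OP's TDP-based estimate from further down the thread, not a measured draw):

Code:
# sketch: combined PSU rating needed so the expected draw sits at a target load fraction
def required_rating(expected_draw_w, target_fraction=0.65):
    return expected_draw_w / target_fraction

print(required_rating(1650))        # ~2538 W combined at ~65% load
print(required_rating(1650, 0.50))  # 3300 W combined at 50% load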
newbie
Activity: 42
Merit: 0
September 14, 2011, 10:47:33 AM
#13
Hi,



Quote
Again: if you want to hook many GPUs into a single mobo,
the most I've seen working is 8 GPU cores per mobo, and
bandwidth is absolutely not what you should be worrying
about.

Power distribution to the GPU cores on the other hand is
not always an easy nut to crack and this is where your
focus should be.

Cooling (and therefore spatial placement of the H/W) is
the other one.



Yeah, I know about that.
I do not intend to overclock the GPUs. Most likely I will only try to reduce the memory clock (core clocks will stay stock) and take great care with the air flow inside the custom-made enclosure.
I am planning to use two 1000W PSUs per motherboard, either from OCZ or from Corsair.
I have a preference for Thermaltake or Enermax, but they are both very expensive and I think the OCZ/Corsair units will do just fine.

The idea is to have a very stable system.
I actually think that overclocking the 6990 is not a good idea ... what one gains in performance is lost in power consumption and in thermal and electrical risk. Overclocking could be an option for, say, a water-cooled single-6990 system, but for quad 6990s it is really not a safe bet.

Regards.
newbie
Activity: 42
Merit: 0
September 14, 2011, 10:39:03 AM
#12
Hi,


Quote
One could send both GPUs 500 MB/s / 3 MB ≈ 166.66 "mega-batches" per second.


You're assuming "hashes" are somehow generated on the CPU
and then "uploaded" to the GPU to somehow get "processed".

That's not the way it works.

The CPU simply uploads a tiny bit of data to the GPU, on the
order of 1K (that's K for kilobyte, i.e. 1024 bytes).

The GPU then sets out to chew through a bazillion prefixes to that
tiny piece of data until it either finds a hash(prefix+data) that
fits the bill or decides to give up and check back with the CPU
to see if new work has appeared.

The large computation described above happens without any
communication between the CPU and the GPU and thus requires
no bandwidth.


Precisely.
Exactly as I suspected: because everyone told me bandwidth was not the problem, I figured the GPU calculation time must be way greater than the time spent transferring data from the CPU to the GPU ...
My numbers only showed proof that constant 1KB transfers were obviously not possible ...



Regards.
full member
Activity: 196
Merit: 100
September 13, 2011, 07:39:47 PM
#11
To give you an idea, the first FPGA miners were running on serial data connections, probably 115 kbaud with room to spare ;)
newbie
Activity: 42
Merit: 0
September 13, 2011, 06:52:46 PM
#10
Hi,

Quote
It is my understanding that the GPU takes in the block header that needs to be hashed, and splits up the array of potential nonces between the threads/cores.  Each thread can, itself, determine whether its own result meets the difficulty threshold, and throws it away if it doesn't.  The only communication that needs to happen is for the GPU to send back the 1/2^32 results that meet the difficulty threshold (i.e. when it finishes computing a share), and the CPU only needs to send new work after a share is complete.  We're probably looking at a few kB/sec... I think a 1x slot can handle that.

There might be a little bit more going on here, but not a lot.  As another poster said: the bandwidth req't is nil

There you go! That settles the issue.

Well, a single 500 MB/s x1 PCI-e slot would then be enough to support all 4 boards ... and then some :) :)
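A rough back-of-envelope of the actual traffic (just a sketch; the per-GPU hash rate and the bytes exchanged per share are assumptions):

Code:
# sketch: rough upper bound on PCIe traffic for 8 mining GPUs (all figures assumed)
gpus            = 8        # 4 x 6990, two GPU cores each
hashes_per_sec  = 400e6    # assumed ~400 MH/s per GPU core
bytes_per_share = 4096     # generous assumption: one work unit sent + one share returned

shares_per_sec = gpus * hashes_per_sec / 2**32      # a share takes ~2^32 hashes on average
print(round(shares_per_sec * bytes_per_share), "bytes/s")   # ~3 KB/s total, vs 500 MB/s available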

Regards.

legendary
Activity: 1428
Merit: 1093
Core Armory Developer
September 13, 2011, 06:37:48 PM
#9
It is my understanding that the GPU takes in the block header that needs to be hashed, and splits up the array of potential nonces between the threads/cores.  Each thread can, itself, determine whether its own result meets the difficulty threshold, and throws it away if it doesn't.  The only communication that needs to happen is for the GPU to send back the 1/2^32 results that meet the difficulty threshold (i.e. when it finishes computing a share), and the CPU only needs to send new work after a share is complete.  We're probably looking at a few kB/sec... I think a 1x slot can handle that.

There might be a little bit more going on here, but not a lot.  As another poster said: the bandwidth req't is nil
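Roughly, the per-nonce check each thread performs looks like this (a slow Python sketch of the idea only, not the actual OpenCL kernel code):

Code:
# Python sketch of the per-nonce check (real miners do this in OpenCL kernels on the GPU)
import hashlib

def meets_target(header80, nonce, target):
    """Double SHA-256 of the 80-byte header with the nonce in its last 4 bytes."""
    data = header80[:76] + nonce.to_bytes(4, "little")
    h = hashlib.sha256(hashlib.sha256(data).digest()).digest()
    return int.from_bytes(h, "little") <= target   # ~1 in 2^32 nonces pass a difficulty-1 share target

def scan_range(header80, target, start, count):
    """Each GPU thread scans its own slice of the 2^32 nonce space like this."""
    return [n for n in range(start, start + count) if meets_target(header80, n, target)]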
newbie
Activity: 42
Merit: 0
September 13, 2011, 06:29:09 PM
#8
Hi,

About the issues raised:

I am using Linux; I cannot imagine anyone using an OS other than Linux for this or any other serious work ...  Add to that, my rigs will be in a remote location, and I have to work during the day.
I was obviously planning to use PCI-e extenders with the direct molex connection to the PSU.

About the bandwidth:
PCI-e X1  2.0 = 500 MB/s (5 GT/s)

I already asked around in the newbie section, and this was indeed my first concern, because I have no mining experience whatsoever, although I do have extensive experience with PC assembly, overclocking, and many years of troubleshooting and maintenance.
For a dual-GPU card sharing 500 MB/s, that means 250 MB/s per GPU ...

My question was exactly that ... how can a, say, 750 MH/s card operate on a 500 MB/s bus?
If the hashes calculated on the GPUs were all sent back to the CPU, a 500 MB/s bus certainly could not deliver enough bandwidth to sustain the maximum hashing rate ...
Many people told me it was not a problem ... so maybe the PCI-e bus, meaning the CPU-to-GPU I/O bus, is not used like that after all ...
Maybe someone could explain that to me in more detail ...

I really do have concerns about how such speeds could affect CPU-GPU transfers ... I imagine (I could be wrong) that the most intensive and time-consuming part of the algorithm is the calculation done on the GPUs ... even so, data must be transferred to the CPU eventually ...

Some very simplified and admittedly inaccurate math: say 1KB is exchanged for each stream processor:

=> 3072 x 1KB = roughly 3MB ... per "batch".

One could send both GPUs 500 MB/s / 3 MB ≈ 166.66 "mega-batches" per second.

In the model above that would be 166.66 MH/s per board ... of course this could be totally wrong ... it all depends on exactly how much data is sent, how it is sent, and so on ...

I think the GPUs take longer to do the calculations than the bus would need to transfer the data, and I may also not be right about how the process works in terms of GPU-CPU I/O ...

If someone could explain this, it would be helpful ...

Regards.



legendary
Activity: 812
Merit: 1002
September 13, 2011, 05:36:22 PM
#7
it should be theoretically possible, although you'll most likely run into problems with the mobo/windows/drivers accepting all 8 GPUs. and this is with the assumption you're going to use pci-e extenders with molex power connectors. you're not going to be able to pull this much power (stably) through the pci-e slots of the motherboard. i can run 6 GPUs fine, but i start having problems with the 7th physical card. i think this is a limitation of my motherboard, not windows or drivers.
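for context, here's the rough per-card power budget that makes the powered extenders matter (standard PCIe/PEG spec figures; not something i've measured myself):

Code:
# rough per-card power budget (PCIe/PEG spec limits)
slot_limit_w = 75     # spec limit for power through a x16 slot (a bare x1 slot is rated even lower)
eight_pin_w  = 150    # spec limit per 8-pin PEG connector; the 6990 has two of them
tdp_w        = slot_limit_w + 2 * eight_pin_w   # = 375 W, which is exactly the card's TDP

# the card may legitimately lean on the slot for up to ~75 W; several cards doing that
# through thin x1 extender ribbons is what stresses the board, hence the molex-fed extenders
print("per-card spec budget:", tdp_w, "W, of which up to", slot_limit_w, "W may come through the slot")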

the only places that sell these extenders with molex power are cablesaurus and me, i believe:

https://bitcointalksearch.org/topic/pcie-extenders-gpu-dummy-plugs-psu-cables-case-fans-jumper-panel-extenders-43614
https://bitcointalksearch.org/topic/fs-custom-length-pci-e-extension-cables-with-or-without-molex-power-38725
vip
Activity: 1358
Merit: 1000
AKA: gigavps
September 13, 2011, 10:38:21 AM
#6

Quote
This is not a problem.

Bandwidth needs for mining are almost nil.

See zorinaq's blog; he has 4 x 5970s hooked into a single mobo via
x1 to x16 extenders.

The real concern when hooking lots of dual-core boards into one mobo
is that you're going to suck way too much current through the PCI-e bus
(beyond specs) and risk blowing/burning something up. Hence the
PCI-extenders with molex trick.



The extenders with the molex are absolutely necessary. You will also not be able to get *huge* overclocks with multiple cards. Plan for middle of the road.
full member
Activity: 196
Merit: 100
September 12, 2011, 01:33:22 PM
#5
My experience with dual-core cards is nonexistent, but I do know they divide the available bandwidth between the two GPUs... even though mining is bandwidth-minimal, I'd be concerned that a 6990 wouldn't be too happy dividing up a 1x slot.

Take it with a grain of salt, though; again, I know little of these things.  If someone else has run a 5970/6990 or other dual-GPU cards on 1x PCIe, please chime in ;)
newbie
Activity: 42
Merit: 0
September 12, 2011, 01:18:43 PM
#4
Hi,



Quote
How do you plan to power the whole thing?



Of course, with 375W x 4 = 1500W, plus mobo + memory + CPU + fans etc., I would say 1650W is the expected top power.

I will use two OCZ 1000W PSUs. It could also be two Corsair 1000W units; it depends on availability.


Regards.
newbie
Activity: 42
Merit: 0
September 06, 2011, 04:22:43 PM
#3
Hi,


I will be using Linux, not windoze ... :)
I would like to know if the mobo really supports the quad setup and the 8 GPUs under Linux, not Windows.

Regards.
donator
Activity: 2352
Merit: 1060
between a rock and a block!
September 05, 2011, 05:29:02 PM
#2
Quote
Hi,


Did anyone try to put 4 x 6990s on the MSI 970A-G45 or any other 970-chipset motherboard?

According to the specs, they have 2 PCI-e x16 and 2 PCI-e x1 slots, all 2.0.

But in any case, has anyone tried to use all the PCI-e slots on something like that?


Regards.


putting 3 dual-core cards on this mobo under windows doesn't work for me.  nothing but problems.  best I can get under 64-bit Win7 is 2 dual-core cards + something like a 5850.
newbie
Activity: 42
Merit: 0
September 05, 2011, 09:37:52 AM
#1
Hi,


Did anyone try to put 4 x 6990s on the MSI 970A-G45 or any other 970-chipset motherboard?

According to the specs, they have 2 PCI-e x16 and 2 PCI-e x1 slots, all 2.0.

But in any case, has anyone tried to use all the PCI-e slots on something like that?


Regards.