
Topic: Mining rig extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] - page 6

newbie
Activity: 20
Merit: 0
If I watercool, then I need the depth for manifolds, hose, and other apparatus, so the PSUs would still have to stack, but all of it could probably be crammed into 6 or 7 total U. As much as I would love to have a packaged system, I doubt I could fit an appropriate radiator and pump into that space, so it would probably have to be external, like DeathAndTaxes' setup. BTW, has anyone actually done any long-term tests with more than 2 watercooled cards directly adjacent to each other, without an empty slot in between? The more I consider it, the more I am sure it would be prone to failure unless they weren't linked directly to each other. If that were the case, I would have to use right-angle connectors on each and every card, and that would get tedious real fast.
You can have GPUs watercooled in adjacent slots; you would need these:
http://www.frozencpu.com/products/10744/ex-tub-669/Bitspower_SLI_Crossfire_Crystal_Link_Tube_Set_-_2_Slot_Spacing_BP-CLTAC-S2.html?tl=g30c101s873#blank
http://www.frozencpu.com/images/products/main/ex-tub-669.jpg
http://www.frozencpu.com/images/products/main/ex-tub-669_2.jpg
Plus the fittings. All of these can be found here:
http://www.frozencpu.com/cat/l3/g30/c101/s873/list/p1/Liquid_Cooling-Fittings-Accessories-SLI_Connectors-Page1.html

I know it says "SLI" connectors, but in reality it's just a piece of tubing with the appropriate adapter.

If you want a self-contained water-cooling pump, reservoir, and radiator that can be rack mounted, take a look at:
http://koolance.com/index.php?route=product/product&path=0_28_42&product_id=1173
http://koolance.com/image/cache/data/products/erm-3k3ua_p2-700x700.jpg
member
Activity: 85
Merit: 10
I actually have a nice sandwich going on with two 5870s (EK FC5870 V2). I removed the mounting plates to get them as close together as possible. As you know, 5870s have those stacked DVI connectors, so they always take two slots, but even with 5970s you can't really get them much closer to each other without a really slim block (XSPC) and some exotic SLI/Crossfire interconnects (Swiftech).

There haven't been any problems yet. I do have zip ties securing them together.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
In other news, I just confirmed a suspicion that not all the slots may be usable. Fortunately, only one of the 18 slots won't work, not 2 as I had previously thought. This is because the slot nearest the SHB doesn't actually connect to the 2 main fanout switches on the board - instead, it routes to the x4 connector adjacent to the host board, and that x4 connector is used for an optional extra daughtercard that attaches via another connector to the SHB. Even if I had that expansion card and could plug it into the current board, I'm not sure that the new board (arriving Tuesday) will have that expansion capability. So as of right now, 17 usable slots.

Additionally, I've been thinking harder about a rack form factor and how to achieve it. Basically, the 2 main constraints are a maximum case width of about 17 to 17.5 inches and a maximum depth of about 27 to 28 inches. Vertical space in a "normal" system that uses this board is 4U (7 inches), but this system isn't normal.

If I aircool, I need 2 levels of GPUs, so rack height without PSUs is 6-7U. The PSUs might be able to fit into the depth (they are about 12 inches deep), but the fans in them blow the wrong way, and the power cords would then be coming out the "front" of the rig, from a rack perspective, which is awkward. Standing up, they need 3U, but turning them on their sides buys me 1U of height.

If I watercool, then I need the depth for manifolds, hose, and other apparatus, so the PSUs would still have to stack, but all of it could probably be crammed into 6 or 7 total U. As much as I would love to have a packaged system, I doubt I could fit an appropriate radiator and pump into that space, so it would probably have to be external, like DeathAndTaxes' setup. BTW, has anyone actually done any long-term tests with more than 2 watercooled cards directly adjacent to each other, without an empty slot in between? The more I consider it, the more I am sure it would be prone to failure unless they weren't linked directly to each other. If that were the case, I would have to use right-angle connectors on each and every card, and that would get tedious real fast.
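(For anyone checking the math: a rack unit is 1.75 inches, so the 4U figure above is the 7 inches quoted, and 6-7U works out to 10.5-12.25 inches of vertical space to play with.)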
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
The fan controller is coming along nicely. Would you prefer a certain variation of the Arduino?
I think a Nano would be perfect for this:
http://sumaoutlet.com/images/Electrical%20&%20Tools/SH460467.jpg

That would work well and be really compact if I desolder all the leads.
Whatever works, I'm not too fussed. I don't even know what the rest of the product line is like.

Is the plan for it to be manually controlled? A temp probe? Interfacing with cgminer? I don't really know what you have in mind, actually, and I wasn't sure how crazy you were planning on getting with it.
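If it goes the temp-probe route, the logic is dead simple. Here's a rough sketch of what I'd expect the loop to look like (the pin choices, the thermistor wiring, and the 40-80 C mapping are all placeholder assumptions on my part, and the temperature conversion is deliberately crude):

// Rough fan-controller sketch for an Arduino Nano.
// Assumed wiring: thermistor divider on A0, 4-pin PWM fan on D9.
const int SENSOR_PIN = A0;
const int FAN_PIN    = 9;      // PWM-capable pin on the Nano
const float T_MIN    = 40.0;   // below this, fan at minimum duty
const float T_MAX    = 80.0;   // at or above this, fan at 100%

float readTempC() {
  // Crude linear approximation for illustration only; a real build
  // would apply the Steinhart-Hart equation for the thermistor.
  int raw = analogRead(SENSOR_PIN);   // 0..1023
  return (raw / 1023.0) * 100.0;      // pretend the range is 0..100 C
}

void setup() {
  pinMode(FAN_PIN, OUTPUT);
  Serial.begin(9600);                 // report readings to the host
}

void loop() {
  float t = readTempC();
  // Map temperature onto an 8-bit PWM duty cycle, clamped to 0..255.
  int duty = constrain((int)((t - T_MIN) / (T_MAX - T_MIN) * 255.0), 0, 255);
  analogWrite(FAN_PIN, duty);
  Serial.println(t);                  // a host-side script could log this
  delay(1000);
}

Interfacing with cgminer would just mean having a host-side script poll the GPU temps and send target duty cycles down the same serial link instead of relying on the probe.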
legendary
Activity: 938
Merit: 1000
What's a GPU?
The fan controller is coming along nicely. Would you prefer a certain variation of the Arduino?
I think a Nano would be perfect for this:
http://sumaoutlet.com/images/Electrical%20&%20Tools/SH460467.jpg
That would work well and be really compact if I desolder all the leads.
Or I could stick the whole thing on a PCB and grind off the extra length of the leads.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Did you get the new backplane yet?
I think you mean the host card (SHB), and no, that will arrive on Tuesday.
member
Activity: 85
Merit: 10
Did you get the new backplane yet?
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
The total expenditure so far on the project is $2,865.86, not including time spent or video cards, since I had those already (13x 5870s from Bensoutlet).
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
If loaded to the gills with big GPUs, this rig needs a lot of power. Fortunately, PDUs that can handle it are cheap these days. I got this massive sucker for less than a hundred bucks with shipping:

[photo of the PDU]

Rated for 62 amps at 250 VAC. It has a 25-amp circuit breaker on each outlet, although each outlet is rated for a max of 16 amps, and each section of 2 receptacles is rated for a max of 16 amps total. It has current monitoring on each outlet, Ethernet and serial ports for monitoring, and it includes an environmental probe for temperature monitoring. The plug is massive, although I'll probably end up hardwiring it. The cable is 3-conductor 6 AWG.
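To put those ratings in perspective, a quick back-of-the-envelope check (treating VA and watts as interchangeable here, and assuming roughly 220 W per loaded 5870, which is just my guess):

#include <cstdio>

int main() {
    // PDU ratings from the spec above; GPU draw is an assumed figure.
    const double pdu_amps    = 62.0;   // whole-unit rating
    const double volts       = 250.0;
    const double branch_amps = 16.0;   // per 2-receptacle section
    const double gpu_watts   = 220.0;  // rough 5870 draw under load (assumption)
    const int    gpus        = 13;

    printf("PDU capacity:      %.0f VA\n", pdu_amps * volts);     // 15500
    printf("Branch capacity:   %.0f VA\n", branch_amps * volts);  // 4000
    printf("13x 5870 estimate: %.0f W\n", gpus * gpu_watts);      // 2860
    return 0;
}

So even fully loaded, the cards draw well under the PDU's total capacity, with plenty of headroom for the host board and fans.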
legendary
Activity: 938
Merit: 1000
What's a GPU?
I was wondering what that was called, thanks. My PC is good enough to easily run a few extra Windows instances for friends.
member
Activity: 85
Merit: 10
Did you guys know Microsoft has a product called "MultiPoint Server"? It allows, if I'm not mistaken, up to 10 monitors (desktops/sessions) on one computer. This of course is not the same as 10 GPUs, but my point is that the Windows OS itself is capable of handling a great number of screens, so maybe only a driver limitation applies to Windows. It would be interesting to test these limits, but I don't have 12 monitors, and my current rig won't accept any more cards either.
legendary
Activity: 938
Merit: 1000
What's a GPU?
The tight coupling w/ xorg likely means AMD will never support >8 GPUs unless they redesign their drivers from scratch (which IMHO they should).
Agreed. And thanks for the info.
donator
Activity: 1218
Merit: 1079
Gerald Davis
Additionally -- as I understand it -- it is not the OS that applies the limit, but the ATI drivers.

There are multiple limits:
The AMD driver has a hard limit of 8 GPUs.
An unmodified Linux kernel doesn't support more than 8 GPUs.
xorg (and likely Aero) is incompatible with >8 GPUs.
x86 OSes don't support >7 GPUs due to memory mapping limitations.
Windows XP (even x64) has never worked w/ 8 GPUs.
Windows Vista/7 didn't support >4 GPUs until driver 11.5 (or was it 11.4?).
Most (all?) BIOSes don't properly support >8 GPUs w/o modification (I had to turn off some unused components to even get 8 GPUs to POST).

The only non-virtualized system with >8 GPUs that I am aware of uses Nvidia (no hard driver limit and no dependency on xorg). Even then, it required a modified x64 Linux kernel and support from both NVidia and the motherboard manufacturer, who wrote a custom BIOS that properly mapped the 12 GPUs.

So it isn't simply a matter of AMD releasing a new driver. The tight coupling w/ xorg likely means AMD will never support >8 GPUs unless they redesign their drivers from scratch (which IMHO they should).
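For anyone who wants to see how many display devices their kernel actually enumerates (which is a separate question from what the driver will accept), a quick sysfs scan on Linux does it. A minimal sketch, nothing to do with AMD's own tooling:

// Count display-class PCI devices via sysfs (Linux only).
// Build with: g++ -std=c++17 gpucount.cpp -o gpucount
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    namespace fs = std::filesystem;
    int count = 0;
    // Every PCI function appears under /sys/bus/pci/devices; the "class"
    // file holds the 24-bit class code, and 0x03xxxx means display controller.
    for (const auto& dev : fs::directory_iterator("/sys/bus/pci/devices")) {
        std::ifstream cls(dev.path() / "class");
        std::string code;
        if (cls >> code && code.rfind("0x03", 0) == 0) {
            std::cout << dev.path().filename().string() << "  " << code << "\n";
            ++count;
        }
    }
    std::cout << count << " display-class device(s) enumerated\n";
}

Note this only shows what the kernel sees after the BIOS has mapped everything; it says nothing about what Catalyst will accept.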
legendary
Activity: 938
Merit: 1000
What's a GPU?
But I mean, is there a standalone OS that can handle more than 5-6 GPUs?

Both Windows & Linux can handle 8 GPUs but no more.
thx a lot.

Additionally -- as I understand it -- it is not the OS that applies the limit, but the ATI drivers.
full member
Activity: 182
Merit: 100
roundhouseminer
But I mean, is there a standalone OS that can handle more than 5-6 GPUs?

Both Windows & Linux can handle 8 GPUs but no more.
thx a lot.
donator
Activity: 1218
Merit: 1079
Gerald Davis
But I mean, is there a standalone OS that can handle more than 5-6 GPUs?

Both Windows & Linux can handle 8 GPUs but no more.
hero member
Activity: 697
Merit: 500
Can they handle more than 5-6 GPUs? I heard that the Windows Catalyst/ForceWare drivers can't handle more than 5 or 6 GPUs, not sure.
I wonder if I should consolidate all the information into a new topic, since no one ever reads the posts in this one.

Threads of this length become hard to parse, and it's hard to maintain interest through the entire thread, especially when the last few pages consist of the same "how will you run that many GPUs" question without any progress on that front. It might be worth closing this thread and linking, in the last post, to a new thread with updates.
I guess I'm probably taking it a bit more personally than I should, since this is my baby. But perhaps I will start a new thread when I get the new host board.

Understandable. Keep in mind, though, that it is a serious time investment to read a 20-page thread if you haven't been following it from the start. Keep at it; we want to see this thing completed.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Can they handle more than 5-6 GPUs? I heard that the Windows Catalyst/ForceWare drivers can't handle more than 5 or 6 GPUs, not sure.
I wonder if I should consolidate all the information into a new topic, since no one ever reads the posts in this one.

Threads of this length become hard to parse, and it's hard to maintain interest through the entire thread, especially when the last few pages consist of the same "how will you run that many GPUs" question without any progress on that front. It might be worth closing this thread and linking, in the last post, to a new thread with updates.
I guess I'm probably taking it a bit more personally than I should, since this is my baby. But perhaps I will start a new thread when I get the new host board.
full member
Activity: 182
Merit: 100
roundhouseminer
Sorry, I didn't read all 20 pages; maybe the answer was already written on page 2, 3, 4, or wherever. -_-
Never mind. Virtualizing, OK, thanks.
But I mean, is there a standalone OS that can handle more than 5-6 GPUs?
hero member
Activity: 697
Merit: 500
Can they handle more than 5-6 GPUs? I heard that the Windows Catalyst/ForceWare drivers can't handle more than 5 or 6 GPUs, not sure.
I wonder if I should consolidate all the information into a new topic, since no one ever reads the posts in this one.

Threads of this length become hard to parse, and it's hard to maintain interest through the entire thread, especially when the last few pages consist of the same "how will you run that many GPUs" question without any progress on that front. It might be worth closing this thread and linking, in the last post, to a new thread with updates.