
Topic: Mining rig extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS]

rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Quote
For power, I got myself a Dell 2360 watt power supply from their current-generation blade servers - it cost me $95
That's pretty cheap for a 2kW power supply. Any disadvantages in using this rather than a standard ATX power supply?
Yes, mainly that I will have to solder up some custom connections to it, and figure out how to turn it on. See the picture in the imgur album of the output connectors on the thing.

Other than that though, not really - it has very high efficiency (80Plus Gold level), and is very compact in size (smaller than one of my PCP&C 1200 watt PSUs).
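For a rough sense of what 2360 watts means at the connectors, here is the back-of-the-envelope math; the single-12V output and ~90% Gold-level efficiency are assumptions on my part, not datasheet figures:

Code:
# Rough numbers for the Dell 2360W blade-server supply vs. a standard ATX unit.
# Assumptions (not from the datasheet): single 12V output, ~90% efficient at load.
RATED_OUTPUT_W = 2360
OUTPUT_V = 12.0          # server PSUs are typically 12V-only
EFFICIENCY = 0.90        # ballpark for 80Plus Gold in its sweet spot

amps_at_12v = RATED_OUTPUT_W / OUTPUT_V     # ~197 A on the DC side
wall_watts = RATED_OUTPUT_W / EFFICIENCY    # ~2620 W drawn at full load
amps_at_120v = wall_watts / 120.0           # ~22 A -- beyond a normal 15/20A circuit

print(f"{amps_at_12v:.0f} A @ 12V, {wall_watts:.0f} W wall, {amps_at_120v:.1f} A @ 120V")

Nearly 200 amps on the 12V side is why the output needs soldered heavy-gauge connections rather than ordinary ATX plugs.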

Found some info on the 8 GPU limit, and part of the issue appears to be BIOS limitations.

http://fastra2.ua.ac.be/?page_id=214

Would Infiniband allow access to more than 8 GPUs?
Well, the BIOS in this setup already has a 64-bit addressing option (currently disabled). I wasn't able to make BAMT boot properly with it enabled, and Windows wanted a driver (I didn't know what driver to provide, so I was never able to install Windows). I have no idea how Infiniband works, sorry I can't say there.

-----------------------

In other news, this thing is gonna end up costing me. I just got a PM from the guy that makes awesome miner frames (website: http://richchomiczewski.wordpress.com/), and since I am such a sucker for a well-designed frame/case, I'm getting him to quote me on a custom version to fit this board. I hate all these sigs that solicit donations for no reason, but I wonder if I oughta start posting one, lol.

Firstbits: 1ngldh
sr. member
Activity: 448
Merit: 250
Nice to see this thing already pumping away...

full member
Activity: 196
Merit: 100
Web Dev, Db Admin, Computer Technician
Found some info on the 8 GPU limit, and part of the issue appears to be BIOS limitations.
Quote
Initially, when we first connected 13 GPUs, the system refused to boot. After discussing this with ASUS, we concluded that the problem was that the GPUs required a larger block of physical address space than the BIOS was able to provide. The 32 bit BIOS can only map the PCI devices (including PCI-E) below the 4GB boundary, so this meant there was at most roughly 3GB of address space available for the devices. Because each GPU requires a block of 16MB, a block of 32MB and a block of 256MB, only 8 or 9 GPUs worked, depending on how many on-board devices we disabled in the BIOS setup. Adding more cards than that caused a boot failure.

ASUS was extremely helpful with solving this, and they provided a custom BIOS for our motherboard that skipped the address space allocation of the GTX295 cards entirely.  This is also the reason we have a single GTX275 card in the FASTRA II: it is the one card that is fully initialized by the BIOS and can provide graphics output to the monitor.

With this custom BIOS, the system booted successfully, but without working GTX295 cards since those were not initialized yet. To enable these cards, we modified a Linux 2.6.29.1 kernel (the latest at the time) to allocate physical address space to the GPUs manually. Since the kernel is 64-bit, we could map the large 256MB resource blocks above the 4GB boundary, thereby ensuring there was plenty of room for them. The smaller 16MB and 32MB blocks easily fit below 4GB, where the GPU required them.

The remaining problem was unexpected: each GPU requires a block of 4KB of I/O port space, for which only 64KB is reserved in total. Together with low-level system devices and devices like network and USB controllers also taking up I/O space, this was a very tight fit. We needed to re-map inefficiently allocated system devices and disable as many devices as possible entirely, such as the RAID controller and the second network controller. From later experiments we suspect it might actually only be necessary to allocate this 4KB block of I/O ports for the primary VGA controller, but we haven't verified that.
http://fastra2.ua.ac.be/?page_id=214
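The arithmetic behind those limits works out roughly as follows; the per-GPU block sizes and the ~3GB window come from the write-up above, while the alignment/overhead caveat is my own rough gloss:

Code:
# Why a 32-bit BIOS tops out around 8-9 GPUs (block sizes from the FASTRA II notes).
MMIO_WINDOW_MB = 3 * 1024           # ~3GB left below the 4GB boundary for devices
PER_GPU_MMIO_MB = 16 + 32 + 256     # the three address blocks each GPU requests
IO_SPACE_KB = 64                    # total legacy I/O port space
PER_GPU_IO_KB = 4                   # I/O ports needed per GPU

max_by_mmio = MMIO_WINDOW_MB // PER_GPU_MMIO_MB    # 10 in theory
max_by_io = IO_SPACE_KB // PER_GPU_IO_KB           # 16, before other devices

print(max_by_mmio, max_by_io)
# In practice, alignment of the 256MB blocks plus on-board devices (VGA, NICs,
# USB, RAID) eat into both budgets, which is how 8-9 cards ends up being the
# ceiling unless the big blocks get remapped above 4GB by a 64-bit kernel.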

Would Infiniband allow access to more than 8 GPUs?
legendary
Activity: 2058
Merit: 1431
Quote
For power, I got myself a Dell 2360 watt power supply from their current-generation blade servers - it cost me $95
That's pretty cheap for a 2kW power supply. Any disadvantages in using this rather than a standard ATX power supply?
hero member
Activity: 533
Merit: 500
Awesome job there!  Wow they're packed in tight (but that's the point)!  Good to know they all work just fine.
legendary
Activity: 3472
Merit: 1721

From around 2.5 ghash to almost 3. woot.

You might get some more MHash/s by increasing the VRAM clock to 300-350MHz, but it shouldn't be more than ~2-4MHash/s per card, while also increasing temps by ~1.5-2 degrees (that was my case).


rjk
sr. member
Activity: 448
Merit: 250
1ngldh
These are 5870s? Shouldn't you be getting ~415-425MHash/s with these clocks?

EDIT: now I see, it is because of 160MHz VRAM clocks. Raise them to 260MHz to get ~70MHash/s more.

From around 2.5 ghash to almost 3. woot.
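A quick sanity check on those numbers, assuming all seven cards gain about the same from the VRAM bump:

Code:
# From ~2.5 GHash/s to ~3 GHash/s across seven 5870s.
CARDS = 7
per_card_before = 355    # ~MHash/s at 930 core / 160MHz VRAM (rough figure)
per_card_gain = 70       # ~MHash/s claimed from raising VRAM to 260MHz

total_before = CARDS * per_card_before / 1000                     # ~2.5 GHash/s
total_after = CARDS * (per_card_before + per_card_gain) / 1000    # ~3.0 GHash/s
print(f"{total_before:.2f} -> {total_after:.2f} GHash/s")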

rjk
sr. member
Activity: 448
Merit: 250
1ngldh
This doesn't have good perspective, so it's hard to see the issue, but this image shows where the molex connectors on the board are, and where they get in the way of the 5870.
My next project is to get some 14 gauge wire and some terminal blocks with bus connectors, and wire up those black terminal strips, so I don't have to use the white molexes at all.
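The motivation for the terminal strips, in rough numbers; the connector and wire ratings below are conservative rules of thumb, not the backplane's actual specs:

Code:
# Slot-power current through the white Molex plugs vs. 14 gauge runs.
# Ratings here are conservative rules of thumb, not datasheet values.
SLOT_12V_W = 66          # PCIe allows ~66W at 12V through the slot itself
CARDS = 7
MOLEX_PIN_A = 8          # a 4-pin Molex 12V contact is often quoted at ~5-11A
WIRE_14AWG_A = 15        # conservative continuous figure for 14 AWG copper

amps_per_card = SLOT_12V_W / 12.0        # ~5.5A of 12V slot power per card
total_amps = CARDS * amps_per_card       # ~38.5A across the backplane
cards_per_14awg_run = int(WIRE_14AWG_A // amps_per_card)   # ~2 cards per run

print(f"{amps_per_card:.1f} A/card, {total_amps:.1f} A total, "
      f"{cards_per_14awg_run} cards per 14 AWG run")
# Each Molex is near its comfortable limit with a couple of cards' worth of
# slot power, while 14 AWG runs to screw-terminal strips spread the same load
# over fewer, fatter joints.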




These are 5870s? Shouldn't you be getting ~415-425MHash/s with these clocks?

EDIT: now I see, it is because of 160MHz VRAM clocks. Raise them to 260MHz to get ~70MHash/s more.
Would 260 really get me that much more hash? brb
legendary
Activity: 3472
Merit: 1721
These are 5870s? Shouldn't you be getting ~415-425MHash/s with these clocks?

EDIT: now I see, it is because of 160MHz VRAM clocks. Raise them to 260MHz to get ~70MHash/s more.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
That's cool.

Are you going to add 8? 9? 10? I thought I read the issue with 8+ is the Linux ATI drivers? Anyway, this is freaking badass, please let us know how it goes adding more.
The 8 GPU limit is still alive and well, as far as I know. I have not yet tried more than 8 GPUs.
hero member
Activity: 728
Merit: 501
CryptoTalk.Org - Get Paid for every Post!
That's cool.

Are you going to add 8? 9? 10? I thought I read the issue with 8+ is the Linux ATI drivers? Anyway, this is freaking badass, please let us know how it goes adding more.

UPDATE:

I was having software issues with one of my existing 6-card rigs, and I had a spare card around, so I whipped this together. The frame and mobo next to the backplane aren't doing anything; I just didn't feel like taking the PSUs out of the frame to hook this up. I ran out of molex connectors, so I had to use a third PSU. The main PSU is a 1300 watt Rosewill Lightning, and the other 2 are Cooler Master GX750s.

rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Very pretty but will 5870s get enough air like that?
So far they are happy and stable at 930/160 with 50% fan, running between 61 and 82 degrees. The 82-degree card is squashed in tight because I wasn't able to insert a quarter-inch shim (too tight). I'm going to leave it mining for 24 hours and see what happens.

Actually I just tried again to see, and I was able to get a shim in partially, which made the card drop down to 78 degrees. I'll leave it like that.

EDIT: Down to 75 now. What a difference a quarter inch makes.
donator
Activity: 1218
Merit: 1079
Gerald Davis
Very pretty but will 5870s get enough air like that?
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
UPDATE:

I was having software issues with one of my existing 6-card rigs, and I had a spare card around, so I whipped this together. The frame and mobo next to the backplane aren't doing anything; I just didn't feel like taking the PSUs out of the frame to hook this up. I ran out of molex connectors, so I had to use a third PSU. The main PSU is a 1300 watt Rosewill Lightning, and the other 2 are Cooler Master GX750s.

So this is seven (7) HD 5870s. Not a record breaker, I know, but just proving that the system works so far. It was a simple plug-n-play experience, after configuring BAMT. No major hurdles, it just worked. Clocks are currently 930/160 on all cards, fan speed is 50%, and everything else is using BAMT defaults. I would have gone 950/160, but the airflow is rather restricted.


Setting up, not started yet.


Mining!


Sorry, the flash reflected on the screen, but you can read most of it.


The PDU that I am using won't plug into my Kill A Watt, because it has a high-amperage plug. But its readout shows 11 amps at 120 VAC.
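Those 11 amps line up with seven cards at these clocks; the per-card draw and blended PSU efficiency below are guesses for the sake of the check, not measurements:

Code:
# Cross-check the PDU reading against 7x HD 5870 at 930/160.
WALL_A, WALL_V = 11, 120     # what the PDU readout shows
EFFICIENCY = 0.85            # blended guess across the three PSUs
PER_CARD_W = 150             # rough DC draw for a 5870 with VRAM underclocked
CARDS = 7

wall_w = WALL_A * WALL_V                    # 1320W at the wall
dc_w = wall_w * EFFICIENCY                  # ~1120W delivered
overhead_w = dc_w - CARDS * PER_CARD_W      # ~70W left for the SHB, backplane, fans

print(wall_w, round(dc_w), round(overhead_w))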
hero member
Activity: 742
Merit: 500
hero member
Activity: 628
Merit: 500
I looked at these b/c I have some rack space, but the price was too high. I'll also tag along to see where this goes.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
EDIT: Spoke on IRC, and it seems that VT-d was the idea, but unfortunately my current hardware doesn't support it.
Sorry, you guys must not have seen my edit.

Intel has VT-d (my SHB is too old to support it; see the end of page 1), and AMD's equivalent is IOMMU. They both do the same thing.

Seems like VT would work though; how much are VT-d capable host boards?
Starting at more than $1600 new, and going up to more than three grand for a dual quad-core Xeon beast.
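For anyone wondering whether their own box could pull the VT-d/IOMMU trick, a quick Linux-side check looks something like the sketch below; note that the vmx/svm flags only prove VT-x/AMD-V, while VT-d/AMD-Vi also needs chipset and BIOS support, and the exact dmesg strings vary by kernel:

Code:
# Quick-and-dirty check for virtualization support on Linux.
# vmx/svm in /proc/cpuinfo only shows VT-x / AMD-V; VT-d / AMD-Vi (the IOMMU)
# is a chipset+BIOS feature, usually visible as DMAR / AMD-Vi lines in dmesg
# (reading dmesg may need root on locked-down kernels).
import re
import subprocess

cpuinfo = open("/proc/cpuinfo").read()
has_vt = bool(re.search(r"\b(vmx|svm)\b", cpuinfo))

dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
has_iommu = bool(re.search(r"DMAR|AMD-Vi|IOMMU", dmesg))

print("VT-x / AMD-V flag present:", has_vt)
print("IOMMU hints in dmesg:", has_iommu)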
sr. member
Activity: 448
Merit: 250
I am tagging along to watch the progress on this beast. Someone's datacenter is gonna get a lot louder...
hero member
Activity: 560
Merit: 500
Ad astra.
Do tell? If it is anything to do with running multiple X sessions, I am enough of a Linux noob to not know how this works or where to begin. If you mean using VT-d and some virtual machines to overcome the driver limits, that sure sounds good, but my hardware is not capable (no VT-x or VT-d unless I get a more expensive host board). Or, if you mean modifying the drivers in any way, or using specific versions without a limit, perhaps you will enlighten me in that regard?

EDIT: Spoke on IRC, and it seems that VT-d was the idea, but unfortunately my current hardware doesn't support it.

Seems like VT would work though; how much are VT-d capable host boards?
full member
Activity: 210
Merit: 100
Really? Also, please don't say virtualization; these are non-VT-d-capable CPUs.
I don't feel like digging through spec sheets, but doesn't it take a Faildozer^WBulldozer-based CPU to get AMD-Vi (AMD's equivalent of VT-d)? Ugh...
Intel is even worse, as even some of their server-grade CPUs lack VT-d support.