
Topic: Mining rig extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] (Read 169362 times)

rjk
sr. member
Activity: 448
Merit: 250
1ngldh
All the parts and accessories for this build have now been sold. Locking thread.

EDIT 7/4/2017: I keep getting asked where to find the connectors for the Dell power supply. I got them from onlinecomponents.com, but Mouser has them in stock. The Molex part page is here: http://www.molex.com/molex/products/datasheet.jsp?part=active/0755425000_PCB_RECEPTACLES.xml
full member
Activity: 160
Merit: 100
Rjk,

I was looking to do basically what you did, but I only want it to work with 6 GPUs. Do you know if there is a 4U rackmount case available that will accept the backplane and SBC? I'd like to do a six-GPU setup, but I need to put it into a rack enclosure. If so, I'll buy it from you.
legendary
Activity: 882
Merit: 1000
Hi, rjk. It's a great pity that this interesting project ultimately failed.
You might have made more progress if you had asked AMD for help directly.
I heard there was an informal "Multi-GPU" project proposed by AMD last year, and
they eventually got 16 GPUs (8x FirePro S10000s) working successfully in a single rig.
So I think AMD might have already developed a proprietary driver to overcome the 8-GPU limit.
See:
http://devgurus.amd.com/thread/158863
http://fireuser.com/blog/8_amd_firepro_s10000s_16_gpus_achieve_8_tflops_real_world_double_precision_/
Hey if you wanna have a go, I've got all the parts except the video cards... Just name your price.

Let me get this straight: you have managed to get it working with 11 GPUs?
If so, then expanding the rest of the way should be simple.

If it works with 11 GPUs, I am willing to buy it, but shipping is international; get a quote and I will pay.

For simplicity, I can pay in BTC.
420
hero member
Activity: 756
Merit: 500
Hello rjk,

Your rig is looking really nice!

Quick question on your setup. Which driver(s) are you using? Is the driver causing the 11 GPU limitation? I believe I read somewhere that the AMD drivers could only handle 8 GPUs but your rig has proven otherwise, kudos!
I don't think it's a driver limit because I don't have any drivers loaded. I am just listing the PCI devices, which as far as I know does not require a driver. That's why I'm pretty sure it's a BIOS limit. There will of course be a driver limit, likely still 8 devices, which is why I was planning on using VT-d and PCIe passthrough with some virtual machines, but I didn't get very far on that.

A research group from the University of Antwerp managed to get 13 GPUs on one ASUS motherboard. They confirmed that the limitation is BIOS-related, as you mentioned. The group ended up customizing the BIOS and hacking the Linux kernel.

Kickass! Is there a video of this?

Hello 420,

Sorry for my late reply,

The system is called Fastra II: http://fastra2.ua.ac.be/
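The driverless PCI enumeration described above can be reproduced straight from sysfs on a Linux host; here is a minimal sketch (it assumes /sys is mounted, as on any normal Linux system, and returns 0 elsewhere):

```python
import os

def count_display_controllers(sysfs_root="/sys/bus/pci/devices"):
    """Count PCI devices whose class code marks them as display
    controllers (base class 0x03). No GPU driver is required: the
    kernel exposes config-space info for every enumerated device."""
    count = 0
    if not os.path.isdir(sysfs_root):  # non-Linux or stripped-down system
        return count
    for dev in os.listdir(sysfs_root):
        class_file = os.path.join(sysfs_root, dev, "class")
        try:
            with open(class_file) as f:
                # e.g. "0x030000" for a VGA-compatible controller
                if f.read().startswith("0x03"):
                    count += 1
        except OSError:
            pass  # device vanished or unreadable; skip it
    return count

print(count_display_controllers())
```

If the BIOS fails to allocate resources for a card, it typically never shows up here at all, which is why this kind of count distinguishes a BIOS limit from a driver limit.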

under construction
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Hi, rjk. It's a great pity that this interesting project ultimately failed.
You might have made more progress if you had asked AMD for help directly.
I heard there was an informal "Multi-GPU" project proposed by AMD last year, and
they eventually got 16 GPUs (8x FirePro S10000s) working successfully in a single rig.
So I think AMD might have already developed a proprietary driver to overcome the 8-GPU limit.
See:
http://devgurus.amd.com/thread/158863
http://fireuser.com/blog/8_amd_firepro_s10000s_16_gpus_achieve_8_tflops_real_world_double_precision_/
Hey if you wanna have a go, I've got all the parts except the video cards... Just name your price.
newbie
Activity: 23
Merit: 0
Hi, rjk. It's a great pity that this interesting project ultimately failed.
You might have made more progress if you had asked AMD for help directly.
I heard there was an informal "Multi-GPU" project proposed by AMD last year, and
they eventually got 16 GPUs (8x FirePro S10000s) working successfully in a single rig.
So I think AMD might have already developed a proprietary driver to overcome the 8-GPU limit.
See:
http://devgurus.amd.com/thread/158863
http://fireuser.com/blog/8_amd_firepro_s10000s_16_gpus_achieve_8_tflops_real_world_double_precision_/
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
rjk, do you still have the entire pwm fan controller setup that I built for this project years ago? I remember taking a lot of pride in that, being my first bitcoin-related project Smiley

Good luck selling, I hope someone will be eager to take on the challenge for sCrypt mining!
Yep I do, and that cool micro-router based wifi sharing thingy.
legendary
Activity: 938
Merit: 1000
What's a GPU?
rjk, do you still have the entire pwm fan controller setup that I built for this project years ago? I remember taking a lot of pride in that, being my first bitcoin-related project Smiley

Good luck selling, I hope someone will be eager to take on the challenge for sCrypt mining!
newbie
Activity: 58
Merit: 0
Not accepting Bitcoins fully? Madness!  Shocked
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
I'm bumping this to hopefully reel in some offers for this stuff, because I need to get rid of it. I have tons of parts that need to go. Here's a list:

The backplane itself, mounted on the custom tray/frame, and hard wired to a 1.2 KW PC Power and Cooling PSU. Because of the way this is mounted, I don't want to take it apart and I wish to sell it as a package. $750

The Sandy Bridge based CPU board with 8 GB of ECC DDR3 RAM as well as the older NLT6313 dual CPU board with 8 GB of DDR2. $1000 Or, the backplane and CPU boards for $1500 together.

All the other accessories - buy the entire package for $2000 including the backplane and CPU boards.

These accessories include:

Delta fans
IBM 240 volt PDU with 70 amp connector
Dell 2360 watt 12 volt power supplies for blade servers
6 AWG cable with Dell PSU connectors soldered on
Additional frame parts
Custom built 240 volt PDU with power cables (for connecting regular rigs)
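For scale, the heavy 6 AWG cabling on that list is there because of the current a 12-volt blade supply has to deliver; the arithmetic is just power over voltage:

```python
def psu_current_amps(watts, volts=12.0):
    """Continuous current a supply delivers at its rated output (I = P / V)."""
    return watts / volts

# The Dell blade-server PSU listed above: 2360 W at 12 V
print(round(psu_current_amps(2360), 1))  # ~196.7 A total
```

Nearly 200 A is well beyond what typical 16 AWG PCIe power leads can carry, hence the soldered 6 AWG feeders.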

SHIPPING IS EXTRA!
If you buy everything, it will probably take up 3 or 4 very large boxes (50+ lbs). For this reason, I will not ship internationally. I'm located in Ohio if anyone wants to pick some stuff up.

Payment: I will not take payment entirely in bitcoins, but I will consider a partial bitcoin trade with cash. Since PayPal sucks, I will only consider using it with the most trusted members. I will also consider partial trades of BFL ASIC hardware.

MAKE OFFERS
full member
Activity: 210
Merit: 100
Hello rjk,

Your rig is looking really nice!

Quick question on your setup. Which driver(s) are you using? Is the driver causing the 11 GPU limitation? I believe I read somewhere that the AMD drivers could only handle 8 GPUs but your rig has proven otherwise, kudos!
I don't think it's a driver limit because I don't have any drivers loaded. I am just listing the PCI devices, which as far as I know does not require a driver. That's why I'm pretty sure it's a BIOS limit. There will of course be a driver limit, likely still 8 devices, which is why I was planning on using VT-d and PCIe passthrough with some virtual machines, but I didn't get very far on that.

A research group from the University of Antwerp managed to get 13 GPUs on one ASUS motherboard. They confirmed that the limitation is BIOS-related, as you mentioned. The group ended up customizing the BIOS and hacking the Linux kernel.

Kickass! Is there a video of this?

Hello 420,

Sorry for my late reply,

The system is called Fastra II: http://fastra2.ua.ac.be/
420
hero member
Activity: 756
Merit: 500
Hello rjk,

Your rig is looking really nice!

Quick question on your setup. Which driver(s) are you using? Is the driver causing the 11 GPU limitation? I believe I read somewhere that the AMD drivers could only handle 8 GPUs but your rig has proven otherwise, kudos!
I don't think it's a driver limit because I don't have any drivers loaded. I am just listing the PCI devices, which as far as I know does not require a driver. That's why I'm pretty sure it's a BIOS limit. There will of course be a driver limit, likely still 8 devices, which is why I was planning on using VT-d and PCIe passthrough with some virtual machines, but I didn't get very far on that.

A research group from the University of Antwerp managed to get 13 GPUs on one ASUS motherboard. They confirmed that the limitation is BIOS-related, as you mentioned. The group ended up customizing the BIOS and hacking the Linux kernel.

Kickass! Is there a video of this?
full member
Activity: 210
Merit: 100
Hello rjk,

Your rig is looking really nice!

Quick question on your setup. Which driver(s) are you using? Is the driver causing the 11 GPU limitation? I believe I read somewhere that the AMD drivers could only handle 8 GPUs but your rig has proven otherwise, kudos!
I don't think it's a driver limit because I don't have any drivers loaded. I am just listing the PCI devices, which as far as I know does not require a driver. That's why I'm pretty sure it's a BIOS limit. There will of course be a driver limit, likely still 8 devices, which is why I was planning on using VT-d and PCIe passthrough with some virtual machines, but I didn't get very far on that.

A research group from the University of Antwerp managed to get 13 GPUs on one ASUS motherboard. They confirmed that the limitation is BIOS-related, as you mentioned. The group ended up customizing the BIOS and hacking the Linux kernel.
newbie
Activity: 14
Merit: 0
Wow.


It's still available. I unfortunately don't know how well 3D would do on a link narrower than x16. I think there have been studies showing that the gaming performance hit would be nil, but rendering and other such workloads would take a major hit.

This is the most powerful SHB on the market that I know of: http://www.trentonsystems.com/products/single-board-computers/picmg-13/bxt7059
It supports up to two 8-core Intel Xeon E5-2448L processors (16 physical and 32 virtual cores with HT), and up to 96GB of RAM if you use 16GB modules in all 6 slots.

If you will be processing video, you will want this backplane instead: http://www.trentonsystems.com/products/backplanes/picmg-13-backplanes/video-processing-gpu-backplane-bpg8032
It has x16 links on all slots.

As for how many cards you can actually install and use... you would be on your own there. It requires virtualization and/or driver tricks that I have no idea how to do (that's mainly why this project flopped), and possibly BIOS mods depending on how you are trying to make it work.

If you want it, let me know.

All things considered, there are a lot of limiting factors to my idea as far as this backplane goes, but with that SHB, while the chips on it would be good for a parallelizable workload, they'd likely be terrible for gaming Sad

I was doing some math... Without an AMD option, even if I could pack the machine with the appropriate Intel chips, it'd likely be more than 3-4x the cost of putting together multiple boxes, even considering duplicated hardware such as cases, power supplies, and so on. I naturally expect a premium for enterprise hardware, but at some point.... I'm sure you get the idea Tongue

Part of me really wants to see the sight of 15 or so monitors plugged into a noisy metal behemoth. Cheesy

Shame I didn't catch this thread back in February. Even if VT-d passthrough weren't an option, I probably could have helped you with using Xen PV passthrough to make it all work, and that would have been a real pleasure to see!

Oh well.  If you want to take a stab at making it work again, drop me a line.  I could at least point you in the right direction Smiley

Also, thanks for the quick reply!
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Quote
At the beginning of the thread is a list of parts and the total that I have spent on all of them. Wonder if someone is interested in buying it...

Probably a bunch of people, myself included, but I guess I would need some more info.

How's the 3D performance of the PCIe cards through the x4(?) electrical connection?

To your knowledge, is there an SHB available (probably AMD-based) that will allow me to pack at least 36 cores into the box?  And at least 64 GB of RAM?

I've built two multi-headed gaming boxes now out of my old mining hardware.  Fun stuff.  If there's a realistic way to stuff 18 heads into a single case (Oh hrm, I'd need more GPUs Cheesy), I'd be open to looking into it.
It's still available. I unfortunately don't know how well 3D would do on a link narrower than x16. I think there have been studies showing that the gaming performance hit would be nil, but rendering and other such workloads would take a major hit.
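Some back-of-the-envelope numbers for the link-width question, assuming PCIe 2.0 (5 GT/s per lane with 8b/10b encoding, which this era of backplane would use):

```python
def pcie2_bandwidth_gbps(lanes):
    """Usable one-direction bandwidth of a PCIe 2.0 link in GB/s.
    5 GT/s per lane with 8b/10b encoding -> 500 MB/s usable per lane."""
    gts_per_lane = 5.0          # giga-transfers per second, per lane
    encoding_efficiency = 0.8   # 8b/10b: 8 data bits per 10 line bits
    bits_per_byte = 8
    return lanes * gts_per_lane * encoding_efficiency / bits_per_byte

print(pcie2_bandwidth_gbps(4))   # x4 link: 2.0 GB/s
print(pcie2_bandwidth_gbps(16))  # x16 link: 8.0 GB/s
```

So an x4 electrical slot carries a quarter of the bandwidth of x16, which matters for texture-heavy rendering far more than for mining, where almost nothing crosses the bus.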

This is the most powerful SHB on the market that I know of: http://www.trentonsystems.com/products/single-board-computers/picmg-13/bxt7059
It supports up to two 8-core Intel Xeon E5-2448L processors (16 physical and 32 virtual cores with HT), and up to 96GB of RAM if you use 16GB modules in all 6 slots.

If you will be processing video, you will want this backplane instead: http://www.trentonsystems.com/products/backplanes/picmg-13-backplanes/video-processing-gpu-backplane-bpg8032
It has x16 links on all slots.

As for how many cards you can actually install and use... you would be on your own there. It requires virtualization and/or driver tricks that I have no idea how to do (that's mainly why this project flopped), and possibly BIOS mods depending on how you are trying to make it work.

If you want it, let me know.
newbie
Activity: 14
Merit: 0
Quote
At the beginning of the thread is a list of parts and the total that I have spent on all of them. Wonder if someone is interested in buying it...

Probably a bunch of people, myself included, but I guess I would need some more info.

How's the 3D performance of the PCIe cards through the x4(?) electrical connection?

To your knowledge, is there an SHB available (probably AMD-based) that will allow me to pack at least 36 cores into the box?  And at least 64 GB of RAM?

I've built two multi-headed gaming boxes now out of my old mining hardware.  Fun stuff.  If there's a realistic way to stuff 18 heads into a single case (Oh hrm, I'd need more GPUs Cheesy), I'd be open to looking into it.
420
hero member
Activity: 756
Merit: 500
Am I mistaken, or are you chaining PCI-e extenders? I was wondering if I could do that...
Yep. For mining, with its minimal bandwidth requirements, it should be fine. It has worked in my other rigs as well; you can chain 1x as well as 16x risers, or combine them however you like.

Multiple times I've tried linking 16x risers, and each time it failed under Windows 7 64-bit with an ASUS WS (socket 1155) board.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
It's possible that XenServer might work, but as far as I know the hardware virtualization requires an expensive license to use, and using open source Xen would be Greek all over again.  Undecided

For the record, XenServer isn't a product I'd recommend for most purposes, due to my not being too fond of the company that makes it.

Xen, by contrast, is open source -- it doesn't cost anything; that includes the hardware virtualization support.  In other words, it just so happens that XenServer uses Xen as part (most) of its "commercial product," but there's no reason regular people who don't hate money can't use it also, for free Smiley
Of course, but unfortunately I'm more of a GUI guy, and don't know the Xen command line well at all.
newbie
Activity: 37
Merit: 0
It's possible that XenServer might work, but as far as I know the hardware virtualization requires an expensive license to use, and using open source Xen would be Greek all over again.  Undecided

For the record -- sounds like you're aware of this, rjk, but I've seen people that aren't -- Xen is open source, and doesn't cost anything; that includes the hardware virtualization support.

It just so happens that XenServer uses Xen as part (a big part, I'd presume) of its commercial offering, but there's no reason regular people (who don't hate money Wink ) can't use it also, for free.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Dunno if this project is dead or what... but if you continue, for the software side, you might want to consider a Xen-based platform. Xen is without doubt an ugly stepchild in the virtualization world, but it's been used extensively for PCI passthrough -- indeed, if I'm not mistaken, it even supported some crude virtual PCI passthrough capability before hardware IOMMUs came to market.

However, I will warn you that Xen adds some complexity, and if you're already struggling you might not like it. Another approach would be to get rid of Proxmox and use regular Linux. If I were doing this myself, I would probably build it as a Gentoo system with a hand-configured kernel -- but again, that's kinda complex.

I've tried proxmox and found it to be pretty buggy and fragile.  Great concept, but they just need to do more work on their implementation.  Everything seemed to be half-way implemented, so as soon as you deviated from the simple use-cases they designed it for, everything fell to pieces.  Actually everything fell to pieces even when I tried to follow the simple guides in their wiki.

Proxmox is basically Debian with an OpenVZ kernel and a bunch of Red Hat cluster software. You don't need the extra complexity that OpenVZ or the Red Hat cluster suite bring into the system -- you might have better luck just running Ubuntu and KVM. It's not that hard -- just follow the guides in the wiki and you'll be 90% of the way there.

I understand the allure of a GUI like Proxmox, but often the "simplicity" offered by such tools is illusory and ends up acting more like a "complexity loan" -- installation will be easy, sure, but then try to actually /do/ anything and you will spend more time than you saved during installation working around all the bugs and limitations of whatever mysterious pile of software is sitting underneath that pretty GUI.
Thanks for the input. Unfortunately, all of the video cards have been sold, although I do still have the rest of the platform (trying to figure out what the hell to do with it....). By the time that I got around to installing anything, KVM was repeatedly being lauded as the be-all, end-all solution, which is why I went with Proxmox. Unfortunately, that was right around a time when I got very busy and could no longer devote time to the project.

I would have preferred Xen, since I currently use the XenServer virtualization from Citrix for my other work stuff, and I know it better, but in either case it probably wouldn't have worked out anyways. I am not very good at Linux stuff in general, and slapping on a layer of complexity like hardware virtualization transformed the entire process into Greek to me.

It's possible that XenServer might work, but as far as I know the hardware virtualization requires an expensive license to use, and using open source Xen would be Greek all over again.  Undecided

Sooo... I don't really know where to go from here. I have the board all wired into the PSU, ready to go, and I have almost 5 kW of 12 VDC available from 2 other PSUs (but no proper distribution bus or anything). I have an awesome frame built by Spotswood (http://richchomiczewski.wordpress.com/), the backplane itself, 2 host boards (one old one and one newer one), the 1200 W PSU, and the two 2360 W PSUs. At the beginning of the thread is a list of parts and the total that I have spent on all of them. Wonder if someone is interested in buying it...
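For anyone who does pick this up and tries the Xen route discussed earlier in the thread, PCI passthrough to a guest generally comes down to hiding each GPU from dom0 (binding it to pciback) and listing its bus address in the domain config. A hypothetical sketch of such a config, with placeholder BDF addresses rather than anything from this actual rig:

```
# /etc/xen/miner-guest.cfg -- minimal HVM guest with two GPUs passed through
builder = "hvm"
memory  = 4096
vcpus   = 2
# PCI bus:device.function addresses as reported by lspci (placeholders)
pci = [ '0000:04:00.0', '0000:05:00.0' ]
```

With several such guests, each holding 8 or fewer GPUs, the per-OS driver limit would in principle no longer apply, though the BIOS enumeration limit discussed above would still have to be solved first.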