
Topic: Mining rig extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS]

newbie
Activity: 46
Merit: 0
I hope that window doesn't leak!!   Cry
legendary
Activity: 938
Merit: 1000
What's a GPU?
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Well, I was trying to get a new post written up for another thread, but it's a lot harder than I thought. The main problem right now is that with 12 cards installed, only 11 show up in lspci Sad
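For anyone else counting cards, a quick sanity check from the host (a sketch only; 1002 is the ATI/AMD PCI vendor ID, and the grep is just one way to filter the output):
Code:
# Count the AMD/ATI display devices the kernel actually enumerated
lspci -d 1002: | grep -ci vga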

Anyways, I finally took a deep breath and dove into the wiring of the PSU, and I can say I am pretty proud of the result:

[pictures of the rewired PSU]
hero member
Activity: 518
Merit: 500
Any updates?

I see the nice setup in the magazine thread Wink
legendary
Activity: 938
Merit: 1000
What's a GPU?
I prefer printers as peripherals Wink
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
How much does it cost to walk around and pick through everything? And do you know if they have any old computers? I could always use another C64 Smiley
How about an IBM Selectric II? Grin

member
Activity: 85
Merit: 10
Not really, I just got the fan controller but haven't had a chance to install it. I'd like to do it tomorrow, but that's the day of the state auction, and I sure as heck am not going to miss that. http://www.das.ohio.gov/Divisions/GeneralServices/Surplus/WarehouseNextAuction.aspx Grin

Mmmm serrrrrvers Tongue
legendary
Activity: 938
Merit: 1000
What's a GPU?
How much does it cost to walk around and pick through everything? And do you know if they have any old computers? I could always use another C64 Smiley
legendary
Activity: 1274
Merit: 1000
I work just down the street from our state university. Their surplus outlet is always packed with equipment. It's a 3-story building, and the 1st floor is almost completely dedicated to computers and servers.
legendary
Activity: 938
Merit: 1000
What's a GPU?
Ahh I really wish I could go to that. Piles of computers!!
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Any update? This was the most interesting non-FPGA mining thread.

Is the rig all running now?
Not really, I just got the fan controller but haven't had a chance to install it. I'd like to do it tomorrow, but that's the day of the state auction, and I sure as heck am not going to miss that. http://www.das.ohio.gov/Divisions/GeneralServices/Surplus/WarehouseNextAuction.aspx Grin

I also only have 13 cards, so I need to get some more. I guess I'm kind of waffling around, hoping that a pile of 7990s drops in my lap sometime soon, although that is rather unlikely lol.
sr. member
Activity: 336
Merit: 250
Any update? This was the most interesting non-FPGA mining thread.

Is the rig all running now?
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
luke-jr is virtualizing GPUs with KVM. I'll try to dig up that info, or have a conversation with him, or see how he likes this thread.
You know, I did discuss it with him months ago, but I got only the barest minimum of information, because I didn't have compatible hardware at the time. I'll need to grep my IRC logs to see if he gave out anything technical. All I remember is that it was 1 GPU and that it kept crashing.

EDIT: I found some stuff, and I know there is more, but I can't be bothered to find it all right now.
Code:
Dec 26 19:54:12 	gmaxwell: i thought there was a driver limit of 8 gpus maximum
Dec 26 19:54:28 Driver. Yup.
Dec 26 19:54:35 So run multiple copies of the driver. :)
Dec 26 19:54:55 (according to luke the ati drivers work in kvm with the pci remapping stuff)
Dec 26 19:55:35 so in _theory_ you could start two VMs, map 4 cards to each and run 16 GPUs on a system.
Dec 26 19:55:45 (presuming you could plug in that many)
Dec 26 19:55:58 (and presuming your motherboard doesn't catch fire)
Dec 26 19:57:38 I have been trying to find someone that has actually been able to mine successfully in a virtualized environment
Dec 26 19:57:48 luke-jr:
Dec 26 19:57:52 i know it ought to work
Dec 26 19:58:16 gmaxwell: only with motherboards/CPUs that support it
Dec 26 19:58:44 vt-d, and iommu right
Dec 26 19:58:49 VT-d in my case
Dec 26 19:58:54 but does it work well?
Dec 26 19:58:55 stable?
Dec 26 19:58:58 also, Intel pulled a big scam earlier this year
Dec 26 19:59:02 because i want to try it
Dec 26 19:59:09 well, my Radeon still manages to crash my host system sometimes
Dec 26 19:59:17 hmm
Dec 26 19:59:25 rjk2: you get diminishing returns packing more cards onto one system. ::shrugs::
Dec 26 19:59:26 usually when I try to reinitialize it
Dec 26 19:59:30 which is mostly playing around
Dec 26 19:59:38 if I knew a proper way to do that, it might be OK
Dec 26 20:00:09 what i wanted to do was use intel vt-d with  built-in video powering my desktop environment, and discrete cards mining
Dec 26 20:00:23 rjk2: tht's what I do
Dec 26 20:00:24 z68 based
Dec 26 20:00:46 luke-jr: sounds good, using kvm or xen?
Dec 26 20:01:21 KVM
Dec 26 20:01:47 Z68 = no VT-d
Dec 26 20:01:53 wait what
Dec 26 20:02:05 * catalase_ is now known as catalase
Dec 26 20:02:08 i know the K procs with overclocking don't have it
Dec 26 20:02:17 but z68 doesn't either?
Dec 26 20:02:20 nope
Dec 26 20:02:26 http://ark.intel.com/compare/52816,52812
Dec 26 20:02:31 damn it
Dec 26 20:02:37 return it :P
Dec 26 20:03:13 is that what "embedded options available" means? VT-d?
Dec 26 20:03:19 no
Dec 26 20:03:26 Intel® Virtualization Technology for Directed I/O (VT-d)
Dec 26 20:03:37 it's exclusive to Q67
Dec 26 20:03:38 oh, i see its down lower
Dec 26 20:03:41 at least for Sandy Bridge
Dec 26 20:03:45 hmm
Dec 26 20:04:01 originally it wasn't
Dec 26 20:04:03 man that is lame
Dec 26 20:04:08 but Intel bugged, and decided the solution was to fix their specs
Dec 26 20:04:20 ie, they said the bug was the advertising VT-d on the others
Dec 26 20:04:31 doh! that sucks.
Dec 26 20:04:34 yep
Dec 26 20:04:38 but they also bugged the SATA
Dec 26 20:04:46 so I returned my H67 and got a Q67
Dec 26 20:04:48 :D
Dec 26 20:04:53 lol too true >.>
Dec 26 20:04:58 though it was VERY difficult to get Q67 at the time
Dec 26 20:07:34 luke-jr: have you had the opportunity to test IOMMU (AMD) systems in the same manner yet?
Dec 26 20:08:21 no
Dec 26 20:08:36 I think someone did though, and found they had to do 1 VM per card
Dec 26 20:08:42 lolwut
Dec 26 20:08:44 I've only tested 1 on VT-d
Dec 26 20:09:18 the annoying thing is, if the card crashes, I pretty much have to reboot
Dec 26 20:09:33 in theory, I *should* be able to 'remove' the card and power cycle just it
Dec 26 20:09:39 but I haven't figured out how in practice
Dec 26 20:09:51 even if I remove and rescan, it doesn't get reset
Dec 26 20:10:09 upgrading fglrx *might* help
Dec 26 20:10:28 since newer versions support hot-switching between IGP and discrete on laptops
Dec 26 20:10:58 (the error I was getting found only Google results talking about that)
Dec 26 20:11:37 latest fglrx *seems* to be working just as well as the good old 2.1 now tho
Dec 26 20:12:19 ie, no CPU hogging, and full 308 MH/s
Dec 26 20:12:48 I don't have a sane way to reproduce the card crashing though, so no idea how many months until I can confirm it reinitializes
Dec 26 20:12:48 is that with one card?
Dec 26 20:12:51 yes
Dec 26 20:13:04 i built myself a mini-itx system, jsut to do that
Dec 26 20:13:04 also, OpenCL is a memory hog :/
Dec 26 20:13:37 rjk2: note that PCI bus, and probably PCI-E extenders, will be all-or-nothing to VMs
Dec 26 20:13:49 ie, you might be unable to isolate single cards in separate VMs

And the restart issue luke-jr describes above was worked around later in a different chat: put the machine to sleep and bring it back out, instead of rebooting.
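That sleep trick is essentially a suspend-to-RAM cycle. A minimal sketch of it, assuming a Linux host that supports the mem sleep state and has util-linux's rtcwake available (the 10-second wake delay is an arbitrary example):
Code:
# Suspend to RAM and auto-resume after 10 seconds instead of rebooting;
# the idea is that the stuck GPU gets power-cycled along with the rest
# of the machine.
rtcwake -m mem -s 10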
hero member
Activity: 560
Merit: 501
luke-jr is virtualizing GPUs with KVM. I'll try to dig up that info, or have a conversation with him, or see how he likes this thread.
Just don't mention religion.
hero member
Activity: 697
Merit: 500
If RAM becomes an issue, I'd suggest just trying 8 GB DIMMs. It may work; I've seen more bizarre things than that.
hero member
Activity: 697
Merit: 500
LinuxCoin, maybe? Although it doesn't say how much RAM you need:

http://www.linuxcoin.co.uk/wiki/index.php/Headless_Linuxcoin

Quote
The disk image contains the AMD APP SDK, ATI Catalyst & Drivers, poclbm & phoenix miner, Bitcoin, and more.

The Xeon E3-1225 seems to have just 4 cores & 4 threads... and I think the host wants one, so a maximum of 3 virtual machines, each getting a thread?

You can overprovision the CPU on a VM host. As long as you don't peg the CPU cores on all your VMs, the hypervisor will load-balance and provide enough CPU power for each of them. The major obstacles are always RAM and storage IOPS. Luckily, RAM is dirt cheap.
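As a rough illustration of that overcommit (a sketch only; the qemu-system-x86_64 flags are standard KVM, but the disk image paths and guest RAM figure are hypothetical): three 2-vCPU guests on the 4-thread E3-1225 gives 6 vCPUs on 4 threads, which KVM happily time-slices as long as the guests stay mostly idle on the CPU side.
Code:
# 3 guests x 2 vCPUs = 6 vCPUs on a 4-thread host; KVM time-slices them.
# Image paths and the 420MB guest RAM are made-up examples.
for i in 1 2 3; do
    qemu-system-x86_64 -enable-kvm -smp 2 -m 420 \
        -drive file=/var/lib/vm/miner$i.img,if=virtio \
        -display none -daemonize
done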
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
What's the smallest Linux install I can have that will mine at full speed, and how little RAM can I assign to it? If I have to have 2 devices per VM, that would mean 17 VMs in a dual-GPU situation, which would suck. And since I only have 8GB of RAM, I'd have to allot less than 512MB per VM. I don't know how much RAM PVE uses for Dom0, but if I assume 1GB (like XenServer), that leaves 7GB divided among 17 VMs, or ~420MB of RAM per VM. I suppose that should be enough for something stripped down, but I doubt it would run BAMT with all its monitoring and stuff.
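The arithmetic holds up; a quick shell check (values in MB):
Code:
echo $(( (8192 - 1024) / 17 ))
# => 421   (per-VM RAM with 1GB reserved for Dom0)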
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Having to unbind it from the host first makes perfect sense; I didn't really think of that. I'll have to try those steps and see how I get on.
Played with it a little more.
Unbinding and rebinding to pci-stub seemed to work (is this persistent? If not, how do I make these settings stick between reboots?), and I was able to attach a device to a VM. However, X segfaulted with error 11. I was able to run lspci and see the video card from inside the VM, though. PVE doesn't seem to like having more than 2 PCI devices per VM; if you add more to the VM config, it simply removes them for you.

I tried installing a new VM with the cards pre-attached, and the boot disk just showed a blank screen and went no further. I tried installing Windows 7 x64, but it kept asking for a driver for the CD-ROM, which I didn't have or know where to get.
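On the persistence question: the sysfs binding is lost at reboot. One common way to make it stick, sketched here on the assumption of a GRUB-booted host with pci-stub built into the kernel (the 1002:6898 Cypress XT / HD 5870 ID is only an example; substitute the vendor:device pairs from lspci -n for your cards), is to claim the devices on the kernel command line:
Code:
# /etc/default/grub -- have pci-stub claim the devices at boot,
# before fglrx can grab them
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci-stub.ids=1002:6898"

# regenerate the config and reboot
update-grub
If pci-stub is built as a module instead, the same IDs can go in a modprobe option rather than on the kernel command line.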
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
One quick note: after what you said, I remembered that you have to unbind the PCI device first (logical, right?). Have you done that? I can't recall having read it (but that doesn't say too much).

4. Unbind the device from the host kernel driver (example: PCI device 01:00.0)
Code:
# Load the pci-stub driver if it is compiled as a module
modprobe pci_stub

# Locate the entry for device 01:00.0 in the output and note down
# the vendor & device ID (8086:10b9 in this example):
lspci -n
#    ...
#    01:00.0 0200: 8086:10b9 (rev 06)
#    ...

# Hand the device over to pci-stub
echo "8086 10b9" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo 0000:01:00.0 > /sys/bus/pci/drivers/pci-stub/bind

http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM


In Xen it was something with pciback:

http://wiki.xensource.com/xenwiki/Assign_hardware_to_DomU_with_PCIBack_as_module

really not sure though..
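For the Xen route, a heavily hedged sketch of the pciback equivalent (driver and module names vary across kernel versions; this assumes the xen-pciback module, which registers as the pciback driver, and reuses the 01:00.0 example device):
Code:
modprobe xen-pciback
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo 0000:01:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:01:00.0 > /sys/bus/pci/drivers/pciback/bind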



Having to unbind it from the host first makes perfect sense; I didn't really think of that. I'll have to try those steps and see how I get on.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
I'm on my phone so it makes it a little hard to elaborate on all the points, but I'll try to remember all of them.

Yes, there is onboard video; it doesn't show in that list because I grepped for "Cypress XT", not "VGA". When I grepped for VGA, I got several screens of output and didn't sift through it thoroughly.

I have tried hotplugging and coldplugging different devices. Yes, I would assume that the host is trying to use one of the cards, and that could cause a conflict. I just had a thought: could the host be glomming onto all the cards simply because they are there? I should change my BIOS to boot the internal video first and use it as the primary display.

When hotplugging, it was the host (dom0) that panicked. When coldplugging, the guest (domU) either flashed the VNC terminal on and off repeatedly (when a single device was configured) or showed a black screen and froze (no networking or anything) when all the cards were assigned.

Unfortunately, development is kind of on hold because of time constraints (I'm a weekender) and because these fans at 100% are damn annoying.

I need to get the following list of stuff done:
Custom bracket - need to find a small fabricator near me
Copper bus bar - determine correct size, layout, and method of attaching conductors
PSU - figure out how to turn it on
PCIe 6/8 pin connectors - obtain a bunch and figure out how to hook them to the bus bars (ring/spade terminals? solder? clamp?)
Air channel/duct - needed so the air doesn't escape past the heatsinks over the top - also a job for the fabricator - needs to be slotted so the upper-level PCIe extenders can pass through
Fan controller - wire it in once it arrives and set up the control software (to be written; a coder is lined up for the job)

Damn, I wish it was as easy as throwing some money at it and putting it together like Legos. Tongue Problem is, I'm kinda weak when it comes to the software and scripting/programming side of things. Once it's done, does anyone wanna buy it and have the distinction of owning the world's fastest, most complicated GPU-based rig? Grin It will need a 7kW power outlet and cooling. Grin