Topic: Mining rig extraordinaire - the Trenton BPX6806 18-slot PCIe backplane [PICS] - page 4

rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Anything new to report? If you get a kernel panic on a Linux guest during boot, or just can't get a Linux guest to boot, I'd be interested in knowing if a similarly configured Windows VM does the same. I think the device IDs for your devices were the last thing we had figured out.
It was the host that panicked, not the guest. The output is on the previous page.

Maybe I'll try a windoz guest and see what happens.
legendary
Activity: 1162
Merit: 1000
DiabloMiner author

AMD has published a specification for IOMMU technology in the HyperTransport architecture.[1] Intel has published a specification for IOMMU technology as Virtualization Technology for Directed I/O, abbreviated VT-d.[2] Information about the Sun IOMMU has been published in the Device Virtual Memory Access (DVMA) section of the Solaris Developer Connection.[3] The IBM Translation Control Entry (TCE) has been described in a document entitled Logical Partition Security in the IBM eServer pSeries 690.[4] The PCI-SIG has relevant work under the terms I/O Virtualization (IOV)[5] and Address Translation Services (ATS).

So, are we done playing Wikipedia lawyer?
legendary
Activity: 1162
Merit: 1000
DiabloMiner author
Have you enabled IOMMU in the BIOS? It is my understanding that it is necessary for PCIe pass-through.
IOMMU is for AMD; I have an Intel platform so I have VT-d. And yes, it is enabled. PVE kernel panics when I try to hotplug the devices to the virtual machine, and the virtual machine won't boot when the devices are added to the config file while it is shut down.

No. IOMMU is an industry term for it. VT-d is an implementation of IOMMU; AMD, Sun, and IBM also offer implementations of their own.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Have you tried ESX(i)? I think it may be a little more user-friendly with the vCenter management software. Just install it to a USB stick and you should be able to create VMs (if you have some storage repository). After you have done that, you can (try to) add devices with the configuration manager (vCenter). You can do that from the start, but I always check that it is working the normal way first...

Why did you choose PVE? Do you feel more comfortable with it?
It was a recommendation, because it is based on a pre-setup KVM kernel. I heard ESXi has a limitation of 4 devices, though?
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Have you enabled IOMMU in the BIOS? It is my understanding that it is necessary for PCIe pass-through.
IOMMU is for AMD; I have an Intel platform so I have VT-d. And yes, it is enabled. PVE kernel panics when I try to hotplug the devices to the virtual machine, and the virtual machine won't boot when the devices are added to the config file while it is shut down.
sr. member
Activity: 349
Merit: 250
BTCPak.com - Exchange your Bitcoins for MP!
Have you enabled IOMMU in the BIOS? It is my understanding that it is necessary for PCIe pass-through.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Ah, I see, it is the Advantech PCE-5126WG2-00A1E, right? Sorry for not reading everything (have done it now :) )
That's the one. And since I never gave a review of it earlier, I'll take the opportunity now to lay out a few points.

First, it is definitely less solid than the old Trenton board that I had before, in terms of build quality, etc. The board material seems to be a bit thinner and more flexible, and it actually came to me somewhat warped in spite of the stiffening/strengthening bar that is soldered across its length on the bottom. Fortunately, it wasn't too far off (just a few millimeters), so I was able to gently bend it back enough so that it would slot into the backplane slots.

The labeling on the various connectors and headers also seems slightly less than ideal; the labels are all mixed in with the usual silkscreen markings for resistors and caps, making it occasionally difficult to determine which label goes with which header. I had to reference the manual to be sure I got each connection correct.

Other than that, though, it is a stellar performer. You have to weigh it against Trenton's equivalent option - the cost difference could be a deciding factor for some people - and I can say I would purchase it again, because it does what it is intended to do just fine. It is loaded with almost every feature available on the C206 chipset, and others besides. I would recommend Trenton's option if absolutely uncompromising build quality is needed, such as in high-vibration environments (airplanes, etc.) where cost isn't an object.

The BIOS is reasonably clear and understandable, with a mostly standard layout. One thing that I didn't see was the 64-bit BIOS address table option, but that may require a custom BIOS. I'm going to keep at it and see about getting that working sometime soon.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
I have a new host board that supports VT-d, but I just need to get it working. Check the past couple pages for discussion.
member
Activity: 78
Merit: 10
This is epic; can't wait to see this monster chugging away at full speed. Makes me mourn the loss of my 6-card setup. Keep going; hell, go the watercooling route, if for nothing else than to live in internet infamy.
legendary
Activity: 1400
Merit: 1000
I owe my soul to the Bitcoin code...
It sure sounds like you are getting closer. Keep it up, rjk!!
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
qm hostpci[n] - your device IDs are different from what I posted. Also notice that.
OK I think I'm getting somewhere - I put this in the config file:
Code:
hostpci0: 06:00.0
hostpci1: 08:00.0
hostpci2: 0a:00.0
hostpci3: 15:00.0
and the VM started, but now I can't access its VNC terminal; it is just blank. I forgot to get its IP, so I can't connect with SSH. I'll need to shut it down, remove the devices, boot it up, and grab the SSH details.
Update: removing all but one of the hostpci lines from the config file results in the guest booting, but not all the way. It seems to have trouble starting X; the VNC connection just flashes on and off endlessly, but the guest does respond to ACPI button commands, if nothing else. In this state, SSH doesn't seem to be enabled either.
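(One generic trick for the lost-IP problem - not PVE-specific, and it only works if the guest has exchanged traffic with the host - would be to look up the MAC from the VM's net0 line in the host's ARP table:)
Code:
# the MAC pattern below is a placeholder - substitute whatever net0 in 100.conf says
arp -n | grep -i "aa:bb:cc"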

I'll have to tinker with it later; I have other things to do right now. But thanks for the tips so far.
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Sorry dog, I'm in the middle of a lake, plus I'm not sober. One of those links mentioned "enabling" PCI passthrough. Like I said, I haven't tried 2.0+ yet; the instructions looked different. If you have time you could try a 1.9 install or try the PVE forums. Maybe wait 'til I sober up as well.
QFT lol. I did run the enable step; it was basically just editing the grub config to add a flag to enable VT-d. No errors there. I'll check a few other places and maybe give 1.9 a shot.
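(For anyone following along, the flag in question is presumably the standard Intel one from the generic KVM recipe - on grub2 the edit looks roughly like this, but double-check against the PVE wiki:)
Code:
# /etc/default/grub - append the VT-d flag to the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# then regenerate the config and reboot:
update-grub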
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
info
OK, I installed Proxmox VE, and have a few notes. I tried to get you on IRC, but I guess you were away.

The config files appear to be in /etc/pve/qemu-server, not /etc/qemu-server Grin
I can see the video cards in lspci (testing with 4 cards to start):
Code:
root@HASH9000:/etc/pve/qemu-server# lspci -v | grep "Cypress XT"
06:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Cypress XT [Radeon HD 5870] (prog-if 00 [VGA controller])
08:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Cypress XT [Radeon HD 5870] (prog-if 00 [VGA controller])
0a:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Cypress XT [Radeon HD 5870] (prog-if 00 [VGA controller])
15:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Cypress XT [Radeon HD 5870] (prog-if 00 [VGA controller])
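(Side note I haven't chased down yet: these cards presumably also expose an HDMI audio function at xx:00.1, and some KVM passthrough guides say all functions of a device must be assigned together. Worth checking:)
Code:
lspci | grep -i audio    # expect entries like 06:00.1 for each 5870 if the audio functions are present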
When I run these commands:
Code:
root@HASH9000:/etc/pve/qemu-server# qm monitor 100
root@HASH9000:/etc/pve/qemu-server# device_add pci-assign,host=06:00.0,id=GPU0
it is epic fail; it causes a kernel panic with the following output:
Code:
root@HASH9000:/etc/pve/qemu-server#
Message from syslogd@HASH9000 at May  5 14:55:28 ...
 kernel:------------[ cut here ]------------

Message from syslogd@HASH9000 at May  5 14:55:28 ...
 kernel:invalid opcode: 0000 [#1] SMP

Message from syslogd@HASH9000 at May  5 14:55:28 ...
 kernel:last sysfs file: /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/0000:02:06.0/0000:06:00.0/device

Message from syslogd@HASH9000 at May  5 14:55:28 ...
 kernel:Stack:

Message from syslogd@HASH9000 at May  5 14:55:28 ...
 kernel:Call Trace:

Message from syslogd@HASH9000 at May  5 14:55:28 ...
 kernel:Code: 4c 89 ef e8 58 24 ee ff 4d 39 e6 48 8b 43 10 75 cf 48 83 c4 18 5b 41 5c 41 5d 41 5e 41 5f c9 c3 49 8b 7d 20 e8 e7 9a da ff eb c9 <0f> 0b eb fe 90 55 48 89 e5 0f 1f 44 00 00 48 85 ff 74 13 8b 05

Message from syslogd@HASH9000 at May  5 14:55:29 ...
 kernel:Kernel panic - not syncing: Fatal exception
Sad
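(Before digging further, it's probably worth sanity-checking that VT-d actually came up on the host - these are generic KVM diagnostics, nothing PVE-specific:)
Code:
dmesg | grep -e DMAR -e IOMMU    # look for DMAR / "IOMMU enabled" messages
cat /proc/cmdline                # confirm the intel_iommu=on flag actually took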
Pasting the following lines in 100.conf did nothing at all:
Code:
device_add pci-assign,host=06:00.0,id=GPU0
device_add pci-assign,host=08:00.0,id=GPU1
device_add pci-assign,host=0a:00.0,id=GPU2
device_add pci-assign,host=15:00.0,id=GPU3
And finally, these:
Code:
qm -hostpci0 05:00.0
qm -hostpci1 04:00.0
qm -hostpci2 0a:00.0
appear to be invalid commands.
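(My guess - unverified - is that they at least need a VMID and the set subcommand, per the qm manual excerpt quoted earlier, something like:)
Code:
qm set 100 -hostpci0 06:00.0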

So do you have any other ideas? The host is Proxmox VE 2.1, and the guest is Debian netinstall 64-bit with the GUI enabled.
full member
Activity: 165
Merit: 100
Your Argument is Irrelephant
I provided some incorrect info earlier on Debian specifics for virtualization with KVM and Proxmox VE; I'm here to correct it. Also, it looks like there is a newer version that I haven't used, so YMMV with all this. If you do go the PVE route with a KVM hypervisor, here are the docs I dug up:

Basically, you'll have a Debian install and need the device IDs for all the video cards:

Code:
lspci -v |grep VGA
04:00.0 VGA compatible controller: ATI Technologies Inc Device 6719
05:00.0 VGA compatible controller: ATI Technologies Inc Device 6719
0a:00.0 VGA compatible controller: ATI Technologies Inc Device 6719

I posted incorrect paths to the VM config files; the actual path is:

Code:
/etc/qemu-server/VMID.conf
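(For orientation, a freshly created VM's conf file contains fields along these lines - this is my recollection of the field names, so check the vm.conf manual linked below; the values here are invented:)
Code:
name: miner01
memory: 2048
sockets: 1
cores: 2
bootdisk: virtio0
virtio0: local:100/vm-100-disk-1.raw
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0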

Full documentation on these options:
http://pve.proxmox.com/wiki/Manual:_vm.conf

The bottom of this page describes the format for PCI passthrough of device IDs:
http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM

IMO, the steps above "VT-d device hotplug" are unnecessary with PVE.
To reiterate, my steps to get PCI passthrough working in the past with this setup have been:

1. Enable VT-d in the BIOS. This will let us create and install KVM virtual machines once PVE is installed.
2. Install PVE. It's a bare-metal installer and uses all disk space.
3. Log into the web interface and create a VM however you like. Stop it.
4. Edit the VM's .conf file and add the device IDs. For you, this might be something like:

Code:
device_add pci-assign,host=04:00.0,id=GPU0
device_add pci-assign,host=05:00.0,id=GPU1
device_add pci-assign,host=0a:00.0,id=GPU2

5. Start your VM; the new VGA devices should be detected.

When I say VMID, I mean the standard naming convention in PVE of 101 for the first VM, 102 for the second, and so on with the defaults. One of those links says you can do this hot-plug from a KVM guest console, which is available from the web interface or CLI; I've never tried that myself (rough sketch below).
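(If you want to try it, the hot-plug route would presumably be qm monitor plus device_add, mirroring the config-file syntax above - again, untested by me:)
Code:
qm monitor 101
device_add pci-assign,host=04:00.0,id=GPU0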

More links:
http://www.linux-kvm.org/page/VGA_device_assignment
http://www.linux-kvm.com/content/pci-passthroug-digital-devices-cine-s2
http://pve.proxmox.com/wiki/Tape_Drives

http://pve.proxmox.com/wiki/Manual:_qm
The qm command (man qm) is your friend for KVM VM management from the CLI on this distro.
Commands like "qm start 101" are an instant win. See this:

Code:
 -hostpci[n] HOSTPCIDEVICE
 
             Map host pci devices. HOSTPCIDEVICE syntax is:
             
             'bus:dev.func' (hexadecimal numbers)
             
             You can use the 'lspci' command to list existing pci devices.
             
             Note: This option allows direct access to host hardware. So it
             is no longer possible to migrate such machines - use with
             special care.
             
             Experimental: user reported problems with this option.

So another, less supported option, which I have most likely used, is to simply issue commands like:

Code:
qm -hostpci0 05:00.0
qm -hostpci1 04:00.0
qm -hostpci2 0a:00.0

to add my PCI devices.


http://pve.proxmox.com/wiki/Pci_passthrough
This one details "enabling" PCI passthrough in PVE 2.0, which I haven't used yet. To my knowledge this step wasn't required up through PVE 1.9, but it may be required now. IDK here; this seems new to me, and I have never performed this step.

Enjoy the info dump homie Wink

hero member
Activity: 518
Merit: 500
As much as I like this project (I was actually hoping/dreaming of somebody trying this), I think the $$$ is too much compared to a normal cheap mobo setup.

Otherwise, great job rjk!
rjk
sr. member
Activity: 448
Merit: 250
1ngldh
Those things are mahoosive!!!  Grin

Those are the controller boards, correct?
Yes, each one is basically a complete computer on a card - processor, RAM, integrated video, SATA, Ethernet, Serial, and so forth. The backplane is strictly for expansion and power. Only one per system, with this backplane.

Does this mean you are building two of these beasts now? Why do you have the "brain" for two?
I got the first one when I started the project, but it was a cheap $200 card with ancient processors that didn't support any form of virtualization. With proper drivers that supported more than 8 devices I would be able to use it, but with the current crippled drivers I had to shell out for a board capable of virtualization. The new card is a C206-chipset board with an E3-1225 processor.
legendary
Activity: 1400
Merit: 1000
I owe my soul to the Bitcoin code...
Those things are mahoosive!!!  Grin

Those are the controller boards, correct?