Topic: Hardware question: GPU over USB3 instead of PCIe

full member
Activity: 322
Merit: 105
I have tested this expansion card from AliExpress, and it works on:

Z97 Gigabyte MX Gaming 5
Z170 MSI Gaming 7
Z270 ASUS Prime A

It doesn't work on the ASRock J1900, though.

member
Activity: 79
Merit: 36
HODL. Patience.
Does anyone have experience with this GPU-oriented PCIe Expansion Cluster (http://amfeltec.com/products/gpu-oriented-cluster/)? It supposedly allows up to 16 GPUs per PCIe slot on a motherboard. 64 GPUs on one motherboard, anyone?

Now, if only someone would make an SBC with one or more PCIe slots. On the other hand, there are already boards with mini-PCIe. Hmm. Thinking, thinking.

===

Off-topic, slightly, but does anyone also CPU-mine with their rig? It seems a waste if the CPU is powered but not hashing.
member
Activity: 79
Merit: 36
HODL. Patience.
I also wondered about adapter cards that simply provide 3-4 PCIe x1 slots in a single x16 slot. No-go; I couldn't find any.

I was wrong. Something like this might work really well for a mini-ATX motherboard with a single PCIe x16 slot driving three GPUs.

SINTECH PCI-e express 1X to 3port 1X Switch Multiplier HUB riser card +USB cable
https://www.newegg.com/Product/Product.aspx?Item=9SIA6UM45G4650&cm_re=pci_riser-_-9SIA6UM45G4650-_-Product

or, more interesting because it would be _less_ bandwidth-limited:

Supermicro RSC-R2UG-2E4E8 Riser Card
https://www.newegg.com/Product/Product.aspx?Item=9SIA5EM3N86911&cm_re=pci_riser-_-15-121-066-_-Product

If that SINTECH were an x16 to 3-port x1, it would be superb; it still wouldn't reach 15+ GPUs per mobo, but it would be pretty neat.
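
The catch with any such multiplier is that every downstream port shares the single upstream x1 link; here's a quick sketch of the split (the PCIe 2.0 uplink rate is my assumption; the listing doesn't say):

Code:
# All GPUs behind the hub share one x1 uplink to the host.
UPLINK_MBPS = 500  # assumed PCIe 2.0 x1 uplink; halve for PCIe 1.x

for gpus in (1, 2, 3):
    print(f"{gpus} GPU(s) behind the hub: ~{UPLINK_MBPS / gpus:.0f} MB/s each")

For mining, that split is still plenty, per the bandwidth discussion elsewhere in the thread.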

----

I'm moving the technical/driver discussion to a Linux forum. I imagine the obstacle to plugging powered riser slots into the USB bus mostly comes down to drivers. Aside from that, there's no fundamental reason the x1 protocol couldn't be carried over the USB bus: you'd just need wrapper code to encapsulate the PCIe packets inside USB transfers, plus some kernel-level discovery/enumeration code, because nothing is going to look for PCI peripherals on the USB bus.
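
To make the "wrapper code" idea concrete, here's a toy sketch in Python. The frame format, field packing, and constants are entirely made up for illustration; a real tunnel would live in a kernel driver and speak spec-accurate TLPs.

Code:
import struct

MAGIC = 0xBEEF  # hypothetical frame marker, not from any real standard

def build_mem_read_tlp(address: int, dword_count: int) -> bytes:
    """Build a simplified 3-DW PCIe Memory Read Request TLP header.

    The layout loosely follows the PCIe spec (fmt/type, length,
    requester ID, tag, byte enables, address), but this is a toy
    illustration, not a spec-accurate encoder.
    """
    fmt_type = 0x00                       # MRd, 3-DW header, no data
    dw0 = (fmt_type << 24) | (dword_count & 0x3FF)
    requester_id, tag, byte_enables = 0x0100, 0x01, 0xFF
    dw1 = (requester_id << 16) | (tag << 8) | byte_enables
    dw2 = address & 0xFFFFFFFC            # DW-aligned 32-bit address
    return struct.pack(">III", dw0, dw1, dw2)

def wrap_for_usb(tlp: bytes) -> bytes:
    """Prefix the TLP with a tiny frame header so the far end can
    delimit packets inside a USB bulk stream."""
    return struct.pack("<HH", MAGIC, len(tlp)) + tlp

frame = wrap_for_usb(build_mem_read_tlp(0x80000000, dword_count=1))
print(frame.hex())  # what would be handed to a bulk OUT endpoint

The framing is the easy part; the killer is latency, since every PCIe read would pay a full USB round trip, which is presumably why shipping products do this in hardware (Thunderbolt) rather than in software.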
member
Activity: 79
Merit: 36
HODL. Patience.
USB risers are not using the USB protocol; they just use the cable, since it's durable and works easily even over distance (unlike ribbon cable).

GPU bandwidth doesn't matter in terms of the riser, because the GPU isn't pushing all the calculation data back through the riser to the system; it only sends back what it solved from the huge amount of data.

Which is why a PCIe x1 link (what USB risers carry) is more than enough for virtually every algo, at roughly 250 MB/s even at PCIe 1.x rates.
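
A back-of-the-envelope check (the per-GPU traffic figure is a deliberately generous guess, not a measurement):

Code:
# Approximate usable bandwidth of a single PCIe x1 link, MB/s.
PCIE_X1_MBPS = {"1.x": 250, "2.0": 500, "3.0": 985}

# Invented-but-generous mining traffic per GPU: work units in,
# solved shares out, plus driver chatter.
traffic_mbps = 10.0

for gen, bw in PCIE_X1_MBPS.items():
    print(f"PCIe {gen} x1: ~{traffic_mbps:.0f} of {bw} MB/s used "
          f"({100 * traffic_mbps / bw:.0f}%)")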

Which is why I thought of it. I also wondered about adapter cards that simply provide 3-4 PCIe x1 slots in a single x16. No-go; couldn't find any.

I don't know of a USB 3 controller that could handle, say, 10 separate risers.

Who needs it? And it wouldn't work anyway: a single controller would be bandwidth-limited across 10 cards. It could do five, though. There are PCIe x16 to 5-port USB 3.1 cards, so three such cards in three slots provide 15 risers (not many mobos have 4+ full-bandwidth PCIe 3.0 x16 slots), plus the onboard USB3 ports, if any. That's three or four separate controllers at 5-10 Gb/s each, depending on generation.
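
For scale, here's the per-riser share of a single USB 3.0 controller (5 Gb/s, ignoring protocol overhead); whether a given split is "enough" depends on the algo's real traffic:

Code:
# One USB 3.0 controller's bandwidth, shared by every riser behind it.
controller_mbps = 5.0 * 1000 / 8  # 5 Gb/s ≈ 625 MB/s

for risers in (5, 10, 15):
    print(f"{risers} risers on one controller: "
          f"~{controller_mbps / risers:.0f} MB/s each")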

Thunderbolt is a great technology.
This works well for PCIe expansion over Thunderbolt.
http://www.sonnettech.com/product/echoexpressse2.html

That's very interesting. Alienware makes a similar unit, with an inbuilt PSU big enough to handle _real_ video cards. ;-)

But ultimately we'd want to gut the case, because one bigger PSU powering more cards is more energy-efficient.

And we still don't actually need the bandwidth of Thunderbolt. USB3 is more than enough and considerably less expensive. I can't find the link now, but while searching around last night I saw a header in the AMD forums suggesting they're working on USB-interface cards specifically for use as math coprocessors. That's what altcoin miners use GPUs for anyway, but they're also used in render farms, weather centers, and anywhere else that needs more floating-point operations per second (FLOPS) than CPUs can provide.
sr. member
Activity: 307
Merit: 250

I don't know of a USB 3 controller that could handle, say, 10 separate risers.

But I think it could be done with the right Thunderbolt connections, since motherboard-to-Thunderbolt-to-PCIe adapters do exist.

And a Thunderbolt cable can handle a lot of data quickly.

BTW, I like the way you think.

Thunderbolt is a great technology.
This works well for PCIe expansion over Thunderbolt.
http://www.sonnettech.com/product/echoexpressse2.html

We're using it in a system integration at work, connected to a laptop.

legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
USB risers are not using the USB protocol; they just use the cable, since it's durable and works easily even over distance (unlike ribbon cable).

GPU bandwidth doesn't matter in terms of the riser, because the GPU isn't pushing all the calculation data back through the riser to the system; it only sends back what it solved from the huge amount of data.

Which is why a PCIe x1 link (what USB risers carry) is more than enough for virtually every algo, at roughly 250 MB/s even at PCIe 1.x rates.



A lot of miner software requires quite some CPU power per card, depending on the algo, so having 12 cards in a rig might be problematic (unless you spend a lot on a CPU).

Also, for Nvidia at least, you have to have enough RAM and/or pagefile to cover the memory of all the cards that will be in use, combined. That can also get problematic.
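
As a concrete sizing example (card count, per-card commit, and system overhead are assumed figures; actual allocations vary by coin and driver):

Code:
cards = 12
commit_per_card_gb = 8  # assumed VRAM the miner commits per 8 GB card
system_gb = 8           # assumed OS + miner working set

# With Nvidia on Windows, GPU allocations are typically backed by
# system commit charge, so RAM + pagefile must cover all cards at once.
print(f"RAM + pagefile needed: ~{cards * commit_per_card_gb + system_gb} GB")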

And if a card, riser, or cable is misbehaving and crashing your rig, good luck finding it in a 12-card rig.


With all that said, if it worked without issues (didn't need a specialized miner, etc.), it could be a game changer.
hero member
Activity: 672
Merit: 500

As I look around at the powered PCIe x16 risers and note the USB3 connector on so many...

*facepalm*
legendary
Activity: 1848
Merit: 1165
My AR-15 ID's itself as a toaster. Want breakfast?
You have to factor in whether it is USB 3.0 (Gen 1) or USB 3.1 (Gen 2).

I am confused, in a thread of my own, about how a GTX 1080 that has 256 GB/s of bandwidth without overclocking isn't bottlenecked by the PCIe bandwidth limit. USB 3.1 only does 10 Gb/s.
If I'm not mistaken, that 256 GB/s is memory bandwidth between the GPU and its own RAM, not the bus speed at the PCIe slot. Per-lane PCIe rates have been relatively stable for many years; mostly, more lanes are bundled together to increase how much data can be sent at once.
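
Putting those numbers side by side (the PCIe and USB figures are approximate per-direction rates):

Code:
gpu_mem_gbs = 256      # GPU<->VRAM figure quoted above, GB/s
pcie3_x16_gbs = 15.75  # PCIe 3.0 x16, ~GB/s per direction
usb31g2_gbs = 10 / 8   # 10 Gb/s line rate -> 1.25 GB/s

print(f"VRAM bandwidth is ~{gpu_mem_gbs / pcie3_x16_gbs:.0f}x a PCIe 3.0 x16 link")
print(f"and ~{gpu_mem_gbs / usb31g2_gbs:.0f}x a USB 3.1 Gen 2 link.")
# No contradiction: the 256 GB/s stays on the card; only work units
# and results cross the host link, so a narrow link isn't the bottleneck.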
sr. member
Activity: 445
Merit: 255
I think the main issue is the interface itself, due to bandwidth and programming needs.

Certainly, utilizing another standard could bring other benefits, but moving away from the PCIe standard is a risky, and complex, move. It's probably just premature.
sr. member
Activity: 420
Merit: 250
You have to factor in whether it is USB 3.0 (Gen 1) or USB 3.1 (Gen 2).

I am confused, in a thread of my own, about how a GTX 1080 that has 256 GB/s of bandwidth without overclocking isn't bottlenecked by the PCIe bandwidth limit. USB 3.1 only does 10 Gb/s.
full member
Activity: 154
Merit: 100
I'm building a GPU miner, in my head at the moment. The thought experiment is this: maximize how much of the electricity goes to the GPUs by minimizing the non-GPU components in a mining rig. Think one mobo/RAM/SSD/CPU/etc. set of 100+ combined watts (depending on actual components) driving 30 or more GPUs via the built-in USB3 bus(es) and multiport USB3 PCIe x16 cards. Each bank of five or so GPUs would need a separate PSU, sure, so this theoretical rig would have to be powered from several different household circuits. I was thinking of using a 30 A three-phase 240 VAC feed (essentially an appliance circuit) and splitting each phase into two 15 A circuits.

As I look around at the powered PCIe x16 risers and note the USB3 connector on so many, I can't help but wonder why the PCIe x1 connector is needed at all. USB3 signaling (5-10 Gb/s) is faster than the 2.5 GT/s of a first-generation x1 lane. Since we're not trying to actually display anything from these cards, why do they need to be on the PCI bridge?

Okay, now I'm donning the flameproof anorak. Fire away. Explain why this won't work or, even better, prove to me that it will!


I think this would be an awesome idea; if someone could accomplish it, it could be a game changer. I agree: if it's only using x1, there's no need for the PCIe lanes. USB graphics adapters are already out there; someone could just write the code to run this over USB.
legendary
Activity: 4116
Merit: 7849
'The right to privacy matters'
I'm building a GPU miner, in my head at the moment. The thought experiment is this: maximize how much of the electricity goes to the GPUs by minimizing the non-GPU components in a mining rig. Think one mobo/RAM/SSD/CPU/etc. set of 100+ combined watts (depending on actual components) driving 30 or more GPUs via the built-in USB3 bus(es) and multiport USB3 PCIe x16 cards. Each bank of five or so GPUs would need a separate PSU, sure, so this theoretical rig would have to be powered from several different household circuits. I was thinking of using a 30 A three-phase 240 VAC feed (essentially an appliance circuit) and splitting each phase into two 15 A circuits.

As I look around at the powered PCIe x16 risers and note the USB3 connector on so many, I can't help but wonder why the PCIe x1 connector is needed at all. USB3 signaling (5-10 Gb/s) is faster than the 2.5 GT/s of a first-generation x1 lane. Since we're not trying to actually display anything from these cards, why do they need to be on the PCI bridge?

Okay, now I'm donning the flameproof anorak. Fire away. Explain why this won't work or, even better, prove to me that it will!

I don't know of a USB 3 controller that could handle, say, 10 separate risers.

But I think it could be done with the right Thunderbolt connections, since motherboard-to-Thunderbolt-to-PCIe adapters do exist.

And a Thunderbolt cable can handle a lot of data quickly.

BTW, I like the way you think.
member
Activity: 79
Merit: 36
HODL. Patience.
I'm building a GPU miner, in my head at the moment. The thought experiment is this: maximize how much of the electricity goes to the GPUs by minimizing the non-GPU components in a mining rig. Think one mobo/RAM/SSD/CPU/etc. set of 100+ combined watts (depending on actual components) driving 30 or more GPUs via the built-in USB3 bus(es) and multiport USB3 PCIe x16 cards. Each bank of five or so GPUs would need a separate PSU, sure, so this theoretical rig would have to be powered from several different household circuits. I was thinking of using a 30 A three-phase 240 VAC feed (essentially an appliance circuit) and splitting each phase into two 15 A circuits.

As I look around at the powered PCIe x16 risers and note the USB3 connector on so many, I can't help but wonder why the PCIe x1 connector is needed at all. USB3 signaling (5-10 Gb/s) is faster than the 2.5 GT/s of a first-generation x1 lane. Since we're not trying to actually display anything from these cards, why do they need to be on the PCI bridge?

Okay, now I'm donning the flameproof anorak. Fire away. Explain why this won't work or, even better, prove to me that it will!
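
For the circuit-splitting math, a rough sketch (the voltage per leg, the 80% continuous-load derate, and the per-GPU wattage are all assumptions for illustration):

Code:
# Three phases, each split into two 15 A circuits -> six circuits.
circuits = 3 * 2
volts, amps, derate = 120, 15, 0.8  # assumed 120 V legs, 80% derate
gpu_w, overhead_w = 250, 100        # assumed per-GPU draw; one mobo/CPU/SSD

per_circuit_w = volts * amps * derate  # 1440 W usable per circuit
total_w = circuits * per_circuit_w     # 8640 W across the feed
print(f"{per_circuit_w:.0f} W per circuit, {total_w:.0f} W total, "
      f"~{int((total_w - overhead_w) / gpu_w)} GPUs at {gpu_w} W each")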