Topic: Passively splitting a single PCI-E 16x slot into 16 PCI-E 1x slots (Read 27739 times)

full member
Activity: 213
Merit: 100
If you Google for "diy vidock x2" you'll come across threads in which folks have combined 2 pcie x1 lanes into a single x2 connection for the purpose of connecting external videocards to laptops.  If they can do that, then surely we can do the reverse, right?  Smiley

Halfway down here: https://forum.lowyat.net/topic/2019705/all you can see a photo in which someone has cables plugged into two mini pcie connectors on a laptop motherboard.
sr. member
Activity: 406
Merit: 250
Linux has a hard limit of 8 cards. I see reports of people using 13 on Windows.

Where did you see these reports? Thanks

Edit: I found it.

http://devgurus.amd.com/thread/158863

The original source is currently unavailable, but I have located it on the Web Archive.

http://web.archive.org/web/20130429065445/http://fastra2.ua.ac.be/
hero member
Activity: 708
Merit: 502
Linux has a hard limit of 8 cards. I see reports of people using 13 on Windows.

Where did you see these reports? Thanks
sr. member
Activity: 406
Merit: 250
I can see where having a Y-cable adapter, maybe with an inline chip, would be a great thing, because it would free us from the limited selection of motherboards that allow 4 or 6 GPUs.

However, a main problem is that the Windows OS + card drivers currently seem to limit the number of popular GPU cards to 6.


Linux has a hard limit of 8 cards. I see reports of people using 13 on Windows.
full member
Activity: 132
Merit: 100
I can see where having a Y-cable adapter, maybe with an inline chip, would be a great thing, because it would free us from the limited selection of motherboards that allow 4 or 6 GPUs.

However, a main problem is that the Windows OS + card drivers currently seem to limit the number of popular GPU cards to 6.
legendary
Activity: 952
Merit: 1000
Dude, your avatar kinda freaks me out.

On the other note - what's the point of "13 GPUs" if your power supply cannot handle it?
Or will you have a multi-PSU setup as well?
You can get large PSUs that can handle lots of cards.
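
Rough numbers, just to put the PSU question in perspective. The per-card wattage and overhead below are my own assumptions for illustration, not measured figures:

Code:
# Back-of-envelope power budget. The per-card wattage and overhead are
# assumptions for illustration -- adjust for whatever cards you actually run.
CARDS = 13
WATTS_PER_CARD = 200      # rough figure for a mid-range mining card
OVERHEAD = 150            # CPU, motherboard, fans, risers

total = CARDS * WATTS_PER_CARD + OVERHEAD
print("Estimated draw: %d W" % total)                 # ~2750 W
print("1200 W PSUs needed: %d" % -(-total // 1200))   # ceiling division -> 3

So even with big supplies, at the full 13 cards you'd probably still end up running more than one of them.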
cp1
hero member
Activity: 616
Merit: 500
Stop using brainwallets
It looks like it's passive, as in it doesn't need external power.  But it's obviously not as simple as a Y-connector.
legendary
Activity: 2128
Merit: 1120
It's way too expensive but I like the idea.
sr. member
Activity: 406
Merit: 250
Dude, your avatar kinda freaks me out.

On the other note - what's the point of "13 GPUs" if your power supply cannot handle it?
Or will you have a multi-PSU setup as well?

Obviously.

And try to stop looking at it. Tongue
hero member
Activity: 980
Merit: 500
FREE $50 BONUS - STAKE - [click signature]
Dude, your avatar kinda freaks me out.

On the other note - what's the point of "13 GPUs" if your power supply cannot handle it?
Or will you have a multi-PSU setup as well?
sr. member
Activity: 406
Merit: 250
The problem is: does the Windows operating system recognise 10, 16 or more GPUs?

I'm hearing reports of Windows running with up to 13 GPUs. I have also heard that Linux (which version or distro?) has a hard limit of 8 GPUs.

Subject to support in the BIOS or by the motherboard, I suppose.
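
For what it's worth, here's a rough way to check how many cards the OS has actually enumerated on a Linux box. It just walks sysfs and counts display-class PCI devices; treating base class 0x03 as "GPU" is my assumption, not gospel:

Code:
#!/usr/bin/env python3
# Rough sketch: count display-class PCI devices the Linux kernel has
# enumerated, i.e. how many GPUs the OS actually sees.
import os

PCI_DEVICES = "/sys/bus/pci/devices"

gpus = []
for dev in os.listdir(PCI_DEVICES):
    with open(os.path.join(PCI_DEVICES, dev, "class")) as f:
        pci_class = int(f.read().strip(), 16)
    if (pci_class >> 16) == 0x03:   # base class 0x03 = display controller
        gpus.append(dev)

print("Display-class devices enumerated: %d" % len(gpus))
for addr in sorted(gpus):
    print("  " + addr)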
legendary
Activity: 1134
Merit: 1005
The problem is: does the Windows operating system recognise 10, 16 or more GPUs?
virtualization.
hero member
Activity: 896
Merit: 1000
The problem is: does the Windows operating system recognise 10, 16 or more GPUs?
legendary
Activity: 1260
Merit: 1000
Drunk Posts
I have no idea. But maybe. Too expensive to be worth it, though.

Silly question: why does it have to be passive?

It doesn't have to be passive. It's just that if it can be passive, it will allow simple wiring of riser cables to work.

It's not completely passive. A PCI-E x16 slot has 16 data lanes, but only one set of control signals. Each x1 slot requires its own set of control signals.

See pinouts:

http://pinouts.ru/Slots/pci_express_pinout.shtml

I see. What is it for, anyway? Keeping timing in sync? And if so, is it not possible to share the "control signals"?

I'm not completely certain... SMBus signals can be shared via a simple splitter, and I'm not sure if JTAG is even necessary, but the reference clock is going to need some circuitry to split it, and something probably needs to be done about the presence-detect signals. I haven't read the PCI-E spec, so I'm going off pinouts and some guesses only.
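
To make the guesswork a bit more concrete, here is how the slot signals group up as I read the pinout linked above. This is my reading of pinouts.ru, not the actual PCI-E spec, so treat the grouping as a guess:

Code:
# How the x16 slot signals group up, as I read the pinout linked above.
# NOT taken from the PCI-E spec -- a guess based on pinouts.ru only.

# Scales with lane count: every one of the 16 lanes has its own pairs.
PER_LANE = [
    "PETp/PETn (transmit differential pair)",
    "PERp/PERn (receive differential pair)",
]

# Present once per slot: the "control signals" a dumb Y-splitter would
# have to share or regenerate for every child slot.
PER_SLOT = [
    "REFCLK+/REFCLK- (100 MHz reference clock -- needs a clock buffer to fan out)",
    "PERST# (fundamental reset)",
    "PRSNT1#/PRSNT2# (card presence detect)",
    "WAKE# (power management wake)",
    "SMCLK/SMDAT (SMBus -- a multi-drop bus, so shareable with a simple splitter)",
    "TCK/TDI/TDO/TMS/TRST# (JTAG -- often unconnected on consumer boards)",
]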
sr. member
Activity: 406
Merit: 250
It's not passive; there are microcontrollers on the board.


It's listed as passive. Perhaps the microcontroller isn't important. Anyway, someone else here showed another one that is fully passive.
legendary
Activity: 2058
Merit: 1431
It's not passive; there are microcontrollers on the board.
legendary
Activity: 1344
Merit: 1004
Why do you need a powered riser? NO! NO! NO!

Besides, the adapter already has a PCI-E power plug for supplemental power. There's your power right there.
sr. member
Activity: 406
Merit: 250
I have no idea. But maybe. Too expensive to be worth it, though.

Silly question: why does it have to be passive?

It doesn't have to be passive. It's just that if it can be passive, it will allow simple wiring of riser cables to work.

It's not completely passive. A PCI-E x16 slot has 16 data lanes, but only one set of control signals. Each x1 slot requires its own set of control signals.

See pinouts:

http://pinouts.ru/Slots/pci_express_pinout.shtml

I see. What is it for, anyway? Keeping timing in sync? And if so, is it not possible to share the "control signals"?
legendary
Activity: 952
Merit: 1000
legendary
Activity: 952
Merit: 1000
Silly question: why does it have to be passive?
legendary
Activity: 1260
Merit: 1000
Drunk Posts
It's not completely passive. A PCI-E x16 slot has 16 data lanes, but only one set of control signals. Each x1 slot requires its own set of control signals.

See pinouts:

http://pinouts.ru/Slots/pci_express_pinout.shtml
sr. member
Activity: 406
Merit: 250
Yeah. If this could be mass-manufactured, it could cost pennies per unit and be sold at reasonable prices for quite a big profit.
legendary
Activity: 1582
Merit: 1002
HODL for life.
I always wanted to try this, or even the PCI-E risers for servers.  I'm surprised that, with all the need for adding as many cards as possible to a board, nobody has engineered a cheap solution with something like 16 ports.  I've seen the rackmount PCI-E servers for GPU processing, but they are redonkulously expensive.
sr. member
Activity: 406
Merit: 250
YEAH! *gets enthused*

Except I don't have the money to buy a bunch of risers. Tongue
hero member
Activity: 503
Merit: 500
Only one way to find out Grin.
sr. member
Activity: 406
Merit: 250
Hey guys.

I've been looking around, and I've found these adapters that seem to split a single PCI-E 16x slot into two PCI-E 8x slots.

They look like this.


As you can see, on the bottom is a standard PCI-E 16x slot, and on the top are two PCI-E 16x slots that run at 8x speed.

They're fairly expensive (80 bucks on eBay), but I noticed one VERY IMPORTANT thing that they say about it.



The adapter is PASSIVE! If you look at the photo, there doesn't seem to be much circuitry on the board!

This leads me to believe that the PCI-E spec allows for passive splitting of lanes into at least two slower-speed lanes. The only problem is that normally, this is mechanically impossible. But what if you were to buy a couple of these...


http://www.ebay.com/itm/Powered-PCI-E-16x-16x-Riser-Cable-with-Molex-/111078375273?pt=LH_DefaultDomain_0&hash=item19dcc97f69

And then cut up the cable, and re-wire it to split one slot into two or more lower speed slots?

Once the power draw problems are resolved, I think it would be EPIC for running a crapton of mining cards on a single system at very low cost. But only if the PCI-E spec actually allows it.
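
If anyone actually buys one of these adapters, one way to see what the cards negotiate behind it would be to read the link width out of sysfs on a Linux box. A rough sketch, assuming the standard sysfs link-width attributes; nothing official:

Code:
#!/usr/bin/env python3
# Rough sketch: print negotiated vs. maximum PCI-E link width for each
# display-class device, to see what width a card behind a splitter trained at.
import os

PCI_DEVICES = "/sys/bus/pci/devices"

def read_attr(dev, name):
    try:
        with open(os.path.join(PCI_DEVICES, dev, name)) as f:
            return f.read().strip()
    except OSError:
        return None

for dev in sorted(os.listdir(PCI_DEVICES)):
    pci_class = read_attr(dev, "class")
    if pci_class is None or (int(pci_class, 16) >> 16) != 0x03:
        continue   # keep only display controllers
    cur = read_attr(dev, "current_link_width")
    cap = read_attr(dev, "max_link_width")
    if cur is None:
        continue   # no PCI-E link attributes exposed for this device
    print("%s: running x%s (card supports up to x%s)" % (dev, cur, cap))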

What do you think? Does anyone know? I haven't seen anything like this done before, so I don't know how well it would work.

~Re