Topic: 16x extender cable problem in MSI BB Marshal motherboard (Read 3142 times)

newbie
Activity: 28
Merit: 0
Good point. Even so, handling bandwidth requests from 8 different slots simultaneously would surely add some sort of lag.
A motherboard isn't an infinite-lane highway.
hero member
Activity: 602
Merit: 500
I never implied his 8-card implementation would work in Linux.
I'd like him to test it just to see, personally.
However, with 8 cards over PCI-E, that's a total aggregate bandwidth of 128 GB/s,
more than double what AMD's HyperTransport and Intel's QuickPath Interconnect can currently do.

If there's some magical chip on a motherboard that can handle aggregate bandwidth that high, by all means, let me know.

Well, again, as I mentioned, people are reporting full hash rates using x1 lanes. An x1 lane has a maximum bandwidth of about 1 GB/s (PCI-E 2.0, both directions combined), so unless there are some magical x1 lanes that can carry x16 bandwidth, or cards somehow request the full bandwidth of their PCI-E slot for no reason other than that it's available, bandwidth is not a concern. (Rough numbers below.)
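
To put rough numbers on that (a back-of-the-envelope sketch in Python, not anything measured in this thread; the hash rate, per-launch work size, and per-launch transfer size are all assumptions):

    hash_rate = 400e6            # ~400 Mhash/s for one 5870 (assumed)
    nonces_per_launch = 2**22    # nonces tried per kernel launch (assumed)
    bytes_per_launch = 1024      # params in + results out, ~1 KB (assumed)
    x1_bytes_per_sec = 500e6     # PCI-E 2.0 x1, one direction, ~500 MB/s

    launches_per_sec = hash_rate / nonces_per_launch
    traffic = launches_per_sec * bytes_per_launch   # bytes/s over the bus
    print(f"{launches_per_sec:.0f} launches/s -> {traffic / 1e3:.0f} KB/s")
    print(f"share of an x1 link: {traffic / x1_bytes_per_sec:.4%}")

That comes out near 100 KB/s per card, a vanishingly small fraction of even an x1 link, which squares with the full-hash-rate reports.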
newbie
Activity: 28
Merit: 0
I never implied his 8-card implementation would work in Linux.
I'd like him to test it just to see, personally.
However, with 8 cards over PCI-E, that's a total aggregate bandwidth of 128 GB/s,
more than double what AMD's HyperTransport and Intel's QuickPath Interconnect can currently do.

If there's some magical chip on a motherboard that can handle aggregate bandwidth that high, by all means, let me know.
hero member
Activity: 602
Merit: 500
I'm almost positive that I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.

http://forums.nvidia.com/index.php?showtopic=97478 claims otherwise. I have no first-hand experience, so I can't say directly, but I see no reason why Windows would set such a limit.

/agreed on the possibility of destroying the FSB, though. Consider an externally powered PCI-E riser or something.

That forum post on Nvidia.com is about 4 dual-GPU cards.
OP is trying to use 8 single-GPU cards.

The difference is...
They only use 4 PCI-E slots;
Dishwara is trying to literally use 8 PCI-E slots.

Big difference in practical application.

I'm not talking about practical application, I'm talking about software limitations. 4 dual-GPU cards still act as 8 physical GPUs. Are you saying that if I put 4 GPUs in my Windows machine and then bought a PCI-E RAID controller, Windows would not be able to recognize the 5th PCI-E device?

I don't even think it's 100% a software limitation.
It may very well be a hardware limitation, because of the amount of bandwidth needed/available to run that many devices over PCI-E.

I find the bandwidth argument difficult to accept: why would it work under Linux but not Windows if there were a physical limitation? Moreover, many users report 100% hash rates running cards via x1 PCI-E slots, which should mean you could theoretically run more than 16 cards. (A quick enumeration test is sketched below.)
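
One cheap way to settle the software-limit question (a sketch, assuming the pyopencl bindings and a working OpenCL driver are installed; nothing here is from the thread): ask the runtime how many GPUs it actually enumerates on the same box under Windows and under Linux.

    import pyopencl as cl

    for platform in cl.get_platforms():
        try:
            gpus = platform.get_devices(device_type=cl.device_type.GPU)
        except cl.RuntimeError:
            # some drivers raise instead of returning an empty list
            gpus = []
        print(f"{platform.name}: {len(gpus)} GPU(s)")
        for dev in gpus:
            print(f"  - {dev.name}")

If Windows prints fewer devices than Linux on identical hardware, the cap is in the driver stack, not the bus.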
newbie
Activity: 28
Merit: 0
I'm almost positive that I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.

http://forums.nvidia.com/index.php?showtopic=97478 claims otherwise. I have no first-hand experience, so I can't say directly, but I see no reason why Windows would set such a limit.

/agreed on the possibility of destroying the FSB, though. Consider an externally powered PCI-E riser or something.

That forum post on Nvidia.com is about 4 dual-GPU cards.
OP is trying to use 8 single-GPU cards.

The difference is...
They only use 4 PCI-E slots;
Dishwara is trying to literally use 8 PCI-E slots.

Big difference in practical application.

I'm not talking about practical application, I'm talking about software limitations. 4 dual-GPU cards still act as 8 physical GPUs. Are you saying that if I put 4 GPUs in my Windows machine and then bought a PCI-E RAID controller, Windows would not be able to recognize the 5th PCI-E device?

I don't even think it's 100% a software limitation.
It may very well be a hardware limitation, because of the amount of bandwidth needed/available to run that many devices over PCI-E.
hero member
Activity: 602
Merit: 500
I'm almost positive that I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.

http://forums.nvidia.com/index.php?showtopic=97478 claims otherwise. I have no first-hand experience, so I can't say directly, but I see no reason why Windows would set such a limit.

/agreed on the possibility of destroying the FSB, though. Consider an externally powered PCI-E riser or something.

That forum post on Nvidia.com is about 4 dual-GPU cards.
OP is trying to use 8 single-GPU cards.

The difference is...
They only use 4 PCI-E slots;
Dishwara is trying to literally use 8 PCI-E slots.

Big difference in practical application.

I'm not talking about practical application, I'm talking about software limitations. 4 dual-GPU cards still act as 8 physical GPUs. Are you saying that if I put 4 GPUs in my Windows machine and then bought a PCI-E RAID controller, Windows would not be able to recognize the 5th PCI-E device?
newbie
Activity: 28
Merit: 0
I just can't see, in a setup like that, how you'd avoid a point where one or more cards sits idle waiting for free bandwidth on the bus to transfer its data, causing internal lag/latency.
newbie
Activity: 28
Merit: 0
I am powering the cards from the PSU itself: four PCI-E 6/8-pin connectors attach directly to the PSU, and two more cables were included, each with one PCI-E 6-pin and one PCI-E 6+2-pin connector.
So, with one power supply, I connect 4 cards that all draw power from the PSU itself.

Regardless of that, PCI-E slots still provide some power to the cards, whether they're externally powered or not.
legendary
Activity: 1855
Merit: 1016
I am powering the cards from the PSU itself: four PCI-E 6/8-pin connectors attach directly to the PSU, and two more cables were included, each with one PCI-E 6-pin and one PCI-E 6+2-pin connector.
So, with one power supply, I connect 4 cards that all draw power from the PSU itself.
newbie
Activity: 28
Merit: 0
I am using two Cooler Master Silent Pro Gold 1200 W units, giving me a total of 2400 W.
Also, I don't understand how it will blow the FSB?

Motherboards provide power over the PCI-E slots to whatever's plugged into them.
Up to 75 W per slot.

75 W × 8 = 600 W

If your motherboard tries to pull 600 W through its slots, it'll probably fry internal traces/blow out the bus.


Also, a PCI-E 2.0 x16 slot can provide 128 Gb/s of bandwidth:
it has 16 lanes at 8 Gb/s per lane (counting both directions).

Let's say you run every slot at its maximum...
128 Gb/s × 8 = 1024 Gb/s
That's over a Tb/s.

Do you think your motherboard can internally handle data transfers that high?
Granted, that's only if you're loading these cards 100%, which hashing doesn't really do, but running all eight hashing simultaneously will still pull some impressive bandwidth numbers; the arithmetic is spelled out below.
Possibly high enough to blow out the bus, stall the system, or crash it.
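
For anyone who wants to redo those figures (plain Python; the 75 W and per-lane rates are PCI-E spec ceilings, not measured draw):

    slots = 8
    slot_power_w = 75        # max power a PCI-E x16 slot must supply (spec)
    lanes = 16
    lane_gbps = 8            # PCI-E 2.0, per lane, both directions combined

    per_slot_gbps = lanes * lane_gbps        # 128 Gb/s
    aggregate_gbps = slots * per_slot_gbps   # 1024 Gb/s
    print(f"worst-case slot power: {slots * slot_power_w} W")
    print(f"per slot: {per_slot_gbps} Gb/s, all 8: {aggregate_gbps} Gb/s"
          f" (= {aggregate_gbps // 8} GB/s)")

Note that 1024 Gb/s is the same 128 GB/s aggregate quoted earlier in the thread; the two posts just use different units.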
legendary
Activity: 1855
Merit: 1016
I am using two Cooler Master Silent Pro Gold 1200 W units, giving me a total of 2400 W.
Also, I don't understand how it will blow the FSB?
newbie
Activity: 28
Merit: 0
Dishwara, switch to Linux.
If this is going to be a dedicated machine for shareholders, you'll be much better off.
Secondly...
How in the hell are you powering that behemoth?
newbie
Activity: 28
Merit: 0
I'm almost positive that I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.

http://forums.nvidia.com/index.php?showtopic=97478 claims otherwise. I have no first-hand experience, so I can't say directly, but I see no reason why Windows would set such a limit.

/agreed on the possibility of destroying the FSB, though. Consider an externally powered PCI-E riser or something.

That forum post on Nvidia.com is about 4 dual-GPU cards.
OP is trying to use 8 single-GPU cards.

The difference is...
They only use 4 PCI-E slots;
Dishwara is trying to literally use 8 PCI-E slots.

Big difference in practical application.
member
Activity: 126
Merit: 60
I can see that filling all 8 PCI-E slots would require tremendous power. Probably enough to trip a 15-20 amp breaker in a standard home. (Quick math below.)
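
Rough math in support of that (a sketch; the card TDP, system overhead, and PSU efficiency are my assumptions, not measurements):

    cards = 8
    card_w = 188             # HD 5870 board power at full load (approx.)
    rest_w = 300             # CPU, board, drives, fans (assumed)
    psu_eff = 0.87           # Gold-rated PSU efficiency (assumed)
    mains_v = 120            # typical North American household circuit

    wall_w = (cards * card_w + rest_w) / psu_eff
    print(f"~{wall_w:.0f} W at the wall -> ~{wall_w / mains_v:.1f} A")

That lands around 2100 W and 17 A: past a 15 A breaker and uncomfortably close to a 20 A one.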

full member
Activity: 154
Merit: 100
In fact, if this is a dedicated rig with shareholders, wtf are you even doing thinking about using Windows?

Or making mining as expensive and complicated as possible by using a Big Bang Marshal?  Huh
hero member
Activity: 602
Merit: 500
I'm almost positive that I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.

http://forums.nvidia.com/index.php?showtopic=97478 claims otherwise. I have no first-hand experience, so I can't say directly, but I see no reason why Windows would set such a limit.

/agreed on the possibility of destroying the FSB, though. Consider an externally powered PCI-E riser or something.
hero member
Activity: 927
Merit: 1000
฿itcoin ฿itcoin ฿itcoin
8 5870s on that mobo and you're running the risk of blowing the FSB.  Shocked

Also, Windows is supposedly capped at 4 GPUs; use Linux, especially if this is a dedicated rig paid for by shareholders.
sr. member
Activity: 378
Merit: 250
I'm almost positive that I read in another forum post that Windows is capped at 4 GPUs. If you want 8, you'll need to go with Linux.
legendary
Activity: 1855
Merit: 1016
My motherboard is http://www.msi.com/product/mb/Big-Bang-Marshal--B3-.html

It has 8 PCI-E slots. The cards I use to mine are 5870s, no CrossFire.
Windows 7 32-bit, using Catalyst 11.5 and the latest APP SDK 2.4 drivers.
One monitor; the other cards have dummy connectors.

I connected one card to the 1st slot (pcie_1) using a 16x extender cable, and connected another card to the next slot using a 1x-16x extender cable.
Both worked fine.
If I connect a card directly to the board it blocks the 2nd slot as well, so I have to use cables.

I connected the 3rd card to the pcie_3 slot using a 1x-16x extender cable; Windows didn't detect the card, so I tried another cable, and Windows still didn't detect it.
After some thought I used a 16x extender cable to connect the 3rd card, and Windows detected it.

After some time mining, I tried the 1x-16x extender cable again; Windows didn't detect the card.

It seems I have to alternate cable types across the slots for Windows to detect all the cards:

pcie_1, pcie_3, pcie_5, pcie_7 with 16x extender cables, and
pcie_2, pcie_4, pcie_6, pcie_8 with 1x-16x extender cables, to connect eight 5870 cards.

Am I right?
Why won't Windows detect the card in the alternate slots?
I don't have any more cards to try, nor any more 16x extender cables.

The motherboard has 4 switches to turn the PCI-E slots on and off, labeled PGE1 through PGE4.
Switching PGE2 and the others on/off has no effect.

The rig is not my own; it's my shareholders' mining rig, and I can't take risks.

Please, someone give me advice on this odd behavior.
Will the same problem happen in Linux?
Will I really be able to run 8 cards by alternating cables across the slots? (A quick way to list what Windows has detected is sketched below.)

Thanks in advance.
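
For what it's worth, one quick way to see which cards Windows has actually enumerated after each cable swap (a sketch calling the stock wmic tool from Python; it assumes a reasonably recent Python interpreter is installed, and the exact output format varies by driver):

    import subprocess

    # List every display adapter Windows currently knows about.
    out = subprocess.check_output(
        ["wmic", "path", "win32_VideoController", "get", "Name,Status"],
        text=True,
    )
    print(out)   # each detected 5870 should show up as its own row

If a card is missing from that list after a swap, Windows never enumerated it, which points at the riser/slot combination rather than the mining software.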