Topic: 8 GPU limit linuxcoin (Read 4626 times)

vip
Activity: 574
Merit: 500
Don't send me a pm unless you gpg encrypt it.
November 10, 2011, 12:33:24 PM
#29
The 32bit claim is false. I eventually got 5 GPUs working on 32-bit 11.04 Ubuntu Linux, without any hacking of the ATI drivers or APP SDK.

That's correct, didn't catch that before.  Both the 32bit and 64bit drivers should support at least 8 GPUs currently.

The driver's interface with the OS layer may use 32-bit or 64-bit addressing depending on the OS, but the binary bits of the driver are likely identical otherwise.  The newer Radeon HDs actually use a 256-bit-wide interface to their own memory internally; the exchange with the OS is all serial communication (PCIe is serial), so addressing the GPU components from the OS level is just a matter of sending the right hardware address + commands.

It could just as easily be a 16-bit OS addressing an arbitrary number of GPUs, as long as the driver handles the interfacing with the OS properly.

Awesome.  Diablo3d was the one that gave me my info, but I could have understood it wrong.  Good to know.
full member
Activity: 196
Merit: 100
November 10, 2011, 10:44:35 AM
#28
The 32bit claim is false. I eventually got 5 GPUs working on 32-bit 11.04 Ubuntu Linux, without any hacking of the ATI drivers or APP SDK.

That's correct, didn't catch that before.  Both the 32bit and 64bit drivers should support at least 8 GPUs currently.

The driver's interface with the OS layer may use 32-bit or 64-bit addressing depending on the OS, but the binary bits of the driver are likely identical otherwise.  The newer Radeon HDs actually use a 256-bit-wide interface to their own memory internally; the exchange with the OS is all serial communication (PCIe is serial), so addressing the GPU components from the OS level is just a matter of sending the right hardware address + commands.

It could just as easily be a 16-bit OS addressing an arbitrary number of GPUs, as long as the driver handles the interfacing with the OS properly.
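
For anyone who wants to check how many GPUs the driver is actually exposing to OpenCL (as opposed to how many cards lspci sees), a minimal enumeration sketch along these lines should do it. This is only an illustration, assuming the AMD APP SDK (or any other OpenCL ICD) headers and library are installed; the file name and build line are made up.

Code:
/* gpu_count.c - rough sketch: ask each OpenCL platform how many GPU
 * devices the driver exposes.  Build with something like:
 *   gcc gpu_count.c -lOpenCL -o gpu_count
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS
        || num_platforms == 0) {
        fprintf(stderr, "no OpenCL platforms found\n");
        return 1;
    }
    if (num_platforms > 8)   /* only the first 8 fit in our buffer */
        num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; p++) {
        char name[256] = "";
        cl_uint num_gpus = 0;

        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);
        /* Count GPU devices only; this is the number the miner will see,
         * whatever lspci reports. */
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 0, NULL, &num_gpus);
        printf("platform %u (%s): %u GPU device(s)\n", p, name, num_gpus);
    }
    return 0;
}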
legendary
Activity: 1876
Merit: 1000
November 09, 2011, 09:38:13 PM
#27
I run several 5850 systems with 4-7 of them on one board without problems under Win7 x64 (Home Premium and Professional)
Catalyst 11.4 - 11.10rc

I'd stay away from installing CCC at all costs. When going beyond 4 GPUs there is a problem with CCC that AMD is not attending to, resulting in BSODs. Install the drivers but stay away from CCC.

May I ask how you know the BSOD is attributable to CCC?  I also use the CCC express install; I have 5 6950s running full tilt with cgminer on Win7 x64, and LTC mining on the other 3 CPU cores.
sr. member
Activity: 476
Merit: 500
November 09, 2011, 08:03:41 PM
#26
I run several 5850 systems with 4-7 of them on one board without problems under Win7 x64 (Home Premium and Professional)
Catalyst 11.4 - 11.10rc

I'd stay away from installing CCC at all costs. When going beyond 4 GPUs there is a problem with CCC that AMD is not attending to, resulting in BSODs. Install the drivers but stay away from CCC.
hero member
Activity: 686
Merit: 500
November 09, 2011, 10:04:00 AM
#25
I run several 5850 systems with 4-7 of them on one board without problems under Win7 x64 (Home Premium and Professional)
Catalyst 11.4 - 11.10rc
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 09, 2011, 10:02:26 AM
#24
Very awesome, thank you for your quick reply.

I do not know why I was under the impression I could only run 4.

Peace

One of the more recent driver changes enabled 8 GPUs.  I can't remember which one, but using the most recent driver will give you >4.
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 09, 2011, 09:49:56 AM
#23
Both Windows and Linux are limited to 8 GPU cores on 64-bit and 4 on 32-bit.

Is this right? I can use more than 4 GPUs in Windows 7 64-bit? Huh  This would be awesome

Yes.  I have run both 4x5970 (8 GPUs) and 3x5970 (6 GPUs) in Windows 7 x64.
sr. member
Activity: 476
Merit: 500
November 08, 2011, 07:02:16 PM
#22
Shit happens, I was wrong too.

I wouldn't trust the GD70 or GD80 boards to take that kind of abuse. The Big Bang and Supercomputer, though, may be able to handle it, as they were built closer to the kind of abusive application we miners put them to. You pay a hefty price for them though. I can personally speak highly of the Big Bang; mine runs 8 GPUs with all slots populated and it doesn't seem to faze it.
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 08, 2011, 06:49:20 PM
#21

Nice reference.  I was completely wrong (it happens sometimes); it looks like max draw is limited by slot size, although the same physical connector is used.  So yeah, a board with mostly 1x connectors is likely not expecting 100W+ draw from the slots.
sr. member
Activity: 476
Merit: 500
full member
Activity: 196
Merit: 100
November 08, 2011, 06:25:19 PM
#19
Wasn't speaking of length, but rather speed. x16 maxes at 75W (25W at startup) and x8 through x1 max at 25W.

That's incorrect - the PCIe spec dictates available power, not the port length.

PCI-e 1x, 4x, 8x, and 16x can all draw up to 75W.

PCI is limited to 35W.

The only difference between 1x and 16x is the number of available data lanes.
sr. member
Activity: 476
Merit: 500
November 08, 2011, 06:02:31 PM
#18
In my experience, it's enough of a pain in the ass to get 8 GPUs to work well on one rig; moving past 8 sounds like more of a headache than it's worth. Also, running 8 GPUs on a board requires 400W of power being pushed through the mobo to the PCIe slots: 4 16x @ 75W and 4 8x/4x/1x @ 25W. That's a ton of power running through the board, and only a few $350+ boards can handle it. Unless you're using powered risers, which scare the crap out of me.

Power load doesn't depend on the length of the PCIe slot.  Regardless of the number of data lanes, all PCIe slots have the same power connectors.

While a video card "can" pull 75W from the bus, none of the externally powered ones do.  I measured my 5970s under load and it was ~30W, so 5 of them would be 150W.  I do agree though that most motherboards, despite the spec calling for 75W per slot, assume nobody will need that much and have insufficient ability to handle that current.  I wouldn't try it without powered extenders even @ 150W.

Wasn't speaking of length, but rather speed. x16 maxes at 75W (25W at startup) and x8 through x1 max at 25W.
donator
Activity: 1218
Merit: 1079
Gerald Davis
November 08, 2011, 05:40:39 PM
#17
In my experience, it's enough of a pain in the ass to get 8 GPUs to work well on one rig; moving past 8 sounds like more of a headache than it's worth. Also, running 8 GPUs on a board requires 400W of power being pushed through the mobo to the PCIe slots: 4 16x @ 75W and 4 8x/4x/1x @ 25W. That's a ton of power running through the board, and only a few $350+ boards can handle it. Unless you're using powered risers, which scare the crap out of me.

Power load doesn't depend on the length of the PCIe slot.  Regardless of the number of data lanes, all PCIe slots have the same power connectors.

While a video card "can" pull 75W from the bus, none of the externally powered ones do.  I measured my 5970s under load and it was ~30W, so 5 of them would be 150W.  I do agree though that most motherboards, despite the spec calling for 75W per slot, assume nobody will need that much and have insufficient ability to handle that current.  I wouldn't try it without powered extenders even @ 150W.

On edit: looks like I was wrong.  Learned something new today.  While the same power connector is used, the spec does limit power draw depending on connector size.
1x = max 25W (10W @ startup)
4x = max 25W
8x = max 25W
16x = max 75W (25W @ startup)

Still, I would point out high-end cards generally don't pull 75W from the slot.  My 5970s pull about 30W.  The 75W limit is more for cards without a dedicated power connector.
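
To put rough numbers on that: the spec maximum for a board with 4 x16 and 4 x1 slots would be 4*75W + 4*25W = 400W through the slots, while at a measured ~30W per card a fully loaded 8-GPU rig is closer to 8*30W = 240W. (Ballpark figures only; actual slot draw varies by card and load.)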
legendary
Activity: 1876
Merit: 1000
November 08, 2011, 05:36:28 PM
#16
In my experience, it's enough of a pain in the ass to get 8 GPUs to work well on one rig; moving past 8 sounds like more of a headache than it's worth. Also, running 8 GPUs on a board requires 400W of power being pushed through the mobo to the PCIe slots: 4 16x @ 75W and 4 8x/4x/1x @ 25W. That's a ton of power running through the board, and only a few $350+ boards can handle it. Unless you're using powered risers, which scare the crap out of me.

Well, the 5970 is not very power hungry.  The board has 5 full 16x slots, so it should 'handle' the power.
https://plus.google.com/u/0/photos/112408294399222065988/albums/5658727447810944545

sr. member
Activity: 476
Merit: 500
November 08, 2011, 05:33:22 PM
#15
In my experience, it's enough of a pain in the ass to get 8 GPUs to work well on one rig; moving past 8 sounds like more of a headache than it's worth. Also, running 8 GPUs on a board requires 400W of power being pushed through the mobo to the PCIe slots: 4 16x @ 75W and 4 8x/4x/1x @ 25W. That's a ton of power running through the board, and only a few $350+ boards can handle it. Unless you're using powered risers, which scare the crap out of me.
legendary
Activity: 1876
Merit: 1000
November 08, 2011, 05:22:57 PM
#14
jjiimm_64, when you add the 5th card what happens? 

Does aticonfig see the card at all, or seem to just ignore it?
Does 'lspci' list all 5 cards?

I'm curious if firing up a second X session tied only to the 5th card might be able to get around the problem.

Well, I remember it wasn't pretty (my technical assessment).

Seriously, I can't remember exactly.  I think it came up with only one of them, one 5970 that is.  I didn't think it would work anyway and really did not put much effort into it.

But now I have 12 rigs (soon to be 13), and getting 5 5970s into a rig looks really appealing.

Thanks for everyone's comments.  I agree it cannot be a hardware issue.

donator
Activity: 1218
Merit: 1079
Gerald Davis
November 08, 2011, 05:08:22 PM
#13
For example, the more-than-one-X-session approach may work, or it may be possible to use a Xen virtualized system that appears as a fully unique system with just the 9th+ GPUs.

That might work, or simply cut the # of GPUs in half and put half in each virtual machine to load-balance memory & CPU.

Speaking of X sessions: BLECH!  There is absolutely no reason why OpenCL should require an X server at all, something many developers have complained about.  More laziness on AMD's part has hard-coupled the X system to the OpenCL drivers.  I mean, does anyone think a supercomputer is going to be running an X session on each node?

Of course you also have the 100% CPU bug too.

So really, three asinine and lazy "gotchas" in AMD's OpenCL implementation:
1) any GPU limit at all (for x64 systems).
2) the requirement for an X session.  Hey AMD, I am not actually doing any graphical work on my "graphics processing unit".
3) the 100% CPU bug.

Maybe someday they will all be fixed, probably just in time for FPGAs to replace the last of the GPU miners.  Roll Eyes
full member
Activity: 196
Merit: 100
November 08, 2011, 05:02:28 PM
#12
Even if the driver is crippled by laziness, it should be possible to get around it in Linux, though it might take some effort.

For example, the more-than-one-X-session approach may work, or it may be possible to use a Xen virtualized system that appears as a fully unique system with just the 9th+ GPUs.

They probably set it figuring it gave enough room and then haven't put much more thought into it, sorta like the old Y2K 'issues'.

This is more akin to Bill Gates' famously quoted "640kb ought to be enough for anybody".

Incidentally, it was only recently that the limit was increased from just four GPUs; they should have realized then that 8 certainly wouldn't be enough for long.
vip
Activity: 574
Merit: 500
Don't send me a pm unless you gpg encrypt it.
November 08, 2011, 04:53:19 PM
#11
It's a hardcoded driver issue on both ATI and NVIDIA.  The 32-bit limit was for obvious reasons; why the 64-bit one exists is less clear.

Exactly, any claim of a resource issue is BS.

x86-64 defines a 2^64-byte virtual address space.
Current CPUs only implement a 2^48-byte address space.
Windows further limits that to 2^44 bytes for some unknown reason, but that is still more than enough.

There is sufficient virtual address space to map a couple thousand video cards.
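
As a rough sanity check on those numbers: 2^44 bytes is 16 TiB of virtual address space, so assuming something on the order of 2 GB (2^31 bytes) of mappings per card, that works out to 2^44 / 2^31 = 2^13 = 8192 cards' worth of address space.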


You might think it's BS, but it's no BS on the part of the user; it's on the part of ATI/NVIDIA.  I'm sure it boils down to the fact that 99.9% of people aren't ready to use >8 GPU cores in a machine.  Power, slots, temperatures, price are all concerns.  They probably set it figuring it gave enough room and then haven't put much more thought into it, sorta like the old Y2K 'issues'.  Write to your friendly GPU manufacturer and see what they have to say about it.
sr. member
Activity: 378
Merit: 250
"Yes I am a pirate, 200 years too late."
November 08, 2011, 04:47:41 PM
#10
Yeah, I'm not speaking for ATI or whatever; I just know the limit is supposed to be imposed by the software's (drivers') access to the hardware.  But I don't even know enough about the bit-by-bit issues to know what I don't know. Smiley