
Topic: Two HD 5870 in Linux (Debian) performing no better than one

newbie
Activity: 31
Merit: 0
As a nice bonus, CPU load is way down. With SDK 2.2, each miner consumed 100% CPU, but now it is barely noticeable.
Yeah I noticed that as well. Perhaps the two are related.

Hopefully this will be fixed in SDK 2.4, which shouldn't be too far out if they keep the same release pace. Both issues (multi-GPU on Linux and high CPU use) have been brought up on the AMD forums, so let's hope they get them fixed and we don't have to wait for SDK 2.5.
sr. member
Activity: 520
Merit: 253
555
Wouldn't you know it, it works fine with SDK 2.1 :)

I used to have the same problem, and SDK 2.1 does indeed fix it :). The driver version is 10.12, as it's the latest that works on Gentoo, with Linux kernel 2.6.37.4.

I guess I was reluctant to try this SDK, since 2.2 worked much better with HD5570 (also using an older version of poclbm), but a few Mhash/s is not a bad price for getting the full setup to work.

As a nice bonus, CPU load is way down. With SDK 2.2, each miner consumed 100% CPU, but now it is barely noticeable.
newbie
Activity: 48
Merit: 0
So how do you download ATI Stream/APP SDK 2.1 now?  I can't find it on the AMD site or on a Google search.

Current link for 32-bit Linux:
http://developer.amd.com/Downloads/ati-stream-sdk-v2.3-lnx32.tgz

Just replace the version number:
http://developer.amd.com/Downloads/ati-stream-sdk-v2.1-lnx32.tgz

Thanks, dbitcoin!  Now, why didn't I think of that...
hero member
Activity: 726
Merit: 500
Wouldn't you know it, it works fine with SDK 2.1 :)

I'm glad to hear you got this working.  I got lucky as the fglrx version that was automatically installed from the repository was 10.9.  In m0mchil's miner thread, there is plenty of discussion that SDK 2.1 is the least problematic, so I started with that as well.
hero member
Activity: 742
Merit: 500
BTCDig - mining pool
So how do you download ATI Stream/APP SDK 2.1 now?  I can't find it on the AMD site or on a Google search.

Current link for 32-bit Linux:
http://developer.amd.com/Downloads/ati-stream-sdk-v2.3-lnx32.tgz

Just replace the version number:
http://developer.amd.com/Downloads/ati-stream-sdk-v2.1-lnx32.tgz
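To fetch and unpack it from a shell, something like this should do (just a sketch; put it wherever you like):
Code:
wget http://developer.amd.com/Downloads/ati-stream-sdk-v2.1-lnx32.tgz
tar xzf ati-stream-sdk-v2.1-lnx32.tgz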

newbie
Activity: 48
Merit: 0
So how do you download ATI Stream/APP SDK 2.1 now?  I can't find it on the AMD site or on a Google search.
newbie
Activity: 31
Merit: 0
Wouldn't you know it, it works fine with SDK 2.1 :)

On openSUSE I tried all the SDKs (2.1, 2.2 and 2.3), but that was with Catalyst 11.2. On Debian with Catalyst 10.9 it works with SDK 2.1.

Still a bit of a pain to have to use an old SDK/driver set (I do some OpenCL development myself, so it matters a bit to me), but at least it's working :)
hero member
Activity: 726
Merit: 500
Saw the same shit with 5970s and SDKs 2.2 and 2.3.
SDK 2.1 + fglrx 10.9–10.11 works fine.

I was going to suggest dropping the SDK back to 2.1, but he said in the OP that he already tried that.  All of my miners run fglrx 10.9 and SDK 2.1 and I have no problems.
sr. member
Activity: 406
Merit: 257
Saw the same shit with 5970s and SDKs 2.2 and 2.3.
SDK 2.1 + fglrx 10.9–10.11 works fine.
newbie
Activity: 48
Merit: 0
Interesting coincidence: this evening I ran into the exact same problem! This weekend I set up my first rig: one HD 5870 in Ubuntu 10.10. Once I had that working, I added a second one, and now that it's back up, it's doing the very same thing yours is: the two cards seem to balance against each other, with the sum being no different from what I got from a single card (300-340 Mhash/s depending on overclocking).

I first worried that I was hitting a bottleneck, since it's using up all the CPU.

Don't have any new ideas, but I just wanted to confirm someone else is having the same problem.

Anyone know if this happens on other cards?  What would make it throttle back like that?
newbie
Activity: 31
Merit: 0
It's the same if I mine solo.

What if you get rid of -f altogether?
Still the same :(
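That is, each instance started with no -f at all, e.g.:
Code:
./poclbm.py --user=user1 --pass=mypass -v -d 1 -w 128 -o mining.bitcoin.cz -p 8332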

Man, every time I've tried switching to Linux there's something only Windows gets right. This time it's just multi-GPU, but that's a deal breaker considering I have two GPUs :(

Thanks to everyone for trying to help. If I do figure out what's wrong, I'll be sure to post an update.
hero member
Activity: 726
Merit: 500
Hopefully someone else who has experienced similar problems will see the thread. I find it very odd that I've had the exact same problem on both openSUSE and Debian, while others seem to get multiple GPUs working with no hassle.

I actually did have the exact same problem, and it was solved by setting the DISPLAY environment variable properly.  I don't know what else to say.  Does it do the same thing when mining against bitcoind?
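To test that, you'd point poclbm at the local daemon, roughly like this (rpcuser/rpcpass being whatever your bitcoin.conf defines — just a sketch):
Code:
./poclbm.py --user=rpcuser --pass=rpcpass -d 1 -o localhost -p 8332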
newbie
Activity: 31
Merit: 0
Oh yes, I dual-boot Windows 7 x64 and Debian Testing x64. And the PSU is definitely beefy enough (1000 W).
sr. member
Activity: 434
Merit: 251
Every saint has a past. Every sinner has a future.
Is there definitely enough power being delivered? Did you say you tried the same setup with the same power supply under Windows and it worked?
newbie
Activity: 31
Merit: 0
Tried it. Moved the old xorg.conf aside and ran aticonfig --initial --adapter=all again, and the new file was identical to the old one.

Thanks for trying to help though :). Hopefully someone else who has experienced similar problems will see the thread. I find it very odd that I've had the exact same problem on both openSUSE and Debian, while others seem to get multiple GPUs working with no hassle.
hero member
Activity: 726
Merit: 500
The only other thing I can think of trying is deleting the /etc/X11/xorg.conf file and running "aticonfig --initial --adapter=all" again.  I've found that, for some reason, aticonfig doesn't always create a proper xorg.conf file if one exists already.
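Something like this (keeping a backup just in case; the path is the stock location mentioned above):
Code:
sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
sudo aticonfig --initial --adapter=all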
newbie
Activity: 31
Merit: 0
Thanks, using aticonfig --odgc --adapter=all gives me the loads.

I've checked the DISPLAY variable using echo $DISPLAY. I've also tried setting the COMPUTE variable (which the SDK prefers over DISPLAY) to :0, but it's still the same.
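For the record, this is what I'm setting before launching the miners:
Code:
export DISPLAY=:0
export COMPUTE=:0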

Basically, if I start two instances of poclbm, the one I start first (say on the first card) will do about 250,000 khash/s and the second will do about 50,000 khash/s (it varies, but those seem to be roughly the averages). The load shown by aticonfig reflects this (i.e. one GPU is at around 80% and the other jumps around a bit but stays much lower). If I then stop the first instance, the load shifts to the second GPU, which is again reflected in aticonfig.

So it seems I am definitely starting the program correctly (i.e. each instance is using its own GPU), but it's as if the work gets serialized somewhere and is limited to the performance of one card.
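In case it helps, this is how I'm watching the loads while the miners run (watch is just a convenience for polling it):
Code:
watch -n 1 'aticonfig --odgc --adapter=all'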
hero member
Activity: 726
Merit: 500
Is there a way to do that in Linux? My searches are coming up empty.

Code:
aticonfig --odgc
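# prints current clocks and GPU load; add --adapter=all to list every card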

Make sure you really have the DISPLAY=:0 environment variable set.  Enter it on the command line before starting poclbm.py.
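For example, all on one line (the pool options here are just placeholders):
Code:
DISPLAY=:0 ./poclbm.py --user=user1 --pass=mypass -d 1 -o mining.bitcoin.cz -p 8332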
newbie
Activity: 31
Merit: 0
Is there a way to do that in Linux? My searches are coming up empty.
newbie
Activity: 55
Merit: 0


I am absolutely sure that I am using a different device. I.e. in one terminal I've entered:
Code:
./poclbm.py --user=user1 --pass=mypass -v -d 1 -w 128 -f 120 -o mining.bitcoin.cz -p 8332

And in the other one:
Code:
./poclbm.py --user=user2 --pass=mypass -v -d 2 -w 128 -f 120 -o mining.bitcoin.cz -p 8332

So both the devices and user names on slush's server are different.


Did you check the load on both cards to really confirm both are working?