
Topic: GPU Mining on OS X Using poclbm - page 2. (Read 78783 times)

sr. member
Activity: 378
Merit: 255
June 15, 2011, 08:25:56 PM
#60
For those of you who are not comfortable with Xcode and compiling your own software, please consider using the ready-made, packaged app alternatives.


Thanks MacCompiler. Fortunately you don't have to use Xcode for anything; it's just a little more straightforward than asking individuals to install a compiler. Since poclbm is in Python, you are only really compiling dependencies. Mac OS X actually comes with more of the dependencies preinstalled than any other system I know of (numpy, OpenCL drivers).
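Since poclbm itself is Python, the only real question is whether its imports resolve. Here is a minimal sketch of a dependency check (assuming the usual module names, numpy and pyopencl):

```python
def check_missing(modules):
    """Return the subset of module names that cannot be imported."""
    missing = []
    for mod in modules:
        try:
            __import__(mod)
        except ImportError:
            missing.append(mod)
    return missing

# poclbm needs numpy and pyopencl; on a stock Mac OS X install numpy
# is usually already present, while pyopencl typically is not.
print(check_missing(["numpy", "pyopencl"]))
```

If the printed list is empty, poclbm should at least import; otherwise only the listed modules need building.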
newbie
Activity: 53
Merit: 0
June 15, 2011, 08:11:17 PM
#59
For those of you who are not comfortable with Xcode and compiling your own software, please consider using the ready-made, packaged app alternatives.

newbie
Activity: 35
Merit: 0
June 15, 2011, 04:02:58 AM
#58
MacBook Pro (6,2 NVIDIA GeForce GT 330M), Snow Leopard, -f 1 -w 64

7845 khash/s
sr. member
Activity: 378
Merit: 255
June 14, 2011, 08:42:04 PM
#57
old: MacPro4,1 - OSX Snow Leopard (10.6.8) - '-f 30 -w 256' - Radeon 5870

180MH/s



new: MacPro4,1 - OSX Lion (DP4) - '-f 30 -w 256' - Radeon 5870

300MH/s




This is good news for Mac miners!
newbie
Activity: 7
Merit: 0
June 14, 2011, 06:56:33 PM
#56
Good to know. Thanks.
newbie
Activity: 17
Merit: 0
June 14, 2011, 06:52:24 PM
#55
I'm curious if you have any theories why Lion is so much faster than Snow Leopard with the same GPU?

A new OpenCL implementation (for the new Final Cut).
newbie
Activity: 7
Merit: 0
June 14, 2011, 05:40:09 PM
#54
I'm curious if you have any theories why Lion is so much faster than Snow Leopard with the same GPU?
newbie
Activity: 17
Merit: 0
June 14, 2011, 04:35:33 PM
#53
old: MacPro4,1 - OSX Snow Leopard (10.6.8) - '-f 30 -w 256' - Radeon 5870

180MH/s



new: MacPro4,1 - OSX Lion (DP4) - '-f 30 -w 256' - Radeon 5870

300MH/s


sr. member
Activity: 378
Merit: 255
June 09, 2011, 06:32:11 PM
#52
Unfortunately only 8xxx or newer cards are supported, as far as I know. This has nothing to do with poclbm; it's just not a feature of earlier cards.
newbie
Activity: 5
Merit: 0
June 09, 2011, 06:25:17 PM
#51
@rethaw thanks. Now that I'm back, I see that my second card is not on the device list. I wonder how to deal with that.

It is an NVIDIA GeForce 7300 GT - 256MB VRAM, device id 0x0393. I wonder also if, in this kind of box, the PCIe lane settings (a weird OS X setting most people haven't seen, I bet) could be fiddled with to increase throughput.

EDIT - it appears the 7300 GT does not support CUDA, which seems to preclude using it for this. Is there any way to run a GPU that is not compatible with CUDA?
sr. member
Activity: 378
Merit: 255
June 09, 2011, 05:51:26 PM
#50
Run poclbm with the "--help" flag.

Code:
Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -u USER, --user=USER  user name
  --pass=PASSWORD       password
  -o HOST, --host=HOST  RPC host (without 'http://')
  -p PORT, --port=PORT  RPC port
  -r RATE, --rate=RATE  hash rate display interval in seconds, default=1
  -f FRAMES, --frames=FRAMES
                        will try to bring single kernel execution to 1/frames
                        seconds, default=30, increase this for less desktop
                        lag
  -d DEVICE, --device=DEVICE
                        use device by id, by default asks for device
  -a ASKRATE, --askrate=ASKRATE
                        how many seconds between getwork requests, default 5,
                        max 10
  -w WORKSIZE, --worksize=WORKSIZE
                        work group size, default is maximum returned by opencl
  -v, --vectors         use vectors
  --verbose             verbose output, suitable for redirection to log file
  --platform=PLATFORM   use platform by id

http://forum.bitcoin.org/?topic=4122.0
full member
Activity: 210
Merit: 100
firstbits: 121vnq
June 09, 2011, 05:43:35 PM
#49
If you run poclbm without any flags, it will enumerate the devices. That would be your processor. In general this is not an effective way of mining coins, but you can run one instance for the processor and another for the GPU.

Ah, so just to be clear, I can open two terminal windows and run one command on the GPU and one on the CPU? Can I tell a third instance to use the second graphics card?

Yes, though CPU mining is useless at this point, so you may as well just have one process for each card.
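As a sketch of that setup (the pool host, port, and credentials below are placeholders, and poclbm.py is assumed to sit in the current directory), one command line per device id can be built like this:

```python
import subprocess  # only needed if you actually launch the processes

def poclbm_cmd(device_id, host="pool.example.com", port=8332,
               user="worker", password="secret"):
    """Build a poclbm command line pinned to one device id."""
    return ["python", "poclbm.py",
            "-d", str(device_id),
            "-o", host, "-p", str(port),
            "-u", user, "--pass=" + password]

# One instance per card; device ids come from running poclbm with no flags.
commands = [poclbm_cmd(d) for d in (0, 1)]
for cmd in commands:
    print(" ".join(cmd))
    # To actually start mining, launch each one: subprocess.Popen(cmd)
```

Running one process per `-d` id is exactly the "two terminal windows" approach, just scripted.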
newbie
Activity: 5
Merit: 0
June 09, 2011, 05:41:20 PM
#48
Could you explain how the -d device flag works? I don't know where the documentation (or lack thereof) is. I may try to contribute back to the project by pushing some better docs on GitHub if I can. I may not be a code ninja, but I can write docs.
newbie
Activity: 5
Merit: 0
June 09, 2011, 05:38:16 PM
#47
If you run poclbm without any flags, it will enumerate the devices. That would be your processor. In general this is not an effective way of mining coins, but you can run one instance for the processor and another for the GPU.

Ah, so just to be clear, I can open two terminal windows and run one command on the GPU and one on the CPU? Can I tell a third instance to use the second graphics card?
newbie
Activity: 5
Merit: 0
June 09, 2011, 05:36:21 PM
#46
Awesome, are you doing pooled or solo?
I jumped onto Deepbit and get about .001 or .002 per block, roughly. Not too shabby compared to the idleness of the card normally.

I am wondering if I can get my other graphics card to run this; it is older. It's a GeForce 7300 or 6300 (don't have it in front of me right now).

Also, what is the best way to run something on the OS X CPU? The CPU is jelly right now that the GPU gets all the action. It would only get 1/20th the hashes of the GPU, but I would like to try running it in a quieter pool than Deepbit and see what happens.
sr. member
Activity: 378
Merit: 255
June 09, 2011, 03:01:45 PM
#45
If you run poclbm without any flags, it will enumerate the devices. That would be your processor. In general this is not an effective way of mining coins, but you can run one instance for the processor and another for the GPU.
newbie
Activity: 7
Merit: 0
June 09, 2011, 03:00:00 PM
#44
I'm getting about 3300 khash/sec on a Mac Pro 2.66GHz Quad-Core Xeon with NVIDIA GeForce GT 120 graphics.

Nice, can you compare the rate to what your processor gets? You can do this in poclbm using the "-d x" flag, where x is the number assigned to that device.

Most interesting -- when I use the "-d 1" option (which I presume is the processor) I'm getting about 5000 khash/sec. Activity Monitor shows all 4 CPUs doing their thing.
sr. member
Activity: 378
Merit: 255
June 09, 2011, 02:54:31 PM
#43
I'm getting about 3300 khash/sec on a Mac Pro 2.66GHz Quad-Core Xeon with NVIDIA GeForce GT 120 graphics.

Nice, can you compare the rate to what your processor gets? You can do this in poclbm using the "-d x" flag, where x is the number assigned to that device.
newbie
Activity: 7
Merit: 0
June 09, 2011, 02:52:02 PM
#42
Thanks for the reply.

I tried the changes you made to the bitcoin.conf -- only those 3 lines (but with my username and password, of course)

Still getting the "Bitcoin is not connected!" message.  Hmmmm....

Running a Mac Pro 2.66GHz Quad-Core Xeon, OSX 10.6.7.

OK -- it took several things to get running.
First, when I originally installed and launched Bitcoin it did not open connections and download blocks. The solution was found at: http://forum.bitcoin.org/index.php?topic=10099.0
Second, I did need to add the "addnode=69.164.218.197" line to my bitcoin.conf. I also needed to add "server=1" to the bitcoin.conf file.
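For reference, the resulting bitcoin.conf (on OS X it lives in ~/Library/Application Support/Bitcoin/) would then contain something like the following; the rpcuser/rpcpassword values are placeholders for your own credentials:

```
rpcuser=yourusername
rpcpassword=yourpassword
server=1
addnode=69.164.218.197
```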

I'm getting about 3300 khash/sec on a Mac Pro 2.66GHz Quad-Core Xeon with NVIDIA GeForce GT 120 graphics.
sr. member
Activity: 378
Merit: 255
June 09, 2011, 02:16:27 PM
#41
I got everything working, and I'm only pulling in 2800kH/s on my mid-2009 MBP.  Is that to be expected with the GeForce 9600M GT inside it?

The NVIDIA cards are apparently optimized for floating-point calculations, while the Radeons are optimized for integer math. The hashing algorithm relies heavily on integer calculations, so the AMD/ATI cards seem to have NVIDIA beat for bitcoin mining.

You could post your results to https://en.bitcoin.it/wiki/Mining_hardware_comparison as there aren't any results for your video card yet. It looks like the closest thing is another MBP listed on there with a 9400M at 1.9 MHash/s.