
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1077. (Read 3426918 times)

full member
Activity: 308
Merit: 146
I have been attempting to get Cudaminer to work with both of my GPUs in SLI (780M), but each time it starts up it will only detect either GPU 0 or 1

ever tried running two separate instances of cudaminer? one with -d 0 and one with -d 1 maybe?

do tools like CUDA-z and GPU-z show both chips separately?

Christian


They are detected independently in any other program. I did run two separate batch files of cudaminer for the same pool, one for each GPU, and that does work, but one of the instances accepts blocks much more quickly than the other. CGminer utilizes both GPUs in the same instance and works quickly as well, albeit with a hashrate of about 65 khash/s per card. I would like to get both cards running within the same instance in cudaminer, but I cannot find a configuration that allows it.

I'm running a laptop with two 680Ms and experiencing a similar issue. Autodetect was pretty lame in what it chose (it tried, but I was only getting like 80kh/s), and right now I'm running two instances, one per GPU: one at K14x16, the other at K70x2. The K14x16 one works well at around 117kh/s; the K70x2 config is a little slower at around 104kh/s

I tried running both with K14x16; however, I have a feeling I was running out of VRAM (these are the 2GB 680Ms) because my drivers would crash and both cards would drop to like 5kh/s

Memory utilization at the current settings is around 87% on both cards, according to HWiNFO
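A rough back-of-envelope supports the VRAM theory (assuming one scrypt hash per thread and the standard 128 KiB scratchpad per hash, which may not match the kernel's actual memory layout): K14x16 is 14 blocks x 16 warps x 32 threads = 7168 concurrent hashes, i.e. about 7168 x 128 KiB ≈ 896 MiB of scratchpad, while K70x2 is 70 x 2 x 32 = 4480 hashes ≈ 560 MiB. Bumping the second card from 560 MiB up to 896 MiB, on top of whatever the display and driver already hold, could plausibly tip a 2GB card over the edge.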

I'm happy with around 215kh/s from a laptop with NVidia GPUs :]

That said, if anyone knows a config that may work better on the 680M, I'm all ears :)
newbie
Activity: 56
Merit: 0
When I start the program it just shuts down immediately.

I navigated via cmd and when running it, it just tells me about the program, the developer, and how to donate - it never starts

halp!
full member
Activity: 161
Merit: 100
I have been attempting to get Cudaminer to work with both of my GPUs in SLI (780M), but each time it starts up it will only detect either GPU 0 or 1

ever tried running two separate instances of cudaminer? one with -d 0 and one with -d 1 maybe?

do tools like CUDA-z and GPU-z show both chips separately?

Christian


They are detected independently in any other program. I did run two separate batch files of cudaminer for the same pool, one for each GPU, and that does work, but one of the instances accepts blocks much more quickly than the other. CGminer utilizes both GPUs in the same instance and works quickly as well, albeit with a hashrate of about 65 khash/s per card. I would like to get both cards running within the same instance in cudaminer, but I cannot find a configuration that allows it.
hero member
Activity: 756
Merit: 502
I have been attempting to get Cudaminer to work with both of my GPUs in SLI (780M), but each time it starts up it will only detect either GPU 0 or 1

ever tried running two separate instances of cudaminer? one with -d 0 and one with -d 1 maybe?

do tools like CUDA-z and GPU-z show both chips separately?
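Something like this, for example (a sketch only; the pool URL and worker credentials are placeholders):

>cudaminer -d 0 -o stratum+tcp://pool.example.com:3333 -O worker1:pass
>cudaminer -d 1 -o stratum+tcp://pool.example.com:3333 -O worker2:pass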

Christian
full member
Activity: 161
Merit: 100
I have been attempting to get Cudaminer to work with both of my GPUs in SLI (780M), but each time it starts up it will only detect either GPU 0 or 1, even with a -d 0,1 or similar configuration. A single GPU pushes out a hashrate of around 150 khash/s, though it would be nice to double this by utilizing both of them. Has anyone else experienced this issue and found a solution?
hero member
Activity: 756
Merit: 502
Do you think I'll get any profit mining litecoins with a GTX 260?

I scrapped both of my GTX 260s (SLI config) because I was getting only 40 kHash/s from each one under Windows 7.
They might have worked better on Linux or Windows XP (due to a different driver model used there).

I also scrapped a GTX 460 (too old, 96 kHash/s).

I got myself instead:
a GTX 560 Ti 448-core edition (used), a GTX 560 Ti (new), a GTX 660 Ti (new), and a GT 640 (the new model with the GK208 chip).

Now the machine can do 600 kHash/s @ 800 Watts. And it's good for driving MANY monitors and for gaming as well ;) And it isn't actually profitable, considering the local electricity costs and the increased mining difficulty. But I am doing this for fun.
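To put a number on it (with a made-up electricity rate, just for illustration): 0.8 kW x 24 h = 19.2 kWh per day, so at, say, 0.25 EUR/kWh that's about 4.80 EUR per day in power before counting any coins mined.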

Christian
hero member
Activity: 756
Merit: 502
use -l K,K to tell both cards to run Kepler kernels

or

-l K15x16,K15x16

to skip autotune altogether.

Also try enabling

-C 2,2

to get some 5% boost (from the texture cache)
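Putting it all together, a full command line might look like this (a sketch; the pool URL and credentials are placeholders, and -d 0,1 assumes your two chips enumerate as devices 0 and 1):

>cudaminer -d 0,1 -l K15x16,K15x16 -C 2,2 -o stratum+tcp://pool.example.com:3333 -O worker:pass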
newbie
Activity: 3
Merit: 0
I'm trying to run cudaminer for the first time on my GTX 480 SLI cards. This is Windows 8 (not 8.1, to my knowledge) with the latest beta driver (331.65) from GeForce Experience. When I autotune, it crashes after picking a kernel and I'm not sure how to get useful info. I do have some of the CUDA SDK installed on my computer, but haven't touched it recently. It was years ago that I last programmed something in CUDA, and I installed it more recently on a whim. The last few lines of a failing run with debug turned on are below...

>cudaminer -D -H 1 -i 1,0 -o stratum+tcp:// -O :

...
[2013-11-06 20:38:56] 353:     |     |     |     |     |     |     |     |     |     |     |     |     |     |     |      kH/s
[2013-11-06 20:38:56] 354:     |     |     |     |     |     |     |     |     |     |     |     |     |     |     |      kH/s
[2013-11-06 20:38:56] GPU #0:   71.63 khash/s with configuration F7x8
[2013-11-06 20:38:56] GPU #0: using launch configuration F7x8


I tried the idea of limiting which kernels to use. When I picked Fermi (which my card is) it crashed again. When I picked Kepler I got farther, but this still seems wrong, not least because this is not a Kepler-based card and the errors come from card 1.

>cudaminer -H 1 -i 1,0 -l K -o stratum+tcp:// -O :
           *** CudaMiner for nVidia GPUs by Christian Buchner ***
                     This is version 2013-11-01 (alpha)
        based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
               Cuda additions Copyright 2013 Christian Buchner
           My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-11-07 18:18:25] 2 miner threads started, using 'scrypt' algorithm.
[2013-11-07 18:18:25] Starting Stratum on stratum+tcp://
[2013-11-07 18:18:27] GPU #1: GeForce GTX 480 with compute capability 2.0
[2013-11-07 18:18:27] GPU #1: interactive: 0, tex-cache: 0 , single-alloc: 0
[2013-11-07 18:18:27] GPU #0: GeForce GTX 480 with compute capability 2.0
[2013-11-07 18:18:27] GPU #0: interactive: 1, tex-cache: 0 , single-alloc: 0
[2013-11-07 18:18:27] GPU #1: Performing auto-tuning (Patience...)
[2013-11-07 18:18:27] GPU #0: Given launch config 'K' does not validate.
[2013-11-07 18:18:27] GPU #0: Performing auto-tuning (Patience...)
[2013-11-07 18:19:07] GPU #0:  201.97 khash/s with configuration K15x16
[2013-11-07 18:19:07] GPU #0: using launch configuration K15x16
[2013-11-07 18:19:07] GPU #0: GeForce GTX 480, 7680 hashes, 0.19 khash/s
[2013-11-07 18:19:07] GPU #0: GeForce GTX 480, 15360 hashes, 128.99 khash/s
[2013-11-07 18:19:36] GPU #1:  213.19 khash/s with configuration F30x8
[2013-11-07 18:19:36] GPU #1: using launch configuration F30x8
[2013-11-07 18:19:36] GPU #1: GeForce GTX 480, 7680 hashes, 0.11 khash/s
[2013-11-07 18:19:36] GPU #1: GeForce GTX 480, 7680 hashes, 105.13 khash/s
[2013-11-07 18:19:36] GPU #1: GeForce GTX 480 result does not validate on CPU!
[2013-11-07 18:19:40] GPU #0: GeForce GTX 480, 6174720 hashes, 190.56 khash/s
[2013-11-07 18:19:40] accepted: 1/1 (100.00%), 295.70 khash/s (yay!!!)
[2013-11-07 18:19:43] GPU #0: GeForce GTX 480, 698880 hashes, 184.91 khash/s
[2013-11-07 18:19:43] accepted: 2/2 (100.00%), 290.04 khash/s (yay!!!)
[2013-11-07 18:19:46] GPU #1: GeForce GTX 480, 2181120 hashes, 209.68 khash/s
[2013-11-07 18:19:47] accepted: 3/3 (100.00%), 394.59 khash/s (yay!!!)
Ctrl-C
[2013-11-07 18:19:49] workio thread dead, waiting for workers...
[2013-11-07 18:19:49] worker threads all shut down, exiting.


Thanks for any help in advance.

----------------------------------

Just realized SLI was turned off in the driver.  I'll try with it on in a bit...
member
Activity: 112
Merit: 10
Do you think I'll get any profit mining litecoins with a GTX 260?
sr. member
Activity: 247
Merit: 250
Nope, drivers are all up to date on my 9400GT and GTX 295. I've gotten it to work on my laptop (with a GT 520M) and my brother's MacBook (GT 650M). I'm guessing it's something with the configuration value. Any more suggestions?

There are new switches like -K and -F; Christian posted them a couple of pages back. I would suggest you try them. You could also disable the 9400GT and try mining with just the GTX 295, then do the same with the other one, to try to determine whether one of them is working or not.
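For example (the pool details are placeholders, and the device indices depend on how your system enumerates the cards; note the GTX 295 shows up as two devices):

>cudaminer -d 1,2 -o stratum+tcp://pool.example.com:3333 -O worker:pass

then try the 9400GT alone with -d 0, to see which card is producing the invalid results.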
newbie
Activity: 13
Merit: 0
Nope, drivers are all up to date on my 9400GT and GTX 295. I've gotten it to work on my laptop (with a GT 520M) and my brother's MacBook (GT 650M). I'm guessing it's something with the configuration value. Any more suggestions?
sr. member
Activity: 247
Merit: 250
Nope, doesn't work. I autotuned and now have the values -l L30x3,L30x3,L4x2. Still has the validation issue with the CPU.

Make sure drivers are up to date.
newbie
Activity: 13
Merit: 0
Nope, doesn't work. I autotuned and now have the values -l L30x3,L30x3,L4x2. Still has the validation issue with the CPU.
sr. member
Activity: 247
Merit: 250
Hey, I got cudaminer running perfectly on my desktop, except this error keeps showing up. The only parameter I have is --no-autotune, along with my URL, worker & pass. What is going on? I've seen others have this problem but no clear answers. I've fiddled with the -D values, but no luck.
[2013-11-05 17:33:23] GPU #0: GeForce GTX 295 result does not validate on CPU!
[2013-11-05 17:33:27] GPU #1: GeForce GTX 295 result does not validate on CPU!

You must let it autotune, then use -l once it finds a valid configuration. Plenty of answers to this already; it means you've chosen an invalid launch configuration.
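In other words (a sketch; the pool details are placeholders, and the GTX 295 appears as two devices, hence two comma-separated values): run once without --no-autotune, note the 'using launch configuration ...' lines it prints, then pin those values on subsequent runs, e.g.:

>cudaminer -l L30x3,L30x3 -o stratum+tcp://pool.example.com:3333 -O worker:pass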
newbie
Activity: 13
Merit: 0
Hey, I got cudaminer running perfectly on my desktop, except this error keeps showing up. The only parameter I have is --no-autotune, along with my URL, worker & pass. What is going on? I've seen others have this problem but no clear answers. I've fiddled with the -D values, but no luck.
[2013-11-05 17:33:23] GPU #0: GeForce GTX 295 result does not validate on CPU!
[2013-11-05 17:33:27] GPU #1: GeForce GTX 295 result does not validate on CPU!
hero member
Activity: 756
Merit: 502
Autotune keeps picking the Titan kernel and has given me T575x1 and T576x1 so far, both of which only give 260kh/s.

Okay, I made some improvements to the Titan kernel, which brought my rate up from 55 kHash/s (achieved with the Kepler kernel) to 62 kHash/s on the GT 640 (GK208 chip, Compute 3.5). Maybe you want to try this binary on the GTX 780? Let me know how I should send it to you.

Christian
sr. member
Activity: 247
Merit: 250
this image suggests using http:// and not stratum+tcp://

http://imgur.com/r/all/cesAJhA

Maybe stratum would require a different port number?


They have it mistyped. If you look at the configuration on their page, it says it should be http://p2pool.org:9327
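So the working command line should presumably look something like this (the worker credentials are placeholders; p2pool setups typically take your payout address as the username):

>cudaminer -o http://p2pool.org:9327 -O YourPayoutAddress:x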
hero member
Activity: 756
Merit: 502
this image suggests using http:// and not stratum+tcp://

http://imgur.com/r/all/cesAJhA

Maybe stratum would require a different port number?
newbie
Activity: 32
Merit: 0
I am getting this error using cudaMiner:



I tried different pools, different instructions, but it seems it just won't connect to the server. And I am sure it's not an issue with my net/port-forwarding/firewall since all my other miners work without a problem...
Does anyone have any idea what might be causing this?

I have this same problem. Is there a cure, or have I missed it somewhere?
newbie
Activity: 6
Merit: 0
Well, with K24x16 (-i 0 -l K24x16 -C 2) I get 271kh/s @ 63% TDP.

I usually run -i 0 -l K42x6 -C 2, which gives me a steady 330kh/s @ 75% TDP. So far I've mined 6 blocks of GLD with this.

Autotune keeps picking the Titan kernel and has given me T575x1 and T576x1 so far, both of which only give 260kh/s.

When I was getting those ridiculously high numbers I was using -i 0 -l K6x42 -C 2 (for 543kh/s @ 25% TDP) and -i 0 -l K6x42 -C 2 (for 2010kh/s @ 25% TDP). The weird part is that K42x42 also gave the same results. Christian, could it be that we are not utilizing these cards properly? Like, if we changed the way the code calculates the hash, could the performance be higher? When it's running those crazy numbers, my PC actually slows down (it does not when running K42x6) and I cannot even use the browser.