
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1061. (Read 3426921 times)

newbie
Activity: 12
Merit: 0
Try restarting, or set -l auto and try again... you may need to experiment with -l values until you get the best performance without errors in the CPU check.
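If it helps, the autotune approach might look like the batch-file sketch below (the pool URL and worker credentials are placeholders, not anything from this thread):

```shell
:: autotune sketch -- pool URL and worker credentials are placeholders
cudaminer.exe -d 0 -l auto -o stratum+tcp://your.pool.example:3333 -O yourworker:yourpassword
```

Once autotune settles on a config (e.g. K14x16), you can pin it with -l on later runs to skip the tuning phase.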
newbie
Activity: 48
Merit: 0
Code:
 *** CudaMiner for nVidia GPUs by Christian Buchner ***
                     This is version 2013-12-01 (beta)
        based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
               Cuda additions Copyright 2013 Christian Buchner
           My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-12-15 21:17:20] 1 miner threads started, using 'scrypt' algorithm.
[2013-12-15 21:17:20] Starting Stratum on stratum+tcp://stratum.gentoomen.org:3333
[2013-12-15 21:17:35] Stratum detected new block
[2013-12-15 21:17:35] Stratum detected new block
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti with compute capability 3.0
[2013-12-15 21:17:38] GPU #0: interactive: 1, tex-cache: 1D, single-alloc: 1
[2013-12-15 21:17:38] GPU #0: using launch configuration K14x16
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti, 7168 hashes, 3.25 khash/s
[2013-12-15 21:17:38] GPU #0: GeForce GTX 650 Ti, 200704 hashes, 833.73 khash/s
[2013-12-15 21:18:01] GPU #0: GeForce GTX 650 Ti result does not validate on CPU!

Does anyone know how to fix this problem?
newbie
Activity: 8
Merit: 0
Thank you !

Changed the config to cudaminer.exe -H 1 -i 0 -d 0,1 -l K112x2 -m 1 -C 1
and got
http://s5.postimg.org/3ne2n186b/112x2.jpg

It looks good, but when I tried those same flags with K8x24 I got
http://s5.postimg.org/mtr9q7oo3/8x24.jpg

Is that looking about right now?

newbie
Activity: 10
Merit: 0
Hi,

I have two 680s. Letting cudaMiner automatically find the best settings for each card individually, it tells me 16x14 is ideal, which it appears to be: running either card alone at that setting gives 200+ kh/s.

Trying to run both cards at the same time (with -H 2 -d 0,1 -l K16x14,K16x14), the rate drops to around 0.7 kh/s. What's going on?

The only config I've found that gives anything other than either <1 kh/s or a crash is 8x24, and that tops out at about 220 kh/s with both cards running.

Any ideas how it can be improved?

Thank you Smiley

SLI needs to be disabled, from everything I've experienced. Since our cards are similar, give this a shot: -H 1 -i 0 -C 1 -m 1 -l K112x2
You don't need to specify arguments for each card if you want them all to run under the same settings.
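As a rough sketch, a batch file with those flags might look like this (the pool URL and worker credentials below are placeholders, not from this thread):

```shell
:: two-card sketch -- one set of flags applies to both devices
cudaminer.exe -H 1 -i 0 -C 1 -m 1 -l K112x2 -d 0,1 -o stratum+tcp://your.pool.example:3333 -O yourworker:yourpassword
```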
newbie
Activity: 43
Merit: 0

Have you tried it with -H 0/1 and different combinations of -C 0/1/2, and the 32-bit vs. 64-bit exe? Also try disabling SLI. Since I don't have SLI/CrossFire I don't know much about those problems myself, but everywhere I've read about it people seem to say it's a bad idea when mining; no idea whether that also applies to cudaMiner.
newbie
Activity: 8
Merit: 0
Hi,

I have two 680s. Letting cudaMiner automatically find the best settings for each card individually, it tells me 16x14 is ideal, which it appears to be: running either card alone at that setting gives 200+ kh/s.

Trying to run both cards at the same time (with -H 2 -d 0,1 -l K16x14,K16x14), the rate drops to around 0.7 kh/s. What's going on?

The only config I've found that gives anything other than either <1 kh/s or a crash is 8x24, and that tops out at about 220 kh/s with both cards running.

Any ideas how it can be improved?

Thank you Smiley
hero member
Activity: 756
Merit: 502
any progress on the optimizations the cloud miner promised to send us?

wanted to ask about the same ...

Keep checking his blog for updates.
newbie
Activity: 53
Merit: 0
any progress on the optimizations the cloud miner promised to send us?

wanted to ask about the same ...
newbie
Activity: 3
Merit: 0
I finally got it to work. It was the stratum proxy as you said. Thx guys! Now it's time for optimization.
hero member
Activity: 526
Merit: 500
It's all about the Gold
Hi guys, I'm in a pinch atm. I keep getting this when I launch cudaminer.

HTTP request failed: Failed connect to 127.0.0.1:9332; No error. Json_rpc_call failed, retry after 15 seconds.

I already tried disabling firewall but it doesn't help. I think it has to do with my batch file but I can't figure out what exactly is wrong atm.




I had the same issue and finally got it fixed by following the steps on this website -- it sucks that you have to use a stratum proxy, but it fixed my issue and kept me from having to use the memory-hog cgminer.

http://www.lpshowboat0099.com/Blog/how-to-mining-ltc-with-cudaminer-on-a-stratum-server/

This should do it for you.
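For anyone going that route, the setup is roughly the following two steps (the binary name, flags, and ports here are illustrative and from memory -- check the proxy's own --help in case I'm misremembering its defaults):

```shell
:: 1) Run a stratum mining proxy that bridges getwork <-> stratum
::    (listening locally on the getwork port cudaminer will connect to)
mining_proxy.exe -o stratum.pool.example -p 3333 -gp 9332

:: 2) Point cudaminer at the local proxy over getwork
cudaminer.exe -o http://127.0.0.1:9332 -O yourworker:yourpassword
```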
newbie
Activity: 3
Merit: 0
Hi guys, I'm in a pinch atm. I keep getting this when I launch cudaminer.

HTTP request failed: Failed connect to 127.0.0.1:9332; No error. Json_rpc_call failed, retry after 15 seconds.

I already tried disabling firewall but it doesn't help. I think it has to do with my batch file but I can't figure out what exactly is wrong atm.

http://i42.tinypic.com/o6jq4m.png
full member
Activity: 182
Merit: 100
any progress on the optimizations the cloud miner promised to send us?
newbie
Activity: 10
Merit: 0
Misread your post at first. Thanks for the tip, I'll try that next time!
full member
Activity: 196
Merit: 100
I've also noticed that if I put in an argument that cudaminer doesn't like, I have to reboot my computer, since it drops my GPU clock speed to 750 MHz and seems to lock it there.
Try enabling and then disabling SLI in such cases. Not sure if it will help with newer nVidia cards, but it works just fine for my 480s when an error locks them at a lower speed, which I've found happens sometimes.
newbie
Activity: 10
Merit: 0
I've been lurking this thread ever since I started mining a week ago. Thanks for all the hard work everyone has done.

I've got 2 EVGA 670 FTW cards, overclocked, hitting 453 kh/s. Supporting hardware is an i7 3770K @ 4.6 GHz, an EVGA Z77 FTW motherboard, and 8 GB of RAM.

I've tried as many combinations of arguments as possible, and these 2 are getting me the same results:

-H 1 -i 0 -C 1 -m 1 -l K14x16

-H 1 -i 0 -C 1 -m 1 -l K112x2    <- way more consistent

About 220-230 kh/s on each card.

I've also noticed that if I put in an argument that cudaminer doesn't like, I have to reboot my computer, since it drops my GPU clock speed to 750 MHz and seems to lock it there.

I'm hoping for some more K20 optimizations in the future!

Overclock and hash rate screenshot

hero member
Activity: 756
Merit: 502
Hi Christian, me again. Don't know if I'm barking up the wrong tree here, but if I reduce the number of CUDA threads, i.e.:

'case 16: fermi_scrypt_core_kernelA<16><<< grid, threads, 0, stream >>>(d_idata); break;'

Say I set threads to 256 (<512): there's a massive increase in speed... but quite a few errors.

Why the errors?

You're basically asking: if I break the program, there are errors -- why are there errors?

Short answer: because you broke it.
Long answer: because with 256 threads you only compute half the requested results.
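A small CPU-side model makes the long answer concrete (this is not the actual kernel; `threads_expected` is a stand-in for the 512 threads the kernel template assumes):

```cpp
#include <cassert>
#include <vector>

// Toy model of a kernel launch: each thread of each block is supposed
// to produce exactly one result slot. If the launch supplies fewer
// threads per block than the kernel was written for, the remaining
// slots are simply never written -- and later fail CPU validation.
std::vector<bool> launch(int grid, int threads, int threads_expected) {
    std::vector<bool> computed(grid * threads_expected, false);
    for (int block = 0; block < grid; ++block)
        for (int t = 0; t < threads; ++t)   // only `threads` slots run
            computed[block * threads_expected + t] = true;
    return computed;
}
```

With threads = 256 where 512 are expected, exactly half the result slots stay uncomputed, which matches both observations: the kernel finishes faster because it does half the work, and the unwritten results "do not validate on CPU".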
newbie
Activity: 19
Merit: 0
Hi Christian, me again. Don't know if I'm barking up the wrong tree here, but if I reduce the number of CUDA threads, i.e.:

'case 16: fermi_scrypt_core_kernelA<16><<< grid, threads, 0, stream >>>(d_idata); break;'

Say I set threads to 256 (<512): there's a massive increase in speed... but quite a few errors.

Why the errors?

CUDA is still new to me....

http://s22.postimg.org/qdcboxcwh/Capture.jpg
newbie
Activity: 53
Merit: 0
Power settings are all OK (off, or at max where they should be). Hardware acceleration triggered by Chrome MAY be the cause, but how can I get Windows to trigger the GPU's full potential from a console (DOS) window?
My other question was already posted: why does it keep running fine even AFTER I close Chrome (once cudaMiner has started)? Does that trigger then remain in the "on" state, or what?...
My other rig with a GTX 760 never does this (same nVidia driver, same Windows settings)?!

I assume this is related to your Nvidia card's different performance levels. I have a similar issue: if a pool I'm mining for is overloaded, the hashrate decreases and I get frame drops in Chrome animations, because the GPU load is no longer high enough to switch to the next higher clock profile.
This can probably be fixed using the tool Nvidia Inspector. It comes with a utility named "Multi Display Power Saver", which is designed to keep the card from running at higher profiles than necessary when using multiple displays, and thereby save energy.
There you can also configure applications that always trigger the highest available clock profile for your GPU, so if you add your miner there, the GPU should always run at full clock while the miner is running.
newbie
Activity: 14
Merit: 0
Power settings are all OK (off, or at max where they should be). Hardware acceleration triggered by Chrome MAY be the cause, but how can I get Windows to trigger the GPU's full potential from a console (DOS) window?
My other question was already posted: why does it keep running fine even AFTER I close Chrome (once cudaMiner has started)? Does that trigger then remain in the "on" state, or what?...
My other rig with a GTX 760 never does this (same nVidia driver, same Windows settings)?!
member
Activity: 98
Merit: 10
I noticed the same thing just now. I rebooted the machine (Win7 x64 SP1), started cudaMiner, and the speed was much worse than it was supposed to be. GPU-Z also said GPU load was only at 78%. Immediately after I started Chrome, the GPU load rose to 99% and cudaMiner's speed returned to the expected level. Would be interesting to know what's causing this...

I'd double down on the power-saving-settings theory, since IIRC Chrome is hardware-accelerated by default on most machines.