Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1115. (Read 3426996 times)

sr. member
Activity: 280
Merit: 250
Just wanted to say thanks for your hard work on this project, expect a tip.

Here's what I got on a GTX 460 1GB, Windows 7 SP1 x64, NVIDIA driver 314.22.
Command line: -C 2 -i 0 -l 28x4
-C 1 gave me a little less, but -C 2 gave me roughly a 2% increase.
The first row is the card's stock settings.
core (MHz) | mem (MHz) | khash/s
715 | 1800 | 102.98
800 | 1800 | 112.26
850 | 1800 | 120.95
900 | 1800 | 128.65

You could almost say I get 1 khash/s per 10 MHz on the core.
Using one or two screens and turning Aero on or off has little to no effect on overall performance. It also works well on Feathercoin, if that's your thing. I did build it for Linux under Gentoo 64-bit, but it had some issues with lower speeds and made the NVIDIA driver go bananas on me.

Edit: for giggles I dropped my core to 405 MHz and got 60 khash/s, but upping the memory clock seems to do nothing worth mentioning.
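
For reference, a complete command line with those switches would look roughly like the sketch below; the pool URL, worker name, and password are placeholders (not the poster's settings), and the standard getwork-style -o/-u/-p options are assumed.
Code:
rem GTX 460 example: 2D texture cache (-C 2), non-interactive mode (-i 0), 28x4 launch configuration
cudaminer.exe -d 0 -C 2 -i 0 -l 28x4 -o http://pool.example.com:9332 -u worker1 -p password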
newbie
Activity: 9
Merit: 0
I've been running it on my laptop since this afternoon (which has a 1GB GeForce 520M).
It more than doubled my hash rate compared to CGMiner.

CGMiner hashrate: ~5KH/s
CUDAMiner hashrate: ~11KH/s

Thanks!
newbie
Activity: 28
Merit: 0
OK, I'm up to 267-270 kH/s, barely stable, on a GTX 570 overclocked to 900/1800 MHz (core/shader) with 2150 MHz memory and overvolted to 1.05 V. It was one of those über-expensive Calibre 570s that Sparkle made for a while, though:

http://www.seedboxlist.com/uploads/posts/2011-06/dlybc9lr101.jpeg

4/17 version of cudaminer with the following switches of relevance: -d 0 -i 1 -l 30x8 -C 1

Temperature: 71 C at fan speed of 52%.

Ambient room temperature: 73.9 F.

Also, I overclocked a 670 to the 164 kH/s level. The 600-series cards are much more difficult to control for overclocking purposes because the voltage seems to be tied to the frequency. Here is a pic of the Gigabyte GTX 670 Windforce 2X I am using:

http://www.techpowerup.com/gpudb/images/b832.jpg

Both cards run pretty quiet, unlike the AMD 5970 I have on order, which will be a basement-only solution.
hero member
Activity: 756
Merit: 502
A little patch to the 2013.04.17 version for compiling & running cudaminer on native 64-bit Linux.

http://mk.junkyard.one.pl/cudaminer-2013.04.17-64bit.patch.gz


Very cool! I will review the patch, and possibly make it part of the next release version.
newbie
Activity: 58
Merit: 0
Hi,

I just saw there is a new release, 04-17, but I had some trouble since it didn't compile at all.
I am on Sabayon 11 64-bit with a GTS 450, NVIDIA drivers 313.18, and NVIDIA CUDA Toolkit 5.0.35-r3.

I already had trouble with the previous release, 04-09, and libcurl issues, where I added -fpermissive thanks to gchil0, but I also had to change configure.sh to
./configure --build=i686-pc-linux-gnu "CFLAGS=-m64 -O3" "CXXFLAGS=-m64 -O3 -fpermissive" "LDFLAGS=-m64" --with-cuda=/usr/local/cuda
No idea if that made any sense, but at least I got a binary Cheesy

However, I tried the same with 04-17, but it doesn't compile anymore; it ends with:
Code:
82 errors detected in the compilation of "/tmp/tmpxft_0000419b_00000000-6_salsa_kernel.cpp1.ii".
make[2]: *** [salsa_kernel.o] Error 2
make[2]: Leaving directory `/home/miner/cudaminer-2013-04-17/cudaminer-src-2013.04.17'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/miner/cudaminer-2013-04-17/cudaminer-src-2013.04.17'
make: *** [all] Error 2

I applied the patch posted by Misiolap; after that I got a binary and it's running.

04-09
GPU #0: GeForce GTS 450, 421888 hashes, 47.56 khash/s

04-17
GPU #0: GeForce GTS 450, 1781760 hashes, 72.47 khash/s


yippi Smiley
newbie
Activity: 14
Merit: 0
A little patch to the 2013.04.17 version for compiling & running cudaminer on native 64-bit Linux.

http://mk.junkyard.one.pl/cudaminer-2013.04.17-64bit.patch.gz

To apply it to the cudaMiner source, download the file & issue the command:
Code:
zcat cudaminer-2013.04.17-64bit.patch.gz | patch -p1

I only tested the salsa kernel, and I'm not sure if the titan kernel will work.
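
Putting the pieces from this thread together, a full 64-bit Linux build might look roughly like the sketch below. It assumes you are inside the extracted cudaminer-src-2013.04.17 directory with the patch downloaded one level up, and it reuses the configure flags from the Sabayon post above (the -fpermissive workaround may not be needed on every distro).
Code:
# apply the 64-bit patch from inside the source directory
zcat ../cudaminer-2013.04.17-64bit.patch.gz | patch -p1
# configure with 64-bit flags and the CUDA toolkit path, then build
./configure --build=i686-pc-linux-gnu "CFLAGS=-m64 -O3" "CXXFLAGS=-m64 -O3 -fpermissive" "LDFLAGS=-m64" --with-cuda=/usr/local/cuda
make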
member
Activity: 69
Merit: 10
I thought maybe it was the hotel I was at... but nope, this is still happening every few seconds:

[2013-04-20 00:10:34] GPU #0: GeForce GT 650M, 2113280 hashes, 35.20 khash/s
[2013-04-20 00:11:34] GPU #0: GeForce GT 650M, 2113280 hashes, 35.11 khash/s
[2013-04-20 00:11:34] JSON-RPC call failed: {
   "message": "Unexpected error during authorization",
   "code": -1
}
[2013-04-20 00:11:34] json_rpc_call failed, retry after 15 seconds
[2013-04-20 00:11:37] LONGPOLL detected new block
[2013-04-20 00:11:50] GPU #0: GeForce GT 650M, 8320 hashes, 30.21 khash/s

It does that then resumes every so often.

I get this also, but only when using a stratum proxy.
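
For context, the typical stratum setup at the time was to run a local getwork-to-stratum proxy (e.g. slush's stratum mining proxy) and point cudaMiner at it. A rough sketch follows; the pool host/port and credentials are placeholders, the proxy's flag names are from memory and may differ by version, and the default getwork listen port is commonly 8332, so check your proxy's own help output.
Code:
# terminal 1: bridge local getwork requests to the stratum pool (flags may vary by proxy version)
./mining_proxy.py -o stratum.pool.example.com -p 3333
# terminal 2: cudaMiner speaks getwork to the local proxy
cudaminer -d 0 -i 0 -o http://localhost:8332 -u worker1 -p password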
member
Activity: 69
Merit: 10
Just wanted to bump the Google doc being used to track card performance with cudaMiner; it looks like a few people posted stats in the last few pages that haven't made it in here.

https://docs.google.com/spreadsheet/ccc?key=0AjMqJzI7_dCvdG9fZFN1Vjd0WkFOZmtlejltd0JXbmc&usp=sharing

Thank you to those who have contributed to it.

And another huge thank you to Christian for his continued work!

Good bump, and thanks for this doc; it's more useful than the GitHub Litecoin mining hardware comparison.

Also, thanks again to Christian! Donation sent; I don't have much, but what I have is in no small part thanks to your hard work.
sr. member
Activity: 247
Merit: 250


I thought maybe it was the hotel I was at... but nope, this is still happening every few seconds:

[2013-04-20 00:10:34] GPU #0: GeForce GT 650M, 2113280 hashes, 35.20 khash/s
[2013-04-20 00:11:34] GPU #0: GeForce GT 650M, 2113280 hashes, 35.11 khash/s
[2013-04-20 00:11:34] JSON-RPC call failed: {
   "message": "Unexpected error during authorization",
   "code": -1
}
[2013-04-20 00:11:34] json_rpc_call failed, retry after 15 seconds
[2013-04-20 00:11:37] LONGPOLL detected new block
[2013-04-20 00:11:50] GPU #0: GeForce GT 650M, 8320 hashes, 30.21 khash/s

It does that then resumes every so often.

Seems like a pool issue to me?

My other machine on the same pool isn't having any issues. This only happened with the newest software update.

full member
Activity: 126
Merit: 100


I thought maybe it was the hotel I was at... but nope, this is still happening every few seconds:

[2013-04-20 00:10:34] GPU #0: GeForce GT 650M, 2113280 hashes, 35.20 khash/s
[2013-04-20 00:11:34] GPU #0: GeForce GT 650M, 2113280 hashes, 35.11 khash/s
[2013-04-20 00:11:34] JSON-RPC call failed: {
   "message": "Unexpected error during authorization",
   "code": -1
}
[2013-04-20 00:11:34] json_rpc_call failed, retry after 15 seconds
[2013-04-20 00:11:37] LONGPOLL detected new block
[2013-04-20 00:11:50] GPU #0: GeForce GT 650M, 8320 hashes, 30.21 khash/s

It does that then resumes every so often.

Seems like a pool issue to me?
sr. member
Activity: 247
Merit: 250


I thought maybe it was the hotel I was at... but nope, this is still happening every few seconds:

[2013-04-20 00:10:34] GPU #0: GeForce GT 650M, 2113280 hashes, 35.20 khash/s
[2013-04-20 00:11:34] GPU #0: GeForce GT 650M, 2113280 hashes, 35.11 khash/s
[2013-04-20 00:11:34] JSON-RPC call failed: {
   "message": "Unexpected error during authorization",
   "code": -1
}
[2013-04-20 00:11:34] json_rpc_call failed, retry after 15 seconds
[2013-04-20 00:11:37] LONGPOLL detected new block
[2013-04-20 00:11:50] GPU #0: GeForce GT 650M, 8320 hashes, 30.21 khash/s

It does that then resumes every so often.
full member
Activity: 168
Merit: 100
GPU, Config, 2013-04-17 build:

cudaminer -d 0 -l 30x8 -C 2 -m 1 ...

[2013-04-19 23:10:55] GPU #0: GeForce GTX 570 with compute capability 2.0
[2013-04-19 23:10:55] GPU #0: interactive: 1, tex-cache: 2D, single-alloc: 1
[2013-04-19 23:10:55] GPU #0: using launch configuration  30x8
[2013-04-19 23:10:57] GPU #0: GeForce GTX 570, 384000 hashes, 216.08 khash/s

cudaminer -d 1 -l 30x8 -C 2 -m 1 ...

[2013-04-19 23:10:55] GPU #0: GeForce GTX 570 with compute capability 2.0
[2013-04-19 23:10:55] GPU #0: interactive: 1, tex-cache: 2D, single-alloc: 1
[2013-04-19 23:10:55] GPU #0: using launch configuration  30x8
[2013-04-19 23:10:57] GPU #0: GeForce GTX 570, 384000 hashes, 216.08 khash/s

During autotune, it reports as high as 229.5 khash/s, but in practice it hovers around 210-220 on each card. SLI disabled, Win7 64-bit, plenty of RAM.

It does weird things if I try to use both devices at once from a single batch file, so I have to launch them separately on different command lines (see the batch sketch below). It works great then.
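
A minimal Windows batch sketch of that per-device workaround, reusing the switches from this post; the executable name, pool URL, and worker credentials are placeholders, not my actual setup.
Code:
rem launch one cudaminer instance per GPU, each in its own window
start "" cudaminer.exe -d 0 -l 30x8 -C 2 -m 1 -o http://pool.example.com:9332 -u worker1 -p pass
start "" cudaminer.exe -d 1 -l 30x8 -C 2 -m 1 -o http://pool.example.com:9332 -u worker2 -p pass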

Edit: it does glitch on occasion, though:

[2013-04-19 23:16:51] GPU #0: GeForce GTX 570 with compute capability 2.0
[2013-04-19 23:16:51] GPU #0: interactive: 1, tex-cache: 2D, single-alloc: 1
[2013-04-19 23:16:51] GPU #0: using launch configuration  30x8
[2013-04-19 23:16:51] GPU #0: GeForce GTX 570, 7680 hashes, 54.46 khash/s
[2013-04-19 23:16:53] LONGPOLL detected new block
[2013-04-19 23:16:53] GPU #0: GeForce GTX 570, 1935360 hashes, 777.52 khash/s
[2013-04-19 23:16:58] LONGPOLL detected new block
[2013-04-19 23:16:58] GPU #0: GeForce GTX 570, 3356160 hashes, 689.39 khash/s
[2013-04-19 23:17:22] LONGPOLL detected new block
[2013-04-19 23:17:22] GPU #0: GeForce GTX 570, 16558080 hashes, 704.44 khash/s
[2013-04-19 23:17:29] LONGPOLL detected new block
[2013-04-19 23:17:29] GPU #0: GeForce GTX 570, 5337600 hashes, 703.48 khash/s
[2013-04-19 23:17:34] GPU #0: GeForce GTX 570 result does not validate on CPU!
[2013-04-19 23:17:37] GPU #0: GeForce GTX 570 result does not validate on CPU!
[2013-04-19 23:17:46] LONGPOLL detected new block
[2013-04-19 23:17:46] GPU #0: GeForce GTX 570, 11781120 hashes, 719.37 khash/s
[2013-04-19 23:17:50] LONGPOLL detected new block
[2013-04-19 23:17:50] GPU #0: GeForce GTX 570, 3432960 hashes, 793.52 khash/s
[2013-04-19 23:17:55] LONGPOLL detected new block
[2013-04-19 23:17:55] GPU #0: GeForce GTX 570, 3609600 hashes, 795.02 khash/s

I know it's not getting 700k, so...

newbie
Activity: 16
Merit: 0
My latest with a 680 and the new version

[2013-04-19 23:03:42] GPU #0:  196.91 khash/s with configuration  40x4
[2013-04-19 23:03:42] GPU #0: using launch configuration  40x4
[2013-04-19 23:03:42] GPU #0: GeForce GTX 680, 5120 hashes, 0.06 khash/s

using -C 1 and -i 0
newbie
Activity: 8
Merit: 0
Just wanted to bump the Google doc being used to track card performance with cudaMiner; it looks like a few people posted stats in the last few pages that haven't made it in here.

https://docs.google.com/spreadsheet/ccc?key=0AjMqJzI7_dCvdG9fZFN1Vjd0WkFOZmtlejltd0JXbmc&usp=sharing

Thank you to those who have contributed to it.

And another huge thank you to Christian for his continued work!
full member
Activity: 126
Merit: 100
Everyone glued to the television it seems, watching news about Boston, MIT and pressure cooker bombs.


I'm not, because of all the false information they've been throwing around. I figured I would just wait until it's all over and the official report is released.
+1
sr. member
Activity: 247
Merit: 250
Everyone glued to the television it seems, watching news about Boston, MIT and pressure cooker bombs.


I'm not, because of all the false information they've been throwing around. I figured I would just wait until it's all over and the official report is released.
hero member
Activity: 756
Merit: 502
Everyone glued to the television it seems, watching news about Boston, MIT and pressure cooker bombs.
newbie
Activity: 47
Merit: 0
I'm sure you've seen this, and I have no idea if there is any helpful information from BTC that can help LTC there, but I figured I'd post it because this forum is pretty big and it's easy to miss things.
newbie
Activity: 28
Merit: 0
you should use -i 0 for best performance

Thanks for the tip. I went from 248 to 256 kH/s with -i 1 changed to -i 0 on my 570.

My son's 670 went from 142 to 156 with -i 0 instead of -i 1.

Just when I thought there was nothing left to tweak in the parameter department. Hats off to you again,

Michael
hero member
Activity: 756
Merit: 502
Well, I did a little A-B testing with pretty much every known option.

I really think you should dig out an XP CD and give it a try yourself; you just have to install the NVIDIA drivers and the Visual C++ 2010 runtime, then launch it and see for yourself. You might just find a huge speed boost Smiley

Hmm, looks like old nVidia cards have serious issues with the WDDM driver model.