
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 996. (Read 3426921 times)

full member
Activity: 140
Merit: 100
Wanted to report my results using the latest git version of cudaminer against Vertcoin. With my GTX 670 I am averaging around 126 kH/s. Using a 64-bit version does not offer any improvement on my end.
full member
Activity: 182
Merit: 100
I have been solo mining YACoin all day with the latest client and -l 128x2 -b 1024 -L 4 -i 1 --algo=scrypt-jane at 4 kH/s, and I haven't found a single block. Is this bad luck, or is something wrong?
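For anyone wondering whether a dry spell at 4 kH/s is just variance: the usual rule of thumb for Bitcoin-style coins is that a block takes about difficulty × 2^32 hashes on average. A quick sanity check, where the difficulty value is purely a made-up placeholder (not YACoin's actual difficulty at the time):

```python
# Rough expected time to solo-mine a block: difficulty * 2^32 / hashrate.
# The 0.05 difficulty below is a hypothetical placeholder for illustration.
def expected_block_time_seconds(difficulty: float, hashrate_hs: float) -> float:
    """Mean time (seconds) between blocks found at a given hashrate."""
    return difficulty * 2**32 / hashrate_hs

# At 4 kH/s against a hypothetical difficulty of 0.05, the mean wait is
# already around 15 hours, and actual waits are exponentially distributed,
# so going a whole day without a block is entirely plausible.
hours = expected_block_time_seconds(0.05, 4000) / 3600
print(round(hours, 1))
```

So unless the client is rejecting shares, a blockless day at that hashrate doesn't by itself indicate a misconfiguration.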
member
Activity: 193
Merit: 10
I'm a bit lost here :D

What do you all mean by the latest release? Did you all get the source code and compile it yourselves?
Does anyone have a recent Windows build?

The latest readme.txt doesn't seem to mention scrypt-jane, yet people here appear to be testing it?


Thanks in advance

newbie
Activity: 25
Merit: 0
Hi,

Is there a backup pool option planned?

If not --> feature request :)

Thanks for your hard work.

fr00p
sr. member
Activity: 292
Merit: 250
Hi Christian, I recently cloned the source with git and compiled it on Ubuntu 13.10 x64. My GPU is a GTX 570. Everything compiled fine, but when mining Litecoin (scrypt) I got this error:

GPU #0: GeForce GTX 570 result does not validate on CPU (i=5456, s=0)!

Below is the execution config.
./cudaminer -a scrypt -o stratum+tcp://hk2.wemineltc.com:3333 -u -p -l F15x16 -C 1 -m 1 -H 2 -i 0

Is this a bug or a wrong config?
newbie
Activity: 23
Merit: 0
OK, I've had a little success, patorbeli :) I got it working with -a scrypt-jane -i 0 -H 0 -m 0 -C 0 -l F2x3 -o , I'm only getting 0.37/0.40 kH/s on my GTX 560 Ti and the computer becomes impossible to use, but it's better than nothing.
Still attempting other settings, but most crash the driver :s

I have 2x GTX 560 Ti SC 1GB and the highest I could get my cards to run was around 0.5 kH/s (YACoin), so you're in the ballpark. Try the config 7x1 and you might be able to get a bit of a boost.
newbie
Activity: 11
Merit: 0
I'm at -a scrypt-jane -i 1 -l X2x3 -o name.x:x -C 1 -b 4096, getting 0.53 kH/s, clearly improving. Adding -L 2 seems to stop the program from running.

Thanks all for your input; I'm going to keep testing the parameters :)
full member
Activity: 120
Merit: 100
Astrophotographer and Ham Radioist!
OK, I've had a little success, patorbeli :) I got it working with -a scrypt-jane -i 0 -H 0 -m 0 -C 0 -l F2x3 -o , I'm only getting 0.37/0.40 kH/s on my GTX 560 Ti and the computer becomes impossible to use, but it's better than nothing.
Still attempting other settings, but most crash the driver :s

I am not sure if the experimental kernel will work on Fermi, but try -i 1 and -l X2x3 with an additional -L parameter to modify your lookup gap. -H 2 and -C 1 work best for my 560 Ti (non-448-core edition) with scrypt for maximum performance, if 175 kH/s can be called maximum. Hope I helped you out a little bit! Benchmarking on any build and in any mode still makes my computer horribly unruly and laggy; older builds didn't show that. The benchmark also takes ages to complete.

Christian, are you in touch with your NVIDIA friend about CUDA 6? Will it give any performance improvements for our old Fermi cards?
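For anyone unsure what the -L lookup gap actually trades away: in scrypt-style algorithms, keeping only every G-th scratchpad entry divides memory use by G at the cost of recomputing up to G-1 steps per lookup. A rough sketch of the memory side, where the sizes are illustrative assumptions rather than measured cudaminer numbers:

```python
# Sketch of the lookup-gap (-L) memory trade-off in scrypt/scrypt-jane.
# Per-hash scratchpad: N entries of 128*r bytes; a lookup gap of G keeps
# only every G-th entry. The N values used below are illustrative.
def scratchpad_bytes(n: int, r: int = 1, lookup_gap: int = 1) -> int:
    """Approximate per-hash scratchpad size in bytes."""
    return n * 128 * r // lookup_gap

# Classic scrypt (Litecoin): N = 1024 -> 128 KiB per hash.
print(scratchpad_bytes(1024))
# A scrypt-jane style N of 8192, halved by -L 2 -> 512 KiB per hash.
print(scratchpad_bytes(8192, lookup_gap=2))
```

The saved memory lets you launch more concurrent hashes, which is why a larger -L can help on cards with little VRAM even though each hash does more work.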
newbie
Activity: 59
Merit: 0
member
Activity: 106
Merit: 10
This is my start configuration. Please note that I use the machine as a desktop too, hence -i 1.
cudaminer.exe -a scrypt-jane -i 1 -l X51x2 -o http://yac.coinmine.pl:8882 -O pato.2:password -C 1 -b 4096 -L 2

This is for a card with 2 GB of RAM. If yours only has 1 GB, change to -l X24x2, for example. Maybe you need to lower it further; I didn't test/calculate exactly.
Otherwise run it like:
cudaminer.exe -a scrypt-jane -i 1 -l X -o http://yac.coinmine.pl:8882 -O pato.2:password -C 1 -b 4096 -L 2 -D
to benchmark it and get the debug output with all the results.
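A back-of-the-envelope check of why X51x2 suits a 2 GB card while a 1 GB card needs something like X24x2. This assumes each warp in a BxW launch config runs 32 hashes, and that YACoin's N-factor at the time implied roughly a 1 MiB scratchpad per hash, halved here by -L 2; all of these numbers are assumptions for illustration, not measured values:

```python
# Hypothetical memory budget for a cudaminer BxW launch configuration,
# assuming 32 concurrent hashes per warp and a per-hash scratchpad size
# given as an argument (1 MiB assumed below), reduced by the lookup gap.
def launch_memory_mib(blocks: int, warps: int, scratchpad_mib: float,
                      lookup_gap: int) -> float:
    """Approximate device memory (MiB) used by a BxW launch config."""
    concurrent_hashes = blocks * warps * 32
    return concurrent_hashes * scratchpad_mib / lookup_gap

print(launch_memory_mib(51, 2, 1.0, 2))  # leaves headroom on a 2 GB card
print(launch_memory_mib(24, 2, 1.0, 2))  # leaves headroom on a 1 GB card
```

Under these assumptions X51x2 needs about 1.6 GiB and X24x2 about 0.75 GiB, which matches the suggested split between 2 GB and 1 GB cards.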
newbie
Activity: 11
Merit: 0
OK, I've had a little success, patorbeli :) I got it working with -a scrypt-jane -i 0 -H 0 -m 0 -C 0 -l F2x3 -o , I'm only getting 0.37/0.40 kH/s on my GTX 560 Ti and the computer becomes impossible to use, but it's better than nothing.
Still attempting other settings, but most crash the driver :s
ktf
newbie
Activity: 24
Merit: 0
I disabled SLI on mine; it was behaving weirdly with it. I get roughly ~2.6/2.7 kH/s per GTX 660 card now. I can't specify a decent -l value though; it errors out every time I do. It only works with -L 2.
full member
Activity: 182
Merit: 100
Disable SLI, you don't need it for mining.
Correction, it doesn't improve mining.
member
Activity: 106
Merit: 10
Disable SLI, you don't need it for mining.
newbie
Activity: 4
Merit: 0
Having some difficulty with 2 GTX 660s. I've tried with SLI turned on and off. Once the autotune starts, the driver crashes.

This was running fine with one GTX 660, and I just added a second card. Not sure what's causing the issue.

Here are the commands I'm using: cudaminer.exe -H 1 -d 0,1 -i 1 -o stratum+tcp://usa.wemineltc.com:80 -O username.1:pass

I've tried with the -d command and without. Same result either way. Driver crash.

Appreciate any help. Thanks.

Try...

-d 0,1 -i 1,0 -l K5x32,K5x32 -H 1


Just tried these instructions and no luck. cudaminer gave a Windows "stopped working" error. Didn't get a driver crash though... Any idea whether SLI should be turned on or off?
newbie
Activity: 4
Merit: 0
Having some difficulty with 2 GTX 660s. I've tried with SLI turned on and off. Once the autotune starts, the driver crashes.

This was running fine with one GTX 660, and I just added a second card. Not sure what's causing the issue.

Here are the commands I'm using: cudaminer.exe -H 1 -d 0,1 -i 1 -o stratum+tcp://usa.wemineltc.com:80 -O username.1:pass

I've tried with the -d command and without. Same result either way. Driver crash.

Appreciate any help. Thanks.

Try...

-d 0,1 -i 1,0 -l K5x32,K5x32 -H 1


Thanks for the help - any suggestion how SLI should be set on or off?

I tried a few different settings in the commands, and now I'm getting a memory error once the autotune starts. If I skip the autotune, the driver crashes. Strange that this didn't happen with one GTX 660.
member
Activity: 106
Merit: 10
FYI for the YACoin miners: there is a new wallet out, version 0.4.2. It fixes a possible PoS/PoW attack vector. Download:
http://yacointalk.com/index.php/topic,582.0.html
legendary
Activity: 1400
Merit: 1050
Hi,

I modified the cudaminer version that does not include the lookup_gap in scrypt (the fastest one so far) to run on Vertcoin, and the GTX 780 Ti is able to achieve (heavily OC'ed...) around 310 kH/s (yesterday's code managed only 265 with the same OC).

Something strange though: the autotuning reports a maximum of 1359 warps, yet the best config is obtained around 480 warps (15x32 / 120x4), while with normal scrypt it makes use of all the warps (90x30).
(Assuming I understood that part...)
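The warp counts mentioned above multiply out of the BxW launch config, which is why 15x32 and 120x4 behave identically; both launch 480 warps, well under the reported 1359-warp maximum, while the scrypt config 90x30 launches far more. A trivial check:

```python
# cudaminer launch configs are written BxW: B blocks times W warps per
# block. Total warps is just the product, so different BxW pairs with the
# same product launch the same amount of work.
def total_warps(blocks: int, warps_per_block: int) -> int:
    return blocks * warps_per_block

print(total_warps(15, 32))   # the two "best" Vertcoin configs match...
print(total_warps(120, 4))   # ...at the same total warp count
print(total_warps(90, 30))   # the scrypt config launches many more warps
```

That the optimum sits well below the maximum would be consistent with the larger per-hash scratchpad of adaptive-N scrypt becoming memory-bound before all warps can be used, though that explanation is speculation on my part.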
member
Activity: 106
Merit: 10
msvcp110d.dll, msvcr110d.dll,

these are DLLs from the debug runtime. You're not supposed to (and not allowed to) ship them to end users. Regular "Release" binaries would not require them anyway.

Christian


I actually took them from the redistributable (see previous post) and just put them into my directory. It might be because I don't compile it on the same machine I run it on. But I always switch to the Release configuration in VS2012 before I compile.
[edit]
Just tested running it without those files and it works :) I guess in one of my earliest compile attempts I didn't switch to Release and then added those files to be able to run it at all. I won't add them in the future anymore.
member
Activity: 106
Merit: 10
Running it here under Windows 7 x64.
Maybe you need to install the latest Visual C++ Redistributable for Visual Studio 2012 (http://www.microsoft.com/en-us/download/details.aspx?id=30679) to get it to work (install both versions, x86 and x64). I'm not sure, though, as I have them installed on my machine. Or otherwise wait until cbuchner releases a new beta version. Not sure what else you might be missing.