Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 958. (Read 3426921 times)

newbie
Activity: 33
Merit: 0
Got Auto-tune going.

Still a lot of "failed to validate on CPU" errors.

=/


The 18th build can get me 570 kH/s stable. This latest one, when it does validate, hits about 550.

GTX 670 SLI, and it says "launch config K does not validate".
newbie
Activity: 28
Merit: 0
Why does my config that worked with the last version not work with this version? I read the readme, but nothing stood out.

I just get the "failed to validate" error a lot.

It's weird; autotune keeps assuming you're using scrypt-jane. Try starting your miner with -l K (or whatever letter your config is) and nothing else. Let autotune try that and you'll probably see good results.
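
In other words, just the kernel letter plus your pool details, something like this (the pool URL and wallet below are placeholders, not a recommendation):

REM placeholder pool/wallet -- substitute your own details
cudaminer.exe -l K -o stratum+tcp://yourpool:3333 -O yourwallet:x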
newbie
Activity: 33
Merit: 0
Why does my config that worked with the last version not work with this version? I read the readme, but nothing stood out.

I just get the "failed to validate" error a lot.
newbie
Activity: 37
Merit: 0
Is there a difference between "Z" and "T"? I thought Z was just an alias for T.

Shouldn't be any different. I just used the same batch file that I used with the betas.
member
Activity: 69
Merit: 10
Had to make the .bat from scratch, but I did see an increase from 540 to 605 kH/s on one GTX 780 and from 475 to 505 on the second. Again, no idea why I get different kH/s from the same cards... The second starts at 605 and drops to 505, as do the GPU core clock and the voltage... (that was an issue in all cudaMiner versions, so no worries). Can anyone help? BTW, no overclock.

What is the launch .bat command you're using for your 780s?

-i 0 -C 2 -H 1 -l T12x20

let me know if it works
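
If it helps, my .bat is roughly along these lines (pool/wallet are placeholders; one instance per card, selected with -d):

REM rough sketch only -- pool/wallet are placeholders
start cudaminer.exe -d 0 -i 0 -C 2 -H 1 -l T12x20 -o stratum+tcp://yourpool:3333 -O yourwallet:x
start cudaminer.exe -d 1 -i 0 -C 2 -H 1 -l T12x20 -o stratum+tcp://yourpool:3333 -O yourwallet:x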
member
Activity: 70
Merit: 10
What's a good difficulty to start with for a GTX 780 on LTC/DOGE? I tried -l auto, but then it gave a string of cudaError 30 messages.

On the previous version, I was using:

cudaminer.exe -i 0 -C 2 -l T12x16 -o stratum+tcp://useast.middlecoin.com:3333 -O 1DVgwcCLEhFb2HRGA2PZD3rnNU6w7xNJc7:x    

While this still works on the new version, it's not using any of the new coding goodness.

I got the best results with the x86 version and:
cudaminer.exe -d 0 -H 2 -C 0 -m 1 -l Z12x24 -i 0

Is there a difference between "Z" and "T"? I thought Z was just an alias for T.
newbie
Activity: 37
Merit: 0
What's a good difficulty to start with for a GTX 780 on LTC/DOGE? I tried -l auto, but then it gave a string of cudaError 30 messages.

On the previous version, I was using:

cudaminer.exe -i 0 -C 2 -l T12x16 -o stratum+tcp://useast.middlecoin.com:3333 -O 1DVgwcCLEhFb2HRGA2PZD3rnNU6w7xNJc7:x    

While this still works on the new version, it's not using any of the new coding goodness.


http://s29.postimg.org/mcqa6zq7b/Untitled.png


I got the best results with the x86 version and:
cudaminer.exe -d 0 -H 2 -C 0 -m 1 -l Z12x24 -i 0
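
The full line, with the pool and wallet as placeholders, would be something like:

REM pool/wallet are placeholders -- use your own
cudaminer.exe -d 0 -H 2 -C 0 -m 1 -l Z12x24 -i 0 -o stratum+tcp://yourpool:3333 -O yourwallet:x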
newbie
Activity: 33
Merit: 0
This new one just doesn't work for me. =/
member
Activity: 70
Merit: 10
Had to make the .bat from scratch, but I did see an increase from 540 to 605 kH/s on one GTX 780 and from 475 to 505 on the second. Again, no idea why I get different kH/s from the same cards... The second starts at 605 and drops to 505, as do the GPU core clock and the voltage... (that was an issue in all cudaMiner versions, so no worries). Can anyone help? BTW, no overclock.

What is the launch .bat command you're using for your 780s?
member
Activity: 70
Merit: 10
What's a good difficulty to start with for a GTX 780 on LTC/DOGE? I tried -l auto, but then it gave a string of cudaError 30 messages.

On the previous version, I was using:

cudaminer.exe -i 0 -C 2 -l T12x16 -o stratum+tcp://useast.middlecoin.com:3333 -O 1DVgwcCLEhFb2HRGA2PZD3rnNU6w7xNJc7:x    

While this still works on the new version, it's not using any of the new coding goodness.


newbie
Activity: 33
Merit: 0
How does one set up this new version? I keep getting "does not validate" errors.
member
Activity: 69
Merit: 10
Had to make the .bat from scratch, but I did see an increase from 540 to 605 kH/s on one GTX 780 and from 475 to 505 on the second. Again, no idea why I get different kH/s from the same cards... The second starts at 605 and drops to 505, as do the GPU core clock and the voltage... (that was an issue in all cudaMiner versions, so no worries). Can anyone help? BTW, no overclock.
legendary
Activity: 1400
Merit: 1050
It was on Windows.

The problem is that my gettimeofday() does not have the best accuracy on Windows. This is why I chose to measure for 50ms minimum.

Autotune is affected by nVidia's boost feature, unfortunately. I wish an application could turn it off momentarily.
What I saw using 10 ms was that the card didn't have time to boost, and the power stayed at 75% during the whole autotune.
I was also able to recover some configs that came out totally wrong with the 50 ms setting.
For example, the config I use in my script, Z15x16, gives 135 kH/s with the time set to 50 ms,
but with the time set to 10 ms it is found around its true value, 700 kH/s.
Granted, this is not necessarily very precise and the values are overestimated, but they remain consistent with each other, which makes it easier for autotune to choose the best config.
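
In the meantime, the workaround is simply to skip autotune and pin the known-good config, something like this (pool and account are placeholders):

REM pinning -l avoids the autotune timing measurement entirely
REM pool/account are placeholders
cudaminer.exe -l Z15x16 -o stratum+tcp://yourpool:3333 -O account:pass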
newbie
Activity: 28
Merit: 0
I lost performance, but it may be due to new switches that I am not understanding yet. I am mining normal scrypt coins and usually see around 500 kH/s without messing with timings or anything, and with this build I'm having a hard time breaking 240 :(

Try specifying -l K (or whatever kernel letter matches your card).

I saw a 20 kH/s increase after doing so.

Autotune seems to want to pick the lower-case kernels even if your coin is scrypt, not scrypt-jane, and I was getting terrible performance.
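
If I'm reading the readme right, you can also force the algorithm explicitly so autotune can't guess wrong (pool/wallet are placeholders):

REM force plain scrypt and the K kernel family -- pool/wallet are placeholders
cudaminer.exe --algo=scrypt -l K -o stratum+tcp://yourpool:3333 -O yourwallet:x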
member
Activity: 98
Merit: 10
I lost performance, but it may be due to new switches that I am not understanding yet. I am mining normal scrypt coins and usually see around 500 kH/s without messing with timings or anything, and with this build I'm having a hard time breaking 240 :(
newbie
Activity: 28
Merit: 0
I posted a 2014-02-02 release. I did not have a lot of time for testing, so in case anything is
seriously broken, I might post an update (hotfix).

For those using the github version so far, please note the change in the kernel letters.

upper case T, K, F -> scrypt and low N-factor scrypt-jane coins
lower case t, k, f  -> high N-factor scrypt-jane coins

You can still use the previous kernel names X, Y, Z if you so please.

Autotune will automatically use the lower-case kernels for scrypt-jane with a high N-factor.
However, the threshold may not be chosen optimally, so please experiment and override
autotune to find which one is actually better.

Note that the upper-case letters T and K now select the high register count kernels
submitted by nVidia. This matters if you have used T or K kernel configs in your Yacoin mining
scripts so far -> switch to t and k.
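
For example (pool and account below are placeholders), a Yacoin line would change like this:

REM before -- old kernel letters (pool/account are placeholders)
cudaminer.exe --algo=scrypt-jane:YAC -l T -o http://yourpool:port -O account:pass
REM after -- new kernel letters
cudaminer.exe --algo=scrypt-jane:YAC -l t -o http://yourpool:port -O account:pass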

Mining through N-factor changes should not lead to crashes or validation errors now, but
the speed might not be optimal after the change. Best to re-tune the kernel afterwards.

Christian


Could you elaborate on this? When mining Dogecoin, it appears autotune wants to select the lower-case kernels as if it were a scrypt-jane coin. Is it expected that we now have to put -l K to override it for scrypt?
member
Activity: 70
Merit: 10
GTX Titan

The T kernel is slightly more overclock-friendly, but at the same clocks, no changes.

The t kernel is worse now; with the same config as before, I've lost around 0.6 kH/s @ N-factor=14. There are still quite a few issues with autotune on Windows, so this may just need a different version of the tuning, but it's missing a bit :(

The k kernel, interestingly enough: I can allocate k128x1 with -L 1 and it fully allocates nearly 5 GB, so I'm not sure what it is doing to let me past the 3 GB limit, but it can be done. That said, performance is VERY bad compared to even the reduced t kernel.
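
For anyone who wants to reproduce that allocation test, the line was along these lines (pool/account are placeholders, assuming a scrypt-jane coin like YAC):

REM allocation experiment sketch -- pool/account are placeholders
cudaminer.exe --algo=scrypt-jane:YAC -l k128x1 -L 1 -o http://yourpool:port -O account:pass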
full member
Activity: 812
Merit: 102
Interesting. I'll have to try out this latest "official" version...

Cbuchner1, your work on this is astounding. You are a real developer! Great job!
full member
Activity: 182
Merit: 100
I did not touch that part of the code today. Only the kernel selection was modified. And the Fermi kernel got lookup gap support.
Seems to happen if I do not specify any -l parameter. If I specify "-l t", it does take my "-L 4", but it prints "[2014-02-03 01:22:21] GPU #0: Given launch config 't' does not validate."

Edit:
Happens with: -L 4 -m 1 -i 1 --algo=scrypt-jane:YAC -o http://url:port -O acc:pass
hero member
Activity: 756
Merit: 502
I noticed that autotune does not take -L into consideration anymore; am I correct?

I did not touch that part of the code today. Only the kernel selection was modified. And the Fermi kernel got lookup gap support.