Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 959. (Read 3426921 times)

full member
Activity: 182
Merit: 100
I noticed that the autotune does not take -L into consideration anymore; am I correct?
hero member
Activity: 756
Merit: 502
It was on Windows.

The problem is that my gettimeofday() does not have the best accuracy on Windows. This is why I chose to measure for 50ms minimum.
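
For the curious, here is a minimal sketch of a higher-resolution millisecond timer for Windows based on the performance counter (current_time_ms is my own name for it; this is not the actual cudaminer code):

#ifdef _WIN32
#include <windows.h>
/* Millisecond timestamp from the high-resolution performance counter,
   which is much finer grained than the ~15 ms ticks behind the usual
   gettimeofday() emulation on Windows. */
static double current_time_ms(void)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);   /* ticks per second */
    QueryPerformanceCounter(&now);      /* current tick count */
    return 1000.0 * (double)now.QuadPart / (double)freq.QuadPart;
}
#else
#include <sys/time.h>
static double current_time_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}
#endif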

Autotune is affected by nVidia's boost feature, unfortunately. I wish an application could turn it off momentarily.
hero member
Activity: 756
Merit: 502
I posted a 2014-02-02 release. I did not have a lot of time for testing, so in case anything is seriously broken I might post an update (hotfix).

For those using the github version so far, please note the change in the kernel letters.

upper case T, K, F -> scrypt and low N-factor scrypt-jane coins
lower case t, k, f -> high N-factor scrypt-jane coins

You can still use the previous kernel names X, Y, Z if you so please.

Autotune will use the lower case kernels for scrypt-jane with a high N-factor automatically. However, the threshold may not be chosen optimally, so please experiment and override the autotune to find which one is actually better.

Note that the upper case letters T and K now select the high register count kernels
submitted by nVidia. This matters if you have used T or K kernel configs in your Yacoin mining
scripts so far -> switch to t and k (see the example below).
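
For instance, a Yacoin batch file that previously used an upper case K launch config only needs the kernel letter changed (the pool URL, worker credentials and the 7x32 geometry below are placeholders):

REM old: upper case K now selects the high register count nVidia kernel
cudaminer.exe -o stratum+tcp://pool.example.com:3333 -u worker -p x -l K7x32
REM new: lower case k selects the high N-factor scrypt-jane kernel
cudaminer.exe -o stratum+tcp://pool.example.com:3333 -u worker -p x -l k7x32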

Mining through N-factor changes should not lead to crashes or validation errors now, but
the speed might not be optimal after the change. Best to re-tune the kernel afterwards.

Christian
hero member
Activity: 756
Merit: 502
15 million here Wink

Are you solo mining or on a pool, to get this result?

Pool. No luck soloing this coin at all.
hero member
Activity: 676
Merit: 500

Edit: I'm up there in the VIP section with cbuchner1 now:   Cool

Congratulations!

15 million here Wink

Are you solo mining or on a pool, to get this result?
member
Activity: 98
Merit: 10
With build 114, using the 12/18 DLLs, I get lower performance, to the tune of a couple hundred kHash/s.

Waiting for release Wink
hero member
Activity: 756
Merit: 502
Hi, sorry to butt in, but I'm new to mining! I've got a GTX 650 that I'm trying to mine with. I think it's working on GUIMiner, but I wanted to try cudaminer. My problem is that in the cmd interface I type in cudaminer.exe -0 http://127.0.0.1:8332 -u madmick.1 -p x but I get a message saying something about the worker name and code -1! Can you help?


The IP address 127.0.0.1 is usable only for solo mining, unless you run your own pool and web server on your home PC...

The first option should be passed with a lower case o:

-o http://127.0.0.1:8332

You would have to set rpcport=8332, rpcuser=madmick.1, rpcpassword=x and server=1 in the wallet's .conf file to use these settings for solo mining. This password is too weak and unsafe, though.
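
For example, a minimal wallet .conf for that setup might look like this (replace the password with something long and random):

server=1
rpcport=8332
rpcuser=madmick.1
rpcpassword=replace_this_with_a_long_random_password

and then point the miner at it with -o http://127.0.0.1:8332 -u madmick.1 -p <that password>.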
newbie
Activity: 1
Merit: 0
Hi, sorry to butt in, but I'm new to mining! I've got a GTX 650 that I'm trying to mine with. I think it's working on GUIMiner, but I wanted to try cudaminer. My problem is that in the cmd interface I type in cudaminer.exe -0 http://127.0.0.1:8332 -u madmick.1 -p x but I get a message saying something about the worker name and code -1! Can you help?
newbie
Activity: 27
Merit: 0
Got it up to 450 now Smiley

On the December release I get 630 on my EVGA Superclocked 780 Tis.
legendary
Activity: 1400
Merit: 1050
hero member
Activity: 756
Merit: 502
I was playing a bit with the autotune code and I think I found what causes all the strange results we might get in scrypt.
It seems to be due to the time over which the average kHash/s is calculated (50 ms).
At first I increased it to 500 ms and saw that the power was gradually increasing between 75% and 120% (the limit I chose for overclocking) within each line, and at the next line of the config table it started again at 75% and went back up to 120% for 32.
This made it just impossible to compare the numbers against each other.

So then I decreased that time to 10 ms, and now everything stays at more or less the same power level (there are still a few spikes). However, I am now able to get meaningful numbers (or at least they can be compared against each other) and the autotune seems to be more reliable (and 5x faster).

Was this on Windows or on Linux?
legendary
Activity: 1400
Merit: 1050
I was playing a bit with the autotune code and I think I found what causes all the strange results we might get in scrypt.
It seems to be due to the time over which the average kHash/s is calculated (50 ms).
At first I increased it to 500 ms and saw that the power was gradually increasing between 75% and 120% (the limit I chose for overclocking) within each line, and at the next line of the config table it started again at 75% and went back up to 120% for 32.
This made it just impossible to compare the numbers against each other.

So then I decreased that time to 10 ms, and now everything stays at more or less the same power level (there are still a few spikes). However, I am now able to get meaningful numbers (or at least they can be compared against each other) and the autotune seems to be more reliable (and 5x faster).
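
A sketch of what the change amounts to (run_kernel_once, now_ms and min_ms are illustrative stand-ins, not the actual autotune code):

/* Benchmark one launch configuration: run the kernel repeatedly until at
   least min_ms of wall clock time has passed, then report the average rate
   over that window.  A short window (e.g. 10 ms) completes before GPU Boost
   ramps the clocks, so successive configs are measured at comparable
   clock/power levels; a long window (e.g. 500 ms) lets the clocks climb
   during the measurement and skews the comparison. */
static double bench_config(int (*run_kernel_once)(void), /* one launch + sync, returns hashes done */
                           double (*now_ms)(void),       /* millisecond wall clock */
                           double min_ms)
{
    int    hashes  = 0;
    double start   = now_ms();
    double elapsed = 0.0;

    do {
        hashes += run_kernel_once();
        elapsed = now_ms() - start;
    } while (elapsed < min_ms);

    return hashes / elapsed;   /* hashes per millisecond == kHash/s */
}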
member
Activity: 69
Merit: 10
Any idea why the GPU clock and the voltage drop when I use two cards at the same time, while when I use them one by one they stay the same?
sr. member
Activity: 350
Merit: 250
hero member
Activity: 756
Merit: 502

Edit: I'm up there in the VIP section with cbuchner1 now:   Cool

Congratulations!

15 million here Wink
full member
Activity: 167
Merit: 100
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
@tacojohn: Known issue, talked about a few pages back. Blame CUDA 5.5 and just run 1 instance per card.

Is 115 kHash/s at N-factor 9 on a 780 Ti OK?

If you don't mind being half as fast as a 660  Cheesy


Edit: I'm up there in the VIP section with cbuchner1 now:   Cool
full member
Activity: 167
Merit: 100
Is 115 kHash/s at N-factor 9 on a 780 Ti OK?
newbie
Activity: 33
Merit: 0
Can anyone help me?

Windows 7 64-bit.

GTX 670 SLI.

The x86 binary works fine, but the x64 binary crashes my drivers.

I get 570 kH/s with this configuration, but... the page shows someone with the same 670s as me (FTW editions) getting 648, even though I am using the same settings and a higher overclock.




#7 on the list

7   2014-01-03   GeForce GTX 670 x2 (SLI)   EVGA FTW   2x2048   Windows 7 Ultimate   2013-12-18   32 bit (x86)   648.00      331.93   +100   +320   140      99   100   70   340   1.36      K7x32   -i 0   -C 1   -m 1   -H 1   Ness   SLI, each card OC'ed individually
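
For comparison, the flags recorded in that row would roughly correspond to a command line like this (the pool URL and worker credentials are placeholders):

cudaminer.exe -l K7x32 -i 0 -C 1 -m 1 -H 1 -o stratum+tcp://pool.example.com:3333 -u worker -p x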
newbie
Activity: 1
Merit: 0
I have 4 GPUs plugged into one system. When I run one process per GPU, I get very stable results. When I let cudaminer spawn threads to simultaneously drive 2 or more GPUs, I get sporadic CPU validation errors and about a 50% rejection rate from the pool.

Anybody else seeing this generally?

I'm running the head revision on 64-bit Linux.
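
(For reference, the one-process-per-GPU workaround can be scripted with the -d flag; the pool URL and worker names below are placeholders.)

./cudaminer -d 0 -o stratum+tcp://pool.example.com:3333 -u worker.1 -p x &
./cudaminer -d 1 -o stratum+tcp://pool.example.com:3333 -u worker.2 -p x &
./cudaminer -d 2 -o stratum+tcp://pool.example.com:3333 -u worker.3 -p x &
./cudaminer -d 3 -o stratum+tcp://pool.example.com:3333 -u worker.4 -p x &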