
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1002. (Read 3426921 times)

newbie
Activity: 13
Merit: 0
Finally got the newest build compiled and running. Thanks to everybody who helped me with it.


+1 for autotune crashing.


But now I could finally play around with my settings:
Best hashrates come from any common denominator ("gemeinsamer Nenner" :D) of 32.
But I don't see any change when using 8x4 -> 16x2 -> 32x1 -> 4x8 etc... is it supposed to be like that, being "exactly" (+-0.01) the same for all of them?
Also, any change in the direction of a total of 33 or 31 gives me a drop of about 20%.


And since i just saw this:
 
I managed to compile mine on x86 and ran it on my two GTX660 using :

cudaminer.exe  --algo=scrypt-jane -d 0  -H 0 -o stratum+tcp://yac.coinmine.pl:9088 -O user:pwd
cudaminer.exe  --algo=scrypt-jane -d 1  -H 0 -o stratum+tcp://yac.coinmine.pl:9088 -O user:pwd

but I am getting only :

[2014-01-19 17:07:10] GPU #1:    2.14 khash/s with configuration K19x3
[2014-01-19 17:07:10] GPU #1: using launch configuration K19x3
[2014-01-19 17:07:11] GPU #1: GeForce GTX 660, 0.80 khash/s
[2014-01-19 17:07:22] GPU #1: GeForce GTX 660, 1.53 khash/s
[2014-01-19 17:07:22] accepted: 1/1 (100.00%), 1.53 khash/s (yay!!!)
[2014-01-19 17:07:58] Stratum detected new block
[2014-01-19 17:07:59] GPU #1: GeForce GTX 660, 1.59 khash/s

and

[2014-01-19 17:08:14] GPU #0: GeForce GTX 660, 1.74 khash/s
[2014-01-19 17:08:14] accepted: 6/6 (100.00%), 1.74 khash/s (yay!!!)
[2014-01-19 17:08:18] GPU #0: GeForce GTX 660, 1.71 khash/s
[2014-01-19 17:08:18] accepted: 7/7 (100.00%), 1.71 khash/s (yay!!!)
[2014-01-19 17:08:22] GPU #0: GeForce GTX 660, 1.69 khash/s
[2014-01-19 17:08:22] accepted: 8/8 (100.00%), 1.69 khash/s (yay!!!)
[2014-01-19 17:08:28] GPU #0: GeForce GTX 660, 1.72 khash/s
[2014-01-19 17:08:28] accepted: 9/9 (100.00%), 1.72 khash/s (yay!!!)
[2014-01-19 17:08:35] GPU #0: GeForce GTX 660, 1.75 khash/s
[2014-01-19 17:08:35] accepted: 10/10 (100.00%), 1.75 khash/s (yay!!!)

Any recommended -l setting for GTX 660 ?

Thank you :)
Is it just a lucky streak, getting a share accepted every few seconds? For me it sometimes takes several minutes. I do mine on another pool though (yac.m-s-t.org). Or is that due to the pool settings? I never really read up on all that stratum/TCP/PPS/PPLNS stuff...
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
I managed to compile mine on x86 and ran it on my two GTX660 using :

Scrypt-jane is suffering a lot on x86.
hero member
Activity: 756
Merit: 502
One card is running with K59x1, the other one with K29x2, both set by autotune. 1.7 / 1.6 seems a bit low though.

Pass -L 2 or -L 3 and autotune again. It will take longer, though. Try it on the first card only, and when you find a good setting, use the same setting for the other card too.

Also pass -i 0 and -b 8192 for a "production run" with the found settings. The default is -i 1 and -b 1024, which is very interactive when working with the GPU driving the display, but loses about a third of the performance.
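Putting those flags together, a production invocation might look like the sketch below. The pool URL and credentials are the ones from the quoted post; the -l value (K64x2 here) is purely a placeholder for whatever your own autotune run settles on.

```shell
# Hypothetical "production run" for one GPU: non-interactive (-i 0),
# large batch (-b 8192), lookup-gap 2 (-L 2), and a fixed launch
# config (-l K64x2, placeholder) found by a previous autotune run.
cudaminer.exe --algo=scrypt-jane -d 0 -i 0 -b 8192 -L 2 -l K64x2 -H 0 \
  -o stratum+tcp://yac.coinmine.pl:9088 -O user:pwd
```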
ktf
newbie
Activity: 24
Merit: 0
One card is running with K59x1, the other one with K29x2, both set by autotune. 1.7 / 1.6 seems a bit low though.
full member
Activity: 182
Merit: 100
I managed to compile mine on x86 and ran it on my two GTX660 using :

cudaminer.exe  --algo=scrypt-jane -d 0  -H 0 -o stratum+tcp://yac.coinmine.pl:9088 -O user:pwd
cudaminer.exe  --algo=scrypt-jane -d 1  -H 0 -o stratum+tcp://yac.coinmine.pl:9088 -O user:pwd

but I am getting only :

[2014-01-19 17:07:10] GPU #1:    2.14 khash/s with configuration K19x3
[2014-01-19 17:07:10] GPU #1: using launch configuration K19x3
[2014-01-19 17:07:11] GPU #1: GeForce GTX 660, 0.80 khash/s
[2014-01-19 17:07:22] GPU #1: GeForce GTX 660, 1.53 khash/s
[2014-01-19 17:07:22] accepted: 1/1 (100.00%), 1.53 khash/s (yay!!!)
[2014-01-19 17:07:58] Stratum detected new block
[2014-01-19 17:07:59] GPU #1: GeForce GTX 660, 1.59 khash/s

and

[2014-01-19 17:08:14] GPU #0: GeForce GTX 660, 1.74 khash/s
[2014-01-19 17:08:14] accepted: 6/6 (100.00%), 1.74 khash/s (yay!!!)
[2014-01-19 17:08:18] GPU #0: GeForce GTX 660, 1.71 khash/s
[2014-01-19 17:08:18] accepted: 7/7 (100.00%), 1.71 khash/s (yay!!!)
[2014-01-19 17:08:22] GPU #0: GeForce GTX 660, 1.69 khash/s
[2014-01-19 17:08:22] accepted: 8/8 (100.00%), 1.69 khash/s (yay!!!)
[2014-01-19 17:08:28] GPU #0: GeForce GTX 660, 1.72 khash/s
[2014-01-19 17:08:28] accepted: 9/9 (100.00%), 1.72 khash/s (yay!!!)
[2014-01-19 17:08:35] GPU #0: GeForce GTX 660, 1.75 khash/s
[2014-01-19 17:08:35] accepted: 10/10 (100.00%), 1.75 khash/s (yay!!!)

Any recommended -l setting for GTX 660 ?

Thank you :)
There are both -l and -L, and they do different things. As for -l, use whatever autotune gets the most out of.
ktf
newbie
Activity: 24
Merit: 0
I managed to compile mine on x86 and ran it on my two GTX660 using :

cudaminer.exe  --algo=scrypt-jane -d 0  -H 0 -o stratum+tcp://yac.coinmine.pl:9088 -O user:pwd
cudaminer.exe  --algo=scrypt-jane -d 1  -H 0 -o stratum+tcp://yac.coinmine.pl:9088 -O user:pwd

but I am getting only :

[2014-01-19 17:07:10] GPU #1:    2.14 khash/s with configuration K19x3
[2014-01-19 17:07:10] GPU #1: using launch configuration K19x3
[2014-01-19 17:07:11] GPU #1: GeForce GTX 660, 0.80 khash/s
[2014-01-19 17:07:22] GPU #1: GeForce GTX 660, 1.53 khash/s
[2014-01-19 17:07:22] accepted: 1/1 (100.00%), 1.53 khash/s (yay!!!)
[2014-01-19 17:07:58] Stratum detected new block
[2014-01-19 17:07:59] GPU #1: GeForce GTX 660, 1.59 khash/s

and

[2014-01-19 17:08:14] GPU #0: GeForce GTX 660, 1.74 khash/s
[2014-01-19 17:08:14] accepted: 6/6 (100.00%), 1.74 khash/s (yay!!!)
[2014-01-19 17:08:18] GPU #0: GeForce GTX 660, 1.71 khash/s
[2014-01-19 17:08:18] accepted: 7/7 (100.00%), 1.71 khash/s (yay!!!)
[2014-01-19 17:08:22] GPU #0: GeForce GTX 660, 1.69 khash/s
[2014-01-19 17:08:22] accepted: 8/8 (100.00%), 1.69 khash/s (yay!!!)
[2014-01-19 17:08:28] GPU #0: GeForce GTX 660, 1.72 khash/s
[2014-01-19 17:08:28] accepted: 9/9 (100.00%), 1.72 khash/s (yay!!!)
[2014-01-19 17:08:35] GPU #0: GeForce GTX 660, 1.75 khash/s
[2014-01-19 17:08:35] accepted: 10/10 (100.00%), 1.75 khash/s (yay!!!)

Any recommended -l setting for GTX 660 ?

Thank you :)
full member
Activity: 182
Merit: 100

I also see the Windows version crashing during autotune now. I will investigate.

Christian


Yes, it crashed during autotune with -L 3 on my Asus GTX 780 OC.
Autotune with -L 2 gave me T64x2 with 3.14-3.18 khash/s.
hero member
Activity: 756
Merit: 502
I also see the Windows version crashing during autotune now. I will investigate.

And I find it quite crazy how long the autotune runs take for L = 4, 5, 6...

I am thinking of adding a new syntax where you can tell autotune which range of blocks and warps to scan, like -l T15-30x16-32 to scan just the square between 15 and 30 blocks and 16 to 32 warps.

Or maybe a possibility to scan all launch configs that use anywhere between 500 and 600 warps in total, with the limits being user-configurable.
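This range syntax was only a proposal at the time; just to illustrate the idea, a toy parser (the function name and exact grammar are made up for this sketch) could expand such a spec into the set of launch configs autotune would scan:

```python
import itertools
import re

def parse_autotune_range(spec):
    """Parse a hypothetical range spec like 'T15-30x16-32' into a kernel
    letter and the list of (blocks, warps) configs to scan. Single values
    without a dash, e.g. 'K64x2', are also accepted."""
    m = re.fullmatch(r'([KTXS]?)(\d+)(?:-(\d+))?x(\d+)(?:-(\d+))?', spec)
    if not m:
        raise ValueError('bad launch config spec: %s' % spec)
    kernel, b_lo, b_hi, w_lo, w_hi = m.groups()
    blocks = range(int(b_lo), int(b_hi or b_lo) + 1)
    warps = range(int(w_lo), int(w_hi or w_lo) + 1)
    return kernel, list(itertools.product(blocks, warps))

kernel, configs = parse_autotune_range('T15-30x16-32')
# 16 block values x 17 warp values = 272 candidate configs to scan
assert len(configs) == 16 * 17
```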

Christian

hero member
Activity: 756
Merit: 502
How far can we go with the L value? I am autotuning right now at L=6 and it seems to keep increasing the hashrate (5.17 khash/s on a GTX 780 Ti, no errors so far).

The limit is actually N, making the memory requirement 128 bytes per hash. But you don't want to go there ;-)

Christian
ktf
newbie
Activity: 24
Merit: 0
That version crashes for me. I compiled my own as well on x86, and after 30 seconds or so it crashes too. No error messages when building, loads of warnings though.
newbie
Activity: 9
Merit: 0
After 3 days I finally managed to compile for Windows 7 :D. The Ubuntu compile is much easier.

https://mega.co.nz/#!70YlWBrQ!AY870Uc4d93Avr58K-J-10AlaeJkJj27gT-ZW0rSZQ0  absolutely no guarantee :)
full member
Activity: 125
Merit: 100
Try the lookup-gap now on Compute 3.0 devices (Kepler kernel). The Titan kernel will follow soon... always autotune for different gap values, as the configurations will differ wildly.

NOTE: a gap value of 1 actually means no gap. ;-) A gap value of 2 specifies that only every 2nd value is stored in the scratchpad (with the intermediate values being recomputed on the fly), cutting memory use in half. Values of up to 4 may make sense IMHO. Start with 2 and work your way up...

The more SMX units your card has and the less memory there is, the more benefit you may see. Power consumption may also rise... Users of 1GB and 2GB cards may finally see some better hash rates now.
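As a back-of-the-envelope check on the memory math (plain Python, nothing to do with the actual CUDA kernels; the N = 4096 below is just an example value, not YACoin's actual N at the time):

```python
def scratchpad_bytes(n, gap=1):
    """Per-hash scratchpad size for scrypt with N entries of 128 bytes
    each, when only every gap-th entry is stored (gap 1 = no gap)."""
    return 128 * n // gap

n = 4096                                  # example N, purely illustrative
assert scratchpad_bytes(n, gap=2) == scratchpad_bytes(n) // 2  # half the memory
assert scratchpad_bytes(n, gap=4) == scratchpad_bytes(n) // 4  # a quarter
assert scratchpad_bytes(n, gap=n) == 128  # extreme case gap = N: 128 bytes/hash
```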


Lookup-gap results on my GeForce GTX 660 Ti

Previously I was getting 2.5 kH/s (+50 core)

-L 2 = 3.6 kH/s
-L 3 = 3.9 kH/s

My card runs hotter, like it does on scrypt.

I may have some time in the near future to actually contribute; time is the only thing holding me back. I love to optimize things and make them more efficient. I've done CUDA before to speed up some imagery compression, and it was a lot of fun showing everyone that their process that took hours was down to just a few minutes.

Thanks
full member
Activity: 812
Merit: 102
Really? Dude, drop the entitlement...

Excuse me, but you need to drop something yourself. That being the assumption that you know my motives or what type of person I am. You don't, so knock it off.

It was a sort of tongue-in-cheek comment, but I can see how the humor doesn't come across very well without knowing the intent of the post. If it were intended as you framed it, why would I follow up the comment with a polite request for updated binaries? Anyway, I'm getting the prerequisites together as we speak so I can compile it myself. I was not aware that a trial of VS2010 could be used to compile, but now I know.

Thanks for the snap judgment, though. Makes my day when some snooty know-it-all gets something totally wrong. Next time drop the egoistic notion that you've got everything figured out, and you'll be less likely to make the same mistake again.

Thanks cbuchner1 for your continued effort.
legendary
Activity: 1400
Merit: 1050
How far can we go with the L value? I am autotuning right now at L=6 and it seems to keep increasing the hashrate (5.17 khash/s on a GTX 780 Ti, no errors so far).
full member
Activity: 120
Merit: 100
Astrophotographer and Ham Radioist!
And don't forget to input non-symbolic links in the compiling properties. It threw me a not-found error until I changed "/../.../OpenSSL" to the correct drive letter and folder. And the same needs to be done for all prerequisites. The resulting file can be copied into a new folder along with the current .dll files. My compiler didn't provide them at all, and I got an error launching cudaminer, but after the correct .dlls were in place it all went rather smoothly.

Still pondering what the best kernel config would be for my GTX 560 Ti. Any help is appreciated. I can't get it faster than 170 khash/s on stock clocks; anything higher and it crashes, even with -25°C outside and my radiators turned off, so it shouldn't be a heating issue ><
full member
Activity: 182
Merit: 100

You lost me at "Select which version you want to compile, and compile it." What "version" do I select, and where? And by compile one usually means hitting that green debug arrow, right? =)
Actually, I think I got lost at the extract part already: what should the final main folder look like? .\ with 6 folders (CudaMiner-master, curl-7.29.0, ..., pthreads)? And if you open the miner-master folder, you end up in the folder with the code and compat?
A little more dummy-style explanation would be appreciated =)
Right next to the green arrow you see "Debug"; change that to "Release". You can mess with the Win32 part, but I never got x64 working.
Your folder structure is correct.
Compiling = hitting "Build -> Build Solution" or pressing F7.
Your .exe will appear in CudaMiner-master\Release\cudaminer.exe
Use the .dlls from the 18-12-13 release :)
newbie
Activity: 13
Merit: 0
I will install CUDA 5.5 when I get home and test again, but running them individually didn't work for me so far.

Any tutorial on how to compile the latest source code from GitHub? I haven't done any C programming in ages; I am very rusty.

Quite easy:
Get Visual Studio Pro 2010 (the trial if you must)
Get the CUDA SDK 5.5
Get the source code from GitHub (google cudaminer)
Extract the source code into its own folder
Get the libraries from the first post by Christian (almost 50 MB)
Extract the libraries to the folder that the source code folder is in, so not into the source code folder but one level up.
Open the .sln file in the source code with Visual Studio.
Select which version you want to compile, and compile it.
If it errors:
Go to your Microsoft Visual Studio 10.0\VC\bin folder and rename or relocate cvtres.exe, and do the same with the other cvtres.exe under Microsoft Visual Studio 10.0\VC\bin\amd64; then run Visual Studio 2010 and try compiling again.

If it doesn't work, just change everything back.
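For reference, the cvtres.exe workaround could be done from a command prompt roughly like this (the paths below are the default VS2010 install locations and may differ on your machine):

```shell
:: Move both copies of cvtres.exe out of the way (default VS2010 paths;
:: adjust the drive letter/folder if your install differs)
ren "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\cvtres.exe" cvtres.exe.bak
ren "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\amd64\cvtres.exe" cvtres.exe.bak

:: To undo, rename the .exe.bak files in both folders back to cvtres.exe
```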

You lost me at "Select which version you want to compile, and compile it." What "version" do I select, and where? And by compile one usually means hitting that green debug arrow, right? =)
Actually, I think I got lost at the extract part already: what should the final main folder look like? .\ with 6 folders (CudaMiner-master, curl-7.29.0, ..., pthreads)? And if you open the miner-master folder, you end up in the folder with the code and compat?
A little more dummy-style explanation would be appreciated =)
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
@Ultimist: if only you would read the last few pages...
hero member
Activity: 840
Merit: 1000
It's really unfortunate that this seems to have become an exclusive club for those who can compile the source code for all these new features. The rest of us are left out in the cold, having to wait months I guess while our cards become increasingly useless over time. I was really hoping to take advantage of the scrypt-jane for yac/qqcoin but by the time binaries are released with the latest features, the amount of yac/qqcoin I'll be able to earn daily with my GTX 670 won't be worth it anymore.

I'd like to throw in a request to perhaps update the main binaries a little more often than has been the case for the last month. New versions could always be labeled as incomplete/use at your own risk, etc...

Really? Dude, drop the entitlement... it is still under heavy development, and I bet you'd be one of the first to complain if something didn't work right, too. People have posted instructions on how to compile, and I think a few have posted binaries as well.
full member
Activity: 812
Merit: 102
It's really unfortunate that this seems to have become an exclusive club for those who can compile the source code for all these new features. The rest of us are left out in the cold, having to wait months I guess while our cards become increasingly useless over time. I was really hoping to take advantage of the scrypt-jane for yac/qqcoin but by the time binaries are released with the latest features, the amount of yac/qqcoin I'll be able to earn daily with my GTX 670 won't be worth it anymore.

I'd like to throw in a request to perhaps update the main binaries a little more often than has been the case for the last month. New versions could always be labeled as incomplete/use at your own risk, etc...