
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1055. (Read 3426921 times)

full member
Activity: 133
Merit: 100
My GTX 660 went from 135 to 205 kHash/s using -l K10x16 with -m 1 -H 1 -d 0 -i 0.

x64 is now slightly faster than x86, whereas it was slower with previous versions.

Gonna try MUBBLE86's config, lol Cheesy
full member
Activity: 126
Merit: 100
1
My cudaMiner on a Gigabyte GTX 760 with cudaminer.exe -H 1 -i 0 -t 1 -C 2 -l K12x16 went from 180 kh/s to 280!! And I haven't even OC'd yet. This made my day!

HOLY WTF

my 660 with this setting uses 100% CPU!



WOOOOOOOW

Which 660 are you using? My 680 is doing half, lol. Though I have another rig I might build.

That was solo.

On a pool I get many invalids Sad Sad Sad

With -H 1 -i 0 -t 1 -C 2 -l K10x16: 190 kHash/s
newbie
Activity: 6
Merit: 0
My cudaMiner on a Gigabyte GTX 760 with cudaminer.exe -H 1 -i 0 -t 1 -C 2 -l K12x16 went from 180 kh/s to 280!! And I haven't even OC'd yet. This made my day!

HOLY WTF

my 660 with this setting uses 100% CPU!

http://abload.de/img/unbenanntorfhu.png

WOOOOOOOW

Which 660 are you using? My 680 is doing half, lol. Though I have another rig I might build.
newbie
Activity: 4
Merit: 0
Thanks guys, for the huge improvement!

I did some experiments with the 2013-12-18 version of cudaminer with my puny system:
Windows 8.1 Pro 32-Bit
GTX 650 Ti (1GB Version)
Nvidia Driver Version 331.65
System power consumption (idle): 93 Watts

cudaminer.exe -H 1 -i 0 -C 2 -l (launch config per column below)
Code:
Version                 |------------------ 2013-12-18 -------------------|  |- 2013-12-10 -|
Launch config           K16x8   K8x16   K2x24   K4x24   K4x32   K4x32   K8x24     K8x24   K8x16
GPU core clock (MHz)    1050    1050    1050    1050    1050    950     1050      1050    1050
GPU memory clock (MHz)  1400    1400    1400    1400    1400    1350    1400      1400    1400
Fan speed (%)           100     100     100     100     100     100     100       100     100
GPU temp (°C)           68      68      64      68      69      65      68        58      56
Memory used (MB)        680     680     333     512     660     660     915       940     680
GPU load (%)            99      99      96      99      99      99      99        99      99
Mem. ctrl. load (%)     59      59      57      59      60      57      59        40      46
Power cons. (W)         207     207     200     207     210     200     200-215   170     170
kH/s                    151     155     143     151     156     142     151       98      100
kH/s per Watt           0.73    0.75    0.72    0.73    0.74    0.71    0.73      0.58    0.59
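For what it's worth, the kH/s-per-Watt row can be recomputed from the other two rows; a quick sanity check in Python (the 200-215 W entry is taken as its midpoint here, which is an assumption):

```python
# Recompute the "kH/s per Watt" row from the measured hash rates and
# power draws in the table above (200-215 W entry taken as 207.5 W).
khs   = [151, 155, 143, 151, 156, 142, 151, 98, 100]
watts = [207, 207, 200, 207, 210, 200, 207.5, 170, 170]

efficiency = [k / w for k, w in zip(khs, watts)]
for k, w, e in zip(khs, watts, efficiency):
    print(f"{k:>3} kH/s / {w:>5} W = {e:.2f} kH/s per Watt")
```

The values land on the table's row to within rounding at the second decimal.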
I played with different -H settings but on my rig -H 1 is the way to go for max. KH/s.

Autotune isn't working very well for me; it always comes up with odd settings like 5x20 and 20% less kH/s.

While playing with different launch configs, it seems anything with a total of 128 (apples? Wink) like 8x16, 16x8 or 4x32 works best for my card.
In my situation, N×M = 128 matters very much.
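The candidate shapes with that total can be listed mechanically; a tiny arithmetic sketch (plain divisor enumeration, not a cudaminer feature):

```python
# List all launch-config shapes NxM whose total N*M is 128,
# the total the poster found to work best on a GTX 650 Ti.
total = 128
candidates = [(n, total // n) for n in range(1, total + 1) if total % n == 0]
print(candidates)
# 8x16, 16x8 and 4x32 from the post all appear in this list
```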

With the new 2013-12-18 version I had to set the fan speed to 100%.
With the old 2013-12-10 version, 27% was sufficient to keep the temperature below 60 °C.
Power consumption went up, but we now get more kH/s per Watt with version 2013-12-18. Great job!
A bit strange: with the new version, K8x24 gives wildly alternating readings on my Watt meter.
With every other config in the table the readings fluctuated only about ±0.5 to 1%.

Is there a way to reduce the power consumption of the graphics card by deactivating video-output components?
I have only one card, but I am looking for an energy-saving mode optimized for mining, for when the system is doing nothing else.
The usual Windows energy-saving settings for graphics cards are too coarse-grained, at least I can't find good ones, since we don't want to hibernate or shut down the entire system for 0 kH/s.

Cheers, Oskar.
sr. member
Activity: 462
Merit: 250
Does anyone have settings for a 690 that don't absolutely melt the device? How can I throttle this thing to make it less intense?

Try running with -i -- it'll reduce the speed a little bit.

I _just_ got my 690 up and running and am still having driver issues with it (I can't run on both devices at the same time, sigh).

But with a single device running with -d1 -m1 -lK8x16
I'm seeing about 270-275 kh/s
and after a few minutes my card is at 78C.  It's freestanding (motherboard-on-a-table kind of thing).  78 is not something you want to stick your tongue on, but it shouldn't hurt the card.

What kind of temperatures are you seeing from nvidia-smi?  What hash rates and what config?

(And - for my own use - if you're running Linux, which driver are you using that works?  *grins*)

I tried running "-i --" but that just set interactive mode to 0 and made my PC unresponsive.

I'm seeing 400 KH/s in total with default settings on my 690 on Windows 8.1

Is there really no way to have my 690 mine at only 60%, for example?

Are you comfortable editing the source?  There's an easy change to accomplish what you want, but it's a bit of a hack and requires recompiling.

You could also try running with a kernel config with something like -lK1x16
and see if that slows it down and reduces the heat.

I've never compiled anything on Windows before, but I don't mind editing the source. Can you guide me through it?
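The hack being hinted at is essentially a duty cycle: do one batch of work, then sleep long enough that the GPU is busy only a chosen fraction of the time. A sketch of the idea in Python (the real change would be a usleep added to cudaminer's C++ mining loop; `run_batch` is a made-up stand-in for one hashing batch):

```python
import time

def throttled_loop(run_batch, duty_cycle=0.6, batches=5):
    """Call run_batch repeatedly, sleeping so it is busy ~duty_cycle of the time."""
    for _ in range(batches):
        start = time.perf_counter()
        run_batch()                                   # one hashing batch
        busy = time.perf_counter() - start
        time.sleep(busy * (1.0 / duty_cycle - 1.0))   # idle for the rest
```

At duty_cycle=0.6 the loop idles roughly 40% of the time, which is the "only mine at 60%" behaviour asked about above.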
full member
Activity: 126
Merit: 100
1
My cudaMiner on a Gigabyte GTX 760 with cudaminer.exe -H 1 -i 0 -t 1 -C 2 -l K12x16 went from 180 kh/s to 280!! And I haven't even OC'd yet. This made my day!

HOLY WTF

my 660 with this setting uses 100% CPU!



WOOOOOOOW
full member
Activity: 126
Merit: 100
1
660 Ti (factory OC): -l K7x32: 257 kHash/s (and that is even with -i 1)

7 would be the number of SMX of the Kepler device, and 32 warps is something that autotune doesn't try yet.

Christian


What OC are you running?

With my stock 660, your setting gives all invalid results!

I use -i 0 -C 2 -l K80x2: 163 kHash/s with this version.

The old cudaminer-2013-12-10.zip [32+64bit version] (10.4 MB) gave 133 kHash/s.

Where can I read about the command-line options?
How do I find the best settings?

Please help me get a better kHash/s rate.
hero member
Activity: 756
Merit: 502
660 Ti (factory OC): -l K7x32: 257 kHash/s (and that is even with -i 1)

7 would be the number of SMX of the Kepler device, and 32 warps is something that autotune doesn't try yet.

Christian
full member
Activity: 140
Merit: 100
Sup guys. Got a notebook with an NVIDIA GTX 560M.
Google turned up no good options for this card, so I tried changing some settings myself, and here's what I got:
-l 8x16: 65 kh/s, everything is OK
-l 168x2: 94 kh/s, but half of the blocks fail CPU validation.
Any advice?
newbie
Activity: 6
Merit: 0
GT 550M -l 6x8 (x86)

cudaminer-2013-11-20: 24.5 kH/s
cudaminer-2013-12-18: 32.7 kH/s
member
Activity: 60
Merit: 10
Hi All,

Using a GTX 560 with no options and steadily getting 115 khash/s. Does anyone have the same card but better results from adding arguments to the command? Any tips appreciated!

Also, I am using the 10-10-2013 version, as every other version says it can't find any CUDA driver???
full member
Activity: 308
Merit: 146
Use 32-bit ^^? You lose -C 2, but you should definitely get more.

Do I need to use a new launch config for my GTX 670? Previously I used K14x16, and the current autotune reports inaccurate hash rates (the real rate averages 100 kh/s lower than the table output). Right now I get 245 kh/s, up from 180 kh/s! Huge improvement.
Hmm, running 32- or 64-bit doesn't seem to make a big difference; still 120-130 kH/s with the K14x16 config (which produces the most of the ones I've tested so far).

Perhaps it's because these are the 2GB (not the 4GB) versions of the 680M?

What's weird is during the autotune they were using more energy than in the current config (can watch instantaneous draw in watts on my UPS), however, I don't know what configs it was searching through...

I figured the 680M should be around the level of a 660 Ti, maybe a little less; however, I'm definitely still under that.

Oh well... at least heat isn't an issue :]  I probably shouldn't even be mining with this, however, even an extra 250KH/s counts for something
full member
Activity: 196
Merit: 100
550 Ti user here, getting 90 kh/s with these settings: -H 1 -i 1 -d 0 -C 1 -l F8x16 -m 1 (x64 version).

If anyone with the same card is getting better, please share. Anyone who was getting worse, enjoy the boost. Smiley
sr. member
Activity: 406
Merit: 250
Use 32-bit ^^? You lose -C 2, but you should definitely get more.

Do I need to use a new launch config for my GTX 670? Previously I used K14x16, and the current autotune reports inaccurate hash rates (the real rate averages 100 kh/s lower than the table output). Right now I get 245 kh/s, up from 180 kh/s! Huge improvement.
full member
Activity: 308
Merit: 146
Hmmm... the new version has obviously increased the hashing power of many cards, however, I see no difference (I'm using the 64 bit version) :-(

I'm running two 680m's with SLI disabled on a laptop

Previously K14x16 would net me around 128-130 kh/s, which is identical to what the new version is getting. Right now I'm running the following flags:

-d 0 -H 1 -l K14x16 -m 0 -i 0 -C 2

I'm running two instances (one -d 0 the other -d 1) as that keeps everything super stable.

Temps are fine with neither card getting above 73C or so, and this is a 330Watt AC adapter so I doubt that's the issue.

Autotune reports a hash rate of 200kh/s, however, with whatever config it picks the rate is always 120KH/s or less, which is worse than K14x16

Any ideas would certainly be appreciated, as I feel these cards should be getting 150-160 or so with the latest version
newbie
Activity: 28
Merit: 0
Uggh - banging my head against a wall here trying to get the latest version to compile!

I got configure to run cleanly (I think) once I'd changed the line endings from Windows to Linux type in several of the files, but it's still refusing to make Sad

$ make
Makefile:380: .deps/cudaminer-cpu-miner.Po: No such file or directory
Makefile:381: .deps/cudaminer-scrypt.Po: No such file or directory
Makefile:382: .deps/cudaminer-sha2.Po: No such file or directory
Makefile:383: .deps/cudaminer-util.Po: No such file or directory
make: *** No rule to make target `.deps/cudaminer-util.Po'. Stop.


This is using CUDA5.5, gcc-4.4, driver version 319.37 on Linux mint13-desktop 3.2.0-56-generic #86-Ubuntu SMP Wed Oct 23 09:20:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

I suspect there are still some incompatible line endings in one or more file but I can't find them Sad

Any ideas?

Thanks Smiley
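The stray Windows line endings suspected above can be hunted down with a short script; a sketch that stands in for running dos2unix over the source tree (the root path is whatever your checkout uses):

```python
import os

def normalize_line_endings(root):
    """Convert CRLF to LF in every file under root; return the files changed."""
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                data = fh.read()
            if b"\r\n" in data:                      # Windows-style endings present
                with open(path, "wb") as fh:
                    fh.write(data.replace(b"\r\n", b"\n"))
                changed.append(path)
    return changed
```

Running this (or plain dos2unix) over the whole tree, then re-running ./configure, should flush out any file the first pass missed.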
newbie
Activity: 55
Merit: 0
Hello, I'm having issues with cudaMiner on a freshly reinstalled OS on another drive of my laptop. I'm using Win7 x64 Home Premium (6 GB RAM).

I've installed CUDA 5.5 from Nvidia and the latest drivers for my GTX 560M. Also installed Microsoft Visual 2008, 2010 and 2012.

I downloaded the latest cudaMiner files, and they give me this error:
"The application couldn't start (error 0xc000007b). Click OK to close."

But the x86 files run without problems.

What could the problem be?
Any ideas?
newbie
Activity: 59
Merit: 0
Great work! Thanks for the optimisations, I'm getting ~25% performance improvement with no change to my settings.

One donation heading your way, good sir Smiley
newbie
Activity: 43
Merit: 0
Does anyone have settings for a 690 that don't absolutely melt the device? How can I throttle this thing to make it less intense?

Try running with -i -- it'll reduce the speed a little bit.

I _just_ got my 690 up and running and am still having driver issues with it (I can't run on both devices at the same time, sigh).

But with a single device running with -d1 -m1 -lK8x16
I'm seeing about 270-275 kh/s
and after a few minutes my card is at 78C.  It's freestanding (motherboard-on-a-table kind of thing).  78 is not something you want to stick your tongue on, but it shouldn't hurt the card.

What kind of temperatures are you seeing from nvidia-smi?  What hash rates and what config?

(And - for my own use - if you're running Linux, which driver are you using that works?  *grins*)

I tried running "-i --" but that just set interactive mode to 0 and made my PC unresponsive.

I'm seeing 400 KH/s in total with default settings on my 690 on Windows 8.1

Is there really no way to have my 690 mine at only 60%, for example?

I'm guessing that should be -i 1
dga
hero member
Activity: 737
Merit: 511
Does anyone have settings for a 690 that don't absolutely melt the device? How can I throttle this thing to make it less intense?

Try running with -i -- it'll reduce the speed a little bit.

I _just_ got my 690 up and running and am still having driver issues with it (I can't run on both devices at the same time, sigh).

But with a single device running with -d1 -m1 -lK8x16
I'm seeing about 270-275 kh/s
and after a few minutes my card is at 78C.  It's freestanding (motherboard-on-a-table kind of thing).  78 is not something you want to stick your tongue on, but it shouldn't hurt the card.

What kind of temperatures are you seeing from nvidia-smi?  What hash rates and what config?

(And - for my own use - if you're running Linux, which driver are you using that works?  *grins*)

I tried running "-i --" but that just set interactive mode to 0 and made my PC unresponsive.

I'm seeing 400 KH/s in total with default settings on my 690 on Windows 8.1

Is there really no way to have my 690 mine at only 60%, for example?

Are you comfortable editing the source?  There's an easy change to accomplish what you want, but it's a bit of a hack and requires recompiling.

You could also try running with a kernel config with something like -lK1x16
and see if that slows it down and reduces the heat.