
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1054. (Read 3426921 times)

sr. member
Activity: 462
Merit: 250
I wouldn't dare be disappointed; your help is much appreciated! I'm just trying to find out whether it's possible for me to mine with my 690.
The setting you provided indeed helped a lot with my display lag (performance dropped to 419 kH/s, but I don't mind that); unfortunately the temps climb to 90C after 2 minutes or so, so it's still a no-go :-).

I guess I'm out of luck (unless you find something). Thanks a lot for having a look!
dga
hero member
Activity: 737
Merit: 511
Not yet.  I've finally managed to reproduce your thermal overload problem on my own setup with 2x GTX 690s:

| 82%   90C  N/A     N/A /  N/A |   1087MiB /  2047MiB |     N/A      Default |
| 57%   78C  N/A     N/A /  N/A |   1087MiB /  2047MiB |     N/A      Default |
| 57%   79C  N/A     N/A /  N/A |   1087MiB /  2047MiB |     N/A      Default |
| 54%   75C  N/A     N/A /  N/A |   1087MiB /  2047MiB |     N/A      Default |

Toasty.  That 90C isn't good unless you're planning on making tea on your computer.

My kernel is going to make your display laggy even in interactive mode, unfortunately.  The only thing I can think of to reduce both power and lagginess without changing the code is to try -l K2x32,K2x32 or something similar.  Have you given that a shot?  It should shorten each kernel launch and increase the relative amount of time interactive mode spends keeping the GPU off mining.  Could you let me know how that works?
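For example, keeping the rest of your flags the same (untested on my end, so treat it as a starting point):

cudaminer.exe -H 1 -C 2 -t 1 -i 1 -l K2x32,K2x32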

I'm tied up for a while, but now that I have my 690 running I'll see if I can find any efficiency gains for it.  Don't hold your breath, though:  the 690 is pretty similar to the Grid K2 that I was optimizing for before.  I think there are gains to be had for GK110 devices, but maybe not GK104.

  -Dave

sr. member
Activity: 462
Merit: 250
I didn't do any tweaking, by the way; that was autotune.
Now that I use your -l K8x32,K8x32 it runs a bit cooler AND faster:

511 kH/s, core one at 84 and core two at 85 degrees. Still a bit hot to my taste for everyday use.

I have the card in a regular desktop case, though. My final settings are:
cudaminer.exe -H 1 -C 2 -t 1 -i 1 -l K8x32,K8x32

Any other suggestions? Also, this is with interactive mode on, but I notice my PC becomes very laggy.

dga
hero member
Activity: 737
Merit: 511
Does anyone have settings for a 690 that don't absolutely melt the device? How can I throttle this thing to make it less intense?

Try running with -i; it'll reduce the speed a little bit.

I _just_ got my 690 up and running and am still having driver issues with it (I can't run on both devices at the same time, sigh).

But with a single device running with -d1 -m1 -lK8x16
I'm seeing about 270-275 kH/s,
and after a few minutes my card is at 78C.  It's freestanding (motherboard-on-a-table kind of thing).  78 is not something you want to stick your tongue on, but it shouldn't hurt the card.

What kind of temperatures are you seeing from nvidia-smi?  What hash rates and what config?
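If you just want the thermal readout, something like this should print only the temperature section (assuming your driver's nvidia-smi supports the -d filter):

nvidia-smi -q -d TEMPERATURE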

(And - for my own use - if you're running Linux, which driver are you using that works?  *grins*)

I tried running "-i --" but that just set interactive mode to 0 and made my PC unresponsive.

I'm seeing 400 kH/s in total with default settings on my 690 on Windows 8.1.

Is there really no way to have my 690 mine at only 60%, for example?

Are you comfortable editing the source?  There's an easy change to accomplish what you want, but it's a bit of a hack and requires recompiling.

You could also try running with a smaller kernel config, something like -lK1x16,
and see if that slows it down and reduces the heat.

I've never compiled anything on Windows before, but I don't mind editing the source. Can you guide me through it?

I don't think it's a great idea.

I'm confused about your performance.  I now have my 690 working well.  It's at 85C on one half and 77C on the other, doing 550 kH/s (with no display attached).

That's with kernel config -l K8x32,K8x32

What temperature are you seeing?

If you want to edit the source, go into kepler_kernel.cu and look for the line that says
Sleep(1);

and change it to Sleep(10);

and then recompile.  See how much of a temperature reduction that gives you versus the performance drop, and play from there.
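If it helps to see the idea in isolation, here's a toy standalone version of that duty-cycle trick. This is NOT cudaMiner's actual code (busy_kernel just stands in for one slice of the scrypt core, and all the names here are made up), but it shows why a longer sleep cools the card at the cost of hash rate:

#include <cuda_runtime.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

// stand-in for one slice of mining work: just burns GPU cycles
__global__ void busy_kernel(float *buf, int iters) {
    float v = buf[threadIdx.x];
    for (int i = 0; i < iters; ++i) v = v * 1.000001f + 0.5f;
    buf[threadIdx.x] = v;
}

int main() {
    float *buf;
    cudaMalloc(&buf, 256 * sizeof(float));
    cudaMemset(buf, 0, 256 * sizeof(float));
    for (int slice = 0; slice < 1000; ++slice) {
        busy_kernel<<<1, 256>>>(buf, 1 << 20);  // one burst of GPU work
        cudaDeviceSynchronize();                // wait for the burst to finish
#ifdef _WIN32
        Sleep(10);          // was Sleep(1): longer idle gap = cooler card, fewer hashes
#else
        usleep(10 * 1000);  // rough POSIX equivalent
#endif
    }
    cudaFree(buf);
    return 0;
}

The GPU alternates between a burst of work and a forced idle gap; stretching the gap from 1 ms to 10 ms lowers the duty cycle, and the temperature follows it down.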

But, as I said - your performance seems low.  You might have a ventilation problem that's causing some thermal throttling?

  -Dave
newbie
Activity: 43
Merit: 0
Yeah, just use the 12-10 version unless/until he can fix the increased temps that come with no performance increase on older cards, or at least on the 580, since my result with the new version is the same. I mean, if there were an equal performance increase for the heat I'd be fine with it, but it's an absolute waste when it performs the same or slightly worse with all that extra heat.
newbie
Activity: 32
Merit: 0
But at what hash rate, on a GTX 780?
newbie
Activity: 10
Merit: 0
I'm assuming that changes in the most recent version allowed our cards to reach the full TDP we have set. Before, I could set my power target to 140% and would only use about 110% under 99% GPU load. Now that my hash rate has jumped up, so have the TDP usage and the temps. I just sort of assumed this was normal and that this build lets our cards run at full throttle. I'd be pleasantly surprised if something could be changed to bring power consumption down while retaining the current performance.
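For anyone comparing numbers: on cards and drivers that expose the power sensor, this prints the board's actual draw rather than a percent-of-target readout (I'm not sure every 600-series card reports it, though):

nvidia-smi -q -d POWER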
hero member
Activity: 821
Merit: 503
All of them: sha256 and scrypt types.
newbie
Activity: 32
Merit: 0
Can you mine other altcoins besides litecoins with cudaminer?
newbie
Activity: 3
Merit: 0
No luck with my 580 and the latest version; it's basically cooking itself with no performance increase.
My GTX 660 got a nice performance bump, though.
full member
Activity: 200
Merit: 100
Thanks. So this: and CPU to 80%?
I am using a laptop.
hero member
Activity: 756
Merit: 502
I have an error in cudaminer; please help.

Setting too high.

Do you know settings for a GeForce 8400M?

Can't use x4. Stick to x3 launch configs.
full member
Activity: 200
Merit: 100
member
Activity: 84
Merit: 10
SizzleBits
Your numbers are better than my 670's; everything looks gravy to me as long as you intend to push your card and sacrifice a little of its lifespan.

Well, that's what I was saying in an earlier post: since the 12/18 update my GPU draws 103%+ power when it used to sit nicely at 95%.
And if I raise my voltages to where they normally are for stable gaming, my core speed drops and so does my hash rate. So right now these settings will instantly crash in any game or anything intense; my voltage is that low because I want to keep my clock speed high for mining, even though it doesn't get nearly as high as it used to. As for lifespan, this is an MSI Power Edition card, so I'm confident in its integrity. I just don't understand how my power usage shot up by a full 10% after 12/18 with the exact same settings I was using in 12/10.

EDIT: Also, what pool are you using? I was seeing the same results all day yesterday: chunks of stales in batches of 5-10, when normally I'm at 99% valid; yesterday it was more like 96%.
full member
Activity: 126
Merit: 100
1
full member
Activity: 200
Merit: 100
I have an error in cudaminer; please help.


member
Activity: 84
Merit: 10
SizzleBits
Using -H 1 -i 0 -C 1 -D -l K14x16 with my 660 Ti. It's stable, but the numbers don't look right and I still can't get my max overclock with the updated version of cudaminer. Would be nice if someone knew anything about that, but apparently my buddy and I are the only ones with this issue.

Here's proof...


global moderator
Activity: 3934
Merit: 2676
https://bitcointalksearch.org/topic/m.4040564

Just posting the following for a newbie from the above thread:



It's urgent, I believe. I downloaded the latest version for use with my Fermi GTX 470 and thought, "Hey, great, 2.5 kH/s more is better than nothing."

My fan runs at max, so I couldn't tell, but when I glanced at my GPU temp a few minutes later, it was 98-99C!!! Far over the max of 95C and my personal max of 90C.

Too long at this rate would kill the GPU, and this new version needs strict labeling as KEPLER ONLY, excepting a few possible cases.

Thanks!


EDIT:
I keep getting "duplicate post" or "you just posted" errors when there is no duplicate and I did not just post.
Here is the link: https://bitcointalksearch.org/topic/ann-cudaminer-ccminer-cuda-based-mining-applications-windowslinuxmacosx-167229
newbie
Activity: 6
Merit: 0
With a couple of adjustments I got my 680 up to ~355 kH/s:

-i 0 -C 2 -m 1 -H 1 -l K16x16

http://flic.kr/p/irGqEi