
Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1005. (Read 3426921 times)

hero member
Activity: 756
Merit: 502
I just did a few tests with the Windows version of the new config, and it does not seem to work exactly that way.
I have a 780 Ti; with the old config I use T15x32 (480 warps).
But with the new config I get only 962 warps (shouldn't I get 480x4?)

Ah yes, I should have raised the total warp limit from 1024 to 4096... your T15x32 becomes T60x32 = 1920 warps in the new scheme, which blows past the old cap. This affects scrypt mainly, because the memory limit just shrank to 1 GB (from 4 GB previously). Oops. ;-) Will fix tomorrow...

EDIT: the commit is in. It's 11:45 PM, so it isn't actually "tomorrow" yet.

Also, some of you might want to check whether specifying --algo=scrypt:2048 (or whatever "N" value it is currently at) works for you to mine VertCoin. You can now give the N parameter directly if needed (not the N-factor, as with scrypt-jane).
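
For example, a command line along these lines (the pool URL, port and worker name here are just placeholders, not a real pool):
Code:
cudaminer.exe --algo=scrypt:2048 -o stratum+tcp://your.vertcoin.pool:3333 -u worker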

legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
The autotuning takes a very long time, with very unstable GPU usage (it seems to oscillate constantly between 0 and 100%), giving very inconsistent results.

The resolution, or density, or whatever you want to call it, of the acceptable kernel configs is 4 times higher, so autotune has 4 times as many configs to test; it will take a while.

In my case the earlier autotune only detected 13 warps (K13x1), while the manual K14x1 yielded much better results, so I tried some manual configs this time as well to see if it really found the best one. Apparently it did, because it came up with K59x1:
Code:
setup   kH/s   VRAM (MB)
K14x4   2.55   1820
K15x4   2.57   1948   jitters
K58x1   2.64   1884
K59x1   2.68   1916
K60x1   2.62   1948   jitters
K61x1   2.27   1948
K8x7    2.49   1820
K10x6   2.60   1948   jitters
K6x10   2.50   1916   jitters
K2x28   2.27   1820
K2x29   2.26   1884
K2x30   2.24   1948   jitters
*jitters: the hash rate varies considerably.

TL;DR: Autotune is slower, but better.
legendary
Activity: 1400
Merit: 1050
There was a breaking change today regarding the format of launch configs for David Andersen's kernels.

This has advantages because of more fine-grained control and memory allocation. It is, however, a nightmare
for maintainers of the Google spreadsheets ;-)

Code:
for scrypt-jane the equivalent config to B x W is B x 4*W, and for scrypt it is 4*B x W
so e.g. for Yacoin replace -l K2x8 with -l K2x32
and for Litecoin -l K2x32 becomes -l K8x32.

this affects K, T, X kernel configs only (these are derived from David's code), and only when you run a
github version from today or later.

or you can simply autotune again to find a good config, saving you the hassle of converting...

The main advantage of this is that users of 1 GB cards can now use up to 10% more memory
than before (memory is now allocated in increments of 32 MB for Yacoin; previously it was 128 MB).



I just did a few tests with the Windows version of the new config, and it does not seem to work exactly that way.
I have a 780 Ti; with the old config I use T15x32 (480 warps).
But with the new config I get only 962 warps (shouldn't I get 480x4?), meaning I can't go to 60x32 (which isn't reported as a working mode) but only to 60x16 (this one gives a slightly better hash rate than the original T15x32).

Concerning scrypt-jane, the autotuning seems broken on Windows (I haven't tried it on Linux yet).
The autotuning takes a very long time, with very unstable GPU usage (it seems to oscillate constantly between 0 and 100%), giving very inconsistent results.

None of the results goes higher than 0.7 khash/s, while it should be around 1.65 khash/s for the best modes (on Windows).
Thanks for your help (and the good work).

hero member
Activity: 756
Merit: 502
I'm gonna sit on mine. Once GPU mining is out of the picture, I think the price will rise, as it will still be an attractive currency to CPU purists. Hoping for 0.001 or better :-D

If you want this to improve, get involved in Yacoin client (wallet) development. They really need some upgrades/improvements to their client and to their Piece of Shit... pardon, Proof of Stake system ;-)

Christian
full member
Activity: 173
Merit: 100
I'm gonna sit on mine. Once GPU mining is out of the picture, I think the price will rise, as it will still be an attractive currency to CPU purists. Hoping for 0.001 or better :-D
hero member
Activity: 756
Merit: 502
Where are y'all trading your Yacoins? I just sent a quick test deposit of 11 to bter.com because I heard it was a bit sketchy, but I haven't been able to find any other place that takes Yacoin.

bter.com for me too. That's my one lucky trade. The other sell orders are sitting unfulfilled at the moment, waiting for a buyer with fat fingers ;-)

Date                 Type  Pair     Price         Amount            Total
2014-01-14 04:44:49  Sell  YAC/BTC  0.000049 BTC  2,850.000000 YAC  0.1397 BTC

Now I really wish the YAC/BTC price hadn't fallen so much.

I heard cryptsy is planning to list YAC again, after some maintenance/upgrade work on their servers.

Christian
member
Activity: 84
Merit: 10
SizzleBits
Where are y'all trading your Yacoins? I just sent a quick test deposit of 11 to bter.com because I heard it was a bit sketchy, but I haven't been able to find any other place that takes Yacoin.
hero member
Activity: 756
Merit: 502
Hi guys. Thank you all for your work on cudaminer, helping us with Nvidia cards a bit :-) I've read quite a bit of the thread and I feel I am doing something wrong here. I am using cudaminer like this:

cudaminer.exe  -i 0 -H 1 -l K5x32,K5x32 -o stratum+tcp://europe.mine-litecoin.com:80 -u user

May I recommend starting the cards individually: one instance with -d 0 -l K5x32 -C 2 -H 1, and the other instance with -d 1 -l K5x32 -C 2 -H 1 (see the sketch below).

Ever since I migrated to CUDA 5.5 I have had inexplicable issues when trying to run multiple cards in a single miner instance.
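
For example (the pool line is taken from your command; one process per card):
Code:
cudaminer.exe -d 0 -l K5x32 -C 2 -H 1 -o stratum+tcp://europe.mine-litecoin.com:80 -u user
cudaminer.exe -d 1 -l K5x32 -C 2 -H 1 -o stratum+tcp://europe.mine-litecoin.com:80 -u user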

Use the official 2013-12-18 version; that is definitely the fastest for scrypt.

Christian
hero member
Activity: 756
Merit: 502
Scrypt autotune with latest github:
Code:
[2014-01-17 18:13:01] GPU #0:  106.47 khash/s with configuration K30x21
[2014-01-17 18:13:01] GPU #0: using launch configuration K30x21

But I really don't mind, since I'm up to 2.76 kH/s with scrypt-jane!

Yeah, getting scrypt speeds back to the old levels (or better) would be one important TODO before making a new release.

At the moment my focus is still on scrypt-jane. A LOOKUP_GAP implementation is due this weekend. Maybe I'll also have a look at Keccak on GPU.
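
For those wondering: LOOKUP_GAP is the classic scrypt time-memory trade-off. You keep only every GAP-th entry of the scratchpad and recompute the missing ones on demand, cutting memory to roughly 1/GAP at the cost of extra BlockMix calls (with GAP=2 you halve the scratchpad for, on average, half an extra mix per random read). A toy single-threaded illustration of the general idea, with a stand-in mixing function instead of BlockMix and a single 64-bit word as the state (not the actual CUDA kernel):
Code:
/* Toy LOOKUP_GAP demo: "mix" stands in for scrypt's BlockMix, and the
 * state is one uint64_t instead of a 128*r-byte block. */
#include <stdint.h>
#include <stdio.h>

#define N   1024   /* scratchpad length         */
#define GAP 4      /* store only 1 state in GAP */

static uint64_t mix(uint64_t x)   /* stand-in for BlockMix */
{
    x ^= x << 13; x ^= x >> 7; x ^= x << 17;
    return x;
}

int main(void)
{
    static uint64_t V[N / GAP];
    uint64_t x = 0x0123456789abcdefULL, y;

    for (int i = 0; i < N; i++) {            /* sequential-write pass   */
        if (i % GAP == 0) V[i / GAP] = x;    /* keep every GAP-th state */
        x = mix(x);
    }
    for (int i = 0; i < N; i++) {            /* random-read pass        */
        int j = (int)(x % N);
        y = V[j / GAP];                      /* nearest stored state    */
        for (int k = 0; k < j % GAP; k++)    /* redo up to GAP-1 mixes  */
            y = mix(y);                      /* to reach index j        */
        x = mix(x ^ y);
    }
    printf("final state: %016llx\n", (unsigned long long)x);
    return 0;
}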

Christian
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
Scrypt autotune with latest github:
Code:
[2014-01-17 18:13:01] GPU #0:  106.47 khash/s with configuration K30x21
[2014-01-17 18:13:01] GPU #0: using launch configuration K30x21

But I really don't mind, since I'm up to 2.76 kH/s with scrypt-jane!
ktf
newbie
Activity: 24
Merit: 0
It was the latest version, but yes, without scrypt-jane support; I didn't compile a version with it.

I tried a version with scrypt-jane uploaded by someone else in the thread and I get:

[2014-01-17 19:32:36] GPU #0: GeForce GTX 660, 26.44 khash/s
[2014-01-17 19:33:36] GPU #1: GeForce GTX 660, 26.35 khash/s
[2014-01-17 19:33:36] GPU #0: GeForce GTX 660, 26.32 khash/s
[2014-01-17 19:34:05] GPU #1: GeForce GTX 660 result does not validate on CPU (i=1981, s=0)!
[2014-01-17 19:34:06] GPU #1: GeForce GTX 660 result does not validate on CPU (i=1810, s=1)!
[2014-01-17 19:34:36] GPU #0: GeForce GTX 660, 26.39 khash/s
[2014-01-17 19:34:37] GPU #1: GeForce GTX 660, 26.35 khash/s
[2014-01-17 19:35:36] GPU #0: GeForce GTX 660, 26.52 khash/s
[2014-01-17 19:35:36] GPU #1: GeForce GTX 660, 26.49 khash/s
[2014-01-17 19:35:45] GPU #0: GeForce GTX 660 result does not validate on CPU (i=1692, s=1)!
[2014-01-17 19:35:50] GPU #0: GeForce GTX 660 result does not validate on CPU (i=1637, s=1)!
[2014-01-17 19:36:04] GPU #1: GeForce GTX 660 result does not validate on CPU (i=1119, s=1)!
[2014-01-17 19:36:22] Stratum detected new block
[2014-01-17 19:36:22] GPU #0: GeForce GTX 660, 26.47 khash/s
[2014-01-17 19:36:22] GPU #1: GeForce GTX 660, 26.44 khash/s
[2014-01-17 19:36:26] Stratum detected new block
[2014-01-17 19:36:26] GPU #1: GeForce GTX 660, 25.87 khash/s
[2014-01-17 19:36:26] GPU #0: GeForce GTX 660, 26.08 khash/s
[2014-01-17 19:37:25] GPU #1: GeForce GTX 660, 26.46 khash/s
[2014-01-17 19:37:25] GPU #0: GeForce GTX 660, 26.46 khash/s
[2014-01-17 19:38:06] Stratum detected new block

but as you can see, lots of errors. I just tried with K20x32:

[2014-01-17 19:47:58] GPU #0: GeForce GTX 660, 30556160 hashes, 630.38 khash/s
[2014-01-17 19:48:03] GPU #0: GeForce GTX 660 result does not validate on CPU (i=17502, s=1)!
[2014-01-17 19:48:04] GPU #1: GeForce GTX 660 result does not validate on CPU (i=4428, s=0)!
[2014-01-17 19:48:09] GPU #1: GeForce GTX 660 result does not validate on CPU (i=18108, s=1)!
[2014-01-17 19:48:10] Stratum detected new block
[2014-01-17 19:48:10] GPU #1: GeForce GTX 660, 20848640 hashes, 673.53 khash/s
[2014-01-17 19:48:10] GPU #0: GeForce GTX 660, 7475200 hashes, 618.09 khash/s
[2014-01-17 19:48:10] GPU #0: GeForce GTX 660 result does not validate on CPU (i=1735, s=1)!
[2014-01-17 19:48:12] GPU #1: GeForce GTX 660 result does not validate on CPU (i=14795, s=0)!
[2014-01-17 19:48:14] GPU #0: GeForce GTX 660 result does not validate on CPU (i=18516, s=0)!
[2014-01-17 19:48:22] GPU #0: GeForce GTX 660 result does not validate on CPU (i=18900, s=1)!
[2014-01-17 19:48:24] GPU #1: GeForce GTX 660 result does not validate on CPU (i=15159, s=0)!
[2014-01-17 19:48:26] GPU #1: GeForce GTX 660 result does not validate on CPU (i=19189, s=1)!
[2014-01-17 19:48:27] GPU #0: GeForce GTX 660 result does not validate on CPU (i=16918, s=0)!
[2014-01-17 19:48:31] GPU #1: GeForce GTX 660 result does not validate on CPU (i=8619, s=1)!
[2014-01-17 19:48:44] GPU #0: GeForce GTX 660 result does not validate on CPU (i=15977, s=1)!
[2014-01-17 19:48:45] GPU #1: GeForce GTX 660 result does not validate on CPU (i=8294, s=0)!
[2014-01-17 19:48:46] GPU #0: GeForce GTX 660 result does not validate on CPU (i=5393, s=0)!
[2014-01-17 19:48:49] GPU #0: GeForce GTX 660 result does not validate on CPU (i=17230, s=0)!
[2014-01-17 19:48:50] GPU #1: GeForce GTX 660 result does not validate on CPU (i=6325, s=1)!
[2014-01-17 19:48:57] GPU #1: GeForce GTX 660 result does not validate on CPU (i=856, s=0)!
[2014-01-17 19:48:58] GPU #0: GeForce GTX 660 result does not validate on CPU (i=1391, s=1)!
[2014-01-17 19:49:08] GPU #1: GeForce GTX 660 result does not validate on CPU (i=9682, s=0)!
[2014-01-17 19:49:09] GPU #0: GeForce GTX 660 result does not validate on CPU (i=5051, s=1)!
[2014-01-17 19:49:10] GPU #1: GeForce GTX 660, 40427520 hashes, 668.99 khash/s
[2014-01-17 19:49:11] GPU #0: GeForce GTX 660, 37089280 hashes, 606.77 khash/s
[2014-01-17 19:49:11] Stratum detected new block
[2014-01-17 19:49:11] GPU #0: GeForce GTX 660, 102400 hashes, 416.21 khash/s
[2014-01-17 19:49:11] GPU #1: GeForce GTX 660, 655360 hashes, 684.72 khash/s
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
That's weird: scrypt-jane hash rates, while it's accepting shares on a Litecoin pool.

Edit: maybe it's the latest version, and so it should be K20x32, not K5x32 (with the new config format, the block count gets multiplied by 4 for scrypt).
hero member
Activity: 756
Merit: 502
Hi guys. Thank you all for your work on cudaminer, helping us with Nvidia cards a bit :-) I've read quite a bit of the thread and I feel I am doing something wrong here. I am using cudaminer like this:

cudaminer.exe  -i 0 -H 1 -l K5x32,K5x32 -o stratum+tcp://europe.mine-litecoin.com:80 -u user

on two GTX 660 cards:

[2014-01-17 18:41:48] accepted: 4/4 (100.00%), 5.58 khash/s (yay!!!)

Yeah, that looks like decent hash rates for scrypt-jane. But I don't see you enabling scrypt-jane support at all.

What version are you running and on what OS?

ktf
newbie
Activity: 24
Merit: 0
Hi guys. Thank you all for your work on cudaminer, helping us with Nvidia cards a bit :-) I've read quite a bit of the thread and I feel I am doing something wrong here. I am using cudaminer like this:

cudaminer.exe  -i 0 -H 1 -l K5x32,K5x32 -o stratum+tcp://europe.mine-litecoin.com:80 -u user

on two GTX 660 cards:

[2014-01-17 18:38:01] GPU #1: GeForce GTX 660, 40960 hashes, 2.88 khash/s
[2014-01-17 18:38:02] GPU #0: GeForce GTX 660, 51200 hashes, 3.16 khash/s
[2014-01-17 18:39:05] GPU #1: GeForce GTX 660, 174080 hashes, 2.84 khash/s
[2014-01-17 18:39:15] GPU #0: GeForce GTX 660, 194560 hashes, 2.75 khash/s
[2014-01-17 18:39:51] Stratum detected new block
[2014-01-17 18:39:55] GPU #0: GeForce GTX 660, 102400 hashes, 2.71 khash/s
[2014-01-17 18:39:55] GPU #1: GeForce GTX 660, 133120 hashes, 2.71 khash/s
[2014-01-17 18:40:56] GPU #0: GeForce GTX 660, 163840 hashes, 2.80 khash/s
[2014-01-17 18:40:56] GPU #1: GeForce GTX 660, 163840 hashes, 2.75 khash/s
[2014-01-17 18:41:45] GPU #0: GeForce GTX 660, 138240 hashes, 2.82 khash/s
[2014-01-17 18:41:48] accepted: 4/4 (100.00%), 5.58 khash/s (yay!!!)
[2014-01-17 18:41:59] GPU #1: GeForce GTX 660, 168960 hashes, 2.84 khash/s
[2014-01-17 18:42:49] GPU #0: GeForce GTX 660, 174080 hashes, 2.89 khash/s
[2014-01-17 18:43:00] GPU #1: GeForce GTX 660, 174080 hashes, 2.92 khash/s

I've seen people getting 70 khash/s on those, and others reaching 500+ on a 780. Am I missing something obvious here? At this rate I doubt I'll even recover the electricity costs.
full member
Activity: 182
Merit: 100
There was a breaking change today regarding the format of launch configs for David Andersen's kernels.

This has advantages because of more fine-grained control and memory allocation. It is, however, a nightmare
for maintainers of the Google spreadsheets ;-)

Code:
for scrypt-jane the equivalent config to B x W is B x 4*W, and for scrypt it is 4*B x W
so e.g. for Yacoin replace -l K2x8 with -l K2x32
and for Litecoin -l K2x32 becomes -l K8x32.

this affects K, T, X kernel configs only (these are derived from David's code), and only when you run a
github version from today or later.

or you can simply autotune again to find a good config, saving you the hassle of converting...

The main advantage of this is that users of 1 GB cards can now use up to 10% more memory
than before (memory is now allocated in increments of 32 MB for Yacoin; previously it was 128 MB).


So I assume that if I run autotune with this build, it'll use more memory than it does currently?
Currently my build uses 2619 MB of 3 GB with T9x2.

Also, after 2 days of mining with no block found, I quickly found a YACoin block today :-)
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
It is, however, a nightmare for maintainers of the Google spreadsheets ;-)

That would be only me, at least for the two combo sheets, but for the greater good I'll think of something!

This is great news though. Just yesterday, while toying around with N14+ benchmarks, I was wondering whether it would be possible to increase the resolution of the kernel configs so we might be able to squeeze out more VRAM usage, and here it is :-)
hero member
Activity: 756
Merit: 502
There was a breaking change today regarding the format of launch configs for David Andersen's kernels.

This has advantages because of more fine-grained control and memory allocation. It is, however, a nightmare
for maintainers of the Google spreadsheets ;-)

Code:
for scrypt-jane the equivalent config to B x W is B x 4*W, and for scrypt it is 4*B x W
so e.g. for Yacoin replace -l K2x8 with -l K2x32
and for Litecoin -l K2x32 becomes -l K8x32.

this affects K, T, X kernel configs only (these are derived from David's code), and only when you run a
github version from today or later.

or you can simply autotune again to find a good config, saving you the hassle of converting...

The main advantage of this is that users of 1 GB cards can now use up to 10% more memory
than before (memory is now allocated in increments of 32 MB for Yacoin; previously it was 128 MB).
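
If you would rather convert your spreadsheet entries programmatically than re-autotune, here is a minimal sketch of the rule in plain C (not part of the cudaMiner sources; it just applies the two formulas above to the examples given):
Code:
/* Convert an old BxW launch config to the new format:
 *   scrypt:      B x W  ->  4*B x W
 *   scrypt-jane: B x W  ->  B x 4*W  */
#include <stdio.h>

int main(void)
{
    int B, W;

    B = 2; W = 32;   /* old Litecoin config K2x32 */
    printf("scrypt:      K%dx%d -> K%dx%d\n", B, W, 4 * B, W);

    B = 2; W = 8;    /* old Yacoin config K2x8 */
    printf("scrypt-jane: K%dx%d -> K%dx%d\n", B, W, B, 4 * W);
    return 0;
}

The 10% figure above comes from the allocation granularity: with, say, 1000 MB free, 128 MB increments round you down to 896 MB, while 32 MB increments let you reach 992 MB.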

legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
I think the card's (stock) BIOS would not push past the reliability voltage anyway. So even if one sets the slider extremely high, the clocks you can actually reach will be much lower. So one needs to overvolt, or install a seriously modded BIOS, to work around that restriction.

As far as I know, the stock BIOS does not stop you from increasing the core clock even after it hits the max stock voltage; under normal circumstances we would need overvoltage to get the clock stable, but for scrypt-jane it could be stable without massive overvoltage.

The poster (Acad) was kind enough to provide a screenshot, and while it could be considered a 286 MHz OC if we compare it to the boost clock, it's still quite amazing. He noted "Driver will crash if you do anything else that uses the GPU, e.g. a website", which again means that it's only stable for scrypt-jane.

newbie
Activity: 4
Merit: 0
My GTX 590 gains 0.68-0.7 kH/s for each GF110 (mem 2x1.5 GB) with -l X5x2 -C 1, so the total for the GTX 590 is about 1.38 kH/s. Low mem is a pain :/
Thanks to all the developers, donation sent. Patoberli, thanks for the build!
hero member
Activity: 756
Merit: 502

I'm not saying that makes that +352 MHz core OC legit, but it is within the realm of possibility when the card is only being used for scrypt-jane.

I think the card's (stock) BIOS would not push past the reliability voltage anyway. So even if one sets the slider extremely high, the clocks you can actually reach will be much lower. So one needs to overvolt, or install a seriously modded BIOS, to work around that restriction.