
Topic: Improved scrypt miner for NVIDIA GPUs

newbie
Activity: 19
Merit: 0
December 18, 2013, 09:40:47 PM
#8
I think AMD's still better, but the gap is a lot smaller than it used to be.
For instance, my 660 Ti's ~230 kHash/s puts it in Radeon 7790 or 6850 territory. Both of those cards are usually cheaper than a 660 Ti, and they draw less power. The claimed 650 kHash/s from a 780 Ti puts that card very close to a 7950; again, though, the 7950 can be had for less and draws less power.

Ref:
https://litecoin.info/Mining_hardware_comparison
https://bitcointalksearch.org/topic/m.4028914
http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review/15

Theoretically, I think the R9 still has the advantage even with the price differential. However, all of this is prototype code (both the AMD and the NVIDIA miners), so further software optimization or driver updates on either side could blow the gates wide open.

newbie
Activity: 59
Merit: 0
December 18, 2013, 07:05:50 PM
#7
The price of certain AMD cards is massively inflated, and they're virtually impossible to get because they're such good miners. Does this improved code mean there might be the odd NVIDIA bargain out there now, or is the hash-per-dollar ratio still considerably better on AMD even with the improvement?
newbie
Activity: 19
Merit: 0
December 18, 2013, 06:53:04 PM
#6
Thanks for putting it together! (Also thanks to C. Buchner for somehow integrating it into Cudaminer so fast.)

I compiled from source with Visual Studio 2012, and I do indeed get the speedup from ~160 kHash/s to ~230 kHash/s on my card:

Code:
[2013-12-18 xx:48:12] GPU #0: GeForce GTX 660 Ti, 3154816 hashes, 230.69 khash/s
[2013-12-18 xx:48:17] GPU #0: GeForce GTX 660 Ti, 1160320 hashes, 229.80 khash/s
[2013-12-18 xx:48:21] GPU #0: GeForce GTX 660 Ti, 940800 hashes, 229.74 khash/s
[2013-12-18 xx:48:38] GPU #0: GeForce GTX 660 Ti, 3926272 hashes, 230.79 khash/s
[2013-12-18 xx:48:54] GPU #0: GeForce GTX 660 Ti, 3600128 hashes, 230.78 khash/s
(BTW, it also compiled just fine with the VS 2010 toolchain - no change from the 2012 build in terms of performance.)

This is on a "stock" eVGA 660 Ti. (There was never a "reference" 660Ti, but my card is clocked the same as what is shown on nVidia's web site.) My settings are as follows:
Cudaminer settings: Running the 32-bits cudaminer, -d 0 -i 0 -C 2 -H 2 -l K14x14
CPU: Intel Core i5 2500k OC @ 4.3 GHz, 3 other threads doing CPU mining
Motherboard: ASUS Maximus V GENE
RAM: 8 GB DDR3-1866 @ 9-10-9-27
HDDs: (A low-performance mess - you don't want to know.)
OS: Windows 8.1 Pro 64 bit


My display runs on a GeForce 8500 GT while the 660 Ti does all the work. Yes, it surprises me that I'm not getting random crashes running two cards that are ~5 generations apart.
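
For anyone wondering how the miner ends up on the right card in a setup like this: the -d 0 in my settings above just picks an index from the CUDA runtime's device enumeration, and in my case device 0 happens to be the 660 Ti (as the log lines show). Purely as an illustration of that mechanism - this is not cudaminer's actual code - device selection boils down to something like:
Code:
// Illustration only (not cudaminer's code): list the CUDA devices and
// bind to one of them, which is roughly what the -d <n> flag selects.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU #%d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }

    // Equivalent of passing -d 0: all later kernel launches in this
    // host thread go to device 0 (the 660 Ti here), not the display card.
    cudaSetDevice(0);
    return 0;
}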

As I'm in cryptocoins for fun and not for profit, I'd like to take a crack at improving this over the Christmas holiday:
Code:
[2013-12-18 xx:55:34] GPU #0: GeForce GTX 550 Ti, 1584128 hashes, 79.13 khash/s
I know the keplerminer code wasn't designed for Compute Capability 2.0, but I'm sure I can think of something. I dabbled in FPGA accelerators a couple years ago, so this should be fun.
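
Roughly the kind of compile-time split I have in mind - purely a hypothetical sketch (the function name and structure are mine, not code from the repository): guard the Kepler-only intrinsic behind __CUDA_ARCH__ and fall back to a shared-memory exchange on Compute Capability 2.x parts.
Code:
// Hypothetical sketch of a Fermi fallback - not code from the repository.
// Kepler-only intrinsics like __shfl() don't exist on CC 2.x, so the same
// intra-warp exchange has to be staged through shared memory there.
#include <cuda_runtime.h>

__device__ unsigned int exchange_with_next_lane(unsigned int word,
                                                volatile unsigned int *line)
{
    int lane = threadIdx.x & 31;
#if __CUDA_ARCH__ >= 300
    // Kepler and newer: register-to-register exchange within the warp.
    return (unsigned int)__shfl((int)word, (lane + 1) & 31);
#else
    // Fermi: park the word in a 32-entry shared-memory line.  Lanes of a
    // warp run in lock-step on these parts, so no __syncthreads() is
    // needed for an intra-warp exchange, but the buffer must be volatile
    // so the compiler doesn't cache it in registers.
    line[lane] = word;
    return line[(lane + 1) & 31];
#endif
}

__global__ void demo(unsigned int *out)
{
    __shared__ volatile unsigned int line[32];
    unsigned int word = threadIdx.x * 0x9e3779b9u;   // arbitrary test word
    out[threadIdx.x] = exchange_with_next_lane(word, line);
}

int main()
{
    unsigned int *d_out;
    cudaMalloc(&d_out, 32 * sizeof(unsigned int));
    demo<<<1, 32>>>(d_out);   // one warp is enough for the demo
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}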


Hmm, I just looked at the code, and it seems the newest revision also implements the changes for the other generations of CUDA cards... I still want to get into GPU and other parallel programming to keep my head sharp on the topic, so maybe I'll give it another look over anyway. If I had access to my old hardware lab I'd try FPGAs, but oh well...

dga
hero member
Activity: 737
Merit: 511
December 18, 2013, 05:37:10 PM
#5
I "implemented" your code into cpuminer on an EC2 yesterday and can confirm the 220kh/s. Very nice!  Smiley
Thanks for your work on this and for sharing.  Very Cool.

Although it seems to slow down over time but IIRC you mentioned that in the readme. After a restart of the miner it's running normal again.

Now if only cryptocurrencies all around wouldn't be crashing like no tomorrow right now...  Wink


Grab the updated version of CudaMiner and compile it with CUDA 5.5.  You shouldn't have the slowdown problem any more - I think my issue was the old CUDA version on my GTX 650 Ti machine.
newbie
Activity: 27
Merit: 0
December 18, 2013, 04:35:39 AM
#4
I "implemented" your code into cpuminer on an EC2 yesterday and can confirm the 220kh/s. Very nice!  Smiley
Thanks for your work on this and for sharing.  Very Cool.

Although it seems to slow down over time but IIRC you mentioned that in the readme. After a restart of the miner it's running normal again.

Now if only cryptocurrencies all around wouldn't be crashing like no tomorrow right now...  Wink
dga
hero member
Activity: 737
Merit: 511
December 16, 2013, 09:07:48 PM
#3
Eh, spam is a huge and annoying problem.

Think of it instead as an early present to the newbies and those nice enough to read the newbies forum. :-)

(The real gift, of course, will come when Christian finishes getting it in an easier-to-use form in CudaMiner, which looks to be progressing very rapidly.)
member
Activity: 70
Merit: 10
December 16, 2013, 08:47:31 PM
#2
Hahahaha

To think that posts like this ^^ have to go in the "newbie" section.

Whatta dump!
dga
hero member
Activity: 737
Merit: 511
December 16, 2013, 08:43:40 PM
#1
Hey, folks - didn't have an account, so sorry for throwing this in the Newbies section instead of alt-coins, but:

I released my code for an improved scrypt miner for NVIDIA GPUs with the Kepler (GK104) and newer (GK110) architectures.  I alluded to it a few days ago in a post about mining on Amazon EC2, but I finally got the code cleaned up enough to release today.  A description of the algorithmic changes and links to the code are here:

http://da-data.blogspot.com/2013/12/inside-better-cuda-based-scrypt-miner.html
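
To give a very rough flavor of the kind of Kepler-specific trick involved (this is an illustrative sketch of the warp-shuffle idea only, not the actual kernel from the repository): on Compute Capability 3.0 and up, the __shfl() intrinsic lets the threads of a warp swap register contents directly instead of staging everything through shared memory.
Code:
// Illustrative sketch only - NOT the actual mining kernel.  Build with
// something like: nvcc -arch=sm_30 shfl_demo.cu
// (On newer toolkits, __shfl_sync(0xffffffff, ...) replaces __shfl().)
#include <cstdio>
#include <cuda_runtime.h>

__global__ void warp_exchange_demo(unsigned int *out)
{
    int lane = threadIdx.x & 31;              // lane index within the warp

    // Pretend this register holds one word of per-thread scrypt state.
    unsigned int state = lane * 0x01010101u;

    // Read the word held by the next lane over, register to register.
    // Pre-Kepler, the same exchange would go through shared memory.
    unsigned int neighbor = (unsigned int)__shfl((int)state, (lane + 1) & 31);

    out[threadIdx.x] = state ^ neighbor;      // stand-in for real mixing
}

int main()
{
    unsigned int *d_out, h_out[32];
    cudaMalloc(&d_out, 32 * sizeof(unsigned int));
    warp_exchange_demo<<<1, 32>>>(d_out);
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("lane 0: 0x%08x\n", h_out[0]);
    cudaFree(d_out);
    return 0;
}
Again, that's just the general idea - see the blog post above for what the real kernel actually does.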

Expect something like a 20-40% speedup on compatible cards.  The Kepler code is pretty new - I've tested it on a MacBook's GT 650M (from 34 kHash/s with cudaminer to 63 kHash/s with my code), on Amazon EC2's GRID K2 cards (150 to 220 kHash/s), and on a Tesla (something over 330 kHash/s, but I never benchmarked cudaminer on it).

It is _not_ well integrated with an existing miner.  The README includes directions for hacking it into cpuminer, but it's kind of nasty and really only for people at least vaguely familiar with a compiler.  I believe the author of CudaMiner is in the process of incorporating my improvements into his miner (itself a derivative of cpuminer), so keep an eye out for an announcement from him if you want something easy to use.

  -Dave