Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1025

sr. member
Activity: 350
Merit: 250
cbuchner1, this is my 780 running at over 3.7 khash/s!
It is just a straight screenshot, not cropped, sorry.

http://s29.postimg.org/62ttjbizb/Screenshot_from_2014_01_09_14_49_51.png

./cudaminer --algo=scrypt-jane -H 0 -i 0 -d 0 -l T20x1 -o http://127.0.0.1:3339 -u user -p pass

GPU memory usage is 2.86 GB.
So my GPU and CPU now mine together at over 4.4 khash/s!
hero member
Activity: 756
Merit: 502
Something is definitely not working right for me with the 12/18 version posted in the OP.

On my auto-tuned GTX 670 sometimes in Nvidia Inspector, GPU usage is shown as 25 to 35% only, and the hashrate is about 85 Kh/s. Other times GPU usage goes to 95% and above. I don't understand why it isn't fully using the GPU. This version is very buggy and never seems to work the same way twice. The autotune never comes up with the same value twice, either, even if run 2 minutes after it was run before. Is it just picking things at random?


Hmm, too bad that this beta version of the software has bugs.  Here, have your money back: I award you 0 LTC.

The thing about autotune is that mid-range and high-end Kepler GPUs dynamically adjust clock rates "as they see fit" to meet thermal and power requirements, and hence there is a certain randomness to the autotuning results.

There are Windows machines on which we cannot get 100% GPU utilization. This happened e.g. on a machine on which I installed Windows Server 2012 R2 for evaluation purposes. It would never quite go above 80% GPU use.

Christian
member
Activity: 101
Merit: 10
Miner / Engineer
How do you set the difficulty factor manually with cudaminer like you can with cgminer?

I ask because I am solo mining directly against my wallet app and the difficulty dropped significantly today for the coin I was on, yet it still took nearly twice as long as it should have to find a block.  But I have no way of knowing what difficulty cudaminer is running at.

With -D it prints the stratum difficulty whenever it changes.  I am not aware of a print feature for getwork. When solo mining, it should request new work from the server about every 5 seconds. Wouldn't that always include a difficulty number?

Christian


Excellent answer, thanks!  I will use -D from now on to see the changes.

As for requesting every 5 seconds, that sounds perfect.  But again, I'd want to see these changes.  -D sounds like the way to go.
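So I'll just add -D to my usual solo-mining line, roughly like this (the wallet RPC port and credentials below are placeholders, not my real settings):

cudaminer -D -o http://127.0.0.1:<rpcport> -u <rpcuser> -p <rpcpassword>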
sr. member
Activity: 350
Merit: 250
OK, so my Linux system is mining now. Using the old settings of 16x1 I am getting 3.1 khash/s and my CPU is hashing at 0.64 khash/s.

And one thing to note: no driver crashes in Linux! So I may be able to get a higher hash rate than I am getting now.
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
You can try adding --benchmark -D to see the results it's getting (leave the -l parameter out). Sometimes the results are very close and thus it picks a different one.

That, and background apps stressing the card (even just a little bit) can affect the results. It's also worth noting that overclocking seems to confuse autotune fairly often as well.

I think K7x32 should be best for your card.
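Something like this, with no -l so autotune gets to pick the kernel on its own (just a sketch):

cudaminer --benchmark -D

Run it a couple of times and compare which launch config it settles on.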
member
Activity: 106
Merit: 10
You can try adding --benchmark -D to see the results it's getting (leave the -l parameter out). Sometimes the results are very close and thus it picks a different one.
full member
Activity: 812
Merit: 102
Something is definitely not working right for me with the 12/18 version posted in the OP.

On my auto-tuned GTX 670 sometimes in Nvidia Inspector, GPU usage is shown as 25 to 35% only, and the hashrate is about 85 Kh/s. Other times GPU usage goes to 95% and above. I don't understand why it isn't fully using the GPU. This version is very buggy and never seems to work the same way twice. The autotune never comes up with the same value twice, either, even if run 2 minutes after it was run before. Is it just picking things at random?
full member
Activity: 168
Merit: 100
I've got jane running on my Windows machine. My 660 Ti does about 2.5 kh/s and I'm still running the rest of my ATIs on scrypt because they hate jane.

On Linux I've been getting 3.2 khash/s on a 660 Ti and I am heading for 3.6 khash/s once I get the -C 2 option going again.

I am running K7x3 -i 0 -m 1, and strangely this setting is not liked much by Windows (much slower than e.g. K4x4).

Anything more than K13 for me claims it requires too much memory and gives results that fail the CPU validation.

So K13x1 is the best for me, it seems.
hero member
Activity: 756
Merit: 502
I've got jane running on my Windows machine. My 660 Ti does about 2.5 kh/s and I'm still running the rest of my ATIs on scrypt because they hate jane.

On Linux I've been getting 3.2 khash/s on a 660 Ti and I am heading for 3.6 khash/s once I get the -C 2 option going again.

I am running K7x3 -i 0 -m 1, and strangely this setting is not liked much by Windows (much slower than e.g. K4x4).
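Spelled out, that launch line looks roughly like this (a sketch; pool URL and credentials are placeholders):

./cudaminer --algo=scrypt-jane -l K7x3 -i 0 -m 1 -o http://<pool>:<port> -u <user> -p <pass>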
full member
Activity: 168
Merit: 100
Quote

holy crap...you just gave me a link to the best most sarcastic site ever.
I WILL use this daily...

PS. All this talk about scrypt-jane is making my windows machines jealous...

Why windows machines???

I've got jane running on my Windows machine. My 660 Ti does about 2.5 kh/s and I'm still running the rest of my ATIs on scrypt because they hate jane.

hero member
Activity: 756
Merit: 502
How do you set the difficulty factor manually with cudaminer like you can with cgminer?

I ask because I am solo mining directly against my wallet app and the difficulty dropped significantly today for the coin I was on, yet it still took nearly twice as long as it should have to find a block.  But I have no way of knowing what difficulty cudaminer is running at.

With -D it prints the stratum difficulty whenever it changes.  I am not aware of a print feature for getwork. When solo mining, it should request new work from the server about every 5 seconds. Wouldn't that always include a difficulty number?

Christian
hero member
Activity: 756
Merit: 502
Hi-ya Christian,

My 660 Ti registers at ~270 khash/s peak (scrypt) for a single instance of cudaminer (which appears to start a single mining thread). By starting two instances, I'd have expected that rate to halve.  Instead, each instance reports ~200 khash/s peak.

This is surprising. What is your launch configuration?
What does GPU-z show for GPU utilization when running just a single instance?
How's GPU utilization and memory usage with 1 and 2 instances?
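If you want to compare, you could simply start the identical line twice in two terminals, pointing both at the same device (sketch only; pool details are placeholders), and watch utilization and memory in GPU-Z while the second instance runs:

cudaminer -d 0 -o http://<pool>:<port> -u <user> -p <pass>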
member
Activity: 101
Merit: 10
Miner / Engineer
The search on this forum software is horrendous... 

Sorry if this has been asked, but Google couldn't find this.

How do you set the difficulty factor manually with cudaminer like you can with cgminer?

I ask because I am solo mining directly against my wallet app and the difficulty dropped significantly today for the coin I was on, yet it still took nearly twice as long as it should have to find a block.  But I have no way of knowing what difficulty cudaminer is running at.

cgminer detects new difficulty levels, prints them to the screen, and adjusts accordingly.  Does cudaminer have this ability, perhaps in its debug output?



newbie
Activity: 7
Merit: 0
Hi-ya Christian,

My 660 Ti registers at ~270 khash/s peak (scrypt) for a single instance of cudaminer (which appears to start a single mining thread). By starting two instances, I'd have expected that rate to halve.  Instead, each instance reports ~200 khash/s peak.

Is there any efficiency to be gained by running multiple instances of cudaminer on the same card?  Or, am I reading these figures wrong?  Smiley
member
Activity: 84
Merit: 10
SizzleBits
Quote

holy crap...you just gave me a link to the best most sarcastic site ever.
I WILL use this daily...

PS. All this talk about scrypt-jane is making my windows machines jealous...
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
There's also CACHeCoin (CACH) (https://bitcointalksearch.org/topic/anncach-cachecoin-released-based-on-scrypt-jane-400389) and Radioactivecoin (RAD) (https://bitcointalksearch.org/topic/ann-rad-radioactive-coin-scrypt-jane-asic-resistant-exchanges-services-405481), but I don't know much about them.
RAD is completely new, but it's a weird one; I couldn't figure out much about it.
newbie
Activity: 34
Merit: 0
Can someone make a short list of other scrypt-jane currencies please? I only heard about QQCoin so far.
Scrypt-N parameters are included for anyone who wants to try them out.

YBCoin (YBC) Start time: 1372386273, minN: 4, maxN: 30
https://bitcointalksearch.org/topic/ann-ybcoin-will-be-launched-on-000-june-29th-2013-gmt8-243046
Chinese YAC clone, NFactor just hit 14 today, same as YaCoin. YBC has a much higher network hash rate. This is the only other jane coin I've mined, though I've slowly been looking into the others.

ZcCoin (ZCC) Start time: 1375817223, minN: 12, maxN: 30
https://bitcointalksearch.org/topic/annzcczccoin-with-nfactor-12-was-launched-on-aug-8th-btercryptsy-now-268575

FreeCoin (FEC) Start time: 1375801200, minN: 6, maxN: 32
https://bitcointalksearch.org/topic/annfecfreecoinscrypt-jane-powpos-coinno-preminefree-coinorg-269669

OneCoin (ONC) Start time: 1371119462, minN: 6, maxN: 30
https://bitcointalksearch.org/topic/annonc-onecoin-cpu-only-pools-opened-200177

QQCoin Start time: 1387769316, minN: 4, maxN: 30
https://bitcointalksearch.org/topic/ann-qqcoin-scrypt-jane-asic-resistant-n-factor-multipool-resistant-389238

Memory Coin
https://bitcointalksearch.org/topic/ann-memorycoin-267522
This one apparently uses scrypt-jane but doesn't appear to use it the same way as the others. Couldn't find a start time or min/max parameters for it.

On another note, I picked up a GT 640 today. The best I've gotten out of it on YaCoin is 1.5 kH/s with K6x3. I thought I might be able to push that a little higher, having 4 GB and all, but anything more inevitably crashes the driver and halts my system.
sr. member
Activity: 350
Merit: 250
So I'm now fine with compiling a lot of stuff. I have compiled a version of minerd which runs and gets 0.08 khash/s per thread instead of 0.07. But strangely, when I run my shell file to launch that in a terminal, it launches, yet cudaminer from the same shell file won't. Weird.
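For reference, the shell file is basically just this (a sketch; the install path is a placeholder):

#!/bin/bash
cd /path/to/CudaMiner
./cudaminer --algo=scrypt-jane -H 0 -i 0 -d 0 -l T20x1 -o http://127.0.0.1:3339 -u user -p pass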
DBG
member
Activity: 119
Merit: 100
Digital Illustrator + Software/Hardware Developer
DBG are you using the 12-18-2013 binary?

According to https://docs.google.com/spreadsheet/ccc?key=0Aj3vcsuY-JFNdHR4ZUN5alozQUNvU1pyd2NGeTNicGc&usp=sharing#gid=0 people are getting 300+ kH/s with the 660 Ti. Most of them are using K7x32. If you're using a more recent commit from github, try -C 1.

Thanks m8, I'm using the last official release but I am setting things up for nightly builds.  I went and changed my flags to the following "-H 1 -i 0 -l K14x16 -C 0 -m 1" and now I'm finally able to hit the 250 kH/s talked about in the readme.  The 250 MHz boost to the GPU is actually working (overclocking was only working during an interactive/auto start-up) and puts me up another ~30 kH/s.  I have a lot more playing around to do, but finally sitting down and fully RTFM-ing helped a lot (also with a bit of luck).
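Spelled out, the full line is roughly this (pool URL and worker credentials are placeholders):

cudaminer.exe -H 1 -i 0 -l K14x16 -C 0 -m 1 -o http://<pool>:<port> -u <worker> -p <pass>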

Also grazie cbuchner1, thanks for going open-source and being so active with the community Smiley.
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
Ah yes. The Keccak hashing on the CPU is single-threaded and not SSE-optimized yet.

It's stressing the CPU in a weird way; I haven't really seen anything like it before. If I run 3-4 instances (or 2 with -i 0) I get massive mouse lag and eventually a BSOD about a timed-out CPU interrupt while the GPU is almost idling.