Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] - page 1029. (Read 3426921 times)

sr. member
Activity: 350
Merit: 250
So guys, I know my GPU wasn't getting what yours was, so I did a bit of messing about. As I'm using a Xeon CPU, power usage is very low. My CPU is mining at 0.56 kH/s and is only using 20W of extra power. So now I am mining at over 3.6 kH/s instead of 3.07 kH/s, all for an extra 20W of power.

Not a bad trade-off if you ask me :-)  I am leaving the config running which causes the driver crash when it initially starts. It says it's getting work and my GPU is utilized properly. Just hope it is actually doing something and not just saying it is.

At least my CPU is going to keep mining at about the same rate. And considering I have another system with a Xeon in it that should hardly use any power, they seem an easy win.

40W CPU - 0.58 kH/s
250W GPU - 3 kH/s

You'd need 5 CPUs for the same power draw as the GPU, and less power usage if you already have them. Although don't forget to factor in the power usage of the rest of the system. My full system is running at 265W with both the CPU and GPU mining, plus 2 SSDs, 3 HDDs and a full watercooling setup.
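The trade-off above can be sanity-checked with a quick shell calculation using the figures quoted (kH/s per watt for each device, plus how many such CPUs match the GPU's power draw):

```shell
# Efficiency check using the numbers quoted above.
# awk does the floating-point math, since plain sh can't.
awk 'BEGIN {
  cpu_khash = 0.58; cpu_watts = 40
  gpu_khash = 3.0;  gpu_watts = 250
  printf "CPU: %.4f kH/s per watt\n", cpu_khash / cpu_watts
  printf "GPU: %.4f kH/s per watt\n", gpu_khash / gpu_watts
  printf "CPUs per GPU worth of power: %.2f\n", gpu_watts / cpu_watts
}'
```

On those figures the Xeon actually edges out the 780 per watt, which supports the point about keeping the CPUs mining.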

Just wish solo mining was more reliable. When I first started I got 3 blocks in 7 hours. Since then I haven't had any more, so 13 hours with nothing. But I have been doing a lot of messing about trying to compile etc., so I guess it isn't too bad.

I do think CPU mining may be the only way to go when the N-factor jumps up, unless we can find a way to get more power out of our GPUs. Even with very high N-factors, system memory will keep getting better, and people will just buy big 12-core CPUs instead of using GPUs.

Hmm, I wonder if an AMD APU would be any good? Integrated GPU plus CPU, so you could mine using both for one cheap chip. Nvidia GPUs are definitely the way to go on this though. Just wish I could find a way to use all my GPU memory and get some more power out of it.
newbie
Activity: 34
Merit: 0
DBG, are you using the 12-18-2013 binary?

According to https://docs.google.com/spreadsheet/ccc?key=0Aj3vcsuY-JFNdHR4ZUN5alozQUNvU1pyd2NGeTNicGc&usp=sharing#gid=0 people are getting 300+ kH/s with the 660 Ti. Most of them are using K7x32. If you're using a more recent commit from GitHub, try -C 1.
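For anyone unsure what that looks like on the command line, a sketch of the kind of launch being discussed; the pool URL and worker credentials are placeholders, not a real pool:

```shell
# Hypothetical 660 Ti launch using the K7x32 config from the spreadsheet.
# Pool URL and worker:password are placeholders.
cudaminer -l K7x32 -o stratum+tcp://pool.example.com:3333 -O worker:password

# On a recent GitHub build, the suggested texture-cache tweak:
cudaminer -l K7x32 -C 1 -o stratum+tcp://pool.example.com:3333 -O worker:password
```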
DBG
member
Activity: 119
Merit: 100
Digital Illustrator + Software/Hardware Developer
Hi guys, I know a lot of you have access to a 660 Ti, so I thought I would just throw this on the table (my exact card is the 2GB from EVGA).

I've tried running stock (Windows 7, 16GB RAM, no OC) as well as using EVGA Precision X, which lets me increase the GPU clock and memory clock offsets (I've mostly only increased the GPU clock by +230MHz and not much else).

The kernel that the software seems to like the most (or at least almost always picks during autotune) is K14x16 (I haven't been Litecoin mining in a while, but when I last did I was using the April 30th release; with the same hardware setup it seemed to like S112x2 then... I recently tried that kernel again but a lot has changed and I received poor results).

Right now my base configuration is "-H 0 -i 0 -l K14x16 -C 0 -m 1".  When I'm running cudaMiner, I'm not using my PC (I might stream a movie using UMS, but that is fully CPU-dependent), and in fact I used to run CPU mining at the same time to maximize results (lol, I would never have stopped if I hadn't made the move to Japan for the 2nd half of 2013; I checked my BTC-e history and I sold so many Litecoins for well under $3, since that's what they were worth at the time x.x).

Does anyone have any suggestions on what I could change to possibly see a better hashrate? Right now I'm seeing 180-185 kH/s without any OC, and maybe a very small 5 kH/s boost with the +230MHz GPU clock increase.  What I don't understand is that I only see a spike in GPU activity when autotune is testing the waters; then, no matter what I do, it drops below even the base clock rate.  Is there any way to make it really push the GPU?  I will be careful of course and watch the heat, but it has good cooling.  I know this is a long post but any advice would be great.

Oh, one last thing: I haven't noticed this happening at any specific time (it could happen 1h into use or 1d), but cudaMiner sometimes decides to cut its hashrate in half (I can check the speed on my phone, and if I notice it's low, I turn on my monitor to check it).  However, when I turn on my monitor it climbs back up to the speed it started at.  I don't have any power management enabled in Windows or my BIOS, so I'm wondering if anyone else has run into this.  Again, apologies for the wall of text, but if anyone wants to take a crack at any of the aforementioned questions, I would be grateful.

Edit: When using autotune I have 287 for my maximum warps.
sr. member
Activity: 350
Merit: 250
Am I missing out or what? Lmao, I have the most expensive 780 you can buy and you guys are just destroying me :-p

I might try 22x1 and see what it gives me. Never know, I might get lucky

Edit: yeahhh, that gives me about 0.9 kH/s.
I wonder why there is such high fluctuation between these GPUs. I mean I can't max out the memory on my GPU, so it can't be a memory limit. And anything over 16x1 seems to just not work.

I would love to try it on Linux but I can't for the life of me get bloody CUDA to install, as the installers are for 12.10 and I'm using 13.10. Well, I'm using Mint 16 to be exact.
newbie
Activity: 34
Merit: 0

Currently Linux is better suited to mine on 3GB and 4GB cards... I can run a total of 23 warps on my 780Ti (-l T23x1) currently netting me 3.4 kHash/s with a mild BIOS based overclock. Still, I can get the same kHash/s now with a factory overclocked 660Ti, lol!


I thought the format was "prefix, blocks x warps." Autotune reports 14 maximum warps on my 780 Windforce OC, but since unplugging a 2560x1440 monitor I can go up to T22x1, which is now netting me 3.6 kH/s.

Also, does anyone have a nice bash script for implementing a backup pool? I'm forced to use a pool rather than solo mine at the moment. I don't trust my yacoin daemon; I had to hack it up pretty badly to get it to compile on OS X, and I seem to have broken the built-in mining, which makes me distrust it. I think I need to find an old version of gcc. I tried to mine locally while I was at work yesterday and supposedly found 3 blocks, but they all appear to have been orphans... yet I can find no reference to them in my log, nor any reference to actually mining. The yacoin log looks a little different from those of other coins, so I don't know if I should expect to see anything, but usually there is feedback from connected miners.
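In the absence of a better answer, here's a minimal failover sketch: it assumes cudaminer is on the PATH, and the pool URLs, credentials and -l config are all placeholders. Whenever the miner process exits, it rotates to the next pool and wraps around.

```shell
#!/bin/bash
# Backup-pool wrapper sketch: restart cudaminer against the next pool
# in the list whenever the miner process exits (pool down, crash, etc.).
# Pool URLs, worker credentials and the launch config are placeholders.
POOLS=(
  "stratum+tcp://primary.example.com:3333"
  "stratum+tcp://backup.example.com:3333"
)
USERPASS="worker:password"

mine_with_failover() {
  local i=0 url
  while true; do
    # Modulo wraps the index back to the first pool after the last.
    url="${POOLS[$((i % ${#POOLS[@]}))]}"
    echo "starting cudaminer against $url"
    cudaminer -l T22x1 -o "$url" -O "$USERPASS"
    echo "miner exited; failing over in 10s"
    sleep 10
    i=$((i + 1))
  done
}
```

Source it and run `mine_with_failover` in a screen/tmux session; kill the cudaminer process to force a manual failover.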
sr. member
Activity: 350
Merit: 250
Maximum warps is 20 using -l K, and I haven't been using -C or -m; should I be?
Max warps is 14 using -l T.

Edit: just tried -C 0 with T16x1 and I still get 3.07 kH/s, and still have the driver crash.
Tried overclocking my GPU; it makes no difference unless I really push up the voltage, as +32mV hardly made any difference.
hero member
Activity: 756
Merit: 502
I have 16GB of system memory, so is that not enough?

What message for "maximum warps" do you get when autotuning? Does that change when autotuning the Kepler kernel (-l K) without passing any -m 1 or -C options?

Christian
hero member
Activity: 756
Merit: 502
Shorter scantimes lead to fewer stales. I think the problem might actually be with the rate measurement code, and not so much with the actual hashing rates of the GPU.

Christian

Is there a different effect on solo vs pool mining with different scantime upper limits?
Like, maybe I'm totally off, but would it be worth having lower scantimes for solo mining, while we can get away with higher scantime limits in pools, as long as we're not getting booed?

On regular pools, the longpoll (or stratum new-block notification) will interrupt your scan when a new block is found.

When doing getwork (JSON-RPC) based solo mining, there is no longpoll support, so don't make your scantime too long.
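To illustrate the distinction, a hedged sketch of the two setups using cudaMiner's -s scantime switch; the URLs, port and credentials are placeholders:

```shell
# Pool mining: longpoll/stratum interrupts the scan on a new block,
# so a longer scantime is fine. URL and credentials are placeholders.
cudaminer -l T22x1 -s 60 -o stratum+tcp://pool.example.com:3333 -O worker:password

# getwork solo mining against a local wallet daemon: no longpoll,
# so keep the scantime short to avoid hashing against stale blocks.
cudaminer -l T22x1 -s 5 -o http://127.0.0.1:9332 -O rpcuser:rpcpass
```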
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
Shorter scantimes lead to fewer stales. I think the problem might actually be with the rate measurement code, and not so much with the actual hashing rates of the GPU.

Christian

Is there a different effect on solo vs pool mining with different scantime upper limits?
Like, maybe I'm totally off, but would it be worth having lower scantimes for solo mining, while we can get away with higher scantime limits in pools, as long as we're not getting booed?
sr. member
Activity: 350
Merit: 250
I have 16GB of system memory, so is that not enough?
hero member
Activity: 756
Merit: 502
T16x1 doesn't crash here, although with it I only get 1.4 kH/s from my 780. 1.9 kH/s is still my personal highest, using T10x1.

Dunno what's causing such variance in all of our hash rates, lol. I am on Windows 8, though it seems most everyone else is on 7 or Linux.

My hash rates from normal scrypt are what's expected for my card, I think - getting 530-550 kH/s, so I don't think anything is wrong with CUDA on my machine.

Windows 7 won't allow allocating 3 GB in one piece unless you have a lot of system RAM in your machine; there are limits imposed by the WDDM driver. I will have to add chunked memory allocation back into the Titan kernel to work around this.

Currently Linux is better suited to mine on 3GB and 4GB cards... I can run a total of 23 warps on my 780Ti (-l T23x1) currently netting me 3.4 kHash/s with a mild BIOS based overclock. Still, I can get the same kHash/s now with a factory overclocked 660Ti, lol!
hero member
Activity: 756
Merit: 502
So Yacoin is worth mining?
I don't know, I'm still in the honeymoon phase of bitcoin mining. :)


cbuchner1, I discovered a phenomenon regarding scantime (-s):

[screenshots of the miner output at -s 1, -s 5 and -s 60 omitted]
Is this really affecting the hashrate?
How do higher scantime upper limits bode with pool/solo mining?

Shorter scantimes lead to fewer stales. I think the problem might actually be with the rate measurement code, and not so much with the actual hashing rates of the GPU.

Christian
sr. member
Activity: 350
Merit: 250
I have never overclocked my 780 since new, so I don't know how much I can get out of it.
This system is for gaming as well, so I can push it, but I have to be able to play games too.
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
Yeah, 480 is about what I get at stock from normal scrypt too. I have my 780 overclocked +200 on the core. Bringing it back down to stock on -jane just lowers the hashrate even more.

I can get away with waaaay more core overclock with scrypt-jane than I can with scrypt. Like 1200 max for scrypt and 1315 max for scrypt-jane, without even a memory downclock. Even better, the temps are much lower (62°C down to 54°C), which means scrypt-jane is quite OC-friendly.
sr. member
Activity: 350
Merit: 250
I let mine auto-clock at 106% power, as I don't have thermal limits etc., and it doesn't even try to use it all; it stays the same.

Well, at 106% my GPU core is 1123.5MHz.
99% GPU load, and it won't overclock any more due to "limited by reliability voltage" - that's with me leaving it to auto-overclock.
hero member
Activity: 840
Merit: 1000
What switches are you using?

-H 1 -C 0 -m 0

-i 0 -d 0 -H 1

ManIkWeet did that, I just don't know how to use it.
And my 780 using normal scrypt only gets 480 kH/s or so. No overclocks at all, it's all stock; this is under Windows 7.

Yeah, 480 is about what I get at stock from normal scrypt too. I have my 780 overclocked +200 on the core. Bringing it back down to stock on -jane just lowers the hashrate even more.
sr. member
Activity: 350
Merit: 250
ManIkWeet did that, I just don't know how to use it.
And my 780 using normal scrypt only gets 480 kH/s or so. No overclocks at all, it's all stock; this is under Windows 7.

In case this means anything to anyone: running the autotune, when it gets to 14 it stops; it doesn't check anything higher. And anything higher than 14 gives me the driver crash, but it still runs... hmmm

Seems there is something very funny going on even with the exact same card model. I'm guessing it's down to different manufacturers though; mine is an EVGA 780 Hydro Copper.
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
What switches are you using?

-H 1 -C 0 -m 0
hero member
Activity: 840
Merit: 1000
T16x1 doesn't crash here, although with it I only get 1.4 kH/s from my 780. 1.9 kH/s is still my personal highest, using T10x1.

Dunno what's causing such variance in all of our hash rates, lol. I am on Windows 8, though it seems most everyone else is on 7 or Linux.

My hash rates from normal scrypt are what's expected for my card, I think - getting 530-550 kH/s, so I don't think anything is wrong with CUDA on my machine.
full member
Activity: 182
Merit: 100
I run it, and just on the first hash attempt the driver crashes and recovers; then it carries on.

OK, so I had everything installed, but I can not for the life of me figure out how to clone the git repo, so I am going to give up lol
You can download the whole GitHub repo as a .zip, silly... (at the bottom right of your screen)