
Topic: nVidia listings on Mining Hardware Comparison (Read 28302 times)

sr. member
Activity: 418
Merit: 250
You can add my stats, I don't want to sign up:

8800GT - 25 MH/s

GTX 570, stock speeds: ~115 MH/s
GTX 570 at ~860 MHz: ~130 MH/s
legendary
Activity: 910
Merit: 1000
Quality Printing Services by Federal Reserve Bank
You can also add a 9800GT (no OC):

Code:
|-
| 9800GT || 26 ||   ||   || 1500  || 112 || poclbm-mod.py with -w 64 -f 200 -d 0

I do some work on this PC at the same time. You can probably squeeze more MH/s out of it.
 
I tested with the following settings: -w 64 -f 30 -d 0, and it can do 27.2 MHash/s.
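For reference, the full command line for that test would look something like the one below (pool host and login options are left out, since those depend on your pool and your poclbm-mod build). As the numbers above suggest, a lower -f value gives the miner a larger share of the GPU, while a higher value like -f 200 keeps the desktop responsive at a small cost in hash rate.

Code:
python poclbm-mod.py -w 64 -f 30 -d 0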

newbie
Activity: 6
Merit: 0
Hi all Smiley

I am new to this mining stuff. Learned about it yesterday and thought that it could be fun to try out.

Now, after fooling around a little with both the OpenCL miner and the RPC-CUDA miner (using GUIMiner v2011-05-01),

I am able to get 83.5-84 MHash/s on a GTX 560 Ti (Gainward Phantom 2) with rpcminer-cuda and the flags -gpugrid=128 -gputhreads=768.

I am running Win7 x64 and Nvidia driver package v270.61.

The card is at factory default speeds: GPU: 822.5 MHz / Mem: 2004 MHz / Shader: 1645 MHz (170 W TDP).


Running the CUDA miner with the flags -gpugrid=64 -gputhreads=384, which match the GPU's actual texture-unit and shader-core counts, yields about 80.5 MH/s. Why doubling those numbers works better, I can't explain.
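One possibility (just a guess on my part, and I'm assuming -gpugrid is the CUDA block count and -gputhreads is threads per block): the doubled settings put far more threads in flight than the 384 shader cores, which normally helps hide memory latency. A rough check of the totals:

Code:
# Rough check of the two launch configurations, assuming (unconfirmed) that
# -gpugrid is the CUDA block count and -gputhreads is threads per block.
cuda_cores = 384  # GTX 560 Ti shader cores

for grid, threads in [(64, 384), (128, 768)]:
    total = grid * threads
    print(f"grid={grid}, threads={threads}: {total} threads in flight "
          f"(~{total // cuda_cores}x the core count)")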

The OpenCL miner has caused my graphics driver to stop responding every time the miner is stopped. It usually recovers by itself, but one time I had to hard-reset the system to get my screen back. It also runs slower than the CUDA miner.

legendary
Activity: 1284
Merit: 1001
I'm mining with my XFX GeForce GTX 275 at a steady 59 to 61 Mhash/s. I underclocked the core graphics clock down to 576 MHz and the memory clock down to 775 MHz, and overclocked the processor clock up to 1502 MHz.
I assume you have free electricity? Otherwise you're probably losing money.
full member
Activity: 176
Merit: 106
XMR = BTC in 2010. Rise chikun.
I'm mining with my XFX GeForce GTX 275 at a steady 59 to 61 Mhash/s. I underclocked the core graphics clock down to 576 MHz and the memory clock down to 775 MHz, and overclocked the processor clock up to 1502 MHz.
newbie
Activity: 12
Merit: 0
I'm guessing these values are in Fahrenheit.
You would be guessing wrong. I idle around 50 and can hit 90 under load if I push it hard enough.
Mining coins and boiling water at the same time, just 10 more to go  Grin
That should have been written "hoping" rather than "guessing".  My entire post was poorly authored, and is a prime example of why one shouldn't post whilst sleep deprived. I certainly knew better; I've never seen PC temperature sensors report in anything other than C.  I just didn't like the idea of something in one of my machines running so hot.

I don't know what the wattage draw is, but I have noticed the temperature of the card jump from 58 (idle) to 72 (mining).  I'm guessing these values are in Fahrenheit.

I didn't know that applying power could cause a card to refrigerate itself to below room temperature!
Are you from opposite land? "Opposite land: crooks chase cops, cats have puppies... Hot snow falls up."

An increase in temperature (58 to 72) certainly does not imply refrigeration.  The temperature in my room is currently 70 F (21.11 C), but that's probably 'cause it's only 36 F outside (2.22 C).... So, had those values actually represented F, it would have gone from below room temperature to above it. Smiley
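(For the record, those Celsius figures come from the standard conversion C = (F - 32) x 5/9; a quick sanity check:)

Code:
# Fahrenheit-to-Celsius check for the figures above (standard formula).
def f_to_c(f):
    return (f - 32) * 5.0 / 9.0

print(round(f_to_c(70), 2))  # 21.11 C (room temperature)
print(round(f_to_c(36), 2))  # 2.22 C (outside)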

nVidia specs this card to run at 69 W.  Also, try to determine if this card runs at nVidia reference clocks.  GPU-Z should do the trick, as well as CPU-Z (Graphics Tab, highest perf level).

Thanks for mentioning the specified wattage; I was procrastinating on looking it up on nVidia's site, and your data prompted me to confirm it.  This set my mind at ease: I was worried that running my GPU at a steady 74 C might damage it, but the specification lists a max temperature of 105 C.

The card is running at nVidia's reference clocks (per the linked spec); data obtained using the Linux command nvidia-settings:
Code:
$ nvidia-settings --display :0 -q [gpu:0]/GPUCurrentProcessorClockFreqs

  Attribute 'GPUCurrentProcessorClockFreqs' (htpc:0[gpu:0]): 1340.
    The valid values for 'GPUCurrentProcessorClockFreqs' are in the range 335 - 2680 (inclusive).
    'GPUCurrentProcessorClockFreqs' can use the following target types: X Screen, GPU.

$ nvidia-settings --display :0 -q [gpu:0]/GPUCurrentClockFreqs

  Attribute 'GPUCurrentClockFreqs' (htpc:0[gpu:0]): 550,1800.
    'GPUCurrentClockFreqs' is a packed integer attribute.
    'GPUCurrentClockFreqs' is a read-only attribute.
    'GPUCurrentClockFreqs' can use the following target types: X Screen, GPU.

The 1340 is the processor clock and 550 is the graphics clock, but I'm not sure what the 1800 represents.  The "attribute" GPUDefault3DClockFreqs has the same 550,1800 value(s).

It should also be noted that I have the 512MB model.

In time I'll be setting up PowerTOP to confirm the specified wattage, but it's low priority.

I noticed a GPUMemoryInterface "attribute", which has a value of 128.  Do you think it would be advisable to try setting the worksize flag to match this?
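If anyone wants to look at the same attribute on their own card, a query in the same style as the ones above should show it (I haven't tried this on other models):

Code:
$ nvidia-settings --display :0 -q [gpu:0]/GPUMemoryInterface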

As far as your -v issue goes, you may have just had a run of bad luck.  Who knows?
Sorry, that was the prime example of poor writing.  When I ran the miner (poclbm-mod as well as poclbm) without the vectors option, I saw values between 21567 and 21579.  I did not let the test run long enough to determine if I would still see multiple accepts for a single getwork.  Currently, using the vectors option, the most accepts I have ever seen on a single getwork is 5.

lol -f 0 does not work on nvidia  Grin
Could you elaborate?  Do you mean that using -f 0 will see no improvements over -f 1?  I've been using -f 0 for quite a while, but didn't pay close enough attention when switching from -f 1 to notice.

Thanks to whoever updated the wiki with the information I provided; I'm guessing it was urizane.
newbie
Activity: 16
Merit: 0
-f 0 is better IMO

lol -f 0 does not work on nvidia  Grin
full member
Activity: 126
Merit: 100
-f 0 is better IMO
member
Activity: 112
Merit: 11
So is there any poclbm flag adjustment to be made for nVidia cards?
yeah -f1  Cheesy
donator
Activity: 1731
Merit: 1008
So is there any poclbm flag adjustment to be made for nVidia cards?
hero member
Activity: 575
Merit: 500
The North Remembers
-v kills hash speeds on my 9600 GT too. Are there any nVidia cards that work well with -v on?
newbie
Activity: 55
Merit: 0
I am posting this from memory, as my tower is at my parents' place.

Getting just over 30,000 khash/s with a 670 MHz core and 999 MHz memory (don't remember the shader clock) on an 8800 GT.

With stock clocks I think it was somewhere around 26,000 khash/s.

Using -v destroys the hash rate, giving about 17,000 khash/s no matter the clocks. I have no idea why this is.


newbie
Activity: 56
Merit: 0
Sorry, I was using nTune and it doesn't show a shader clock in there. I have it overclocked now. RivaTuner says this:

G94 chip 799 Core 1981 Memory 651

GUIMiner says I'm getting around 18.8 Mhash/s now with -w128 -f 10, so I can still use my computer. I tried different values for -w, but 128 seems to be the best.

OK, I'm guessing 1981 is the shader clock.  Something got a little convoluted in there.  I'll go ahead and put that up.
hero member
Activity: 575
Merit: 500
The North Remembers
Sorry, I was using nTune and it doesn't show a shader clock in there. I have it overclocked now. RivaTuner says this:

G94 chip 799 Core 1981 Memory 651

GUIMiner says I'm getting around 18.8 Mhash/s now with -w128 -f 10, so I can still use my computer. I tried different values for -w, but 128 seems to be the best.
newbie
Activity: 56
Merit: 0
I'm using poclbm_gui with the -w128 -f 1 flags and stock settings of 675/900. I set my card to 800/650 and I see 20.1 Mhash/s, but it still bounces around, so it's hard to get an exact reading.

OK, well, I'll put up the two stats you have for your 9600 GT, but I'm going to leave the wattage field blank without the specific GPU chip that's on that board.

Actually, I'll need the shader clocks for both of those settings to make a good entry into the table.  There are three clock speeds for these types of boards.  It appears that you've passed me the core and RAM clocks, but not the shader clocks.  Stock for a reference 9600 GT is 650/1625/900 core/shader/RAM clocks.  Also, I'm still going to need the specific GPU chip if I'm going to add to the wattage and Mhash/W fields.
hero member
Activity: 575
Merit: 500
The North Remembers
I'm using poclbm_gui with the -w128 -f 1 flags and stock settings of 675/900. I set my card to 800/650 and I see 20.1 Mhash/s, but it still bounces around, so it's hard to get an exact reading.
newbie
Activity: 56
Merit: 0
Add my former card:

Nvidia GTX 560 Ti, factory overclocked to 900/2000

86,700 khash/s (~86.7 MHash/s)

Win7 x64 and RPC Miner CUDA

OK, well, I guess I'll add it.  Some more info wouldn't hurt, but there seems to be just enough here.

I'm running an 8600 GT at 7.3 Mh/s,
2 Mh/s better than the ones posted on the wiki  Grin
GPU Shark says 43 watts, so it's 0.169 Mhash/W if that's correct.
Using poclbm with -w 128.
1602 MHz shader clock.

The wattage is going to be off by a pretty good amount.  If you go up by one model, the 8600 GTS has the same GPU clocked at 675 MHz core/1450 MHz shader vs. the 8600 GT at 540 MHz core/1180 MHz shader.  The 8600 GTS is listed at 75 W.  If your shader clock is 1602 MHz, you're probably in that 75 W or greater ballpark, which reminds me...

I idle around 50 and can hit 90 under load if I push it hard enough.
Mining coins and boiling water at the same time, just 10 more to go  Grin

The cooler for your card was designed with a 47 W TDP in mind.  Pushing those clocks may not kill your card today or a month from now, but it will definitely shorten its life.  Your card being an 8600 GT, you may not be all that concerned, anyhow.  I'll add it for now, but let me know if you are running a different clock than the one you have posted.
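On the wattage guess above: if a rough number is wanted for the table, one crude approach (my own assumption, not a measured or published figure) is to scale the 8600 GTS's 75 W rating by the shader-clock ratio, which lands in the same "75 W or greater" ballpark:

Code:
# Very rough estimate only: assumes board power scales linearly with shader
# clock and ignores voltage, memory clock and board-level differences.
gts_watts = 75.0          # nVidia's listed figure for the 8600 GTS
gts_shader_mhz = 1450.0   # reference 8600 GTS shader clock
oc_shader_mhz = 1602.0    # overclocked 8600 GT shader clock reported above
print(round(gts_watts * oc_shader_mhz / gts_shader_mhz), "W")  # ~83 W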

My computer jumps between 16.8 and 18.7 Mhash/s on my 9600 GT.

More information would be helpful.  Is this card running at the nVidia reference clocks?  What miner are you using?  What command line arguments are you giving it?  (Omit username and password, obviously.)  The speed variance you're seeing is somewhat disturbing.  Is this happening while you're using the system, or is 18.7 MHash/s consistent when you're not using it?  Also, fire up GPU-Z to determine which specific GPU you have, as there were two versions of the 9600 GT, one based on the 65 nm G94 and another based on the 55 nm G94b.  I've seen wattage specs for the 65 nm part, but haven't run across wattage specs for the 55 nm part (which would be lower).

I'm using a GT 240.

Using python poclbm-mod.py -d 0 -f 0 -a 10 -v -l, I regularly see between 21230 and 21255 khash/s.  I've also noticed that -v (vectors) costs about 300 khash/s (21567 to 21579 without it), but I think there was an increase in invalid/stale shares; this may have been due to other factors, however.  I also think there was a reduction in discovering multiple shares per getwork without vectors, but that may have been due to the same 'other factors'.

I don't know what the wattage draw is, but I have noticed the temperature of the card jump from 58 (idle) to 72 (mining).  I'm guessing these values are in Fahrenheit.

nVidia specs this card to run at 69 W.  Also, try to determine if this card runs at nVidia reference clocks.  GPU-Z should do the trick, as well as CPU-Z (Graphics Tab, highest perf level).  As far as your -v issue goes, you may have just had a run of bad luck.  Who knows?
member
Activity: 112
Merit: 11
not very good for computer power/W, though
Yeah, but at least the beer is cold.
And in the end, which is more important?  Grin
full member
Activity: 126
Merit: 100
I don't know what the wattage draw is, but I have noticed the temperature of the card jump from 58 (idle) to 72 (mining).  I'm guessing these values are in Fahrenheit.

I didn't know that applying power could cause a card to refrigerate itself to below room temperature!

I've actually seen a computer INSIDE a mini-refrigerator Cheesy Not very good for computer power/W, though, as the refrigerator was basically working at 100% load all the time.
sr. member
Activity: 406
Merit: 250
I don't know what the wattage draw is, but I have noticed the temperature of the card jump from 58 (idle) to 72 (mining).  I'm guessing these values are in Fahrenheit.

I didn't know that applying power could cause a card to refrigerate itself to below room temperature!