I'm guessing these values are in Fahrenheit.
You would be guessing wrong. I idle around 50 and can hit 90 under load if I push it hard enough.
Mining coins and boiling water at the same time, just 10 more to go
That should have been written "hoping" rather than "guessing". My entire post was poorly authored, and is a prime example of why one shouldn't post whilst sleep-deprived. I certainly knew better; I've never seen PC temperature sensors represented in anything other than C. I just didn't like the idea of something in one of my machines running so hot.
I don't know what the wattage draw is, but I have noticed the temperature of the card jump from 58 (idle) to 72 (mining). I'm guessing these values are in Fahrenheit.
I didn't know that applying power could cause a card to refrigerate itself to below room temperature!
Are you from opposite land? "Opposite land: crooks chase cops, cats have puppies... Hot snow falls up."
An increase in temperature (58 to 72) certainly does not imply refrigeration. The temperature in my room is currently 70 F (21.11 C), but that's probably 'cause it's only 36 F outside (2.22 C)... So, had those values actually represented F, the card would have gone from below room temperature to above it.
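For anyone who wants to sanity-check those numbers, the standard C = (F - 32) * 5/9 conversion is easy to run through bc; the values below are just the ones from this post:
$ echo "scale=2; (70 - 32) * 5 / 9" | bc    # room temperature: 21.11
$ echo "scale=2; (58 - 32) * 5 / 9" | bc    # the idle reading, had it really been F: 14.44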
nVidia specs this card to run at 69 W. Also, try to determine if this card runs at nVidia reference clocks. GPU-Z should do the trick, as well as CPU-Z (Graphics Tab, highest perf level).
Thanks for mentioning the specified wattage; I was procrastinating on looking it up on nVidia's site, and your data prompted me to confirm it. This set my mind at ease: I was worried that running my GPU at a steady 74 C might damage it, but the specification lists a max temperature of 105 C.
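For anyone who wants to watch the same reading, nvidia-settings exposes the core temperature as the GPUCoreTemp attribute; the display and GPU target below are just the ones from my queries further down, so adjust to taste:
$ nvidia-settings --display :0 -q [gpu:0]/GPUCoreTemp
$ nvidia-settings --display :0 -q [gpu:0]/GPUCoreTemp -t    # -t (terse) prints only the number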
The card is running at nVidia's reference clocks (per the linked spec); data obtained using the Linux command nvidia-settings:
$ nvidia-settings --display :0 -q [gpu:0]/GPUCurrentProcessorClockFreqs
Attribute 'GPUCurrentProcessorClockFreqs' (htpc:0[gpu:0]): 1340.
The valid values for 'GPUCurrentProcessorClockFreqs' are in the range 335 - 2680 (inclusive).
'GPUCurrentProcessorClockFreqs' can use the following target types: X Screen, GPU.
$ nvidia-settings --display :0 -q [gpu:0]/GPUCurrentClockFreqs
Attribute 'GPUCurrentClockFreqs' (htpc:0[gpu:0]): 550,1800.
'GPUCurrentClockFreqs' is a packed integer attribute.
'GPUCurrentClockFreqs' is a read-only attribute.
'GPUCurrentClockFreqs' can use the following target types: X Screen, GPU.
The 1340 is the processor clock and 550 is the graphics clock, but I'm not sure what the 1800 represents. The "attribute" GPUDefault3DClockFreqs has the same 550,1800 value(s).
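One way to chase down what that 1800 is would be to dump the full attribute list, which includes a short description of each attribute, and compare; something like:
$ nvidia-settings --display :0 -q all | less    # descriptions for every attribute, packed ones included
$ nvidia-settings --display :0 -q [gpu:0]/GPUCurrentClockFreqs -t    # terse output: just the 550,1800 pair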
It should also be noted that I have the 512MB model.
In time I'll be setting up PowerTop to confirm the specified wattage, but it is low priority.
I noticed a GPUMemoryInterface "attribute", which has a value of 128. Do you think it would be advisable to try setting the worksize flag to match this?
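If it is, I'd presumably just pass it on the command line along with the vectors flag, something like the line below (the pool host, port, and credentials are placeholders, and the exact flag spellings should be double-checked against poclbm's --help):
$ ./poclbm.py --host=pool.example.com --port=8332 --user=worker --pass=secret -d 0 -v -w 128
# -v enables vectors, -w sets the worksize; 128 would match the GPUMemoryInterface value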
As far as your -v issue goes, you may have just had a run of bad luck. Who knows?
Sorry, that was the prime example of poor writing. When I ran the miner (poclbm-mod as well as poclbm) without the vectors option I saw values between 21567 and 21579. I did not let the test run long enough to determine whether I would still see multiple accepts for a single getwork. Currently, using the vectors option, the most accepts I have ever seen on a single getwork has been 5.
lol -f 0 does not work on nvidia
Could you elaborate? Do you mean that -f 0 gives no improvement over -f 1? I've been using -f 0 for quite a while, but I didn't pay close enough attention when switching from -f 1 to notice.
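The obvious check would be to run the miner both ways for a few minutes and compare the khash/s it reports, with everything else held constant (same placeholder pool details as above; exact flags per poclbm's --help):
$ ./poclbm.py --host=pool.example.com --port=8332 --user=worker --pass=secret -d 0 -f 1    # note the rate once it settles
$ ./poclbm.py --host=pool.example.com --port=8332 --user=worker --pass=secret -d 0 -f 0    # then compare against -f 0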
Thanks to whoever updated the wiki with the information I provided; I'm guessing it was urizane.