Topic: GPU's temperature and its relationship with power consumption (Read 5235 times)

newbie
Activity: 33
Merit: 0
If any of you have a Kill-A-Watt and some spare time, please help by running this test:

-With cgminer, set the target temp to 65 °C with auto fan, and record the wattage (averaged over 1-2 minutes)

-Repeat with a target temp of 75 °C
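
For anyone willing to run it, here is a minimal sketch of how the readings could be collected and averaged. It assumes cgminer is launched separately with its --auto-fan and --temp-target options (once at 65 °C, once at 75 °C) and that the Kill-A-Watt is read by eye with the samples typed in by hand; the script is illustrative, not a tested tool.

Code:
# Minimal sketch for the two-run test described above.
# Assumed setup (not from the thread): cgminer is started separately, e.g.
#   cgminer --auto-fan --temp-target 65   (first run)
#   cgminer --auto-fan --temp-target 75   (second run)
# and the Kill-A-Watt is read by eye every ~10 seconds for 1-2 minutes.

def average_watts(label):
    """Prompt for wattage readings (blank line to finish) and return the mean."""
    print(f"Enter Kill-A-Watt readings for {label}, one per line (blank line when done):")
    samples = []
    while True:
        line = input("> ").strip()
        if not line:
            break
        samples.append(float(line))
    return sum(samples) / len(samples)

if __name__ == "__main__":
    w65 = average_watts("target temp 65 C")
    w75 = average_watts("target temp 75 C")
    print(f"65 C average: {w65:.1f} W")
    print(f"75 C average: {w75:.1f} W")
    print(f"Difference (75 C - 65 C): {w75 - w65:+.1f} W"
          f" ({(w75 - w65) / 10:+.2f} W per degree C)")

Averaging several readings over 1-2 minutes smooths out the load spikes a single glance at the meter would miss, and reporting the per-degree difference makes the two runs easy to compare.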

member
Activity: 70
Merit: 10
If you want to maximise the efficiency of your cooled mining card, there'll be an equilibrium point where a 1 W increase in power to your cooling solution (probably a fan) would result in a 1 W reduction in the power draw of your card. At this point it's not worth cooling the card down further (unless you go for a more efficient cooling technology).

True, but one can also run a card cooler by lowering clocks. Lower the clock enough and you can also lower the voltage.

I can run a 5970 @ 40% fan and <60 °C, but only at 535 MHz and 0.7 V :)

Right now my power costs, even with "hot" GPUs (~70 °C), are only about a third of the revenue, so increased efficiency is mostly academic. However, as the network becomes more efficient (7900-series cards, FPGAs, etc.), things like lower temps, undervolting, and underclocking can be used to extend the "effective economic lifespan". When my 12 GH/s farm is no longer economical, I can "convert it" to a 6 GH/s farm which still is, and grind out maybe another year's worth of revenue.

I agree! There are much greater efficiency gains to be had by undervolting and underclocking, but if you really want to shave off the last few watts possible, then you can ramp up those fans. I personally keep the temps acceptable and the fans low, just so my secret rigs in the cupboards don't get discovered ;)
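
A quick back-of-the-envelope calculation shows why undervolting and underclocking dwarf anything the fan can save: dynamic (switching) power scales roughly with frequency times voltage squared. The sketch below compares the 535 MHz / 0.7 V point mentioned above against assumed 5970 stock values of roughly 725 MHz and 1.05 V; those stock numbers are an assumption for illustration, not figures from the thread.

Code:
# Rough dynamic-power scaling: P_dynamic is roughly proportional to f * V^2.
# The 535 MHz / 0.7 V point is from the post above; the "stock" values
# (725 MHz, 1.05 V) are assumed for illustration and may not match any card.

def relative_dynamic_power(freq_mhz, volts, ref_freq_mhz, ref_volts):
    """Dynamic power relative to a reference operating point (f * V^2 model)."""
    return (freq_mhz * volts**2) / (ref_freq_mhz * ref_volts**2)

stock = (725.0, 1.05)        # assumed stock clock (MHz) and core voltage (V)
undervolted = (535.0, 0.70)  # from the post above

power_ratio = relative_dynamic_power(*undervolted, *stock)
hashrate_ratio = undervolted[0] / stock[0]   # hashrate scales ~linearly with clock

print(f"Dynamic power vs stock:  {power_ratio:.2f}x")
print(f"Hashrate vs stock:       {hashrate_ratio:.2f}x")
print(f"Efficiency (MH/J) gain:  {hashrate_ratio / power_ratio:.2f}x")

Leakage and board overhead are ignored, so the real-world gain is smaller, but the f*V^2 term alone already more than doubles the hashes per joule under this toy model.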
donator
Activity: 1218
Merit: 1079
Gerald Davis
If you want to maximise the efficiency of your cooled mining card, there'll be an equilibrium point where a 1 W increase in power to your cooling solution (probably a fan) would result in a 1 W reduction in the power draw of your card. At this point it's not worth cooling the card down further (unless you go for a more efficient cooling technology).

True, but one can also run a card cooler by lowering clocks. Lower the clock enough and you can also lower the voltage.

I can run a 5970 @ 40% fan and <60 °C, but only at 535 MHz and 0.7 V :)

Right now my power costs, even with "hot" GPUs (~70 °C), are only about a third of the revenue, so increased efficiency is mostly academic. However, as the network becomes more efficient (7900-series cards, FPGAs, etc.), things like lower temps, undervolting, and underclocking can be used to extend the "effective economic lifespan". When my 12 GH/s farm is no longer economical, I can "convert it" to a 6 GH/s farm which still is, and grind out maybe another year's worth of revenue.
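
The "effective economic lifespan" point can be made concrete with placeholder numbers. Everything below (electricity price, wall power, revenue per GH/s) is an assumption chosen so that power costs start out at roughly a third of revenue, as described above; none of it is data from the thread.

Code:
# Illustrative "effective economic lifespan" arithmetic for the 12 GH/s vs
# 6 GH/s comparison above. Every number here is a placeholder assumption
# (electricity price, wall power, revenue per GH/s) - not data from the thread.

ELECTRICITY_USD_PER_KWH = 0.10   # assumed

def daily_profit(hashrate_ghs, wall_watts, revenue_usd_per_ghs_day):
    """Daily revenue minus daily electricity cost, in USD."""
    revenue = hashrate_ghs * revenue_usd_per_ghs_day
    power_cost = wall_watts / 1000.0 * 24 * ELECTRICITY_USD_PER_KWH
    return revenue - power_cost

# Assumed configs: full speed vs underclocked/undervolted at half the hashrate
# but roughly a third of the wall power (consistent with f * V^2 scaling).
full_speed   = dict(hashrate_ghs=12.0, wall_watts=3600.0)
underclocked = dict(hashrate_ghs=6.0,  wall_watts=1200.0)

# Sweep revenue per GH/s per day downward, as difficulty rises over time.
for rev in (2.0, 1.0, 0.75, 0.5, 0.3):
    p_full = daily_profit(revenue_usd_per_ghs_day=rev, **full_speed)
    p_low  = daily_profit(revenue_usd_per_ghs_day=rev, **underclocked)
    print(f"revenue {rev:>4.2f} $/GH/s/day: "
          f"full {p_full:+7.2f} $/day, underclocked {p_low:+7.2f} $/day")

As revenue per GH/s falls, the full-speed farm goes cash-flow negative first while the underclocked configuration is still earning, which is exactly the conversion described above.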
member
Activity: 70
Merit: 10
If you want to maximise the efficiency of your cooled mining card, there'll be an equilibrium point where a 1 W increase in power to your cooling solution (probably a fan) would result in a 1 W reduction in the power draw of your card. At this point it's not worth cooling the card down further (unless you go for a more efficient cooling technology).
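
That equilibrium can be found by treating total wall power as card power plus fan power and looking for the fan duty where the sum bottoms out. The sketch below uses a toy model in which every constant is an assumption: card power falls roughly linearly with temperature, temperature falls roughly linearly with fan duty, and fan power grows roughly with the cube of duty (fan affinity laws).

Code:
# Sketch of the fan-power / card-power equilibrium described above.
# The model and every constant are illustrative assumptions:
#   - card power falls ~linearly with temperature (leakage + VRM losses),
#   - GPU temperature falls ~linearly as fan duty rises,
#   - fan power rises roughly with duty cubed (fan affinity laws).

CARD_WATTS_AT_80C = 250.0     # assumed card draw at 80 C
WATTS_PER_DEGREE = 1.0        # assumed savings per degree C of cooling
TEMP_AT_30PCT_FAN = 85.0      # assumed GPU temp at 30% fan duty
DEGREES_PER_FAN_PCT = 0.4     # assumed cooling per extra % of fan duty
FAN_WATTS_AT_100PCT = 20.0    # assumed fan draw at full duty

def total_watts(fan_pct):
    """Card power plus fan power at a given fan duty (%), under the toy model."""
    temp = TEMP_AT_30PCT_FAN - DEGREES_PER_FAN_PCT * (fan_pct - 30)
    card = CARD_WATTS_AT_80C - WATTS_PER_DEGREE * (80 - temp)
    fan = FAN_WATTS_AT_100PCT * (fan_pct / 100.0) ** 3
    return card + fan

best = min(range(30, 101), key=total_watts)
for pct in (30, 50, best, 100):
    print(f"fan {pct:3d}%: total {total_watts(pct):6.1f} W")
print(f"Minimum total power at ~{best}% fan duty under these assumptions.")

Under these made-up numbers the total bottoms out around 80% duty; past that point the extra fan watts cost more than the card watts they remove, which is the break-even the post describes.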
hero member
Activity: 533
Merit: 500
^Bitcoin Library of Congress.
newbie
Activity: 33
Merit: 0
We all know that the higher the temperature, the higher the current leakage in the transistors. But the question is: how much higher?

Suppose the same GPU with constant clock, fan, workload, etc.: at 80 °C it consumes more energy than at 70 °C. This is because current leakage is much higher at higher temperatures, and VRM efficiency decreases as VRM temperature rises.

There is an interesting article here:

http://www.techpowerup.com/reviews/Zotac/GeForce_GTX_480_Amp_Edition/27.html

"so for every °C that the card runs hotter it needs 1.2W more power to handle the exact same load."


Have you ever measured your card to see how much additional power it takes when the temp increases by 1 °C?
