
Topic: Does underclocking reduce power consumption? (Read 9985 times)

full member
Activity: 187
Merit: 100
October 04, 2011, 12:27:33 PM
#28
Just get used to sleeping with the fans at full speed. Do it like this: every evening, increase the fan speed by 1% or so from the previous night, and soon you won't notice that the fan is running at full speed.

I started putting in earplugs before I go to sleep. The first night I tried this, I thought it was a pretty clever idea. Then the next morning I found that the earplugs had blocked out not only the noise of my mining rigs, but also the sound of my alarm going off.

If you try this, remember to make the appropriate adjustments to your alarm.
legendary
Activity: 1134
Merit: 1005
Yeah, I read a post on another board from an Nvidia expert saying that as long as you keep your temps below 100° and your system is still stable, these hot temperatures are no problem.

So if you can mine with your cards at 90° with a smooth and stable system, there's nothing to worry about.
Agree.
I used to run my 9800GX2 at 105C for 2 yrs, and nothing happened.
hero member
Activity: 774
Merit: 500
Lazy Lurker Reads Alot
Well, I've had my fair share of Nvidia and ATI cards. I even had some 4890 cards running, and those were extreme hotheads: on some days we measured 105 C, which is considerable heat even for that card, but it never actually failed.
Nevertheless, when I contacted XFX about these cards running such high temps, they offered to RMA them.
I've talked a lot with XFX and EVGA about the maximum temps of certain models, and the rule is that it's OK for a video card to run around 90 C for longer periods, with some peaks to about 100 C, but those shouldn't last long. The cooler you can keep your cards, the longer they will live.
Another thing: even though some think it's bad to have very high temps at certain spots on motherboards or video cards, there are actually chips that are made to run at extreme heat.
I remember a chip near the northbridge running a constant 140 C; asking the board maker confirmed it was designed to run at up to 180 C. So even though it scared the hell out of us, it's not always a bad thing.

Now, back to video cards. The most important rule I have about them is to keep them clean: I clean the cards' fans of dust every month and use canned air to blow the dust off the internal heatsink, so it keeps a free airflow.

Now, my experience is that you get a good drop in temps when you run the RAM at low speeds. 300 MHz will do fine on most modern cards, and since they are almost all GDDR5, they still have enough bandwidth left at those low speeds to run smoothly.
And to answer your question: yes, dropping the core speed will also reduce some of the power consumption.

As for the Nvidia fans who like to say those cards are better than AMD's: they are simply wrong. AMD's cards have much higher-quality components than their Nvidia counterparts, no matter what some say.
Many 580s and 590s burned out at temperatures that weren't all that high; as far as I know, this did not happen to any ATI card.
The reason AMD also wanted a power limit on its top models is simple: overclocking these cards, and especially the dual-GPU ones, is of course asking for trouble.
A single-GPU model is already hard enough to cool. Cooling a dual-GPU monster like the 5970, 6990 or GTX 590 is a pain in the behind for every designer; it's extremely difficult to get the heat out of the card because of the cramped space.


newbie
Activity: 58
Merit: 0
Before I broke my kill-a-watt, I did experiment with this a bit. From my messing around I found that my Sapphire 5830s at 980 core and 300 mem (800 and 1000 stock) actually used less power than at stock speeds! It was only about 10 watts, but it's still great news for efficiency with those cards. Depending on the type of cooler your card has, this could also affect core temperatures, but on my 5830s the cooler does not touch the RAM, so it did nothing for my core temperatures. This probably won't hold true for all cards, but in my experience it is worth it to downclock your memory, since the lower power consumption gives you extra core overclocking headroom while (probably) staying within the rated TDP of the card and its cooler.
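As a rough illustration of what even a 10 W saving means for mining efficiency (the hashrate and per-card power figures below are assumptions for the sake of the example, not measurements from that rig):

Code:
# Hypothetical efficiency gain from a ~10 W memory-downclock saving.
# 300 MH/s and 175 W per card are illustrative assumptions only.
hashrate_mhs = 300.0
watts_stock = 175.0
watts_downclocked = watts_stock - 10.0
print(hashrate_mhs / watts_stock)         # ~1.71 MH/s per watt
print(hashrate_mhs / watts_downclocked)   # ~1.82 MH/s per watt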
sr. member
Activity: 378
Merit: 255
Do you have a FLIR camera!?!? If so, you could answer some very interesting questions about airflow.
newbie
Activity: 50
Merit: 0
And you are wrong when you imply that the gpu temperature has no effect on the other components on board, the heat WILL transfer on to the other components, whether it be heat transferred by airflow, or heat transferred by PCB. (The PCB is actually pretty good at transferring heat, try to put your hand on the back side of your video card, and you'll see what I mean. There is no magic to be pulled off here, things will get hot)
But I didn't say it has no effect. Why do you always have to pretend I said something different from what I actually said? I could just as well say that by your logic, if the GPU temperature is 90C then the temperature of the entire room the computer sits in will also be 90C.
I'm sorry if I gave you that impression. I didn't mean it that way.

But here:
http://www.hardware.fr/articles/825-4/dossier-nvidia-repond-amd-avec-geforce-gtx-590.html

These are heat measurements taken by infrared camera; they should be good enough to illustrate my point.
http://i54.tinypic.com/2heiivt.jpg
This Nvidia card has its VRM overloaded at over 110C, and its surroundings are above 90C.
The core on the right measures 84C, but the surrounding PCB area also reaches over 90C.
To the far left of both cores you can see a brown spot; that's heat that most likely came from airflow. And of course the motherboard takes a lot of heat from the video card too, maybe not so much in this picture (only orange), but in another image on the site I linked the motherboard picks up a significant amount of heat (brown).

We shouldn't underestimate the heat that the PCB and air flow can transfer.
sr. member
Activity: 742
Merit: 250
And you are wrong when you imply that the gpu temperature has no effect on the other components on board, the heat WILL transfer on to the other components, whether it be heat transferred by airflow, or heat transferred by PCB. (The PCB is actually pretty good at transferring heat, try to put your hand on the back side of your video card, and you'll see what I mean. There is no magic to be pulled off here, things will get hot)
But I didn't say it has no effect. Why do you always have to pretend I said something different from what I actually said? I could just as well say that by your logic, if the GPU temperature is 90C then the temperature of the entire room the computer sits in will also be 90C.

If no heat is able to escape your room, that's true. Well, not quite, because there will be a constant stream of extra energy being transferred into the system, so after a while your temps will be even higher. I hope that makes sense: temperature = energy.
legendary
Activity: 1284
Merit: 1001
And you are wrong when you imply that the gpu temperature has no effect on the other components on board, the heat WILL transfer on to the other components, whether it be heat transferred by airflow, or heat transferred by PCB. (The PCB is actually pretty good at transferring heat, try to put your hand on the back side of your video card, and you'll see what I mean. There is no magic to be pulled off here, things will get hot)
But I didn't say it has no effect. Why do you always have to pretend I said something different from what I actually said? I could just as well say that by your logic, if the GPU temperature is 90C then the temperature of the entire room the computer sits in will also be 90C.
newbie
Activity: 50
Merit: 0
Some people have reported their Fermi cards reaching temperatures of ~110C. I don't think AMD cards would survive that.
1. You're ignoring the leakage issue I pointed out. Like I said, the snowball effect is a positive feedback loop, a bad thing to deal with; it's best to steer clear of it from the beginning.
No, I'm not. The source of the heat is not relevant, and of course staying cooler would always be safer.
You're not making any sense; the source of the heat is very relevant. The snowball effect (go read up on it): increased temperature -> increased leakage -> increased power consumption -> increased temperature -> increased leakage -> and so on.
I believe I have spelled this out in the simplest way possible for you. This is an issue that all chips from TSMC's 40 nm fab have, no matter whether they're named AMD or Nvidia.
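To make that loop concrete, here is a minimal sketch of the feedback with made-up numbers (the ambient temperature, thermal resistance and leakage growth rate are illustrative assumptions, not measurements from any real card):

Code:
# Minimal sketch of the temperature/leakage feedback loop described above.
# All constants are illustrative assumptions.
T_AMBIENT = 25.0      # C
R_THERMAL = 0.35      # C of temperature rise per watt dissipated
P_DYNAMIC = 150.0     # W of switching power (set by clock and voltage)
P_LEAK_25C = 20.0     # W of leakage at 25 C
LEAK_GROWTH = 0.02    # assumed: leakage grows ~2% per extra degree C

temp = T_AMBIENT
for _ in range(50):
    leakage = P_LEAK_25C * (1 + LEAK_GROWTH * (temp - 25.0))
    total_power = P_DYNAMIC + leakage
    temp = T_AMBIENT + R_THERMAL * total_power   # new steady-state estimate

print(f"settles around {temp:.0f} C drawing {total_power:.0f} W")

With these numbers the loop converges at around 94 C and roughly 198 W, noticeably more than the 170 W you would get by ignoring the temperature dependence. With a weaker cooler or a hotter room, each pass adds leakage faster than the cooler can remove it and the temperature just keeps climbing.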

We weren't talking about VRMs, I'm well aware that they get much hotter. On one of my cards it's 109 right now.
You're wrong, we are. We're talking about video cards supposedly "built to last" at a certain temperature, the VRMs are a very important part of that, and AMD's reference-design VRMs are of significantly higher quality than Nvidia's. Even the reference design of a small GPU like the HD5770 has higher-quality VRMs than a GTX 470.
This is made possible by the size of the chips: AMD's chips are physically smaller; an HD5870, for example, is physically smaller than a GTX 460, yet the HD5870 is also much faster. This increases the value of the 5870 (higher-quality components can be afforded in the reference design), while Nvidia has to skimp on component quality to make any sort of profit. I hope you understand that: AMD's reference-design components are of higher quality.

4. Fermi is made from exactly the same silicon as AMD's chips: they come from the same manufacturing process, and that process runs at the very same semiconductor foundry. As far as the core goes, the only argument on your side is that Fermi is a physically larger chip and that the heat is distributed over a wider surface area, but I seriously doubt that carries much weight against the other points I made.
Even though silicon is generally silicon, that doesn't mean that some designs can't be more resistant to heat problems than others.
When the manufacturing process is the same, then you're wrong. This isn't an Intel vs AMD deal here; this is a TSMC vs TSMC deal. It's the same stuff, so to speak: both chips come from TSMC, and they share the same issues.

Which is probably why I said *most*...
....
Of course you can. If the component doesn't touch the heatsink or the GPU and it doesn't create a lot of heat on its own, it will be cooler than the GPU. Whether or not it touches the heatsink, it will be cooler the further away from the GPU you get. The designers of graphics cards aren't stupid and take this into account.
Don't be naive; the designers of custom boards will prioritise cost savings over anything else (unless we're dealing with extreme editions like the MSI Lightning, etc.).
And you are wrong when you imply that the gpu temperature has no effect on the other components on board, the heat WILL transfer on to the other components, whether it be heat transferred by airflow, or heat transferred by PCB. (The PCB is actually pretty good at transferring heat, try to put your hand on the back side of your video card, and you'll see what I mean. There is no magic to be pulled off here, things will get hot)

As a matter of fact, the video card itself (!!) has an effect on the motherboard components' temperature.
legendary
Activity: 1284
Merit: 1001
Some people have reported their Fermi cards reaching temperatures of ~110C. I don't think AMD cards would survive that.
1. You're ignoring the leakage issue I pointed out. Like I said, the snowball effect is a positive feedback loop, a bad thing to deal with; it's best to steer clear of it from the beginning.
No, I'm not. The source of the heat is not relevant, and of course staying cooler would always be safer.

3. I've seen VRM components on HD5970 cards exceed 100 degrees. If anything, it's the digital VRMs on reference high-end ATI cards that have higher quality and higher tolerances than Fermi's cheap low-cost circuitry.
We weren't talking about VRMs, I'm well aware that they get much hotter. On one of my cards it's 109 right now.

4. Fermi is made from exactly the same silicon as AMD's chips: they come from the same manufacturing process, and that process runs at the very same semiconductor foundry. As far as the core goes, the only argument on your side is that Fermi is a physically larger chip and that the heat is distributed over a wider surface area, but I seriously doubt that carries much weight against the other points I made.
Even though silicon is generally silicon, that doesn't mean that some designs can't be more resistant to heat problems than others.

The temperature measurement is for the GPU; most of the other components will have a much lower temperature.
Actually, some components can have an even higher temperature than the core.
Which is probably why I said *most*...

Video card coolers are not magical; they will transfer the heat away from the core, but you can't guarantee that that same heat won't transfer directly into the other components.
Of course you can. If the component doesn't touch the heatsink or the GPU and it doesn't create a lot of heat on its own, it will be cooler than the GPU. Whether or not it touches the heatsink, it will be cooler the further away from the GPU you get. The designers of graphics cards aren't stupid and take this into account.
newbie
Activity: 50
Merit: 0
Some people have reported their Fermi cards reaching temperatures of ~110C. I don't think AMD cards would survive that.
1. You're ignoring the leakage issue I pointed out. Like I said, the snowball effect is a positive feedback loop, a bad thing to deal with; it's best to steer clear of it from the beginning.
2. I've lurked around enough to know that some of the Fermi distributors have been voiding user warranties for ridiculous reasons like "too much dust on your VGA", even though the user had already cleaned off the dust before sending the card in for RMA. If that's not a sign of desperation, then....
3. I've seen VRM components on HD5970 cards exceed 100 degrees. If anything, it's the digital VRMs on reference high-end ATI cards that have higher quality and higher tolerances than Fermi's cheap low-cost circuitry.
4. Fermi is made from exactly the same silicon as AMD's chips: they come from the same manufacturing process, and that process runs at the very same semiconductor foundry. As far as the core goes, the only argument on your side is that Fermi is a physically larger chip and that the heat is distributed over a wider surface area, but I seriously doubt that carries much weight against the other points I made.

The temperature measurement is for the GPU; most of the other components will have a much lower temperature.
Actually, some components can have an even higher temperature than the core.
Video card coolers are not magical; they will transfer the heat away from the core, but you can't guarantee that that same heat won't transfer directly into the other components.
hero member
Activity: 590
Merit: 500
Underclocking will reduce power consumption, as the switching current of a CMOS gate is proportional to the frequency.

Undervolting will save you more, as power is proportional to the square of the voltage.

The equation is roughly (ignoring a couple of constants not relevant to this discussion): Power = frequency * voltage^2
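A minimal sketch of what that scaling implies, using baseline clock and voltage figures chosen purely for illustration (not tied to any specific card):

Code:
# Relative dynamic power under the P ~ f * V^2 rule stated above.
# Baseline clock/voltage values are illustrative assumptions.
def relative_dynamic_power(freq_mhz, volts, base_freq_mhz=850.0, base_volts=1.175):
    return (freq_mhz / base_freq_mhz) * (volts / base_volts) ** 2

print(relative_dynamic_power(700, 1.175))   # underclock only: ~0.82 of baseline
print(relative_dynamic_power(850, 1.075))   # undervolt only:  ~0.84 of baseline
print(relative_dynamic_power(700, 1.075))   # both together:   ~0.69 of baseline

Note that this only covers switching (dynamic) power; leakage, discussed elsewhere in the thread, doesn't scale with frequency and is only weakly helped by underclocking.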
legendary
Activity: 1284
Merit: 1001
Nope, it's not true at all. ATI fans used that same excuse back when the 4870/4850 generation ran hot.
Some people have reported their Fermi cards reaching temperatures of ~110C. I don't think AMD cards would survive that.

They are all excuses; no card should ever exceed 85 Celsius, as that is usually the limit for the capacitors and other components on the card.
The temperature measurement is for the GPU; most of the other components will have a much lower temperature.
newbie
Activity: 50
Merit: 0
Yeah, I read a post on another board from an Nvidia expert saying that as long as you keep your temps below 100° and your system is still stable, these hot temperatures are no problem.
Nvidia cards are built to run at a higher temperature than ATI/AMD.

Yes, you might be right when you look at the temps of the original GTX 470 and 480 at full load.
Nope, it's not true at all. ATI fans used that same excuse back when the 4870/4850 generation ran hot.

They are all excuses; no card should ever exceed 85 Celsius, as that is usually the limit for the capacitors and other components on the card.
Furthermore, the 40 nm process from TSMC is particularly bad about leaking more power the higher the temperature, so it's like a bad snowball effect.

When you exceed 90 Celsius like the GTX 480 does, it just means that you're desperate to retain the performance crown at all costs.
full member
Activity: 173
Merit: 100
Yeah, I read a post on another board from an Nvidia expert saying that as long as you keep your temps below 100° and your system is still stable, these hot temperatures are no problem.
Nvidia cards are built to run at a higher temperature than ATI/AMD.

Yes, you might be right when you look at the temps of the original GTX 470 and 480 at full load.
legendary
Activity: 1284
Merit: 1001
Yeah, I read a post on another board from an Nvidia expert saying that as long as you keep your temps below 100° and your system is still stable, these hot temperatures are no problem.
Nvidia cards are built to run at a higher temperature than ATI/AMD.
legendary
Activity: 1386
Merit: 1004
I undervolt my 6990 to 1.075 V from stock, which I think is 1.175 V, and it saves about 30 watts but really keeps the noise down. I run at 830 MHz, not at 860+ MHz, which would need more than 1.250 volts.
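As a rough sanity check against the voltage-squared rule quoted elsewhere in the thread (the 200 W baseline below is an assumption about how much of the card's draw scales with core voltage, not a measurement):

Code:
# Rough check of the reported ~30 W saving from 1.175 V -> 1.075 V.
# The 200 W baseline is an illustrative assumption.
baseline_watts = 200.0
scale = (1.075 / 1.175) ** 2          # ~0.84
print(baseline_watts * (1 - scale))   # ~33 W saved, same ballpark as ~30 W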
hero member
Activity: 812
Merit: 502
I bought an Energy Meter to see how much electricity my rig uses, so:

4x XFX 5870 @ 960MHz Core & 300MHz Memory
Gigabyte GA-770T-D3L
AMD Athlon II X2 250
2GB 1333MHz DDR3
USB Flash Drive
3x CM Sickleflow @ 2000rpm
===================================
800-815W from the socket, depending on the time of day
830W Peak

So the components use around 730W when the efficiency of the PSU is taken into account.
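For reference, that figure follows from assuming a PSU efficiency of roughly 90% at this load; the exact number depends on the specific power supply and where on its efficiency curve it sits:

Code:
# Back-of-the-envelope check of the wall vs. component power above.
# The 90% PSU efficiency is an assumption, not a measured value.
wall_watts = 810.0                   # midpoint of the 800-815 W reading
psu_efficiency = 0.90
print(wall_watts * psu_efficiency)   # ~729 W delivered to the components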
member
Activity: 68
Merit: 10
Yeah, I read a post on another board from an Nvidia expert saying that as long as you keep your temps below 100° and your system is still stable, these hot temperatures are no problem.

So if you can mine with your cards at 90° with a smooth and stable system, there's nothing to worry about.
full member
Activity: 173
Merit: 100
It really comes down to the quality of the card's components. I have a 3850 and a 4850, both overclocked, running BOINC for almost 2 years (Collatz, Milkyway, DNETC, Primegrid) at 90+ degrees. Still fine, no artifacts or anything. If bitcoin ever flops later on, my newer cards will join their older brothers.