Topic: GeForce GTX 900-series power consumption discussion thread

sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
If your power is expensive, mine the lyra2v2 algo. My 750 Tis are only using 40 watts each.
member
Activity: 98
Merit: 10
Thanks for this. I've been wondering exactly what my 970s are pulling while they keep the office warm. Grin
hero member
Activity: 658
Merit: 500
If you are mining with NVIDIA GPUs, this thread might be of interest to those looking for GPUs with nice discounts: https://bitcointalksearch.org/topic/wtn-interested-in-buying-brand-new-hddslaptopsgpus-w-great-discount-old-1207919 Smiley
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
Just replied to your PM, @ltc_bilic.  I see you are overclocking the memory quite a bit on those Gigabytes.  I think I have the same cards as you, but I only overclock the GPU, not the MEM.  We'll see if it makes a difference.

Overclocking the memory never gave me any noticeable increase in hashrate in any modern algos. I'm curious if you can achieve a higher hashrate with it though.

Oh, and the 3rd Gigabyte GTX 750 Ti just died on me last week (with the same symptoms).
sr. member
Activity: 427
Merit: 250
Just replied to your PM, @ltc_bilic.  I see you are overclocking the memory quite a bit on those Gigabytes.  I think I have the same cards as you, but I only overclock the GPU, not the MEM.  We'll see if it makes a difference.
member
Activity: 130
Merit: 10
bathrobehero, I couldn't agree more with you. And to support my claim that these cards are not built for higher voltage: the last two cards died even though their temp never surpassed 45°C, because I have them in a 16-18°C ambient environment with additional fans in my open-air case. During winter they ran at 33°C; now they're at 45°C, overclocked +40 GPU and +248 mem, and they still died. I also have an EVGA card (+76 GPU, +210 mem), not the FTW version, which has run flawlessly from the start. So I would avoid flashing Gigabyte cards at all costs.
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
I've always wondered what the exact power consumption figures are. bathrobehero, really great idea to look into the firmware. So the 970/980 are not as efficient as they appear at first sight.

And for anyone else thinking of flashing the 750 Ti to unlock the extra "power" - don't. I've done it on all 14 of my cards (Gigabyte Windforce) and had most of them replaced. They are just not built for higher voltage; 2 of them died just recently and have to be RMAed. Perhaps flashing the newer generation of cards is another story, but I don't advise it unless you have spare money/time to throw around. If someone is going to experiment with flashing, I'm all for undervolting instead - to achieve even better efficiency.

I bought 6 x GV-N75TOC-2GI cards almost exactly a year ago and 2 of them have already died on me.
When the first died I replaced the stock BIOS on all of them, but a second one died anyway and is in the process of RMA.
They both bricked completely (no fan spinning, nothing). I had thought they had solid build quality with plenty of headroom, because these have 36 months of warranty.

I also have ASUS (GTX750TI-PH-2GD5) and MSI (N750Ti-2GD5/OC) cards; none of them have additional 6-pin connectors, and the cards differ a lot in how much OC they can handle while staying stable. A few of them can handle +160 MHz with most algos, while a couple of derpy ones (both MSI) crash even at around +70 MHz.
So I pushed 60W BIOSes onto them as well, but it didn't help much with the derpy cards, so I reverted them to the stock BIOS, only disabling boost for more consistency.


On another note, I did a somewhat useless comparison. Useless because I can't be arsed for now to try each brand with different-efficiency PSUs, so I just did a quick one: the rig with the MSI cards is on a bronze PSU (XFX Pro Series 650W) and pulls ~9% more from the wall than the ASUS rig does on a gold PSU (EVGA SuperNOVA 850W G2). But then again, the MSI cards run at 62-65°C while the ASUS run at 55-59°C at the same hashrate, same OC and very close fan speeds (+/- 10%), so it could just be the cards.

If the 9% difference were solely because of the difference in PSU efficiency (which is more than likely: https://en.wikipedia.org/wiki/80_Plus#Efficiency_level_certifications),
then with more investigation it might be worth it to buy really high-efficiency PSUs and use 900-series cards.
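
For a rough sanity check on how much of that 9% the PSU alone can explain, here's a minimal sketch using the nominal 80 Plus (115V) efficiencies from that Wikipedia page; the 300W DC load is an assumed figure for illustration, not a measurement from either rig:

Code:
# Hedged sketch: wall draw of a Bronze vs. a Gold PSU for the same DC load,
# using nominal 80 Plus (115V) efficiencies at ~50% load.
dc_load_w = 300.0                      # assumed DC load of the rig (hypothetical)
eff_bronze = 0.85                      # 80 Plus Bronze at ~50% load
eff_gold = 0.90                        # 80 Plus Gold at ~50% load

wall_bronze = dc_load_w / eff_bronze   # ~352.9 W from the wall
wall_gold = dc_load_w / eff_gold       # ~333.3 W from the wall
extra = wall_bronze / wall_gold - 1    # ~0.059

print(f"Bronze pulls {extra:.1%} more from the wall than Gold")

By these nominal numbers the PSU alone would account for roughly 6%, so some of the observed 9% could well be the cards themselves, as the temperature difference hints.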
member
Activity: 130
Merit: 10
I've always wondered what the exact power consumption figures are. bathrobehero, really great idea to look into the firmware. So the 970/980 are not as efficient as they appear at first sight.

And for anyone else thinking of flashing the 750 Ti to unlock the extra "power" - don't. I've done it on all 14 of my cards (Gigabyte Windforce) and had most of them replaced. They are just not built for higher voltage; 2 of them died just recently and have to be RMAed. Perhaps flashing the newer generation of cards is another story, but I don't advise it unless you have spare money/time to throw around. If someone is going to experiment with flashing, I'm all for undervolting instead - to achieve even better efficiency.
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
A Gigabyte 750 Ti Windforce OC on quark, stock clock, with an 80+ gold PSU, consumes exactly 60W from the wall.
I can't really play with overclocking or the power target since my rig is running Linux and NVIDIA removed the Coolbits option from the Linux driver. I haven't found a way to control that on Linux yet.
If someone has a solution that works from the command line, PM me. I'm ready to offer 48 hours of mining (4 GTX 750 Ti) to the pool of your choice :-)

It doesn't work from the command line, but I used to have five 750 Tis. I would reboot into DOS and flash 'em with a modded BIOS.

Thanks. My main reservation with this solution is that I will probably have to flash them A LOT before I find optimal settings, and I'm afraid the BIOS will just die before I'm done :-/
The offer still stands. Forget nvidia-smi, it does not work with low-end cards...

You could boot into Windows, find a suitable OC that works with every algo (meaning not too high), note the frequencies and the voltage, and flash a BIOS with those settings.

I wonder if you can switch between power states in Linux, because if so, you could potentially use different power states as different OC profiles.
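
If anyone wants to at least measure what the cards are doing from the Linux command line, here's a minimal monitoring sketch built on nvidia-smi's query interface; the field list and poll interval are just illustrative choices, and as noted above, low-end cards may report "[Not Supported]" for power.draw:

Code:
# Minimal sketch: poll GPU power draw and clocks via nvidia-smi (Linux).
import subprocess
import time

FIELDS = "index,power.draw,clocks.sm,clocks.mem,temperature.gpu"

def sample():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=" + FIELDS, "--format=csv,noheader"],
        text=True)
    return out.strip().splitlines()

while True:
    for line in sample():
        print(line)    # e.g. "0, 145.30 W, 1240 MHz, 3004 MHz, 63"
    time.sleep(5)      # illustrative 5-second poll interval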
full member
Activity: 139
Merit: 100
A Gigabyte 750 Ti Windforce OC on quark, stock clock, with an 80+ gold PSU, consumes exactly 60W from the wall.
I can't really play with overclocking or the power target since my rig is running Linux and NVIDIA removed the Coolbits option from the Linux driver. I haven't found a way to control that on Linux yet.
If someone has a solution that works from the command line, PM me. I'm ready to offer 48 hours of mining (4 GTX 750 Ti) to the pool of your choice :-)

It doesn't work from the command line, but I used to have five 750 Tis. I would reboot into DOS and flash 'em with a modded BIOS.

Thanks. My main reservation with this solution is that I will probably have to flash them A LOT before I find optimal settings, and I'm afraid the BIOS will just die before I'm done :-/
The offer still stands. Forget nvidia-smi, it does not work with low-end cards...
full member
Activity: 139
Merit: 100
A Gigabyte 750 Ti Windforce OC on quark, stock clock, with an 80+ gold PSU, consumes exactly 60W from the wall.
I can't really play with overclocking or the power target since my rig is running Linux and NVIDIA removed the Coolbits option from the Linux driver. I haven't found a way to control that on Linux yet.
If someone has a solution that works from the command line, PM me. I'm ready to offer 48 hours of mining (4 GTX 750 Ti) to the pool of your choice :-)
legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
Nice!

This is what led me to start looking into the power consumption:
A bit off topic, but I picked up a GTX 970 (Gigabyte Windforce 3x) only to realise the TDP of this card is not 145W but 250W. With stock BIOS, 100% power target and +160 MHz core OC, it draws 234 watts from the wall mining groestl (which seems to be the hungriest non-scrypt algo) on an 80+ gold PSU. It seems its efficiency plateaus around a 40-60% power target limit depending on algo:

[image: hashrate-per-watt vs. power target chart]
That is a 68-75% difference in efficiency between stock settings and the plateau, depending on algo, which is huge! This matters when profits are barely above electricity: for example, if you pay $0.14 per kWh, going by yaamp x11 payout figures (which are not very profitable), one of these cards would earn 0.00371 BTC per month after electricity on stock settings, versus 0.01652 BTC at a 40% power target limit. On the other hand, with more profitable coins/algos like quark, it's worth it to go full speed and overclock, because it ends up earning more (0.08967 BTC on stock vs. 0.13118 BTC overclocked per month). I thought it was quite interesting.
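
For anyone who wants to redo that math with their own numbers, here's a hedged sketch of the underlying calculation; the monthly revenue figures, the 110W wall draw at a 40% power target and the BTC price below are placeholders for illustration, not the figures from the post:

Code:
# Hedged sketch: monthly earnings after electricity for one card.
def monthly_profit_btc(revenue_btc, wall_watts, usd_per_kwh, btc_usd):
    kwh = wall_watts / 1000.0 * 24 * 30            # energy used in a month
    electricity_btc = kwh * usd_per_kwh / btc_usd  # cost converted to BTC
    return revenue_btc - electricity_btc

# Example: 234 W at the wall on stock (from the post above) vs. an assumed
# 110 W at a 40% power target, $0.14/kWh, BTC at an assumed $250.
print(monthly_profit_btc(0.10, 234, 0.14, 250.0))  # ~0.0057 BTC, stock
print(monthly_profit_btc(0.06, 110, 0.14, 250.0))  # ~0.0157 BTC, 40% target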
hero member
Activity: 644
Merit: 502
bathrobehero has a valid point; we need a new thread for this stuff, so that we don't flood sp_'s ccminer thread.

So, here we go--->

To get this thing started, I will quote/cross-post from the previous thread:


970 (GV-N970WF3OC-4GD - 250W OC edition instead of 145W):
stock - 2.75 MH/s at 187W
oc - 3.0 MH/s at 208W (+185/0 - 1501 MHz)


I had assumed the 970 TDP would be within the 145W NVIDIA spec unless OCed. It seems that assumption was way off. I didn't find an actual TDP spec for this card, just PSU and connector requirements. Is it really 250W? This changes the balance of power (bad pun) and makes me wonder about the 980, rated at 165W.

I am by no means going to sit here and write as if I am an authority on this stuff.
However, I have taken note of something that I think deserves mention and discussion.

TDP is the term we all seem to use (myself included) to refer to, and to determine, the power consumption of a GPU.
But TDP is short for Thermal Design Power, further defined as:
"TDP is the average power a device can dissipate when running real applications." (aka "normal" apps that an average user would run)

TDP, then, is NOT exactly equal to the device's maximum power consumption, nor is it necessarily measured at 100% load.

TDP is self-reported by manufacturers. It seems that several years ago, AMD & Intel used different percentages of processor load to measure and report their CPUs' TDP:
AMD used ~100% load and Intel something like 80-85%. Well, Intel's method seems to have become the norm.
This is what led to the definition of the average power dissipated when running "real applications."

So, under intense loading situations, a device can definitely consume more power and dissipate more heat than its TDP would indicate.
Of course, overclocking raises the amount of power consumed. And mining intensive algorithms certainly is not what would be called "real applications."
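
As a back-of-the-envelope illustration of how far above TDP a sustained load could land (the ~82% load fraction is just the rough Intel-style figure mentioned above, not any published spec):

Code:
# Illustrative only: if a 145 W TDP is measured at ~82% of full load,
# a sustained 100% mining load could plausibly draw noticeably more.
tdp_w = 145.0
load_fraction = 0.82                    # assumed measurement point
full_load_estimate = tdp_w / load_fraction
print(f"~{full_load_estimate:.0f} W")   # ~177 W at the card, before PSU losses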

From http://www.cpu-world.com/Glossary/M/Minimum_Maximum_power_dissipation.html:
[image: TDP definition excerpt]

Yes, it's not how much a card can pull, but TDP does correlate fairly well with that amount (at least it did until the 900-series) and we don't have better figures from the specs. (Linus' explanation: https://www.youtube.com/watch?v=yDWO177BjZY )

I downloaded some random stock BIOSes from here and opened them in Maxwell II BIOS Tweaker to check their maximum TD... I mean, power consumption, using this image to translate the figures. Here are the results (in watts):

Code:
BIOS                           Flavor          Maximum (W)  Target (W)
Asus.GTX970.4096.141028        Strix OC        250          163.46-193.152
EVGA.GTX970.4096.141020        FTW             250          170-187
Galaxy.GTX970.4096.140912      EXOC            200          200-250
Gigabyte.GTX970.4096.141105    Windforce OC    250          250-280
Gigabyte.GTX970.4096.141910_1  G1 Gaming       250          250-280
MSI.GTX970.4096.141029         Gaming          250          200-220
NVIDIA.GTX970.4096.140826      Reference?      250          151.2-160.3
Palit.GTX970.4096.140903       Standard        250          151.2-160.3
Palit.GTX970.4096.140910       JetStream OC    250          180-200
PNY.GTX90.4096.140912          VCGGTX9704XPB   250          151.2-160.3
Zotac.GTX970.4096.141024       Standard        196          151.2-160.3
Zotac.GTX970.4096.141910       AMP Omega       350          325-345
...
Gigabyte GTX 980               G1 Gaming       250          300-366
Gigabyte GTX 980               Windforce OC    250          270-300

I added two 980s (CBA to add more) to the list to show the figures do seem to be correct - at least compared to a stress test from a Tom's Hardware review:

[image: Tom's Hardware stress-test power consumption chart]
PS: we should really have more threads instead of flooding this one with everything.
