
Topic: How long has your GPU lasted mining 24/7?

member
Activity: 118
Merit: 10
I have three 6970s (80-85 deg C) and one 6850 (68-72 deg C). Are those temps OK? The 6970s are new (one week old) and run at higher temps; is that normal?
legendary
Activity: 1022
Merit: 1000
BitMinter
More than two years of mining with my 5850s @ 70 C
member
Activity: 87
Merit: 10
83 7950s? Sheesh, man...
legendary
Activity: 3206
Merit: 1069
They can't die with proper cooling (and therefore low temps and voltage), and even if that does happen, you'd have already replaced them by then.
zvs
legendary
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
I've had assorted cards running since around March '11.

Never had a 5830 or 5870 die (~30 of them?); two of my 5970s (out of ~10) had one core go bad.
legendary
Activity: 3577
Merit: 1090
Think for yourself
5830 for 2 years and 2 months.
5770 for 2 years and 1 month.

I just retired them both last month after getting Block Erupters.  Both GPUs are still functional.
Sam
legendary
Activity: 868
Merit: 1000
ADT developer
I have had 83 7950s running for 3 months at max temps of 60 deg C core / 80 deg C VRM. No dead cards yet. :)
sr. member
Activity: 333
Merit: 250
Every single card I've run around 70 deg C has lasted over 2 years.  Mostly 5770s and 5830s from the early GPU days.  The ones I let go into the 80s are all dead for various reasons.
hero member
Activity: 546
Merit: 500
This is a complex question that depends on many factors: the quality of the components the card vendor used when assembling the card to the reference design (and different again if they've changed the reference design); and, to make it really complex, the quality of the wafer the die is cut from, plus the quality of the fabrication of the GPU core and some of the other components, all of which affect how many of the shipped products fail. There are always unavoidable elemental contaminants in the silicon, and every chip leaves fabrication with some physical flaws in the die. The QA testing of silicon and of the final fabricated processor is its own expensive little industry; you could study it for 10 years at university, get a PhD in it, and still not grasp everything that is going on.

The same goes for all of the hundreds of subcomponents of the card, right down to the tiny SMD resistors and capacitors scattered all over it.

Of course, the parts that fail most often will be the ones under the most load, operating close to, at, or beyond (e.g. when overclocking) their rated design limits.

Critically, for our purposes, it follows that substantially lower temperatures greatly extend a card's lifespan. The card you are mining with now could, of course, die 10 seconds after you read this; but statistically, given n cards, fewer of them will fail when running at lower temps.
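
To put rough numbers on that, here's a toy Python simulation; the MTTF figures are made up purely to illustrate the shape of the effect, not measured data:

Code:
import random

# Toy Monte Carlo sketch (illustrative numbers only): n cards, each with
# exponentially distributed time-to-failure. The assumed MTTFs for the
# "cool" and "hot" cases are invented for the sake of the example.
def avg_failures(n_cards, mttf_years, horizon_years, trials=10_000):
    total = 0
    for _ in range(trials):
        total += sum(
            1 for _ in range(n_cards)
            if random.expovariate(1.0 / mttf_years) < horizon_years
        )
    return total / trials

n = 20
print("avg failures in 2 yrs, cool cards (assumed MTTF 6 yrs): ",
      avg_failures(n, 6.0, 2.0))   # roughly 5-6 of 20
print("avg failures in 2 yrs, hot cards (assumed MTTF 1.5 yrs):",
      avg_failures(n, 1.5, 2.0))   # roughly 14-15 of 20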

It's important to realise that for things like GPU cores, MOSFETs and other transistors, and capacitors, mean lifespan does NOT shrink linearly as temperature climbs. If you drew a graph it would look roughly exponential, taking off as you reach temperatures in the 80-120 deg C range for most parts you'll find on a graphics card. It is useful, I suppose, that AMD now enforces hard limits on the temps that cores/VRMs can reach, monitored by the hardware itself on the card; exceeding them forces a driver crash, a hard lock, or a reboot. This stops noobs from buying a 7990 for $1000+ RRP, running the cores at 150 deg C, and wondering why they now own a paperweight (it's also useful if, say, the thermal compound loses contact).
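
The usual way to quantify that curve is an Arrhenius-style acceleration factor. A quick Python sketch; the 0.7 eV activation energy is an assumed ballpark, and real parts vary by failure mechanism:

Code:
import math

# Arrhenius acceleration factor between two junction temperatures.
# Ea = 0.7 eV is an assumed activation energy (a common ballpark for
# silicon wear-out mechanisms); real values vary by part and failure mode.
K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_cool_c, t_hot_c, ea_ev=0.7):
    t_cool = t_cool_c + 273.15  # convert deg C to kelvin
    t_hot = t_hot_c + 273.15
    return math.exp((ea_ev / K_B_EV) * (1.0 / t_cool - 1.0 / t_hot))

# Under these assumptions, a part held at 70 deg C should last roughly
# 3-4x as long as the same part at 90 deg C:
print(acceleration_factor(70, 90))  # ~3.7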

tl;dr - cooler temps = longer life, but failures can always happen, due to unrealistic user expectations, poor handling of the card, design limitations of the card and its parts, cost-versus-quality trade-offs, and the inherent material/manufacturing flaws in all microprocessors and integrated circuits.

For what it's worth, I abused countless 4870X2 cards back in the day at 90-100 deg C for YEARS of continuous operation (BOINC projects, mostly MilkyWay@Home GPU), then quite a few 5970s, including one I've kept which is now watercooled and has been going strong through many months of 24/7 mining. I have a 6870 that has also done at least 8 months of 24/7. I have a 5750, more toward the lower end of the spectrum, that I once ran so hot it smelt like burning PCB (scrypt mining; probably a memory controller or VRM overheating), yet it still works flawlessly on SHA-256. I've also had (and later sold or given away to family and friends) several other 4xxx, 5xxx and 6xxx cards, some of which ran distributed GPU computing tasks 24/7 for months or years... in fact, I don't think I've ever had a card fail permanently.
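
On the hardware temperature limits mentioned above: since the card's built-in protection only kicks in as a last resort, it's worth polling temps in software as well. A minimal sketch for a modern Linux box; the hwmon paths and the "cgminer" process name are assumptions (fglrx-era setups would parse `aticonfig --odgt` instead):

Code:
import glob, subprocess, sys, time

# Minimal userspace watchdog sketch, assuming a Linux box whose GPU
# driver exposes hwmon sensors. The 85 deg C limit and the miner
# process name are assumptions - adjust to your own rig.
TEMP_LIMIT_C = 85.0

def read_hottest_temp_c():
    paths = glob.glob("/sys/class/hwmon/hwmon*/temp*_input")
    if not paths:
        raise RuntimeError("no hwmon temperature sensors found")
    # hwmon reports millidegrees C; take the hottest reading found.
    return max(int(open(p).read()) for p in paths) / 1000.0

while True:
    temp = read_hottest_temp_c()
    if temp >= TEMP_LIMIT_C:
        print(f"core at {temp:.0f} deg C - stopping miner", file=sys.stderr)
        subprocess.call(["pkill", "-f", "cgminer"])  # assumed process name
        break
    time.sleep(10)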
sr. member
Activity: 312
Merit: 251
4 Months of mining now, 24/7.

5x7950, 2x6950, 1x7870.

All cards are happy.
sr. member
Activity: 302
Merit: 250
Apparently running GPUs at full load 24/7 decreases their life expectancy. However, I've heard of some miners running their cards at over 100 degrees for months straight without any issues. How long have you run your rigs before burning out a GPU? (Or without burning one out, if you've been mining for a few years now.)