Some people have reported their Fermi cards reaching temperatures of ~110 °C. I don't think AMD cards would survive that.
1. You're ignoring the leakage issue I pointed out. Like I said, the snowball effect is a positive, self-reinforcing feedback loop, and that's a bad thing to deal with; it's best to steer clear of it from the start.
No, I'm not. The source of the heat is not relevant, and of course staying cooler would always be safer.
You're not making any sense; the source of the heat is very relevant.
The snowball effect, go read up on it: increased temperature -> increased leakage -> increased power consumption -> increased temperature -> increased leakage -> increased power consumption -> and so on.
I believe I have spelled this out in the simplest way possible for you. This is an issue that all chips from TSMC's 40 nm fab have, no matter whether they're branded AMD or Nvidia.
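To make the loop concrete, here's a rough sketch in Python. Every constant in it is a made-up assumption (ambient temperature, dynamic power, the "leakage roughly doubles every N degrees" rule of thumb, cooler thermal resistance); the point is only to show how the same chip can either settle or run away depending on how well the heat is removed:

```python
# A minimal sketch of the "snowball" loop, with made-up numbers (none of these
# constants are real AMD/Nvidia figures; they're assumptions for illustration only).

AMBIENT = 35.0       # air temperature inside the case, C (assumed)
P_DYNAMIC = 150.0    # switching power, watts, treated as constant (assumed)
LEAK_REF = 25.0      # leakage power at the reference temperature, watts (assumed)
T_REF = 60.0         # reference temperature for that leakage figure, C (assumed)
DOUBLE_EVERY = 20.0  # leakage roughly doubles every N degrees C (rule-of-thumb assumption)

def leakage(temp_c: float) -> float:
    """Exponential leakage model: hotter silicon leaks exponentially more."""
    return LEAK_REF * 2.0 ** ((temp_c - T_REF) / DOUBLE_EVERY)

def settle(r_th: float, label: str) -> None:
    """Iterate temperature -> leakage -> power -> temperature until it settles or runs away."""
    temp = AMBIENT
    for step in range(30):
        power = P_DYNAMIC + leakage(temp)
        new_temp = AMBIENT + r_th * power  # steady-state temp the cooler allows at this power
        if new_temp > 130.0:
            print(f"{label}: runaway after {step} steps, heat and leakage keep feeding each other")
            return
        if abs(new_temp - temp) < 0.1:
            print(f"{label}: settles at ~{new_temp:.0f} C drawing ~{power:.0f} W")
            return
        temp = new_temp
    print(f"{label}: still drifting after 30 steps")

settle(r_th=0.20, label="good cooler (0.20 C/W)")
settle(r_th=0.30, label="weaker cooler (0.30 C/W)")
```

With the "good cooler" numbers the loop converges after a few steps; with the weaker cooler there is no stable operating point and the temperature just keeps climbing until something intervenes. That's what steering clear of it from the start means in practice.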
We weren't talking about VRMs; I'm well aware that they get much hotter. On one of my cards the VRM is at 109 °C right now.
You're wrong, we are. We're talking about video cards supposedly "built to last" at a certain temperature, and VRMs are a very important part of that. AMD's reference-design VRMs are of significantly higher quality than Nvidia's; you will see that even the reference design of a small GPU like the HD 5770 has higher-quality VRMs than a GTX 470.
This is made possible by the size of the chips: AMD's chips are physically smaller. For example, an HD 5870 die is physically smaller than a GTX 470's, and at the same time the HD 5870 is also faster. That leaves more headroom in the 5870's bill of materials (higher-quality components can be afforded in the reference design), while Nvidia has to skimp on component quality to make any sort of profit. I hope you understand that: AMD's reference-design components are of higher quality.
4. Fermi is made from exactly the same silicon as AMD's chips: they come from the same manufacturing process, run at the very same semiconductor foundry. As far as the core goes, the only argument on your side is that Fermi is a physically larger chip, so the heat is distributed over a wider surface area, but I seriously doubt that carries much weight against the other points I made.
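For what it's worth, the "wider surface area" argument can be sanity-checked with a back-of-the-envelope power-density number. The die areas and TDPs below are approximate published figures, and TDP is whole-board power (memory and VRM losses included) rather than core power, so treat the result as a ballpark only:

```python
# Rough sanity check of the "bigger die spreads the heat" argument.
# Die areas and board TDPs are approximate published figures, not measurements,
# and board TDP is not the same thing as core power.

cards = {
    "GTX 480 (GF100, Fermi)": {"die_mm2": 529.0, "board_tdp_w": 250.0},
    "HD 5870 (Cypress)":      {"die_mm2": 334.0, "board_tdp_w": 188.0},
}

for name, c in cards.items():
    density = c["board_tdp_w"] / c["die_mm2"]
    print(f"{name}: ~{c['board_tdp_w']:.0f} W over ~{c['die_mm2']:.0f} mm^2 "
          f"-> ~{density:.2f} W/mm^2")
```

By this crude measure the average watts per square millimetre of the two chips ends up in the same ballpark; what differs clearly is the total number of watts the cooler, the VRMs, and the rest of the board have to deal with.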
Even though silicon is generally silicon, that doesn't mean that some designs can't be more resistant to heat problems than others.
When the manufacturing process is the same, you're wrong. This isn't an Intel vs AMD deal here, this is a TSMC vs TSMC deal. It's the same shit, so to speak: both chips come from TSMC, and both share the same issues.
Which is probably why I said *most*...
....
Of course you can. If the component doesn't touch the heatsink or the GPU and it doesn't create a lot of heat on its own, it will be cooler than the GPU. Whether or not it touches the heatsink, it will be cooler the further away from the GPU you get. The designers of graphics cards aren't stupid, and they take this into account.
Don't be naive; the designers of custom boards will prioritise cost savings above anything else (unless we're dealing with extreme editions like the MSI Lightning, etc.).
And you are wrong when you imply that the GPU temperature has no effect on the other components on the board: the heat WILL transfer to the other components, whether it's carried by airflow or conducted through the PCB. (The PCB is actually pretty good at transferring heat; put your hand on the back side of your video card and you'll see what I mean. There is no magic to be pulled off here, things will get hot.)
As a matter of fact, the video card itself (!!) has an effect on the temperatures of the motherboard's components.
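A toy model makes the point. Treat a passive component as sitting between the GPU (coupled through the PCB's copper) and the case air; both thermal resistances below are guesses picked for illustration, not measurements of any real board:

```python
# Minimal sketch of why a hot GPU drags nearby components up with it.
# The component is modelled with no self-heating, coupled to the GPU die through
# the PCB (R_PCB) and to the case air (R_AIR). All values are assumptions.

T_AIR = 45.0    # air temperature around the card, C (assumed)
R_PCB = 8.0     # thermal resistance component <-> GPU through copper planes, C/W (assumed)
R_AIR = 20.0    # thermal resistance component <-> surrounding air, C/W (assumed)

def component_temp(t_gpu: float) -> float:
    """Steady state of a passive component sitting between a hot GPU and cooler air."""
    # Heat flowing in from the GPU equals heat flowing out to the air:
    # (t_gpu - t) / R_PCB = (t - T_AIR) / R_AIR  ->  solve for t.
    return (t_gpu * R_AIR + T_AIR * R_PCB) / (R_AIR + R_PCB)

for t_gpu in (70.0, 90.0, 110.0):
    print(f"GPU at {t_gpu:.0f} C -> nearby component sits at ~{component_temp(t_gpu):.0f} C")
```

With these made-up resistances, every extra 20 °C on the GPU pulls the nearby component up by roughly 14 °C through conduction alone, before you even account for the hot exhaust air blowing over it.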