Has anyone ever heard of a datacentre using anything other than air conditioning? I haven't.
For peace of mind I run my expensive hardware investment with an air conditioner that keeps the room at a constant 19 Celsius - I can go to work, go on holidays, or watch TV and be assured that my machines are at a steady temperature.
My rigs draw 5kW and my air con can remove 7kW of heat using just over 2kW of power. Why invest big and then skimp on cooling? Cooling needs to be factored into the cost and is as important as protecting against power surges or dirty power - neglect it and it will destroy your hardware!
With air conditioning you can't introduce separate cooling systems, as it messes with the airflow. The GPUs on my 6990s run at around 70 Celsius or under continuously. That's with 3x 6990s in each rig running at the stock 830MHz per GPU, inside HAF X cases with a modified side panel (professionally designed and cut by an engineering company) to fit 4x 120mm Delta fans for airflow. These fans have dust filters as well. Each rig draws about 1.2kW and there are four in total. Just to add, the fan speed on the 6990s themselves is set to 70%, though they manage fine at 60%.
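To put rough numbers on the cooling side (using my figures above, plus an assumed electricity price purely for illustration):

```python
# Back-of-the-envelope cooling sums based on the figures above.
# The electricity price is a made-up example value, not a real quote.

rig_power_kw = 1.2             # wall draw per rig (from the post)
num_rigs = 4
ac_cooling_capacity_kw = 7.0   # heat the air con can remove
ac_input_power_kw = 2.0        # electrical draw of the air con
price_per_kwh = 0.20           # assumed electricity price, change to suit

heat_load_kw = rig_power_kw * num_rigs            # ~4.8 kW of heat to remove
cop = ac_cooling_capacity_kw / ac_input_power_kw  # coefficient of performance, ~3.5
headroom_kw = ac_cooling_capacity_kw - heat_load_kw

total_draw_kw = heat_load_kw + ac_input_power_kw
daily_cost = total_draw_kw * 24 * price_per_kwh

print(f"Heat load: {heat_load_kw:.1f} kW, AC headroom: {headroom_kw:.1f} kW")
print(f"AC COP: {cop:.1f}")
print(f"Total wall draw: {total_draw_kw:.1f} kW, ~{daily_cost:.2f}/day at the assumed rate")
```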
If you're serious then professional is the way to go. Jobs isn't still running his company out of his garage. A business (i.e. an investment) requires industry standards. Otherwise it's a joke.
I see people running custom cooling solutions that are only just keeping their cards under 85 Celsius. What if it's a hot day and they jump to 95? Do you panic? Do you call in sick to work? Do you fly back from your holidays?
I am also considering downclocking my GPUs rather than having them maxed out at 830MHz all the time - a few less hashes isn't going to kill me. As for overclocking, I wouldn't consider it. Cards certainly aren't designed to be maxed out all the time, never mind overclocked.
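As a rough sketch of what downclocking would cost in hashrate, assuming hashrate scales more or less linearly with core clock (an assumption, and the per-GPU hashrate below is just an illustrative figure):

```python
# Rough estimate of the hashrate hit from downclocking, assuming hashrate
# scales roughly linearly with core clock (an approximation, not a measured fact).

stock_clock_mhz = 830
downclocked_mhz = 750            # example target clock, purely illustrative
hashrate_per_gpu_mhs = 350.0     # assumed MH/s per 6990 GPU at stock, for illustration
gpus = 3 * 2 * 4                 # 3 dual-GPU 6990s per rig, 4 rigs = 24 GPUs

scale = downclocked_mhz / stock_clock_mhz
stock_total = hashrate_per_gpu_mhs * gpus
downclocked_total = stock_total * scale

print(f"Stock: {stock_total:.0f} MH/s, downclocked: {downclocked_total:.0f} MH/s "
      f"({(1 - scale) * 100:.1f}% less)")
```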
Google "data center liquid cooling" - 3 million results. It isn't common, but it's not unheard of. Keep in mind that bitcoin miners with 4 or even 8 GPUs per system put out a lot of heat per square foot, well into or past the territory of the more exotic data center setups like blade servers.
Also, on the note of 25kW of lighting, keep in mind that a high-density server cabinet can hit 30kW all by itself right now, with projections of up to 50kW coming in a couple of years (source: http://www.42u.com/liquid-cooling-article.htm). Was all 25kW of lighting confined to a 42U-sized space, and, just as important, all stacked on top of each other? And remember, that is one cabinet - you can fit a lot of those in one room.
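For a feel of how much denser that is than room lighting, here's a rough comparison; the floor areas below are assumed round numbers, not real measurements:

```python
# Quick power-density comparison: one high-density cabinet vs. lighting spread
# over a whole room. The floor areas are assumed round numbers for illustration.

cabinet_power_kw = 30.0        # high-density cabinet figure cited above
cabinet_footprint_m2 = 0.7     # assumed ~600mm x 1200mm 42U rack footprint

lighting_power_kw = 25.0       # the 25kW of lighting mentioned above
room_area_m2 = 200.0           # assumed room size the lighting was spread over

cabinet_density = cabinet_power_kw / cabinet_footprint_m2
lighting_density = lighting_power_kw / room_area_m2

print(f"Cabinet: ~{cabinet_density:.0f} kW/m^2")
print(f"Lighting: ~{lighting_density:.2f} kW/m^2")
print(f"The cabinet is roughly {cabinet_density / lighting_density:.0f}x denser")
```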
Also, most data centers these days are looking into a lot of other options to improve PUE (heat wheels, liquid cooling, etc.), because at a certain point it isn't efficient to just throw more AC at the problem.
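For reference, PUE is just total facility power divided by IT equipment power; plugging in the rig and air-con figures from earlier in the thread (and ignoring everything else) gives a rough idea:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# Using the rig and air-con figures from earlier in the thread, and ignoring
# lighting, networking, etc., purely as an illustration.

it_power_kw = 4.8        # four rigs at ~1.2 kW each
cooling_power_kw = 2.0   # air conditioner draw

pue = (it_power_kw + cooling_power_kw) / it_power_kw
print(f"PUE ~ {pue:.2f}")  # ~1.42; lower PUE means less overhead per watt of IT load
```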