I don't think my mining rigs would be profitable at UK electricity costs unless they were all overclocked - the difference is substantial. For example, 5850s usually ship with 725 MHz cores, but most will run at 900 MHz even with an undervolt. That's the difference between 300 MH/s and 380 MH/s, at the same power consumption once the memory clock is dropped to 300 MHz.
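As a sanity check on those figures - a back-of-envelope in Python, assuming hash rate scales linearly with core clock (the kernel is compute-bound, so the linear assumption is mine, not a measurement):

```python
# Rough sanity check: the Bitcoin SHA-256 OpenCL kernel is compute-bound, so
# hash rate should be roughly proportional to core clock.
# The clocks and the 300 MH/s baseline are the numbers quoted above.
stock_clock, oc_clock = 725.0, 900.0   # MHz
stock_rate = 300.0                     # MH/s at the stock clock

predicted_oc_rate = stock_rate * oc_clock / stock_clock
print(f"predicted at {oc_clock:.0f} MHz: {predicted_oc_rate:.0f} MH/s")  # ~372 MH/s
# The measured ~380 MH/s is close to the linear prediction.
```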
Just to be clear: the power consumption of your GPU scales linearly with clock speed (and roughly quadratically with voltage). So overclocking does very little to improve efficiency/W; it mostly helps efficiency/$. The higher you clock it, the more power it draws, and vice versa. At constant voltage, the power efficiency of your GPU stays the same at the higher clock - in fact it might even drop a tiny bit, because the card runs hotter, hotter GPUs draw more power, and the fans have to spin faster (which also costs power, however little). That effect is marginal, though, and probably more than offset by the constant power draw of the CPU, motherboard, RAM, etc., which becomes a smaller share of your total as the GPUs' hash rate goes up.
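To make that scaling argument concrete: dynamic CMOS power goes roughly as P ≈ C·V²·f while hash rate goes as f, so at fixed voltage MH per joule stays flat and only drops once you raise the voltage. A toy model sketching this - the baseline wattage and voltages are illustrative placeholders, not measurements from either rig:

```python
# Toy model of the argument above: dynamic power ~ V^2 * f, hash rate ~ f.
# The baseline 150 W and the voltages are illustrative placeholders only.
def gpu_power(freq_mhz, volts, base_freq=725.0, base_volts=1.088, base_watts=150.0):
    """Scale a baseline dynamic-power figure by (V/V0)^2 * (f/f0)."""
    return base_watts * (volts / base_volts) ** 2 * (freq_mhz / base_freq)

def hash_rate(freq_mhz, base_freq=725.0, base_rate=300.0):
    """Hash rate roughly proportional to core clock for a compute-bound kernel."""
    return base_rate * freq_mhz / base_freq

for freq, volts in [(725, 1.088), (900, 1.088), (900, 1.200)]:
    p, h = gpu_power(freq, volts), hash_rate(freq)
    print(f"{freq} MHz @ {volts:.3f} V: {p:6.1f} W  {h:5.1f} MH/s  {h / p:.2f} MH/J")
# The first two rows give the same MH/J (higher clock, same voltage); the
# overvolted third row is the only one where efficiency per watt actually drops.
```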
Understood - hence the comment about reducing the memory clock, which you redacted... My measurements are from consumer-grade equipment (the UK equivalent of a Kill-A-Watt - they look the same, but ours aren't sold under that name), so they may not be super-accurate, but reducing the memory clock made up for the GPU clock increase and reduced temperatures substantially.
It's counterintuitive, since the memory isn't worked hard by the Bitcoin OpenCL kernel and the temperature readings presumably come from the GPU die rather than the area around the memory chips. But GDDR5 running at 1250 MHz appears to draw a considerable amount of power, and dropping the clock to 300 MHz makes a significant difference.
I still stand by my claim that a standard 5850 running a Bitcoin miner will use the same amount of power at the wall as one with the GPU clocked up to 900 MHz but the memory clocked down to 300 MHz. Even if I'm slightly wrong, or there's a 5% error in the readings due to low-grade power meters, the extra hash rate is still a large factor in profitability when electricity is at UK rates.
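To put the UK-rates point in numbers, here's a rough daily-cost comparison. The tariff and wall-power figures below are hypothetical placeholders (swap in your own bill and meter readings); only the two hash rates come from the thread:

```python
# Rough electricity-cost-per-hash comparison. The tariff and wall draw are
# hypothetical placeholders; only the 300 vs 380 MH/s figures come from the thread.
TARIFF_GBP_PER_KWH = 0.12   # hypothetical UK tariff - use your own
WALL_WATTS = 250.0          # hypothetical at-the-wall draw for one 5850

def daily_cost_gbp(watts, tariff=TARIFF_GBP_PER_KWH):
    """Electricity cost for 24 h of constant draw."""
    return watts / 1000.0 * 24.0 * tariff

for label, mhs in [("stock 725 MHz", 300.0), ("900 MHz core / 300 MHz mem", 380.0)]:
    cost = daily_cost_gbp(WALL_WATTS)
    pence_per_mhs = cost * 100.0 / mhs
    print(f"{label}: £{cost:.2f}/day, {pence_per_mhs:.3f}p per MH/s per day")
# Same wall power either way (per the claim above), so the overclocked/underclocked
# card does ~27% more hashing for the same electricity bill.
```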
I don't dispute your argument re: lifespan... even at completely standard clocks, most of these GPUs simply weren't designed for 24/7 computation. However, intermittent gaming causes severe heat and power cycling, from idle to full load, which is far harder on most machinery than a constant load. Only time will tell whether the constant load I'm running is worse for the GPU than repeated heat cycles.