Hmm... Unless AMDOverdriveCtrl has changed since I last downloaded it, it doesn't know how to change the voltage. I mean... you can set a new value, but the change never actually takes effect. You can see the actual voltage in the far-right text box next to the GPU/mem frequencies (not the ones at the bottom though). Like I said, I haven't updated it in quite a while, so maybe support has been added?
Are you sure that the temperature change isn't just from the memory frequency changing? I'll have to give this a try tomorrow...
Hence I'm using AMDOverdriveCtrl from the command line. If you haven't tried this - do it! Not only does running AMDOverdriveCtrl in batch mode (-b) show you the default settings (core and memory clock, plus core voltage!), it also shows you which 'devices' are active (most GPUs present a separate 'device' for each output, whether connected or not - the default is always a connected one), so you can choose which device ID to program.
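For anyone who hasn't tried it, here's a rough sketch of how to poke at it from a script. The -b batch-mode flag is the one I mentioned above; the PATH check around it is just defensive scripting I've added, and exact argument syntax may differ between versions, so check --help on your build:

```shell
#!/bin/sh
# Sketch only: -b runs AMDOverdriveCtrl in batch mode, which dumps the
# default core/memory clocks and core voltage, plus the active device IDs,
# before you commit to programming a particular device.
# Exact syntax may vary by version -- check --help on your build.
if command -v AMDOverdriveCtrl >/dev/null 2>&1; then
    AMDOverdriveCtrl -b     # print defaults and the device list
else
    echo "AMDOverdriveCtrl not found in PATH"
fi
```

Handy over SSH too - no need to fire up the GUI on a headless rig just to read back what the card is actually set to.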
I've found that many manufacturers have interestingly different voltages for the same GPUs, and also that all of them (5xxx and 6950) will tolerate a light undervolt and *still* overclock to the previous max overclock. Saves me quite a bit of power at the wall, but more importantly, given my 12-GPU high-density shelf rig, keeps them from cooking.
I'm surprised you're not running AMDOverdriveCtrl from the command line already - do you use VNC or remote X to control your farm? You've got a LOT of hashing power, so you must have a LOT of boxes...
I'd be very interested to have it confirmed that the tool is, in fact, undervolting the cards as it reports... I can't stand ANY tool that reports success when it has actually failed to perform the command...