With solid cooling we managed to run 3 chips at 91-93 GH/s, stable for 3-4 hours.
With our test software and firmware, it seems that this is the limit.
The strange thing is that we had to push the voltage up to 0.88 V in order to make it stable.
The thing with voltage-regulators is that they operate on a "frequency" themselves, and unless you actually "test" each output, the on-board voltage settings and measurements are "ballpark".
When the switching frequency is better matched to the draw-load, in combination with the capacitors and draining resistors, you operate with better "consistency", as there are no "drop-outs" of voltage/amps... Well, fewer drop-outs or brown-outs. (Voids which don't stop the attached components from operating, just from operating at "peak" performance.)
FYI: Mini ultra-caps or super-caps make perfect post-regulation voltage stabilizers, in addition to a mini joule-thief circuit or toroid filter. (That allows adequate amperage and nearly perfect, frequency-independent power to be sent to the post-components, just as if it were a solid DC battery supply feeding the components.)
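If you want a rough feel for that frequency/cap trade-off, the textbook ideal-buck ripple formulas are enough. Here is a minimal sketch; every component value in it (12 V input, 300 kHz switcher, 1 uH inductor, the cap sizes and ESR) is an invented example, not from any real board:

```python
# Back-of-envelope output-ripple estimate for an ideal buck regulator.
# All component values below are made-up illustrations, not real parts.

def buck_ripple(v_in, v_out, f_sw, L, C, esr=0.0):
    """Textbook ideal-buck ripple: inductor ripple current feeding the
    output cap, plus the cap's ESR contribution."""
    duty = v_out / v_in                      # ideal duty cycle
    di_l = v_out * (1 - duty) / (L * f_sw)   # inductor ripple current (A)
    dv_cap = di_l / (8 * f_sw * C)           # ripple from charging the cap
    dv_esr = di_l * esr                      # ripple across the cap's ESR
    return dv_cap + dv_esr

base  = buck_ripple(12, 0.88, 300e3, 1e-6, 470e-6, esr=0.005)
fast  = buck_ripple(12, 0.88, 600e3, 1e-6, 470e-6, esr=0.005)  # faster switching
stiff = buck_ripple(12, 0.88, 300e3, 1e-6, 1.0,    esr=0.005)  # 1 F super-cap added

print(f"stock:     {base * 1000:.2f} mV ripple")
print(f"2x f_sw:   {fast * 1000:.2f} mV ripple")
print(f"super-cap: {stiff * 1000:.2f} mV ripple")
```

Run the numbers and you will see the cap term shrink to nearly nothing with the super-cap, while the ESR term stays put... which is exactly why the post-filter stage (joule-thief/toroid) still matters alongside the big cap.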
Video-cards are the same way. You can bump the voltage up one decimal step, and the card will run almost 2x better. Bump it up one more step, and you are now in the "odd notches" of the cap/resistor/regulator combination and the card runs half as well. Bump it up another step, and you are back to normal operation... bump again, and you are back to 2x performance.

Not to mention, the program may "detect" something like 0.88 V, but when you actually measure it with a real meter, you see it is more like 0.85-0.92 V. Usually way off, and non-linear from one voltage setting to the next. Those detector circuits are cheap and uncalibrated, or calibrated and accurate only at room temperature, for a moment in time. However, they do what they were intended to do... let you control "higher and lower". They were not designed for accurate measurement. They give you a differential from what should be factory-calibrated, board-unique values, not a number to be taken at face value by an external program that has no idea what the actual voltage is.

(Normally, a calibration profile is stored in a BIOS-like chip, read by an external program, and THAT adjusted value is what gets displayed. The profile adjustments usually also take operating temperature into account. But that involves a lot more work for the board manufacturer. It is easier to just accept the value the chip spits out as "ballpark". I have 48 video-cards; all show a different temperature value at room temperature, all doing nothing. I can plug one card's voltage profile into another card, and it will fail and give crazy temperature readings. My thermal imager and voltage testers show that nothing is absolute in any uncalibrated device.)
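As a sketch of what that BIOS-style calibration profile amounts to in software: the gain, offset, and temperature coefficient below are made-up numbers for an imaginary board, not real calibration data:

```python
# Hypothetical per-board calibration profile. Every number here is an
# invented example; a real profile would be measured against a trusted
# bench meter for each individual board.
CAL_PROFILE = {
    "gain": 1.032,          # this imaginary sensor reads ~3% low
    "offset": -0.012,       # constant offset in volts
    "temp_coeff": -0.0004,  # volts of drift per degree C from cal temp
    "cal_temp_c": 25.0,     # temperature the profile was captured at
}

def corrected_voltage(raw_v, die_temp_c, profile=CAL_PROFILE):
    """Apply a linear gain/offset correction plus a simple temperature
    drift term to a raw sensor reading."""
    v = raw_v * profile["gain"] + profile["offset"]
    v += (die_temp_c - profile["cal_temp_c"]) * profile["temp_coeff"]
    return v

# The software "detects" 0.88 V on a hot die; the corrected value differs.
print(f"{corrected_voltage(0.88, 63.0):.3f} V")
```

Plug another card's profile into that function and you get a confidently wrong number, which is the whole problem.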
However, it is nice to know the "ballpark" limits.
Throw an oscilloscope on the line and see whether that voltage, and the prior ones, had drop-outs and notches that the caps and resistors were not "keeping up with". I suspect you will see spurious voids and cap-drain issues, with amperage pulses.
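If you export that scope capture to a CSV of (time, volts) samples, a few lines will flag the sags for you; the file name, the 5% sag threshold, and the 1 us minimum duration below are all assumptions you would tune to your own rail:

```python
# Minimal sketch: scan a scope capture (CSV of time_s,volts rows) for
# spans where the rail sags below a threshold. File name, threshold and
# duration are placeholder assumptions.
import csv

def find_dropouts(path, nominal=0.88, sag_pct=5.0, min_dur_s=1e-6):
    floor = nominal * (1 - sag_pct / 100.0)
    events, start = [], None
    with open(path) as f:
        for row in csv.reader(f):
            t, v = float(row[0]), float(row[1])
            if v < floor and start is None:
                start = t                       # sag begins
            elif v >= floor and start is not None:
                if t - start >= min_dur_s:      # long enough to count
                    events.append((start, t))
                start = None
    return events

for t0, t1 in find_dropouts("scope_capture.csv"):
    print(f"drop-out: {t0 * 1e6:.1f} us -> {t1 * 1e6:.1f} us")
```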