I speculate, because the RIG was never working, the DESK had no working power-save or turbo mode, and now they know they will never be able to make it work.
If the errors are just in the output stream of junk shares that pools demand as proof-of-work... then this will not even matter to a solo miner. A solo miner only gets the one winning solution back, not a stream of junk shares. They may never see an error, because there is no flood of data being sent back to the wallet they are mining directly into.
From the unit that was tested a few posts back, turbo mode has a high error rate of about 17%, which I assume is a software issue it is choking on. I imagine the issue can be somewhat resolved by playing with timing delivery and frequency tuning.
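Worth noting just how much that 17% costs. As a back-of-the-envelope check (the baseline rate and the raw turbo speed-up below are assumed figures; only the 17% error rate comes from the tested unit), turbo mode has to out-run its own error rate before it gains anything:

[code]
# Back-of-the-envelope check: only the 17% error rate comes from the
# tested unit; the baseline rate and raw turbo speed-up are assumed.
NORMAL_RATE_GHS = 100.0   # assumed baseline hashrate, errors ~0% at stock
TURBO_SPEEDUP = 1.30      # assumed +30% raw speed in turbo mode
TURBO_ERROR_RATE = 0.17   # ~17% bad results, per the tested unit

effective_normal = NORMAL_RATE_GHS
effective_turbo = NORMAL_RATE_GHS * TURBO_SPEEDUP * (1 - TURBO_ERROR_RATE)

print(f"normal: {effective_normal:.1f} GH/s")
print(f"turbo:  {effective_turbo:.1f} GH/s")  # 130 * 0.83 = ~107.9, only ~8% net gain
[/code]

So even if turbo really were 30% faster on paper, the errors would eat most of the gain.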
There are obviously a few misalignments in the data flow, and/or issues with the read calls. (That can happen if the rising edge is not strong enough to be detected fast enough to trigger the capture of that data packet, which results in lost packets, or missing bits in the data stream.) Usually they try to fix that by bumping the voltage, but when that fails, it is a frequency-interference or noise-related issue.
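For context, the way the mining software even notices those corrupted results is by re-hashing every nonce the chip returns and counting anything that fails as a hardware error. A minimal sketch of that check, assuming Bitcoin-style double SHA-256, with a dummy header and an artificially easy target just for illustration:

[code]
import hashlib
import struct

def check_nonce(header76: bytes, nonce: int, target: int) -> bool:
    """Re-hash the 80-byte block header with the chip's returned nonce
    and check it against the target (Bitcoin-style double SHA-256)."""
    block = header76 + struct.pack("<I", nonce)
    digest = hashlib.sha256(hashlib.sha256(block).digest()).digest()
    # Bitcoin compares the hash as a little-endian 256-bit integer.
    return int.from_bytes(digest, "little") <= target

# Dummy stand-ins for illustration; a real driver gets these from the work item.
header76 = b"\x00" * 76
target = 2 ** 252                 # artificially easy target
chip_results = [0, 1, 2, 3]       # nonces the hardware claimed were valid

hw_errors = sum(not check_nonce(header76, n, target) for n in chip_results)
print(f"{hw_errors}/{len(chip_results)} returned nonces failed verification")
[/code]

A bit lost anywhere between the chip and the host shows up as a failed check, which is how that 17% figure gets counted in the first place.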
If all the packets showed up as bad, or they stayed bad at all speeds... then I would say there is no hope. As it stands, though, this still looks like a software issue. (Similar problem with 7970s when mining scrypt: a slight bump in frequency can gain you 20% more speed... one more bump, and it is all out of alignment, and you lose 40%.)
Still, I think the new design without the daisy-chain will give them better control over the whole unit. Since each chip will have slight impurities, each may actually need its own operating frequency and voltage, whereas now, I believe the settings are all-or-nothing: one setting drives every chip. (That could actually be alleviated by frequency-matching chips to their supporting components before mounting them, or by measuring each component's actual values, sorting them, and matching them to each board. That is unrealistic to do without machines doing the testing for you, and it is a lot more work. Easier to just say +/- 20%.)
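To make the all-or-nothing point concrete, here is a minimal sketch of the two configuration shapes; the chip IDs, field names, and numbers are all hypothetical:

[code]
from dataclasses import dataclass

@dataclass
class ChipTuning:
    freq_mhz: int
    core_mv: int

# Daisy-chained design: one broadcast setting drives every chip, so it has
# to be dialed back to whatever the weakest chip in the chain tolerates.
global_tuning = ChipTuning(freq_mhz=350, core_mv=900)

# Individually addressed design: each chip gets its own sweet spot.
per_chip_tuning = {
    0: ChipTuning(freq_mhz=375, core_mv=890),
    1: ChipTuning(freq_mhz=350, core_mv=900),  # the weak chip that set the old global limit
    2: ChipTuning(freq_mhz=362, core_mv=895),
}
[/code]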
Hell, even something as unobvious as one bus wire being longer than another can add enough delay and capacitive interference to throw off the entire data stream at high speeds. Data likes tuned lines of equal length and impedance, like the headers and exhaust on a race car. That is why ribbon cable has been used for ages, and why some data lines are routed in zig-zag (serpentine) patterns when they are too short from point A to point B. The shortest path is best only if they are all equally short.
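The numbers behind that are easy to check. A signal on FR4 board material travels at very roughly 15 cm per nanosecond (assuming an effective dielectric constant around 4), so any length mismatch turns directly into timing skew; the bus speeds below are assumed, just to show the scale:

[code]
# Rough trace-skew check; the dielectric constant and bus speeds are assumed.
C_CM_PER_NS = 30.0             # speed of light in vacuum, cm/ns
EPS_EFF = 4.0                  # rough effective dielectric constant for FR4

v = C_CM_PER_NS / EPS_EFF ** 0.5   # ~15 cm/ns on the board
mismatch_cm = 3.0                  # one line 3 cm longer than its neighbor
skew_ns = mismatch_cm / v          # ~0.2 ns of skew

for clock_mhz in (100, 400, 1000):
    bit_period_ns = 1000.0 / clock_mhz
    print(f"{clock_mhz:4d} MHz: skew is {100 * skew_ns / bit_period_ns:.0f}% of a bit period")
# Harmless at 100 MHz, but at 1 GHz that 3 cm mismatch eats ~20% of every bit.
[/code]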
(PCIe had this issue at first, but it was resolved by length-matching the data lines, so every lane sees the same delay.)
However, for those of us who know how to manipulate these things... we may still have a chance at getting that perfect tuning, per chip.
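And if the new design ever exposes per-chip controls, that tuning could even be automated: sweep each chip across a band of frequencies, measure the error rate at each step, and keep whichever setting gives the best effective rate. That is exactly how you find the 7970-style sweet spot before the next bump throws everything out of alignment. A sketch, where set_chip_frequency and measure_error_rate are hypothetical hooks standing in for whatever the real firmware exposes:

[code]
# Hypothetical hooks, standing in for whatever the real firmware exposes.
def set_chip_frequency(chip_id: int, freq_mhz: int) -> None:
    pass  # would write the chip's PLL/clock register here

def measure_error_rate(chip_id: int, freq_mhz: int) -> float:
    # Fake measurement for this sketch: errors stay low until a per-chip
    # sweet spot, then spike, like the 7970 scrypt behavior above.
    sweet_spot = 350 + 10 * (chip_id % 3)
    return 0.02 if freq_mhz <= sweet_spot else min(1.0, 0.02 + (freq_mhz - sweet_spot) * 0.01)

def tune_chip(chip_id: int, freqs_mhz=range(300, 425, 25)) -> int:
    """Sweep one chip through a frequency band and lock in the setting
    with the best effective rate (raw speed discounted by errors)."""
    best_freq, best_effective = None, -1.0
    for freq in freqs_mhz:
        set_chip_frequency(chip_id, freq)
        err = measure_error_rate(chip_id, freq)
        effective = freq * (1.0 - err)
        if effective > best_effective:
            best_freq, best_effective = freq, effective
    set_chip_frequency(chip_id, best_freq)
    return best_freq

# Each chip lands on its own sweet spot instead of one all-or-nothing setting.
print({chip_id: tune_chip(chip_id) for chip_id in range(4)})
[/code]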