I'm not entirely certain that this can be clearly proven given what's on the table. BFL stated that they were experts in this area with a history of delivery. Those bona fides were light on evidence, but they do argue against significant ignorance.
While I don't expect you to take my word for it, please check around with other FPGA and IC developers and ask them how a Bitcoin application compares to basically every other FPGA application in terms of chip utilization. Mining, compared to basically all other FPGA applications, imposes unusual requirements that a typical designer would not factor in without prior knowledge of the situation.
I don't know you. You seem intelligent but a little too emotional on this particular issue. This could be because you believe in the longer-term goals of BFL and are offended at what amounts to pure BS on the part of many of its detractors. I understand this, but it's the Internet. Use the Ignore link with great prejudice. I do, and it helps my outlook significantly.
I'm sorry you feel that way, and that my deconstruction of your arguments somehow strikes you as "emotional." I have no personal stake in BFL, but I do take umbrage at the fact that people spout all sorts of misinformation and outright lies (not saying that's the case here; I'm referring to another thread), and I would defend the subject with the same "emotion" you are attributing to me here. I'm highly opposed to bullshit and armchair lawyers, yes, it's true.
Given that fact, it's not surprising that even experienced designers would be surprised and appalled by the requirements of a bitcoin miner. Throwing in a bit of pure conjecture: I suspect this is exactly what happened to LargeCoin when they realized that their initial ASIC designs were not going to meet their targets, since those designs were validated with traditional simulations and not mining workloads. That, I suspect, is why they abandoned the LargeCoin unit; it wouldn't have come anywhere close to what they wanted.
The claim would be reasonable had it not been so easy for smaller shops (ngzhang, ztex, etc.) to deliver FPGA-based solutions in a timely fashion. The programming is obviously accessible to a reasonable practitioner in the craft, given the number of bitstreams that have been produced that push the LX150.
What does the programming have to do with it? The bitstream has never been a bone of contention as far as I know (someone correct me if I'm wrong) - the only bone of contention has been the power usage (which, through heat, directly limits the sustainable hashrate). I'm sure none of the engineers you have listed would argue that the chips BFL uses are incapable of producing a 1 GH/s run for brief periods, as in "normal" FPGA applications. The breakdown occurs when you try to mine at upwards of a 50% switching rate instead of the industry norm of 12%... suddenly those 1.2 GH/s chips start to overheat and fail at 50%, whereas they can run all day for years at 12%. Any FPGA designer coming into that territory unfamiliar with bitcoin, yet familiar with industry standards, would conceivably make that mistake. A rough sketch of the arithmetic follows below.
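To make the point concrete, here's a back-of-the-envelope sketch of CMOS dynamic power versus toggle rate. The P = alpha * C * V^2 * f relationship is standard; every specific number plugged in (capacitance, voltage, frequency) is a made-up placeholder for illustration, NOT a measured figure for any BFL part. Only the ratio between the two cases matters:

```python
# Illustrative sketch of why toggle rate dominates FPGA power budgets.
# All constants below are hypothetical stand-ins, not measured values;
# the point is the P = alpha * C * V^2 * f relationship.

def dynamic_power_watts(toggle_rate, c_eff_farads, v_core, f_hz):
    """CMOS dynamic power: P = alpha * C * V^2 * f.

    toggle_rate  -- fraction of nodes switching per clock (alpha)
    c_eff_farads -- total effective switched capacitance (assumed)
    v_core       -- core voltage in volts (assumed)
    f_hz         -- clock frequency in Hz (assumed)
    """
    return toggle_rate * c_eff_farads * v_core**2 * f_hz

C_EFF = 200e-9   # 200 nF effective capacitance -- made-up figure
V_CORE = 1.2     # volts -- plausible for the era, still an assumption
F_CLK = 100e6    # 100 MHz -- assumption

typical = dynamic_power_watts(0.12, C_EFF, V_CORE, F_CLK)
mining  = dynamic_power_watts(0.50, C_EFF, V_CORE, F_CLK)

print(f"'normal' design @ 12% toggle: {typical:.1f} W")
print(f"miner @ 50% toggle:           {mining:.1f} W  ({mining/typical:.1f}x)")
```

Whatever the real constants are, the 50% case burns roughly 4x the dynamic power of the 12% case on the same silicon, which is exactly the thermal cliff described above.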
I am not making excuses for BFL or their failure to deliver. I am simply pointing out why your argument is flawed. I am sorry if that offends you or somehow puts you on edge, but the facts are facts. BFL could have easily delivered a product comparable to Ztex, ngzhang, et al., since their hashrates are so far removed from what BFL was offering, even AFTER the reduction... but the fact that BFL was offering a product that, after the spec reduction, was 4x the speed for 1/2 the cost should afford them quite a bit of leeway when it comes to the very first product delivered. Using the other products as examples is disingenuous at best, since they fall so far short in terms of performance and price.
The FPGA-to-ASIC process is capital intensive but not difficult. It's a very well-worn path. We have tools such as Verilog and friends. We can prototype on the FPGA, validate with various circuit validation tools (OK, these are _all_ buggy), simulate (slowly) on our beefy workstations, and have a reasonable shot at a successful IC, especially one as simple as a BTC ASIC. This isn't a Pentium 60, where you're going to run into corner cases with FDIV. The rest is glue and IO, and the smart move is to leave the ASIC as dumb as possible and leverage existing tech for this.
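On "simple as a BTC ASIC": the entire correctness surface of the core datapath is double SHA-256 over an 80-byte header plus a comparator. Here is a minimal Python sketch of that logic (hashlib standing in for the hardware pipeline); the header and target below are dummy placeholders chosen so the loop terminates quickly, not real chain data:

```python
# Minimal sketch of the logic a Bitcoin mining ASIC must get right:
# double SHA-256 over an 80-byte block header, then compare the
# digest (as a little-endian 256-bit integer) against the target.

import hashlib

def mines(header_76: bytes, nonce: int, target: int) -> bool:
    """True if header + nonce double-SHA256 hashes below target."""
    header = header_76 + nonce.to_bytes(4, "little")
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "little") < target

# Placeholder header and a deliberately easy target, just to
# exercise the search loop.
dummy_header = bytes(76)
easy_target = 1 << 240

for nonce in range(1_000_000):
    if mines(dummy_header, nonce, easy_target):
        print(f"found nonce {nonce}")
        break
```

Compare that against a floating-point divider: there's no microcode, no corner-case table, just a fixed-function pipeline you can verify exhaustively against a software reference. Which is why the hard part was never the logic, and why the power/thermal question above is where the real risk lives.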
I don't disagree with this... but I'm not sure what it has to do with anything or how it's relevant to your previous statements. Could you clarify?