Without knowing the actual ASIC power consumption, I would speculate it to be roughly 2x more efficient, using 20W/GH for the ASIC, which may be optimistic IMHO.
I do see where you are coming from though. Even as cheap as an FPGA is to power, if the difficulty rises enough then it obviously becomes more profitable to mine with an ASIC. But at ~2x (my speculated number, since we lack hard data), just how much would difficulty need to go up? We need to chart or graph it out, I think. My math skills are pretty basic, so I am not sure whether difficulty would need to increase by the same factor as the efficiency difference between CPU/GPU, CPU/FPGA, GPU/FPGA, or something else. We could use the historical difficulty to surmise the growth percentage from CPU to GPU, but it would be hard to pin down the point where GPUs not only took the majority share of the hash rate but where that intersected with stale earnings for CPUs. We would of course have to normalize the price/difficulty data. Even lacking good global FPGA hash rate data, we could get pretty close to speculating its difficulty apex. We would need to compare the CPU-to-GPU difficulty apex slope in relation to their efficiency, then apply that formula to the GPU-to-FPGA difficulty in relation to efficiency. I can probably pen-and-paper it, but it will take me considerably longer to trial-and-error the proper method. Maybe one of the more proficient academics here can lend a hand?
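In the meantime, here is a minimal sketch of the power-cost break-even part, assuming placeholder numbers for BTC price, electricity cost, and board power draw (none of these are measured figures):

```python
# Rough break-even difficulty estimate for a miner.
# All inputs are assumptions for illustration, not real measurements.

BLOCK_REWARD = 50          # BTC per block
HASHES_PER_DIFF1 = 2**32   # expected hashes per difficulty-1 solution

def breakeven_difficulty(hash_mhs, watts, btc_price, kwh_price):
    """Difficulty at which daily revenue equals daily electricity cost."""
    daily_kwh_cost = watts / 1000.0 * 24 * kwh_price
    # Expected BTC/day = hashes_per_day / (difficulty * 2^32) * reward
    hashes_per_day = hash_mhs * 1e6 * 86400
    # Solve revenue == cost for difficulty:
    return hashes_per_day * BLOCK_REWARD * btc_price / (HASHES_PER_DIFF1 * daily_kwh_cost)

# FPGA board from this thread: ~200 MH/s at an assumed ~10 W draw
fpga = breakeven_difficulty(200, 10, btc_price=5.0, kwh_price=0.10)
# Hypothetical ASIC at 2x the efficiency: same 200 MH/s at 5 W
asic = breakeven_difficulty(200, 5, btc_price=5.0, kwh_price=0.10)
print(f"FPGA stops covering power at difficulty ~{fpga:,.0f}")
print(f"ASIC stops covering power at difficulty ~{asic:,.0f}")
```

The takeaway: at 2x the efficiency, the break-even difficulty is exactly 2x higher with everything else held equal. The harder question, as noted above, is how fast difficulty actually climbs toward those levels.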
The questions then are: how much will ASICs' $/MH be? How much cheaper can FPGAs be made? I believe the ASIC $/MH will not be enough of a leap below FPGA build costs to make the FPGA payoff time unreasonable. For this speculation I am assuming an FPGA cost of $1/MH or less, which is very doable now: LX150-N3 chips are street priced at $141, a cheap board and components cost $35, and assembly can be done for as low as $17. Total for ~200MH = $193. And the new series of Spartan is due out soon.
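For what it's worth, the $/MH and payoff arithmetic for that build, with an assumed net revenue per day (a placeholder; plug in whatever the current difficulty and exchange rate actually give you):

```python
# Cost-per-MH and payoff time for the FPGA build quoted above.
build_cost = 141 + 35 + 17      # chip + board/components + assembly = $193
hash_mhs = 200                  # ~200 MH/s from an LX150

cost_per_mh = build_cost / hash_mhs
print(f"${cost_per_mh:.2f}/MH")  # ~$0.97/MH, under the $1/MH target

# Payoff time under an ASSUMED net revenue of $0.50/day.
net_revenue_per_day = 0.50
print(f"Payoff in ~{build_cost / net_revenue_per_day:.0f} days")
```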
A couple of concepts which might enlighten you (or maybe muddy the waters even more).
sASICs (structured ASICs) are roughly 2x to 3x more efficient per watt than FPGAs and have a per-unit cost of ~1/2 to 1/5th, depending on volume (5K to 50K units).
ASICs are more like 5x to 20x more efficient per watt than FPGAs and can have a per-unit cost as low as 1/10th that of an FPGA, but they really only make sense in volumes of hundreds of thousands of units or more.
So it isn't that a sASIC would be more efficient BUT more expensive. A sASIC could be 2x as efficient per watt AND half the cost. A true ASIC (even cell-based) could be in the <$0.20 per MH and 100MH/W range.
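To make those multipliers concrete, here is what they imply starting from the FPGA baseline above; the ~20MH/W FPGA figure is my assumption (~200MH at ~10W), not a measurement:

```python
# Implied $/MH and MH/W for each tier, derived from the rough
# multipliers above. FPGA baseline efficiency is an assumption.
fpga_cost_per_mh = 1.00   # $/MH (from the build above)
fpga_mh_per_watt = 20.0   # assumed: ~200 MH/s at ~10 W

tiers = {
    # name: (cost multiplier vs FPGA, efficiency multiplier vs FPGA)
    "FPGA":  (1.0, 1.0),
    "sASIC": (0.5, 2.5),     # ~1/2 cost, 2x-3x per watt
    "ASIC":  (0.1, 10.0),    # as low as 1/10th cost, 5x-20x per watt
}
for name, (cost_mult, eff_mult) in tiers.items():
    print(f"{name:5s}: ${fpga_cost_per_mh * cost_mult:.2f}/MH, "
          f"{fpga_mh_per_watt * eff_mult:.0f} MH/W")
```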
Now I find it beyond unlikely we will see sASICs anytime in the next couple of years. Startup capital is in the hundreds of thousands of dollars. We are talking about months of talent/salary, IP licensing, high-end design software, FPGA prototyping (at $2000+ per chip), test runs, contracted (and at a minimum partially prepaid) production runs, etc. An established player could do it cheaper, but no fab is going to trust a startup with anything less than full prepayment for 10K units.
True ASICs are even more unlikely, as they require even more customization, and that means more talent, more testing, and (unless you want development time measured in years) even more licensing of IP. Startup capital is likely in the low millions for a current-gen (45nm) ASIC.
So I think any FPGA bought today is safe from the threat of sASIC or cell-based ASIC "future" designs for at least 3-5 years. Bitcoin would need to see significant stabilization and growth before it attracts the kind of capital necessary for those kinds of designs.
Still, remember FPGAs are subject to Moore's Law. 28nm FPGAs are very scarce right now and priced off the chart, but in time they will be mundane. They will deliver ~2x the performance per watt and per dollar (slightly less, but using 2 as a multiplier is fine). That will be the true threat to current-gen FPGAs, but even there it will affect new sales and resale value more than profitability for a long time.
And, just how many MH can an ASIC achieve?
This is a meaningless metric. Say you have a design which gets x GH. If you quadruple the size of the chip you could get 4x the performance, so performance per chip isn't relevant. However, a chip with 4x the surface area will generally have lower yields, so at some point there is a "magic" size where the cost of a multi-chip design balances the additional cost of a larger chip.
If you could get a 1GH board @ 15W for $100, would you really care if it was made up of 1, 2, or 4 chips? All that matters is performance per watt and performance per dollar, right?
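To show where that "magic" size comes from, here is a toy cost model with a simple Poisson yield assumption; the wafer cost and defect density are illustrative guesses, not foundry numbers:

```python
import math

# Toy model: cost per GOOD die as die area grows, under a Poisson
# yield model. Wafer cost and defect density are illustrative guesses.
WAFER_COST = 3000.0      # $ per 300mm wafer (assumed)
WAFER_AREA = 70000.0     # usable mm^2 (approx, ignoring edge loss)
DEFECTS_PER_MM2 = 0.005  # defect density (assumed)

def cost_per_good_die(die_area_mm2):
    yield_fraction = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)
    dies_per_wafer = WAFER_AREA / die_area_mm2
    cost_per_die = WAFER_COST / dies_per_wafer
    return cost_per_die / yield_fraction

for area in (50, 100, 200, 400):
    cost = cost_per_good_die(area)
    print(f"{area:4d} mm^2 die: ${cost:7.2f} per good die, "
          f"${cost / area:.3f} per good mm^2")
```

Cost per good mm^2 climbs as the die grows, while per-chip packaging, testing, and board costs push the other way. The crossover between the two is exactly that sweet spot.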
Still, to get a very loose ballpark figure:
Current FPGAs get about 1MH per square mm.
On a 45nm process a completely custom ASIC could maybe achieve ~20MH per square mm. On a 100mm^2 chip we are talking ~2GH/s. Of course there is no reason one would need to stop at 100mm^2; CPUs/GPUs come as large as 500mm^2, and a chip that large could achieve maybe 10GH/s. However, larger chips = lower yields, so first-gen ASICs will likely be designed small.
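Plugging in that speculated density (20MH per square mm is a guess, not a measured figure):

```python
MH_PER_MM2 = 20  # speculated custom-ASIC density at 45nm
for area_mm2 in (100, 500):
    print(f"{area_mm2} mm^2 -> ~{area_mm2 * MH_PER_MM2 / 1000:.0f} GH/s")
```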
What would be the estimated power usage of a 1GH ASIC? (Using 'ASIC' as a blanket term for all variations: sASIC, full-custom ASIC, etc.)
Well, you can't lump all ASICs together, as they get vastly higher efficiency as you move up the cost ladder.
sASIC - lowest upfront cost, highest per-unit cost. Still roughly 2x the efficiency of an FPGA (in performance per watt and performance per $).
Cell-based ASIC - higher upfront cost, significant risk, much lower per-unit cost.
Custom ASIC - huge risk, massive upfront cost, "negligible" per-unit cost.
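So, to put rough watts on the 1GH question using those tiers (the FPGA baseline of 20MH/W and the per-tier multipliers are assumptions drawn from the ranges above):

```python
# Rough power draw for 1 GH/s at each tier. The FPGA baseline
# efficiency (20 MH/W) and the tier multipliers are assumptions.
FPGA_MH_PER_W = 20.0
tiers = {"FPGA": 1.0, "sASIC": 2.0, "cell-based ASIC": 5.0, "custom ASIC": 20.0}
for name, mult in tiers.items():
    watts = 1000.0 / (FPGA_MH_PER_W * mult)
    print(f"{name:16s}: ~{watts:.0f} W per GH/s")
```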