This is my whole point: if there is enough money in it, some entity will do it. But only companies with very good funding will be able to.
Fast forward to 2017: everybody is using Vertcoin now because they thought it would free the cryptocurrency world from "those ASIC companies". But, boom! AMD just developed an ASIC miner. Their GPUs are really good at mining anyway, so they simply stripped out everything not needed for mining (no video output etc.), added high-power VRMs for insane clock speeds and loads of high-speed memory (which, being such a big company buying tons of it, they get cheaper than anyone else), so the miners run 10-100x more efficiently (both in MH/s per joule and in MH/s per USD) than the GPUs they sell. So now all the hashing power basically lies in the hands of one company. And because the ASIC design is very complex, no other company competes with AMD. What now?
With SHA, you could even go the "ultra cheap" route and do a hard-copy ASIC from existing (open-source) HDL code. How many SHA ASIC makers do we have? A dozen? They all compete, so the market is at least an oligopoly rather than a monopoly.
Anyway: the statement "No more ASICs" is just not true. It should say "Currently no ASICs".
ShadesOfMarble,
Also read tsh's reaction to my post though, he brings up an interesting point regarding the power consumption/efficiency of ASICs.
Edit: Wait, I'll paste it in:
ShadesOfMarble does have a point though, IMO. ASICs (and FPGAs) are sort of in between hardware and software. The GPU core itself is basically an ASIC, just one with lots of functionality that's not required for mining.
I think this misses the reason that mining-specific ASICs are efficient. The GPU core is a parallel vector compute engine, in effect a specialised CPU. I'm not sure there is a big overhead on a graphics card (beyond a few dollars' worth of driver chips) which could be saved. The power benefit of a SHA or scrypt ASIC comes from the fact that they are not programmable: the single function they compute is hard-wired (which is also why it is impossible to re-target them effectively). The same reasoning is behind ASICs being more efficient than FPGAs (although the mechanism is different). Silicon resource that is not used 100% every cycle is a cost, but it is needed if you have to support configurable algorithms.
An algorithm which pushes the energy cost almost entirely into moving data through RAM will further reduce the benefit to be gained by doing the compute side efficiently.
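To illustrate the idea, here is a toy Python sketch of a sequential memory-hard construction, loosely modeled on scrypt's ROMix (function and parameter names are my own, and the buffer size is tiny for demonstration; a real parameterisation would use megabytes of state). The point is that the second phase reads the buffer in a data-dependent order, so an attacker cannot avoid keeping the whole buffer in fast memory, and the cost is dominated by memory traffic rather than by the hash cores:

```python
import hashlib

def romix_sketch(seed: bytes, n: int = 1024) -> bytes:
    """Toy sequential memory-hard mix (scrypt ROMix idea, simplified)."""
    # Normalise the seed to a fixed 32-byte block.
    x = hashlib.sha256(seed).digest()
    # Phase 1: fill n blocks, each derived from the previous one.
    v = []
    for _ in range(n):
        v.append(x)
        x = hashlib.sha256(x).digest()
    # Phase 2: n reads at data-dependent indices. Because each index
    # depends on the running state, the buffer cannot be streamed,
    # recomputed cheaply, or discarded: you pay for the RAM.
    for _ in range(n):
        j = int.from_bytes(x[:4], "little") % n
        mixed = bytes(a ^ b for a, b in zip(x, v[j]))
        x = hashlib.sha256(mixed).digest()
    return x
```

A fixed-function chip can make the SHA-256 step nearly free, but phase 2 still forces ~n random reads over the whole buffer, which is exactly the part dedicated hashing silicon does not speed up.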