That's not true either. As a silly counterexample, consider this algorithm: "Given a sequence of x86 assembly instructions, run them until they return, and tell me what's in the registers." You really think you can make an ASIC that's faster at that task than an x86 CPU?
Exactly. What matters is not the theoretical question but the practical one. Whether an ASIC is worth the development cost depends primarily on how big an advantage it would have over a CPU. If the algorithm were constructed such that CPUs were already nearly optimal at it (for example, if it required lots of branches and lots of memory), there would be no cost justification for developing an ASIC. Instead, miners would just use lots of CPUs.
I think if we had it to do over, we'd pick a mining algorithm that requires lots of memory and lots of branches. But I don't think it's at all possible to change things now.
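To make the idea concrete, here is a toy sketch of what a memory-hard, branch-heavy hash might look like. This is purely illustrative (the construction, function name, and parameters are made up for this example, not any real coin's algorithm): it fills a buffer with hash-chained words, then performs reads at data-dependent addresses with data-dependent branches, so a fast implementation needs the whole buffer in fast memory and can't easily predict its control flow.

```python
import hashlib

def memory_hard_hash(header: bytes, mem_words: int = 1 << 16) -> bytes:
    """Toy memory-hard, branch-heavy hash (illustrative only)."""
    # Phase 1: fill a buffer with a hash chain seeded by the header.
    buf = []
    h = hashlib.sha256(header).digest()
    for _ in range(mem_words):
        buf.append(h)
        h = hashlib.sha256(h).digest()

    # Phase 2: data-dependent reads and branches over the buffer.
    # The address of each read, and which branch is taken, depend on
    # the running accumulator, so they can't be known in advance.
    acc = h
    for _ in range(mem_words):
        idx = int.from_bytes(acc[:4], "big") % mem_words
        word = buf[idx]
        if acc[-1] & 1:  # data-dependent branch
            acc = hashlib.sha256(acc + word).digest()
        else:
            acc = hashlib.sha256(word + acc).digest()
    return acc
```

A real design (scrypt and its descendants take roughly this shape) would need much more care, but even this toy version shows why the advantage of custom silicon shrinks: the cost is dominated by memory capacity and unpredictable memory latency, which ASICs don't improve much over commodity hardware.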