An easy way to make ASICs unprofitable is to design algorithms that require large memory buffers and whose performance is bound by memory bandwidth rather than arithmetic. ASICs provide the greatest benefit for arithmetic-bound algorithms and the least benefit for memory-bandwidth-bound ones. Combining a large memory buffer with random access patterns would give us a level playing field that evolves very slowly.

Today's GPUs have 200-300 GB/s of memory bandwidth, a figure that has only increased by small margins from generation to generation. GPUs are expected to get a sizable jump in bandwidth when technologies like die-stacked memory arrive in a few years, but after that, bandwidth growth will be very slow again. A large part of the complexity and cost of a GPU is its memory system, and it is only feasible to build because millions of GPUs are sold every week.

An algorithm that requires a hardware capability that is only cost-feasible in commodity devices manufactured in quantities of several million or more would push ASICs out completely, and keep them out for a very long time, perhaps indefinitely. It's one thing to fab an ASIC chip; it's another thing to couple it to a high-capacity, high-bandwidth memory system. If you design an algorithm that uses the "memory wall" as a fundamental feature, ASICs will be no better than any other hardware approach.
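To make the idea concrete, here is a minimal C sketch of the access pattern such an algorithm relies on. It follows the general shape of scrypt's ROMix (fill a large buffer, then read it back in a data-dependent order); the mix() function, buffer size, and round count are illustrative placeholders, not any real coin's parameters.

```c
/* Minimal sketch of a memory-hard, bandwidth-bound kernel in the spirit
 * of scrypt's ROMix. Not a real hash: mix() and the parameters are
 * placeholders for illustration. The point is that every iteration of
 * phase 2 does a data-dependent random read from a buffer far too large
 * to fit in on-chip SRAM, so throughput is limited by DRAM bandwidth no
 * matter how fast the arithmetic is. */
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

#define BUF_WORDS (4u * 1024 * 1024 / sizeof(uint64_t)) /* 4 MB buffer */
#define ROUNDS    BUF_WORDS                             /* one pass    */

/* Placeholder mixing step (xorshift-style); a real design would use a
 * cryptographic permutation such as ChaCha here. */
static uint64_t mix(uint64_t x) {
    x ^= x << 13; x ^= x >> 7; x ^= x << 17;
    return x;
}

uint64_t memory_hard(uint64_t seed) {
    uint64_t *v = malloc(BUF_WORDS * sizeof *v);
    if (!v) return 0;

    /* Phase 1: fill the buffer sequentially from the seed. */
    uint64_t x = seed;
    for (size_t i = 0; i < BUF_WORDS; i++) {
        x = mix(x);
        v[i] = x;
    }

    /* Phase 2: data-dependent random reads. The next index depends on
     * the current state, so reads cannot be predicted or prefetched,
     * and the whole buffer must sit in fast memory to go fast. */
    for (size_t i = 0; i < ROUNDS; i++) {
        size_t j = (size_t)(x % BUF_WORDS); /* unpredictable index */
        x = mix(x ^ v[j]);
    }

    free(v);
    return x;
}

int main(void) {
    printf("%016llx\n", (unsigned long long)memory_hard(42));
    return 0;
}
```

An ASIC running this loop gains almost nothing from faster logic: phase 2 stalls on each random DRAM access, so the winner is whoever has the most memory bandwidth, which is exactly what commodity GPUs already optimize for.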
Great post, and so true...
If they want a level playing field for mining, that should be the way to do it...
Best Regards,
LPC
Yeah, there are already coins that do this. YACoin was the first, and it currently takes 4 MB per thread to complete a calculation. That will rise to 8 MB on May 31st. All the other scrypt-chacha coins will get there eventually, but YAC is the trailblazer.
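For reference, scrypt's working set is 128 * r * N bytes, so the 4 MB figure corresponds to N = 2^15 with r = 1, and each N-factor step doubles it. A quick sketch of the progression (the formula is standard scrypt; the doubling schedule shown is illustrative, not YACoin's actual timetable):

```c
/* Back-of-the-envelope sketch of per-thread memory as scrypt's N
 * parameter doubles. Uses the standard scrypt working-set size of
 * 128 * r * N bytes with r = 1; the range of N shown is illustrative. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    const uint64_t r = 1;
    for (uint64_t n = 1u << 15; n <= 1u << 20; n <<= 1) {
        uint64_t bytes = 128 * r * n; /* size of scrypt's V array */
        printf("N = %7llu -> %4llu MB per thread\n",
               (unsigned long long)n,
               (unsigned long long)(bytes >> 20));
    }
    return 0;
}
```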