ASICs can calculate hash functions very fast but cannot practically include much memory. CPUs are slower at calculating hash values but have access to a lot of memory.
The PoW algorithm here needs to demand separate memory for each hash function core in an ASIC; otherwise lots of cores could share a single memory.
If the algorithm requires lots of memory accesses, a single memory shared among many cores becomes a bottleneck.
This can be achieved with a pre-calculated list of random numbers, large enough that ASICs would have trouble allocating sufficient memory to each hash function core. Random jump indexing into the list is done by taking the initial hash value and repeatedly hashing it a number of times. The number of iterations should be large enough that memory access becomes a bottleneck for ASICs with many hash function cores, yet low enough that validation of blocks remains fast.
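A minimal sketch of that first idea in Python (the SHA-256 choice, the table contents and the XOR accumulator are placeholders of mine, not part of the proposal):

```python
import hashlib

def memory_walk_rehash(initial_hash: bytes, table: list[int], iterations: int) -> int:
    """First idea (superseded by the EDIT below): re-hash at every step and
    use each fresh hash value to pick the next position in the big table."""
    n = len(table)
    h = initial_hash
    acc = 0
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()           # one hash computation per step
        idx = int.from_bytes(h, "little") % n    # index derived from the fresh hash
        acc ^= table[idx]                        # forces a real memory read per step
    return acc
```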
EDIT: No, not hashing several times. The first index position is h MOD N, and the index jumping is done by XORing with (h / N) MOD N. In other words, the lowest part of the hash value gives the first index position, and another part of the hash value drives the random index jumping. That way a huge number of random jumps can be done with very little computation yet with loads of memory accesses. The list of random numbers can remain fixed, and using two separate parts of the hash value prevents shortcuts. (Unless I have overlooked something; I'm too lazy at the moment to think this through carefully.)
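A minimal sketch of one possible reading of the EDIT, again in Python; how the value read from the table feeds into the next index is my own assumption, since the post only fixes the starting index and the XOR key:

```python
def memory_walk_xor(initial_hash: bytes, table: list[int], jumps: int) -> int:
    """EDIT scheme (as I read it): only one hash is ever computed. Its low
    part gives the starting index, another part gives a fixed XOR key, and
    every jump costs just one table read plus one XOR."""
    n = len(table)
    h = int.from_bytes(initial_hash, "little")
    idx = h % n           # first index position: h MOD N
    key = (h // n) % n    # jump key: (h / N) MOD N
    acc = 0
    for _ in range(jumps):
        value = table[idx]          # the memory access, ideally a cache miss
        acc ^= value
        idx = (value ^ key) % n     # next index: a cheap XOR, no extra hashing
    return acc
```

The point of the sketch is the cost profile: the loop body is just an XOR and a table lookup, so with a table far larger than an ASIC's on-chip memory the time per jump is dominated by the memory access rather than by hashing.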