Neither of us knows the answer to this question. There is a chance that it's stronger. But there's also a very real chance that the requirement to find a cycle, instead of a simple collision, means the approach yields to an attack with sublinear memory requirements and, in fact, underperforms Momentum in its memory requirements.
We know finding two incident edges in a graph is (way) easier than finding a 42-cycle.
So I don't know what you mean by underperform.
I mean that the statement you just made is not necessarily true! Let's try phrasing it as a question:
Given a fixed hash rate budget and desired rate of finding blocks,
does it require more or less memory to solve the problem of finding a single colliding pair in a large sparse graph,
or to find a length-L cycle in a smaller, more dense graph?
Note the difference between those two problems: in order to have a non-negligible probability of finding a long cycle, your problem requires a much denser graph than a problem such as Momentum. Furthermore, it induces a relationship between the elements in the graph that is not present in Momentum. In Momentum, you can analyze the problem approximately by looking at independent collision probabilities. That's not necessarily the case in Cuckoo Cycle.
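To make the Momentum side concrete, here is a minimal sketch of that independent-collision calculation (B and k are made-up example values, not Momentum's actual parameters): each pair of hashes collides independently, so the expected number of colliding pairs is just C(k, 2) / B.

[code]
from math import comb

# Rough illustration of the "independent collision probability" point for a
# Momentum-style problem: each pair of hash outputs collides independently
# with probability 1/B, so the expected number of colliding pairs after k
# hash evaluations is simply C(k, 2) / B.  B and k are arbitrary examples.
B = 2 ** 26                       # size of the hash output space (example)
for k in (2 ** 12, 2 ** 13, 2 ** 14):
    print(f"k = {k:5d} hashes -> expected colliding pairs ~ {comb(k, 2) / B:.2f}")
[/code]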
This is why I keep mentioning sampling. Consider this statement:
Given L=42 in the cuckoo cycle problem with N < 2^32,
if a graph G contains an L-cycle,
a random sample of x% of the edges in the graph will contain at least two edges in that L-cycle with probability > 999,999 / 1,000,000.
Math edit: Replaced 10% with x% -- there is some x for which this is true.
You can go check the math yourself - the number of cycle edges in the sample is just a binomial with n = 42 and p = x%.
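For instance, modelling the sample as including each edge independently with probability p = x%, a few lines of Python show that x = 10% only gets you to about 0.93, while somewhere around x = 35% the probability clears the 1 - 10^-6 bar (which is consistent with the math edit above):

[code]
# Quick check of the sampling claim above, modelling the sample as including
# each edge independently with probability p = x%.  The number of 42-cycle
# edges captured is then Binomial(42, p), and we want P(at least 2) to exceed
# 1 - 10^-6.
L = 42

def p_at_least_two(p, L=L):
    miss_all = (1 - p) ** L                   # no cycle edge sampled
    miss_one = L * p * (1 - p) ** (L - 1)     # exactly one cycle edge sampled
    return 1.0 - miss_all - miss_one

for pct in (10, 20, 30, 35, 50):
    print(f"x = {pct:2d}%: P(>= 2 cycle edges in sample) = "
          f"{p_at_least_two(pct / 100):.9f}")
[/code]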
Does this mean anything? I don't know -- but it tells me *immediately* that the problem requires a different kind of analysis than what you would do for Momentum! For example, what if you started by sampling lg(n) edges and then growing and pruning that set through 42 rounds? With some reasonable probability, you'd extend into one of the cycles. Not guaranteed, but it doesn't have to be guaranteed - I can always start over.
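A minimal, purely illustrative sketch of that sample-then-grow-and-prune loop (a toy random bipartite graph stands in for real siphash-generated edges, and there is no claim that this finds 42-cycles efficiently or with little memory):

[code]
import math
import random
from collections import defaultdict

def toy_graph(n_edges, n_nodes, seed=0):
    # Toy bipartite edge list; real Cuckoo Cycle derives edges from siphash.
    rng = random.Random(seed)
    return [(("u", rng.randrange(n_nodes)), ("v", rng.randrange(n_nodes)))
            for _ in range(n_edges)]

def prune(edges):
    # Repeatedly drop edges with an endpoint of degree < 2: such edges
    # cannot lie on any cycle.
    while True:
        deg = defaultdict(int)
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        kept = {e for e in edges if deg[e[0]] >= 2 and deg[e[1]] >= 2}
        if kept == edges:
            return kept
        edges = kept

def sample_grow_prune(all_edges, rounds=42, seed=1):
    rng = random.Random(seed)
    # Seed with roughly lg(n) randomly sampled edges.
    work = set(rng.sample(all_edges, max(1, int(math.log2(len(all_edges))))))
    for _ in range(rounds):
        touched = {node for e in work for node in e}
        # Grow: pull in every edge incident to the current node set (in the
        # real setting these would be re-derived from the hash, not stored),
        # then prune back to cycle-capable edges.
        work = prune(work | {e for e in all_edges
                             if e[0] in touched or e[1] in touched})
        if not work:
            break          # dead end; as noted above, just start over
    return work

edges = toy_graph(n_edges=4096, n_nodes=2048)
print(len(sample_grow_prune(edges)), "edges survive grow-and-prune")
[/code]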
Sure, but again: The best way to design a strong algorithm in a security context is not to generate ideas until you find one you cannot disprove. It's to start from things that are known to be hard and to base the security of your algorithm upon the hardness of those things.
I feel that unnecessarily limits the PoW landscape, and could have left prime-based PoWs out in the cold as well.
Not at all. The primes have an entirely different goal. The point is that you're advertising cuckoo cycle as having a certain set of desirable properties. Someone shopping for a PoW wants a certain set of properties - some people want the social/scientific good of primes. Some people want memory. Some people want fuzzy teddy bears. (grin). For the advertised properties, there needs to be a stronger proof.
I'm not trying to prescribe which properties any coin should want from a PoW. That's an entirely different question.
I've already told you how to modify the algorithm I explained to use 1/2 bit per nonce with a 4x slowdown. The idea generalizes to using 1/k of the memory for a more than k^2 slowdown. This is a good tradeoff, but it's not the be-all and end-all.
"(There is a way to slightly reduce the memory demands: Split the bit vector in half, and record the live bits only for the first half across multiple iterations of the test, using the full second half. Then store this bit vector in a compact way, and perform the computation on the second half. This doesn't extend very far past a factor of two reduction, but it does require only a roughly linear increase in the amount of computation and could possibly get the storage needed down to about 300MB for 2^32.)"
Yes. It's non-trivial to "store this bit vector in a compact way" but it can be done. See the SDSL link I provided, for example.
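For a back-of-the-envelope sense of how much compact storage can buy (an information-theoretic bound, not SDSL's actual overhead; the 2^32 size and live fractions are just example numbers):

[code]
from math import log2

# Rough illustration of why compactly storing a sparse live-nonce bit vector
# pays off: a plain bitmap over 2^32 nonces is 512 MiB regardless, while a
# compressed representation (e.g. the sparse bit vectors in SDSL) needs
# roughly log2(C(M, m)) ~ M * H(m/M) bits once only a fraction m/M of the
# nonces are still live.
def H(p):
    # Binary entropy in bits.
    return -p * log2(p) - (1 - p) * log2(1 - p)

M = 1 << 32                                   # nonces, one bit each when plain
for frac in (0.5, 0.25, 0.10, 0.05):
    bound_mib = M * H(frac) / 8 / 2 ** 20     # information-theoretic lower bound
    print(f"{frac:4.0%} live: plain 512 MiB, compact bound ~ {bound_mib:4.0f} MiB")
[/code]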