
Topic: Cuckoo Cycle Speed Challenge; $2500 in bounties

legendary
Activity: 990
Merit: 1108
In that post I showed a comparison between the reference and TMTO implementations at memory parity, which showed a factor 70 slowdown and a solution finding rate reduced by a factor 1-1/e (since sampling a fraction 1/42 of all vertices as starting points for the depth-21 breadth-first search misses all 42 cycle nodes with probability (1-1/42)^42 ≈ 1/e).
Discounting for the lower solution finding rate, the TMTO implementation was 111 times slower.
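
Spelling out the discount arithmetic:

Code:
(1 - 1/42)^42  ≈ 1/e ≈ 0.368       -- chance the sample misses every cycle node
1 - 1/e        ≈ 0.632             -- relative solution finding rate
70 / (1 - 1/e) ≈ 70 / 0.632 ≈ 111  -- discounted slowdown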

I tried another approach, which turns out to be superior. First of all, assume that the graph we're working on has a unique 42-cycle, which we hope to find by sampling a small fraction of nodes and exploring their local neighbourhoods with a depth-limited breadth-first search (BFS).

Instead of doing a depth-21 BFS, which (roughly) requires starting from a node on the 42-cycle, we can do a much deeper search, say to depth 42. Then we'll find the 42-cycle as long as we start from any node within distance 21 of it, which seems much more likely.
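
To put "much more likely" in numbers: if the distance-21 neighbourhood of the cycle contains m nodes (m is graph-dependent, but typically far more than the 42 cycle nodes themselves), then a starting sample covering a fraction p of all vertices succeeds with probability

Code:
1 - (1 - p)^m    -- depth-42 search: any node within distance 21 will do
1 - (1 - p)^42   -- depth-21 search: need a node on the cycle itself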

Apart from doubling the BFS depth, there is another difference. In the former approach, we only care about nodes on the cycle, and can eliminate any that are known to be off it, such as those with only one incident edge. In the latter approach, we cannot eliminate these. So the BFS tree will be much larger relative to the starting set, not only due to the doubled depth, but also due to the reduced elimination.

To keep space usage down, we thus need to shrink the starting sets even more.
I ran some experiments which indicate that the increased likelihood of finding the cycle more than makes up for having smaller starting sets.

The updated tomato_miner.h can now use k times less memory with a (discounted) slowdown of roughly 25*k, improving on the earlier rough k*L = 42*k estimate below. Further details will appear in the whitepaper, in a week or two.

With this progress on the time-memory trade-off, the partial bounty payout, and the relaxation of the slowdown, I'm reducing the TMTO bounty from $1000 to $500.
legendary
Activity: 990
Merit: 1108
This miner will be available as tomato_miner.h / tomato_miner.cpp
I will experiment with this some more before updating the TMTO bounty...

Weird, the post I was quoting has disappeared entirely.

In that post I showed a comparison between the reference and TMTO implementations at memory parity, which showed a factor 70 slowdown and a solution finding rate reduced by a factor 1-1/e (since sampling a fraction 1/42 of all vertices as starting points for the depth-21 breadth-first search misses all 42 cycle nodes with probability (1-1/42)^42 ≈ 1/e).
Discounting for the lower solution finding rate, the TMTO implementation was 111 times slower.

For completeness, I include below the output of the reference implementation running on 2 threads,
showing a factor 61 difference (rather than the factor 70 single-threaded).

-John

Code:
> time ./cuckoo25.1 -t 2  -h 157
Looking for 42-cycle on cuckoo25("157") with 50% edges, 11 trims, 2 threads
Using 2MB edge and 2MB node memory.
initial load 6400%
round 1 part U0 load 5222%
round 1 part U1 load 4045%
round 1 part V0 load 2970%
round 1 part V1 load 1895%
round 2 part U0 load 1508%
round 2 part U1 load 1121%
round 2 part V0 load 934%
round 2 part V1 load 746%
round 3 part U0 load 641%
round 3 part U1 load 535%
round 3 part V0 load 469%
round 3 part V1 load 403%
round 4 part U0 load 359%
round 4 part U1 load 315%
round 4 part V0 load 284%
round 4 part V1 load 253%
round 5 part U0 load 230%
round 5 part U1 load 208%
round 5 part V0 load 191%
round 5 part V1 load 174%
round 6 part U0 load 161%
round 6 part U1 load 147%
round 6 part V0 load 137%
round 6 part V1 load 127%
round 7 part U0 load 118%
round 7 part U1 load 110%
round 7 part V0 load 103%
round 7 part V1 load 96%
round 8 part U0 load 91%
round 8 part U1 load 85%
round 8 part V0 load 80%
round 8 part V1 load 76%
round 9 part U0 load 72%
round 9 part U1 load 68%
round 9 part V0 load 64%
round 9 part V1 load 61%
round 10 part U0 load 58%
round 10 part U1 load 55%
round 10 part V0 load 53%
round 10 part V1 load 50%
round 11 part U0 load 48%
round 11 part U1 load 46%
round 11 part V0 load 44%
round 11 part V1 load 42%
   4-cycle found at 1:90%
 166-cycle found at 0:98%
  42-cycle found at 1:98%
  76-cycle found at 1:99%
Solution 223e7 56b1b 86b2f cb0a2 106629 233b7e 253ee1 290e4e 2e15cc 2efe4b 32bc8e 356dfd 35ef96 3dd854 534d37 57ee85 5abaa1 5d7d49 60ca66 662933 718651 78554b 7d8e01 7f7360 886c55 8a6448 8e4fd2 9674a2 98e431 b00d71 b16050 b1b561 b362d4 b9e539 ca3ab0 cb4f28 d2a53b d53bc3 e213cc e40bf1 ed5b2a faef36

real    0m6.957s
user    0m10.709s
sys     0m0.077s
legendary
Activity: 990
Merit: 1108
This miner will be available as tomato_miner.h / tomato_miner.cpp
I will experiment with this some more before updating the TMTO bounty...

The tomato miner is now up, and bounty miner is restored as a copy of cuckoo miner.

The TMTO bounty in the OP has been updated, allowing an extra factor 10 of slowdown.

Below is the slightly improved output of the tomato miner, shown running almost twice as fast by using 2 threads.

Happy continued bounty hunting!

-John

Code:
> time ./tomato25 -t 2 -h 157
Looking for 42-cycle on cuckoo25("157") with 50% edges, 1/64 memory, 24/512 parts, 2 threads
Using 4MB node memory.
vpart 0 depth 21 load 78%
   4-cycle found at 1:5
   4-cycle found at 1:6
   4-cycle found at 1:7
   4-cycle found at 1:8
   4-cycle found at 1:9
   4-cycle found at 1:10
   4-cycle found at 1:11
   4-cycle found at 1:12
   4-cycle found at 1:13
   4-cycle found at 1:14
   4-cycle found at 1:15
   4-cycle found at 1:16
   4-cycle found at 1:17
   4-cycle found at 1:18
   4-cycle found at 1:19
   4-cycle found at 1:20
vpart 1 depth 21 load 78%
vpart 2 depth 21 load 78%
vpart 3 depth 21 load 75%
vpart 4 depth 21 load 77%
vpart 5 depth 21 load 76%
vpart 6 depth 21 load 78%
vpart 7 depth 21 load 76%
vpart 8 depth 21 load 77%
vpart 9 depth 21 load 79%
vpart 10 depth 21 load 78%
vpart 11 depth 21 load 76%
vpart 12 depth 21 load 76%
vpart 13 depth 21 load 77%
vpart 14 depth 21 load 76%
vpart 15 depth 21 load 78%
vpart 16 depth 21 load 78%
vpart 17 depth 21 load 78%
vpart 18 depth 21 load 78%
vpart 19 depth 21 load 75%
vpart 20 depth 21 load 77%
vpart 21 depth 21 load 75%
vpart 22 depth 21 load 76%
  42-cycle found at 1:20
vpart 23 depth 21 load 79%
Solution 223e7 56b1b 86b2f cb0a2 106629 233b7e 253ee1 290e4e 2e15cc 2efe4b 32bc8e 356dfd 35ef96 3dd854 534d37 57ee85 5abaa1 5d7d49 60ca66 662933 718651 78554b 7d8e01 7f7360 886c55 8a6448 8e4fd2 9674a2 98e431 b00d71 b16050 b1b561 b362d4 b9e539 ca3ab0 cb4f28 d2a53b d53bc3 e213cc e40bf1 ed5b2a faef36

real    7m5.287s
user    12m27.616s
sys     0m2.345s
newbie
Activity: 51
Merit: 0
I remember seeing this algo suggested @ Nyancoin's reddit several months ago. Thought nobody would give it a try, and now there is even a nice bounty to make it work..!
legendary
Activity: 990
Merit: 1108
For each level of the breadth, use one O(N) pass through the entire edge set by generating the edges and matching them against the leading edge nodes in the BFS tree.

This description alone is effective for edge trimming, but building on this for directly detecting 42-cycles requires more careful specification of how to handle the BFS graph representation and what to do about cycles in it.

Hmm, seems we were overlooking the obvious.

You can just feed all the edges generated in each pass (having one endpoint already present in subset P, or as a key in cuckoo_hash) to my cycle finder (lines 357-390 of cuckoo_miner.h, tweaked to ignore duplicates/2-cycles). It doesn't care where the edges come from or where they fall between successive D_i layers. It will even happily report cycles that bounce back and forth multiple times between layers.

This should be relatively easy to code...

Having a rough version done, it looks like the slowdown for saving a factor k in memory is roughly k times L, where L is the cycle length. So with L = 42, using even half the memory of the edge-trimming implementation (k = 2) costs a factor of about 84, two orders of magnitude slowdown. If this is the best one can do, then Cuckoo Cycle is still memory-hard in a practical sense, with cycle length becoming a memory-hardness parameter.

I'll have more detailed numbers after fine-tuning the implementation...
dga
hero member
Activity: 737
Merit: 511
- Using the equivalent of 7 iterations of edge trimming
- N= 2^28, E=2^27
- Using k=sqrt(N), requiring roughly 2^14 *words* of memory (so sqrt(N) * log(N))
- Processes in O(2^28 * 2^14) steps
- Reduces the graph down to about 2.5% of its original size. <-- requires 0.025·N·log N words to represent, of course...

The current code already trims the edges down to about 1.6%, in order to allow the cycle finding to run in the same memory footprint as the edge trimming. To run the cycle finding in only 2^14 words, you'd have to trim WAY more?!

Quote
The way in which it's not a perfectly linear TMTO is that I have to go to a word representation of the graph, not the bitvector I introduced earlier.  It's a little more nuanced than that, but this is the core.  

That would make the situation very similar to that of Momentum, where the linear TMTO applies to the original implementation using N=2^26 words, but not to your trimming version using N+2N/k bits (assuming it recomputes SHA hashes instead of storing them).

Quote
I'll write the whole thing up, share my PoC source code, and if you feel like throwing me some of the bounty, cool.

For a linear TMTO of my original non-trimming Cuckoo Cycle implementation I will offer a $666 bounty.


I'll take you up on that. :)  (BTC address is in my signature).  It seemed worth doing the theory bits more solidly before trying to optimize a potentially asymptotically-wrong, or even big-constants-wrong, algorithm.  And I appreciate you extending the bounty to include that.

While nothing's perfect, there are a lot of coins that could learn from the way you're approaching the development of your PoW ideas.  Or maybe they shouldn't -- there are a few CPU/GPU/etc. hackers who might go out of business. ;-)

(btw, I re-tested with 8 iterations, and it, like basic edge trimming, reduces the set to about 1.8%.  I'm on battery, so I didn't want to test 9.  Dunno what I was thinking coding this in Julia.  So this is an effective TMTO down to 1.8% compared to the original.  Not perfect, but not too bad, either -- certainly enough to let it run in SRAM.)
legendary
Activity: 990
Merit: 1108
For each level of the breadth, use one O(N) pass through the entire edge set by generating the edges and matching them against the leading edge nodes in the BFS tree.

This description alone is effective for edge trimming, but building on this for directly detecting 42-cycles requires more careful specification of how to handle the BFS graph representation and what to do about cycles in it.

Hmm, seems we were overlooking the obvious.

You can just feed all the edges generated in each pass (having one endpoint already present in subset P, or as a key in cuckoo_hash) to my cycle finder (lines 357-390 of cuckoo_miner.h, tweaked to ignore duplicates/2-cycles). It doesn't care where the edges come from or where they fall between successive D_i layers. It will even happily report cycles that bounce back and forth multiple times between layers.

This should be relatively easy to code...
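
As a rough illustration of the plumbing, here is a minimal C++ sketch. All names are stand-ins: sipnode() abbreviates the real siphash-based endpoint generator, and recordedge() stands in for the actual cycle finder of cuckoo_miner.h (lines 357-390); this is not the real code, just the shape of it.

Code:
#include <cstdint>
#include <unordered_set>
#include <utility>
#include <vector>

typedef uint32_t node_t;

// placeholder endpoint generator; the real one is siphash-2-4 keyed by the header
static node_t sipnode(uint32_t nonce, int uorv) {
  uint64_t h = (uint64_t)nonce * 0x9e3779b97f4a7c15ULL + (uint64_t)uorv; // NOT siphash
  return (node_t)(h >> 32) & ((1u << 25) - 1);                           // 2^25 nodes per side
}

static std::vector<std::pair<node_t,node_t> > edges; // stand-in for the finder's state

// stand-in for the cycle finder, tweaked to skip duplicate edges and 2-cycles
static void recordedge(node_t u, node_t v) {
  edges.push_back(std::make_pair(u, v)); // real code extends paths and reports 42-cycles
}

// one pass: regenerate every edge and forward those touching the BFS tree so far;
// the finder doesn't care which D_i layers an edge connects
void feed_edges(const std::unordered_set<node_t>& reached, uint32_t nedges) {
  for (uint32_t n = 0; n < nedges; n++) {
    node_t u = sipnode(n, 0), v = sipnode(n, 1);
    if (reached.count(u) || reached.count(v))
      recordedge(u, v);
  }
}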
legendary
Activity: 990
Merit: 1108
This description alone is effective for edge trimming, but building on this for directly detecting 42-cycles requires more careful specification of how to handle the BFS graph representation and what to do about cycles in it.

Indeed. I checked that the Yuster/Zwick paper doesn't add anything relevant for us over the Monien paper (it extends results to dense graphs), so I'm studying Monien's "How to Find Long Paths Efficiently" paper now...

Quote
btw - I wasn't actually clear on the definition of the CC PoW in one way:  Is a valid proof any sequence of unique edges that form a cycle, or must the cycles be completely node-disjoint as well?

As the OP mentions, the verification checks that each node is incident to exactly two edges, so yes, the cycle must be node-disjoint.
dga
hero member
Activity: 737
Merit: 511
Now that I've had a bit of time to digest it, it strikes me that a better way of phrasing my algorithm might be this:

Select |P| initial nodes.

Begin a breadth-first-search from each node n in P.

Upon reaching a terminal node (one with only one incident edge), remove the edge to that node, recursing as needed if that removal causes another node to become terminal, and so on.  If a node in P itself becomes terminal, remove it from the set and remove the outbound BFS chain from it.

For each level of the breadth, use one O(N) pass through the entire edge set by generating the edges and matching them against the leading edge nodes in the BFS tree.

This description alone is effective for edge trimming, but building on this for directly detecting 42-cycles requires more careful specification of how to handle the BFS graph representation and what to do about cycles in it.
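
A minimal sketch of that per-level scheme, with the same caveat that sipnode() is a placeholder for the real siphash-based edge generator and the terminal-node pruning is only indicated, not implemented:

Code:
#include <cstdint>
#include <unordered_set>
#include <utility>

typedef uint32_t node_t;

// placeholder endpoint generator, NOT the real siphash-2-4
static node_t sipnode(uint32_t nonce, int uorv) {
  uint64_t h = (uint64_t)nonce * 0x9e3779b97f4a7c15ULL + (uint64_t)uorv;
  return (node_t)(h >> 32) & ((1u << 25) - 1);
}

// grow the BFS tree from starting subset P; each level costs one O(N) pass
// over the regenerated edge set, matched against the current frontier
std::unordered_set<node_t> bfs_levels(const std::unordered_set<node_t>& P,
                                      uint32_t nedges, int depth) {
  std::unordered_set<node_t> frontier = P;   // D_0 = P
  for (int level = 1; level <= depth; level++) {
    std::unordered_set<node_t> next;         // becomes D_level
    for (uint32_t n = 0; n < nedges; n++) {  // one full edge-set pass
      node_t u = sipnode(n, 0), v = sipnode(n, 1);
      if (frontier.count(u)) next.insert(v);
      if (frontier.count(v)) next.insert(u);
    }
    // terminal-node removal goes here: drop edges to degree-1 nodes, recursing
    // back up the chain, and discard nodes of P that themselves become terminal
    frontier = std::move(next);
  }
  return frontier;
}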

btw - I wasn't actually clear on the definition of the CC PoW in one way:  Is a valid proof any sequence of unique edges that form a cycle, or must the cycles be completely node-disjoint as well?
dga
hero member
Activity: 737
Merit: 511
Thanks, Dave. I appreciate the quick release. Will start going over it tonight.

You're on to something here!

In fact, I think you may be able to avoid the original union-find-like algorithm,
and try to recognize 42-cycles starting from any D21 set.
If a node in D21 has 2 or more neighbours in D20, then you can work
your way back and try to find disjoint paths back to the same starting node in P.
(It suffices to check disjointness within each D_i.)


Must zzz, but yeah - that's kind of the approach I was trying to figure out how to express/implement when I talked about a sampling-based approach working for this problem.  (See the last paragraph of my initial review post.)  It took me a long time to wrap my head around it, and I think you just put it better than I was able to, but it's that core idea of being able to start from a subset, prune it down, and then expand out and try to find cycles that participate in that subset.

If your subset is 20% of the nodes, for example, you're pretty likely to find the 42 cycle if it exists.
legendary
Activity: 990
Merit: 1108
Thanks, Dave. I appreciate the quick release. Will start going over it tonight.

You're on to something here!

In fact, I think you may be able to avoid the original union-find-like algorithm,
and try to recognize 42-cycles starting from any D21 set.
If a node in D21 has 2 or more neighbours in D20, then you can work
your way back and try to find disjoint paths back to the same starting node in P.
(It suffices to check disjointness within each D_i.)

This approach sounds pretty similar to what Monien and Yuster/Zwick were doing,
so I'm going to have to go back and study those papers in detail.

Some more notes:

If you have for instance |P| = N/1000, then it's not wise to try all 1000 subsets P.
If there is a 42-cycle then you have a good chance of having one of its nodes in
one of the first 100 subsets.
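
Roughly, treating the 42 cycle nodes as independent samples: the first 100 subsets cover a fraction 1/10 of all nodes, so

Code:
Pr[no cycle node in first 100 subsets] ≈ (1 - 1/10)^42 ≈ 0.012

i.e. about a 99% chance of a hit long before exhausting all 1000 subsets.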

It's going to be interesting to analyze whether a bunch of ASICs implementing this approach
is going to outperform the reference implementation on an FPGA plus a few hundred DRAM chips, taking into account both fabrication costs and power usage of all components involved.

I remain hopeful that the constant factor overhead of this TMTO may preserve CC's ASIC resistance.
legendary
Activity: 990
Merit: 1108
http://www.cs.cmu.edu/~dga/crypto/cuckoo/analysis.pdf

Very hackish proof-of-concept:

http://www.cs.cmu.edu/~dga/crypto/cuckoo/partitioned.jl

I'm releasing this well in advance of what I would normally do for academic work because I think it's worth pushing the discussion of proof-of-work functions forward fast -- crypto moves too fast these days -- but please be aware that it's not done yet and requires a lot more polish.  The bottom line is as I suggested above:  It successfully TMTO's but still requires something like 1-3% of the "full" graph space, because that's what gets passed to the solver after edge trimming.  I don't think this is the final word in optimizing for cuckoo cycle -- I believe there are some further nifty optimizations possible, though I think this gets the first primary attack vector down.

John, I'd love your feedback on whether this is clear and/or needs help, or if you find some of the handwavy O(N) parts too vague to constitute a real threat.  I plan to clean it up more, but, as noted, I figured that it's better to swallow my perfectionism and push the crappy version out there faster to get the dialogue rolling, since I don't have coauthors on it to push it in the right direction. :)

Feedback appreciated!

Thanks, Dave. I appreciate the quick release. Will start going over it tonight.
dga
hero member
Activity: 737
Merit: 511
http://www.cs.cmu.edu/~dga/crypto/cuckoo/analysis.pdf

Very hackish proof-of-concept:

http://www.cs.cmu.edu/~dga/crypto/cuckoo/partitioned.jl

I'm releasing this well in advance of what I would normally do for academic work because I think it's worth pushing the discussion of proof-of-work functions forward fast -- crypto moves too fast these days -- but please be aware that it's not done yet and requires a lot more polish.  The bottom line is as I suggested above:  It successfully TMTO's but still requires something like 1-3% of the "full" graph space, because that's what gets passed to the solver after edge trimming.  I don't think this is the final word in optimizing for cuckoo cycle -- I believe there are some further nifty optimizations possible, though I think this gets the first primary attack vector down.

John, I'd love your feedback on whether this is clear and/or needs help, or if you find some of the handwavy O(N) parts too vague to constitute a real threat.  I plan to clean it up more, but, as noted, I figured that it's better to swallow my perfectionism and push the crappy version out there faster to get the dialogue rolling, since I don't have coauthors on it to push it in the right direction. :)

Feedback appreciated!

(The weakest part of the analysis, btw, is the growth rate of the dictionaries used to track the other live nodes.  With 7 iterations of edge trimming, for example, they actually grow slightly larger than the original node set in a partition, but less than 2x its size in some spot checks.  I need to think more carefully about how that affects some of the asymptotic factors.)

As an example of the latter, with a partition initially containing 16384 nodes:

Code:
609 live nodes at start of iteration 6
Size of hop 2 dictionary: 3140
Size of hop 3 dictionary: 4940
Size of hop 4 dictionary: 6841
Size of hop 5 dictionary: 8873

That's 23794 total dictionary entries, or 1.45x the initial partition size, and at iteration 7, it's grown to 26384, or 1.61x.  It's not an exponential explosion, so I'm not worried about it invalidating the major part of the result, but it's the place where I or someone else should focus some algorithmic attention to reduce.

Update 2:  To run the Julia code, install Julia, and then type:
Code:
include("partitioned.jl")

It's glacially slow.  For the impatient, reduce the problem size from 2^27 to something smaller first, like 2^20, and adjust the partition size accordingly.  (Note:  I use "N" to mean the size of one half of the bipartite graph, whereas John's formulation uses it to include both, so n=2^27 is equivalent to John's 2^28.)

Update 3:  Herp derp.  That dictionary growth was my stupidity - I forgot to exclude the edge back to the node that caused the insertion in the first place, so it's got a little bit of accidental exponential growth.  I'll fix that tomorrow.  That should get it back closer to the linear scaling I'd expected.

Update 4:  Fixed that above silliness with adding back inbound edges.  Much improved dictionary size, now in line with what it should have been:

Code:
647 live nodes at start of iteration 6
Size of hop 2 dictionary: 2803
Size of hop 3 dictionary: 3554
Size of hop 4 dictionary: 4152
Size of hop 5 dictionary: 4404

By the end of iteration 7, the sum of all dictionaries (for that run) was 15797, slightly less than the number of nodes in the original partition, so at least through 7 iterations the space for dictionaries remains O(|P| log N).  Empirically speaking only, of course, since I haven't really done that analysis as it needs to be done.
The Julia code file on the website has been updated.
dga
hero member
Activity: 737
Merit: 511
Quote
... wish there were some venue to publish this stuff in. :)

You could do as I did and publish on the Cryptology ePrint Archive. That's good for exposure but, lacking peer review, not so good for academic merit :-(

The problem is there's nothing like Nature when it comes to cryptography (and certainly nothing like PubMed). Then again, compared to the medical sciences, this is an industry that is still very much in its infancy.

Well - there are lots of academic conferences.  I may toss it to one, but I suspect I'll throw it to arXiv or crypto ePrint.

The challenge that I see with it is that the CC paper wasn't published in a peer-reviewed venue, which makes it harder to publish something following on from it.  The reason I jumped back on CC is that people in the cryptocurrency space are expressing a lot of interest in it, and that raises the importance of reviewing it, but there aren't too many academics who follow bitcointalk, or who think that something becomes important when major Bitcoin devs are closely following it.  Ahh well.  It's fun stuff in any event. :)

donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
Quote
... wish there were some venue to publish this stuff in. :)

You could do as I did and publish on the Cryptology ePrint Archive. That's good for exposure but, lacking peer review, not so good for academic merit :-(

The problem is there's nothing like Nature when it comes to cryptography (and certainly nothing like PubMed). Then again, compared to the medical sciences, this is an industry that is still very much in its infancy.
legendary
Activity: 990
Merit: 1108
- Using the equivalent of 7 iterations of edge trimming
- N= 2^28, E=2^27
- Using k=sqrt(N), requiring roughly 2^14 *words* of memory (so sqrt(N) * log(N))
- Processes in O(2^28 * 2^14) steps
- Reduces the graph down to about 2.5% of its original size. <-- requires 0.025·N·log N words to represent, of course...

The current code already trims the edges down to about 1.6%, in order to allow the cycle finding to run in the same memory footprint as the edge trimming. To run the cycle finding in only 2^14 words, you'd have to trim WAY more?!
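
To see the gap:

Code:
0.025 * 2^28 ≈ 6.7e6 words  -- trimmed graph, at one word per surviving node
2^14         ≈ 1.6e4 words  -- edge-trimming working memory
ratio        ≈ 410x

so the trimmed graph is still roughly 400 times larger than the trimming footprint itself.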

Quote
The way in which it's not a perfectly linear TMTO is that I have to go to a word representation of the graph, not the bitvector I introduced earlier.  It's a little more nuanced than that, but this is the core.  

That would make the situation very similar to that of Momentum, where the linear TMTO applies to the original implementation using N=2^26 words, but not to your trimming version using N+2N/k bits (assuming it recomputes SHA hashes instead of storing them).

Quote
I'll write the whole thing up, share my PoC source code, and if you feel like throwing me some of the bounty, cool.

For a linear TMTO of my original non-trimming Cuckoo Cycle implementation I will offer a $666 bounty.
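
(One way to read that figure against the generalized scale above, my arithmetic: a slowdown of O(k log N) fits E = 1.5, since for k >= log^2(N) we have log(N) <= sqrt(k), hence k*log(N) <= k^1.5 <= 1.5*k^1.5, and $1000/1.5 ≈ $666.)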

Quote
... wish there were some venue to publish this stuff in. :)

You could do as I did and publish on the Cryptology ePrint Archive. That's good for exposure but, lacking peer review, not so good for academic merit :-(
dga
hero member
Activity: 737
Merit: 511
I'd be happy to generalize the bounty as follows:

$1000/E for an open source implementation that uses at most N/k bits
while running up to 1.5*k^E times slower, for any k>=2 and E>=1.

Or is that still too strict for your taste?

Yes, but N/k bits with an O(k log N) slowdown.  Proof-of-concept implementation with no mind at all paid to efficiency, but showing clearly the attack vector and its resulting complexity.

In other words - close to linear, but not quite.

I see. That would indeed be very interesting, even if not a practical attack. But to be precise, is there a dependence on k in the hidden constant? Can you set k equal to log(N), or sqrt(N)?

k must be > log(n), or the constants lose out.  Anything >= log^2(n) is fine.

Obviously, because it's only acting on edge trimming, there's a lower bound on the size requirement determined by the cycles.  In essence, it boils down to #nodes-involved-in-paths-of-length-L * log(N), where L is the number of edge-trimming steps you're willing to pay for.  From our prior discussion about that, that's in the tens of thousands of nodes for N=2^26.
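
Taking "tens of thousands" at face value, that floor is tiny in absolute terms -- my arithmetic, assuming log(N) = 26 bits per node at N = 2^26:

Code:
5e4 nodes * 26 bits ≈ 1.3e6 bits ≈ 160 KB

which is comfortably SRAM-sized.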

Here's a pretty concrete example:

- Using the equivalent of 7 iterations of edge trimming
- N= 2^28, E=2^27
- Using k=sqrt(N), requiring roughly 2^14 *words* of memory (so sqrt(N) * log(N))
- Processes in O(2^28 * 2^14) steps
- Reduces the graph down to about 2.5% of its original size. <-- requires 0.025·N·log N words to represent, of course...

The way in which it's not a perfectly linear TMTO is that I have to go to a word representation of the graph, not the bitvector I introduced earlier.  It's a little more nuanced than that, but this is the core.  

I'm writing the proof of concept in Julia because I wanted to learn a new language, and so it's glacially slow because I'm a newbie to it.  I can discuss high-performance implementation strategies for it, of course, and I believe it's a pretty ASIC-friendly algorithm.

I'll write the whole thing up, share my PoC source code, and if you feel like throwing me some of the bounty, cool.

... wish there were some venue to publish this stuff in. :)
legendary
Activity: 990
Merit: 1108
I'd be happy to generalize the bounty as follows:

$1000/E for an open source implementation that uses at most N/k bits
while running up to 1.5*k^E times slower, for any k>=2 and E>=1.

Or is that still too strict for your taste?

Yes, but N/k bits with an O(k log N) slowdown.  Proof-of-concept implementation with no mind at all paid to efficiency, but showing clearly the attack vector and its resulting complexity.

In other words - close to linear, but not quite.

I see. That would indeed be very interesting, even if not a practical attack. But to be precise, is there a dependence on k in the hidden constant? Can you set k equal to log(N), or sqrt(N)?
dga
hero member
Activity: 737
Merit: 511
Would you consider extending the applicability of your time/memory tradeoff bounty to a theoretical improvement to the asymptotic bounds for the time-memory tradeoff, with a low-speed demonstrator, but not an insanely tuned implementation, proving the feasibility of a sub-quadratic TMTO (but superlinear - I'm guessing some extra log(n) factor) for the edge pruning component of Cuckoo Cycle?

Are you asking for a bounty for using N/k bits and an o(k^2) slowdown?
The problem with asymptotic running times is that they're hard to verify :-(

I'd be happy to generalize the bounty as follows:

$1000/E for an open source implementation that uses at most N/k bits
while running up to 1.5*k^E times slower, for any k>=2 and E>=1.

Or is that still too strict for your taste?

Yes, but N/k bits with an O(k log N) slowdown.  Proof-of-concept implementation with no mind at all paid to efficiency, but showing clearly the attack vector and its resulting complexity.

In other words - close to linear, but not quite.
