First, the link: it is a diagram stored on my Google Drive and pretty much covers everything, before I present some spiel about it:
https://drive.google.com/file/d/1kq9_b-n2n-5ZRYviX1YfULO7h2arSoaW/view?usp=sharing

I developed this after examining Cuckoo Cycle and beginning attempts to implement it in Golang. There is already a modified version written in Golang, but it uses much shorter edge bit sizes and cycle lengths (and unfortunately it is not working because of the siphash amd64 assembler code included in it):
https://github.com/AidosKuneen/cuckoo
Briefly, my assessment of the algorithm is that it is overly complex, and that its solutions take up a lot of space that could easily be compressed by changing the algorithm, which is exactly what Hummingbird does.
Instead of requiring the actual solution coordinates to be explicitly listed, the solution contains the nonce plus a list of indexes into the hash chain (hashes of hashes of hashes, etc.), each a 16 bit value that refers to a position in the chain, which in turn refers to 1+4x32 bit hashes, 4 for each head and tail of the vectors. A minimal sketch of this layout follows.
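To make that layout concrete, here is a small Go sketch of the solution format as I read it; the type and field names, and the exact split of the "1+4x32 bit" values, are illustrative assumptions rather than a fixed specification:

    // A sketch of the solution layout described above; names and the exact
    // interpretation of the "1+4x32 bit hashes" are illustrative assumptions.
    package hummingbird

    // Solution carries the nonce and the 16 bit positions into the hash chain
    // that form the cycle, rather than the raw graph coordinates themselves.
    type Solution struct {
        Nonce   uint64   // nonce the hash chain is generated from
        Indexes []uint16 // positions in the hash chain, one per edge of the cycle
    }

    // ChainEntry is one element of the hash chain that an index refers to,
    // read here as one chain hash plus four 32 bit coordinate values split
    // between the head and tail of the edge vectors.
    type ChainEntry struct {
        Hash uint32    // the chain hash itself (assumed 32 bit)
        Head [2]uint32 // 32 bit adjacency-list coordinates for the head
        Tail [2]uint32 // and for the tail
    }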
Since this algorithm uses 32 bits for each coordinate in an adjacency-list format graph, it is likely that, for a given compute capacity of the mining network, it will not require very long cycles to hit the block rate target.
In my analysis, Cuckoo Cycle is, in principle, a type of hash collision search: rather than searching for values below a maximum magnitude, it searches for a hash collision, specifically a chain of hash collisions in which no value occurs more than twice before the cycle length target. The way John Tromp designed it is overly complex, in my view, probably owing to his intensive background in writing hashcash-style algorithms in CUDA and OpenCL. His algorithm looks, to be precise, for subsets of hashes that collide, and requires a lot of bitwise operations which, while not expensive, can be dispensed with entirely by using a scheme like the one I have designed.
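To illustrate that collision-counting view (purely my paraphrase of the characterisation above, not Tromp's reference solver), here is a tiny Go sketch:

    // In a simple cycle every endpoint appears exactly twice, so endpoint
    // values occurring once, or more than twice per the description above,
    // can be discarded while tabulating collisions.
    package main

    import "fmt"

    // survivors counts how often each 32 bit endpoint value occurs and keeps
    // only those that collide exactly twice.
    func survivors(endpoints []uint32) map[uint32]int {
        counts := make(map[uint32]int)
        for _, h := range endpoints {
            counts[h]++
        }
        for h, n := range counts {
            if n != 2 {
                delete(counts, h)
            }
        }
        return counts
    }

    func main() {
        fmt.Println(survivors([]uint32{1, 2, 2, 3, 3, 3, 4, 4}))
    }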
Essentially, what I have done is make each graph coordinate set its own individual hash result, and I picked Murmur3 because of its speed. It is possible that a shorter hash length would make for longer cycles, and perhaps more precision in hitting block rate targets. At the same time, the solver algorithm (see the link above) will pop out the solution in fairly linear time based on the total amount of computation and memory access required to find it. Unlike Cuckoo, which has to generate the hash chain, sort it, and then search it, Hummingbird, by using 7 separate slices, searches and tabulates during each generation of an edge, so the minimum time may be shorter in some cases and longer in others.
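As a rough sketch of what generating edges from a Murmur3 hash chain could look like in Go (using the github.com/spaolacci/murmur3 package; the way the head and tail coordinates are derived from each chain element here is a guess for illustration, not the actual Hummingbird derivation):

    // A sketch of hash chain generation: each step hashes the previous hash,
    // and each step's output supplies the 32 bit head and tail coordinates of
    // one edge in the adjacency-list graph.
    package main

    import (
        "encoding/binary"
        "fmt"

        "github.com/spaolacci/murmur3"
    )

    // edge is one adjacency-list entry: head and tail, each its own 32 bit
    // hash result as described above.
    type edge struct {
        head, tail uint32
    }

    // hashChain starts from the nonce and repeatedly hashes the previous
    // result (hashes of hashes of hashes), emitting one edge per step.
    func hashChain(nonce uint64, length int) []edge {
        prev := make([]byte, 8)
        binary.LittleEndian.PutUint64(prev, nonce)
        edges := make([]edge, 0, length)
        for i := 0; i < length; i++ {
            h1, h2 := murmur3.Sum128(prev)
            edges = append(edges, edge{head: uint32(h1), tail: uint32(h2)})
            // feed the full 128 bit result back in as the next chain input
            next := make([]byte, 16)
            binary.LittleEndian.PutUint64(next[:8], h1)
            binary.LittleEndian.PutUint64(next[8:], h2)
            prev = next
        }
        return edges
    }

    func main() {
        for i, e := range hashChain(42, 4) {
            fmt.Printf("%d: head=%08x tail=%08x\n", i, e.head, e.tail)
        }
    }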
Hash algorithms by their nature produce uneven collision patterns depending on the data, so some solutions will take less time to find than others. In theory every nonce can eventually be found to yield cycles of all kinds of lengths, given an arbitrary amount of time and data to sort through, but this is capped at what adds up to an array of 128 bit entries addressed by a 16 bit index, yielding a maximum of a smidge over 4 GB of memory for the hash chain in the worst case, plus 128 bits for each candidate during the search (another 4 GB), and again another 4 GB in the worst case for the solution/reject slice arrays.
So, in theory, the algorithm will top out around 12 GB of memory utilisation, which is part of my intention here: to create an algorithm that requires intensive random access to an amount of memory that usually exceeds the capacity of a current-generation, economically viable video card. When this ceases to be the case (maybe in another 3-5 years), it can simply be expanded to use 32 bit array indexes in the solution, enabling a much larger search space in which to find longer cycles; longer hash lengths (64 bit or 128 bit) will also raise the worst-case memory utilisation.
There is a small possibility that the specific hash used may have potential attacks, or rather optimisations, that accelerate the process of discovering hash collisions, but the requirement of a hash chain makes this more difficult: while you can find hash collisions in general, the finite field of a hash chain is distinct from the total space, being only a subset of the total hash collision field. If an attack is found on the specific hash algorithm, it would not be difficult to switch to another hash algorithm, and that would set attackers back a long way, especially if the new hash uses an entirely different category of hashing technique to the one that was compromised.
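As a sketch of how such a switch could be kept cheap, the hash could sit behind a single small interface so the rest of the solver never names the algorithm directly; the interface below is an illustrative assumption, not existing Hummingbird code:

    // Keeping the hash swappable: the solver depends only on chainHash, so a
    // compromised algorithm can be replaced by providing a new implementation.
    package main

    import (
        "fmt"

        "github.com/spaolacci/murmur3"
    )

    // chainHash is the single point the solver depends on for hashing.
    type chainHash interface {
        Sum32(data []byte) uint32
    }

    // murmurHash binds the interface to Murmur3.
    type murmurHash struct{}

    func (murmurHash) Sum32(data []byte) uint32 { return murmur3.Sum32(data) }

    func main() {
        var h chainHash = murmurHash{}
        fmt.Printf("%08x\n", h.Sum32([]byte("hummingbird")))
    }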
I suppose that using hash chain fields as the search space will also further weaken quantum attacks as another alternative way to accelerate the search, should qubit-based computation ever become anywhere near competitive with silicon.