
Topic: Pollard's kangaroo ECDLP solver - page 143. (Read 59389 times)

member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
May 02, 2020, 09:20:02 AM
#47


@COBRAS
I release the code ASAP Wink

This would be great, Luc.

If we reduce the number of calculations by reducing the dimension of the range in length and bits, I think we can get closer to a positive result. So handling the calculation ranges is also very important, Luc.

Biggest thanks. Smiley

sr. member
Activity: 462
Merit: 701
May 02, 2020, 08:48:32 AM
#46
With these data:
N = 2^40
m = 2^11
θ = 2^−5


still 2^21?

I have to try, I'll let you know...

@COBRAS
I release the code ASAP Wink
member
Activity: 873
Merit: 22
May 02, 2020, 08:34:39 AM
#45
@Luc

Can you implement modification of the ranges in your code?

From:

Start: dfggsdgdsfh1111111
End:   FFFFFABC112222233

To:

0

ABC888888

Etc...?

Biggest thank you.

member
Activity: 873
Merit: 22
May 02, 2020, 08:23:11 AM
#44
@Jean_Luc

In your examples:

0
FFFFFFFFFFFFFF
02E9F43F810784FF1E91D8BC7C4FF06BFEE935DA71D7350734C3472FE305FEF82A

[22.67 MKey/s][GPU 13.04 MKey/s][Count 2^29.06][Dead 0][28s][89.1MB]


Key# 0 Pub:  0x02E9F43F810784FF1E91D8BC7C4FF06BFEE935DA71D7350734C3472FE305FEF82A
       Priv: 0x378ABDEC51BC5D

Done: Total time 29s

In this example from the README on GitHub the result took only 29 seconds (!!!); this is much faster than using ranges like this:

Start:49DCCFD96DC5DF56487436F5A1B18C4F5D34F65DDB48CB5E0000000000000000
Stop :49DCCFD96DC5DF56487436F5A1B18C4F5D34F65DDB48CB5EFFFFFFFFFFFFFFFF

And I think even more so, because there is no need to search for RANGES (!).

I think the PUBKEY cannot be > the PRIVKEY, so our range is not 0...FFFFFFFFFFFFFF but 0...(FFFFFFFF minus some unknown X?? maybe minus the PUBKEY??)
legendary
Activity: 1932
Merit: 2077
May 02, 2020, 08:19:01 AM
#43
I did a test using dp 0 (much slower):
The algorithm runs in 2^20.554 = ~1.47·sqrt(N) (1000 trials) as announced in the paper, so it seems OK, but I have to figure out why this is not the case for dp > 0. There is a relationship between dp and range size (m√2/3πθ) that I missed.
I will try to understand how to choose this m.

Is this the difference?

Quote
To detect small cycles we stored the previous 30 group elements in the walk in the case N ≈ 2^34 (respectively, 35, 45 group elements for N ≈ 2^40, 2^48). Each new step in the walk was compared with the previous 30 (respectively, 35, 45) group elements visited. If this group element had already been visited then the walk is in a cycle. We used a deterministic method to jump out of the cycle (using a jump of distinct length from the other jumps used in the pseudorandom walk) so that the pseudorandom walk as a whole remained deterministic. The cost of searching the list of previous group elements is not included in our experimental results, but our count of group operations does include the "wasted" steps from being in a cycle.
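The cycle trap quoted above can be sketched as a toy additive walk. The history length and the jump function here are illustrative assumptions, not the paper's exact parameters:

```python
from collections import deque

def walk_with_cycle_escape(start, jumps, escape_jump, n_steps, history_len=30):
    """Pseudorandom walk that detects small cycles by comparing each new
    position against the last `history_len` positions, and escapes a cycle
    with a deterministic jump of a length distinct from the normal jumps."""
    pos = start
    recent = deque(maxlen=history_len)   # last positions visited
    wasted = 0                           # steps spent falling into cycles
    for _ in range(n_steps):
        recent.append(pos)
        pos += jumps[pos % len(jumps)]   # deterministic pseudorandom step
        if pos in recent:                # small cycle detected
            wasted += 1
            pos += escape_jump           # deterministic escape jump
    return pos, wasted
```

With a jump table containing negative jumps (as in the symmetric version) cycles occur and are escaped; with only positive jumps the walk is monotone and `wasted` stays 0, which matches why the classic walk needs no cycle detection.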

With these data:
N = 2^40
m = 2^11
θ = 2^−5


still 2^21?
sr. member
Activity: 462
Merit: 701
May 02, 2020, 08:08:43 AM
#42
I did a test using dp 0 (much slower):
The algorithm runs in 2^20.554 = ~1.47·sqrt(N) (1000 trials) as announced in the paper, so it seems OK, but I have to figure out why this is not the case for dp > 0. There is a relationship between dp and range size (m√2/3πθ) that I missed.
I will try to understand how to choose this m.
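For reference, the tame/wild mechanics being tuned in these posts can be sketched as a toy Pollard kangaroo over a multiplicative group mod a prime. The parameters (mean jump sqrt(N)/2, 16-entry jump table, retry count) are illustrative assumptions; the real solver runs on secp256k1 with distinguished points and herds of kangaroos:

```python
import math
import random

def kangaroo(g, h, p, N, seed=0):
    """Solve h = g^x (mod p) for x in [0, N], single tame/wild pair."""
    rng = random.Random(seed)
    m = max(1, math.isqrt(N) // 2)                 # mean jump size ~ sqrt(N)/2
    jumps = [rng.randint(1, 2 * m) for _ in range(16)]
    f = lambda y: jumps[y % 16]                    # jump depends only on position
    # Tame kangaroo: start at g^N, take ~2*sqrt(N) steps, set a trap at the end.
    xT, yT = N, pow(g, N, p)
    for _ in range(2 * math.isqrt(N)):
        d = f(yT)
        xT, yT = xT + d, yT * pow(g, d, p) % p
    # Wild kangaroo: start at h and hop until it lands in the trap (paths that
    # meet anywhere merge, because the jump is a function of the position).
    xW, yW = 0, h % p
    while xW <= xT:
        if yW == yT:
            return xT - xW                         # h = g^(xT - xW)
        d = f(yW)
        xW, yW = xW + d, yW * pow(g, d, p) % p
    return None                                    # missed: retry with a new seed

def solve(g, h, p, N):
    for seed in range(20):                         # a few reseeded retries cover misses
        x = kangaroo(g, h, p, N, seed)
        if x is not None and pow(g, x, p) == h % p:
            return x
    return None
```

With mean jump m ≈ sqrt(N)/2 the expected number of tame/wild collisions in the overlap region is around 2-4, so a handful of retries almost always succeeds; the full solver replaces the single trap with distinguished points shared by many kangaroos.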

legendary
Activity: 1932
Merit: 2077
May 02, 2020, 04:24:36 AM
#41
This is a more up-to-date article:

https://www.ams.org/journals/mcom/2013-82-282/S0025-5718-2012-02641-X/S0025-5718-2012-02641-X.pdf

The number of bad collisions is also linked to the kangaroo density.

A note: probably half of your tames start from an even position (+24G, +1466G, ...) and the other half from an odd position (37G, 54365G, ...), and if you use the jumps 1G, 2G, 4G, 8G, 16G, ... they are all even (except one); then you actually have 2 groups that move in distinct areas. The same for the wild kangaroos.

It is like you worked in 2 subintervals, which is not good for the collision probability. At least you should concentrate all the wild kangaroos in a unique subinterval (all even or all odd).
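The parity split described above is easy to verify numerically (toy additive walk; the all-even jump table mirrors jumps 2G, 4G, 8G, ...):

```python
def walk(start, n_steps, jumps):
    """Deterministic additive walk; the jump is chosen by the current position."""
    pos = start
    for _ in range(n_steps):
        pos += jumps[pos % len(jumps)]
    return pos

even_jumps = [2, 4, 8, 16]   # every jump is even
# A kangaroo starting on an even offset stays even forever and an odd one
# stays odd forever, so the two sub-herds can never collide with each other.
assert walk(24, 1000, even_jumps) % 2 == 0
assert walk(37, 1000, even_jumps) % 2 == 1
```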

see this:


Quote
Four Kangaroo Method. We can make a further improvement. The starting points of W1 and W2,
namely x and −x, are integers of the same parity. Hence, to obtain wild-wild collisions more quickly one
should take walks whose jumps are all even length. However, now there is the possibility that the wild walks
will never collide with the tame walk. The solution is to have two tame kangaroos, T1 and T2, starting on
adjacent integers (one even and one odd). There is no possibility of a useless tame-tame collision, but there
is a chance of tame-wild collisions, as desired (though with only one of the tame kangaroos).
...
We now estimate the complexity of the method.
...

Hence, using 4 kangaroos gives an algorithm for the DLP in an interval whose heuristic average case expected running time is (1.714 + o(1))·sqrt(N) group operations.
sr. member
Activity: 462
Merit: 701
May 02, 2020, 04:17:31 AM
#40
Yes
The above test was done with a stop&retry at 2^21.
I removed it and got 2^21.008 on 500 trials, so very similar.
As it is, it seems to converge to 2·sqrt(N) with a jump table of 16 positive random jumps and 16 negative random jumps.
The number of bad collisions is also linked to the kangaroo density.
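A sketch of the jump table Jean_Luc describes: 16 positive and 16 negative random jumps with mean magnitude near sqrt(N). The uniform distribution used here is an assumption; the solver's exact distribution may differ:

```python
import math
import random

def make_jump_table(N, k=16, seed=1):
    """k positive and k negative random jumps with mean |jump| ~ sqrt(N)."""
    rng = random.Random(seed)
    m = math.isqrt(N)
    pos = [rng.randint(1, 2 * m) for _ in range(k)]    # mean ~ m
    neg = [-rng.randint(1, 2 * m) for _ in range(k)]   # mirror the magnitudes
    return pos + neg

table = make_jump_table(1 << 40)
mean_abs = sum(abs(j) for j in table) / len(table)
# mean magnitude lands in the ballpark of sqrt(2^40) = 2^20
```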
legendary
Activity: 1932
Merit: 2077
May 02, 2020, 03:44:02 AM
#39
Yes, I think this might be due to the average jump size, which is too large for the wild kangaroos.
I will try other things and what you suggest.

Remember this too:

Quote
To guard against bad steps of type 1 van Oorschot and Wiener [22] set a maximum number of steps in a walk. They choose this maximum to be 20/θ steps and show that the proportion of bad points of type 1 is at most 5 × 10^−8

...
Steven D. Galbraith and Raminder S. Ruprai:

We terminated walks which ran for 5/θ steps without arriving at a distinguished point (the usual recommendation is 20/θ steps; see [22]). This will give
us a slightly worse running time than optimal.


https://www.math.auckland.ac.nz/~sgal018/crypto-book/ch14.pdf

Quote
Exercise 14.5.12.
The distance between −a and a is even, so a natural trick is to use jumps of even length.  Since we don’t know whether a is even or odd, if this is done we don’t know whether to start the tame kangaroo at g^u or g^u+1.  However, one can consider a variant of the algorithm with two wild kangaroos (one starting from h and one from h−1) and two tame kangaroos (one starting from g^u and one from g^u+1) and with jumps of even length.
This is called the four-kangaroo algorithm.
Explain why the correct choice for the mean step size is m = 0.375√2w and why the heuristic average-case expected number of group operations is approximately 1.714√w. Galbraith, Pollard and Ruprai [226] have combined the idea of Exercise 14.5.12 and the Gaudry-Schost algorithm (see Section 14.7) to obtain an algorithm for the discrete logarithm problem in an interval of length w that performs (1.660 + o(1))√w group operations.

Galbraith, Ruprai and Pollard [226] showed that one can improve the kangaroo method by exploiting inversion in the group. This research actually grew out of writing this chapter. Sometimes it pays to work slowly.
member
Activity: 873
Merit: 22
May 02, 2020, 03:42:08 AM
#38
P.S. v1.4 is more stable: the computation does not degrade over time and stays around a stable level (on GPU this is about 500 MKey/s).

@Luc: hardware is ready for any tests. Wink
sr. member
Activity: 462
Merit: 701
May 02, 2020, 03:36:48 AM
#37
Yes, I think this might be due to the average jump size, which is too large for the wild kangaroos.
I will try other things and what you suggest.
I'll keep you informed.
Thanks Wink
legendary
Activity: 1932
Merit: 2077
May 02, 2020, 03:11:19 AM
#36
Good news !
By changing the spread of the wild kangaroos to [-N/8, N/8] it seems to improve things (using the Steven D. Galbraith and Raminder S. Ruprai method).
I got an average of 2^20.997 ≈ 2·sqrt(N) on 1000 trials.

Great!! Now under 2.sqrt(n)!

It would also be nice to have a comparison with the classic approach (no equivalence classes) with these modifications:

T' = [−2N/3, 2N/3 - (m/theta)]  
W' = [k − N, k − N/3 - m/theta] ∪ [k + N/3, k + N - m/theta]

To recap:
the standard 1.4 version takes 2.21·sqrt(N) and the new 'equivalence classes/symmetry' based version takes 2·sqrt(N): -15%! There is room for improvement!
member
Activity: 873
Merit: 22
May 02, 2020, 02:57:34 AM
#35
Good news !
By changing the spread of the wild kangaroos to [-N/8, N/8] it seems to improve things (using the Steven D. Galbraith and Raminder S. Ruprai method).
I got an average of 2^20.997 ≈ 2·sqrt(N) on 1000 trials.


:-)
Biggest thank you.
Br.
sr. member
Activity: 462
Merit: 701
May 02, 2020, 02:48:52 AM
#34
Good news !
By changing the spread of the wild kangaroos to [-N/8, N/8] it seems to improve things (using the Steven D. Galbraith and Raminder S. Ruprai method).
I got an average of 2^20.997 ≈ 2·sqrt(N) on 1000 trials.
legendary
Activity: 1932
Merit: 2077
May 02, 2020, 02:33:25 AM
#33
I think you are missing the point I am making.

You will find a collision if the solution is in one of the additional ranges opened up by the endomorphism.

First, what is "our problem"? Finding the discrete log k of some point Q = kP with respect to some point P, with k in a specific compact range: [0, 2^80]. But why? Instead you can solve a related problem: find k for Q = kP where k's value is [+/-]1*[0, 2^78]*lambda^[0,2], in less time, even though that 'sparse' range has 1.5x the possible values.

Thanks. I admittedly didn't understand your previous post.

I think also that if the interest is just DL-solving bragging rights or the intellectual challenge of exploiting all the structure, the endomorphism is interesting: for that there isn't a particular problem structure, so why not use the one that results in the fastest solver. Smiley In the prime order group essentially all k values are equivalent, which is why you're able to shift the search around to look for that range wherever you want it in the 2^256 interval.

I'm trying to solve the classic discrete logarithm problem via a 3-dimensional discrete logarithm problem ( https://www.math.auckland.ac.nz/~sgal018/RamCiren.pdf ) using the endomorphism; probably not a good idea.

If you resolve Q=a*P + b*P' + c*P'' with P'=lambda*P, P'' = lambda^2*P,
then you have that Q = k*P, where k=a + b*lambda + c*lambda^2
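The recombination above rests on lambda being a cube root of unity mod the group order n, with a matching beta acting on x-coordinates mod p. A quick check with the standard secp256k1 constants (values as published in libsecp256k1; treat them as assumptions here):

```python
# secp256k1 field prime and group order
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
# lambda acts on scalars, beta on x-coordinates: lambda*(x, y) = (beta*x, y)
lam  = 0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72
beta = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE

assert pow(lam, 3, n) == 1      # lambda^3 = 1 (mod n)
assert pow(beta, 3, p) == 1     # beta^3 = 1 (mod p)

def recombine(a, b, c):
    """k = a + b*lambda + c*lambda^2 (mod n), recombining a 3-D solution."""
    return (a + b * lam + c * lam * lam) % n
```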

legendary
Activity: 1932
Merit: 2077
May 02, 2020, 02:04:33 AM
#32
still get lots of collisions in the same herd (using -d 0, which disables distinguished points, so all collisions and cycles are detected). The size of this table seems to have no impact on the number of bad collisions. I will read ref [8] ASAP.

https://www.math.auckland.ac.nz/~sgal018/RamCiren.pdf

Hi, for your tuning problems you can apply the following considerations to the classic version. This paper doesn't deal with equivalence classes.

Quote
“type 1” bad points: it could happen that some processor never finds a distinguished point.
--> To guard against bad steps of type 1, van Oorschot and Wiener [22] set a maximum number of steps in a walk. They choose this maximum to be 20/θ steps and show that the proportion of bad points of type 1 is at most 5 × 10^−8. If a processor takes this many steps without hitting a distinguished point then it restarts on a new random starting point.


Quote
“type 2” bad points:  it seems to be impossible to design pseudorandom walks which cover T or W uniformly but which do not sometimes overstep the boundaries. Steps outside the regions of interest cannot be included in our probabilistic analysis and so such steps are “wasted”. We call these “type 2” bad points.

We now give the main result of the paper, which is a version of the Gaudry-Schost algorithm which has a faster running time. The key observation is that the running time of the Gaudry-Schost algorithm depends on the size of the overlap between the tame and wild sets. If it is possible to make the size of this overlap constant for all possible Q then the expected running time will be constant for all problem instances. We achieve this by choosing walks which only cover certain subsets of T and W.
Precisely, instead of using the sets T and W of the previous section

T = [-N,N]
W = [k-N,k+N]

we use

T0 = [−2N/3, 2N/3]  ⊆ T  
W0 = [k − N, k − N/3] ∪ [k + N/3, k + N]  ⊆ W (see Figure 1).

Running time: 2.05*sqrt(N)

Considering the DP too:

Quote
In the 1-dimensional case we use a standard pseudorandom walk as used by Pollard [15] and van Oorschot
and Wiener [22] i.e., small positive jumps in the exponent. The average distance in the exponent traveled
by a walk is m/θ, where m is the mean step size and θ is the probability that a point is distinguished. For
example, one may have m = c1√N and θ = c2/√N for some constants 0 < c1 < 1 and 1 < c2. Therefore, to
reduce the number of bad steps of type 2 we do not start walks within this distance of the right hand end
of the interval. In other words,

we do not start walks in the following subset of T0

[2N/3 - (m/theta), 2N/3]

we do not start walks in the following subset of W0

[k − N/3 - m/theta, k − N/3]  ⊆ W and [k + N -m/theta, k + N]  ⊆ W

Let m be the mean step size and max the maximum step size.
The probability that a walk has bad points of type 2 is at most
p =(20 max −m) / (2N*θ/3 − m)

For the values m = c1√N, max = 2m and θ = c2/√N the value of p in the equation is 39c1/(2c2/3 − c1), which can be made arbitrarily small by taking c2 sufficiently large (i.e., by storing sufficiently many
distinguished points). Hence, even making the over-cautious assumption that all points are wasted if a walk
contains some bad points of type 2, the expected number of walks in the improved Gaudry-Schost algorithm
can be made arbitrarily close to the desired value when N is large. In practice it is reasonable to store at
least 2^30 distinguished points, which is quite sufficient to minimise the number of bad points of type 2.
We stress that the choices of m, max and θ are not completely arbitrary. Indeed, if one wants to bound
the number of bad points of type 2 as a certain proportion of the total number of steps (e.g., 1%) then for
a specific N there may be limits to how large θ and max can be. For smaller N we cannot have too small a
probability of an element being distinguished or too large a mean step size.
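The bound quoted above is easy to evaluate numerically; a sketch (the c1, c2 values below are chosen only for illustration):

```python
def type2_bad_walk_prob(N, c1, c2):
    """p = (20*max - m) / (2*N*theta/3 - m) with m = c1*sqrt(N),
    max = 2*m and theta = c2/sqrt(N); the sqrt(N) factors cancel,
    so this simplifies to 39*c1 / (2*c2/3 - c1), independent of N."""
    m = c1 * N ** 0.5
    mx = 2 * m
    theta = c2 / N ** 0.5
    return (20 * mx - m) / (2 * N * theta / 3 - m)

# the bound shrinks as c2 grows, i.e. as more distinguished points are stored
p_small_c2 = type2_bad_walk_prob(2**40, 0.01, 2)
p_large_c2 = type2_bad_walk_prob(2**40, 0.01, 64)
```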
sr. member
Activity: 462
Merit: 701
May 02, 2020, 01:02:59 AM
#31
Many thanks to all of you for your interest Wink

I'm sorry but I didn't manage to make the algorithm in the paper work. I always get (at best) results similar to the classic version.
I tried to increase the random jump table size up to 65536 (32768 positive random jumps and 32768 negative random jumps) but I still get lots of collisions in the same herd (using -d 0, which disables distinguished points, so all collisions and cycles are detected). The size of this table seems to have no impact on the number of bad collisions. I will read ref [8] ASAP.

I tried several averages, sqrt(N) seems the best one.
I haven't yet tried different average lengths for Tame and Wild.

I'll try to implement the endomorphism optimizations ASAP...

@MrFreeDragon
You need a large number of tests on keys uniformly distributed in the range to get a correct estimation, at least 1000 trials.
Did you try with a jump table size of 32 or 64 ? NB_JUMP in GPUEngine.h:24 ?

I added the file I use for stats; it contains 1000 40-bit keys uniformly distributed.
https://github.com/JeanLucPons/Kangaroo/blob/master/VC_CUDA8/in40_1000.txt
sr. member
Activity: 443
Merit: 350
May 01, 2020, 10:53:09 PM
#30
I tested v1.4beta, and it found the keys ~25% faster (in time) and with 3-5% higher speed compared with v1.3
GeForce GTX 1080 Ti 11Gb

I used 5 keys in a 2^75 range and made 10 tests (5 keys in the original range, and the same 5 keys in the adjusted range shifted to the beginning).

The first 5 keys were found in 1:11:51 hours (14:22 min per key, the same as predicted by the program(!)), and the second 5 keys in 1:35:36 hours (19:06 min per key). The overall average was 16:45 min per key, compared with 20-22 min with version 1.3. Previous tests with version 1.3 and the same 2^75 ranges are here: https://bitcointalksearch.org/topic/m.54302113

Code:
$ ./kangaroo -gpu -t 0 in75.txt
Kangaroo v1.4beta
Start:60B371918170B1249281CB73F3FA28F4747625CCC9E0BD8CCF264AE3CAC74D24
Stop :60B371918170B1249281CB73F3FA28F4747625CCC9E0C58CCF264AE3CAC74D23
Keys :5
Number of CPU thread: 0
Range width: 2^75
Jump Avg distance: 2^37.03
Number of kangaroos: 2^20.81
Suggested DP: 15
Expected operations: 2^38.65
Expected RAM: 1003.7MB
DP size: 15 [0xfffe000000000000]
GPU: GPU #0 GeForce GTX 1080 Ti (28x128 cores) Grid(56x256) (177.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.81 kangaroos in 10013.5ms
[500.05 MK/s][GPU 500.05 MK/s][Count 2^37.16][Dead 0][05:54 (Avg 14:21)][362.5MB] 
Key# 0 Pub:  0x0340CC040279AB29CD2201655D10D29A3986DC0BCA2BFCA8D0E497875CB70276E3
       Priv: 0x60B371918170B1249281CB73F3FA28F4747625CCC9E0C20D914856977B74D6DB
[500.11 MK/s][GPU 500.11 MK/s][Count 2^39.05][Dead 1][21:52 (Avg 14:21)][1331.5MB] 
Key# 1 Pub:  0x03B20790C29B02D2B60C71114212210B2077FF1455A3C8B88D2A3622A833E07575
       Priv: 0x60B371918170B1249281CB73F3FA28F4747625CCC9E0BDCC1ECE1B5FADED5D1C
[500.05 MK/s][GPU 500.05 MK/s][Count 2^38.37][Dead 0][13:39 (Avg 14:21)][831.6MB] 
Key# 2 Pub:  0x0341F04243CAFFD267015DE7AFB7C9EA9A865C370A68F15D651AC3A99965D123E4
       Priv: 0x60B371918170B1249281CB73F3FA28F4747625CCC9E0C45EC3C6E480E8A55497
[500.06 MK/s][GPU 500.06 MK/s][Count 2^36.83][Dead 0][04:48 (Avg 14:21)][289.4MB] 
Key# 3 Pub:  0x03413EB0010FFE249909932574DF7EF46DC2FDF2BE98A95DD935EF25D668694607
       Priv: 0x60B371918170B1249281CB73F3FA28F4747625CCC9E0C4744736D7B854B92A35
[498.01 MK/s][GPU 498.01 MK/s][Count 2^39.23][Dead 4][24:59 (Avg 14:24)][1511.5MB] 
Key# 4 Pub:  0x026DC089D162150F80D761E89D97BF65BA6CC055B1E073B9F163CBC02F19EE8D99
       Priv: 0x60B371918170B1249281CB73F3FA28F4747625CCC9E0C56A49FB165E7A2AED37

Done: Total time 01:11:51

-------------------------------------------------------------------------------------------------------------

$ ./kangaroo -gpu -t 0 in75adj.txt
Kangaroo v1.4beta
Start:0
Stop :7FFFFFFFFFFFFFFFFFF
Keys :5
Number of CPU thread: 0
Range width: 2^75
Jump Avg distance: 2^37.03
Number of kangaroos: 2^20.81
Suggested DP: 15
Expected operations: 2^38.65
Expected RAM: 1003.7MB
DP size: 15 [0xfffe000000000000]
GPU: GPU #0 GeForce GTX 1080 Ti (28x128 cores) Grid(56x256) (177.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.81 kangaroos in 9870.3ms
[500.26 MK/s][GPU 500.26 MK/s][Count 2^37.94][Dead 0][10:17 (Avg 14:20)][619.1MB] 
Key# 0 Pub:  0x039A46F43EDB844F4F54A3FC3AD11F2AAC07F685E02CCB4885702975F43F91C8D2
       Priv: 0x480C2220BB3B0AD89B7
[497.96 MK/s][GPU 497.96 MK/s][Count 2^39.31][Dead 0][26:21 (Avg 14:24)][1594.2MB] 
Key# 1 Pub:  0x023CD0633B267FE3BF2BD43BFEB9101D462D85762447FCA7680D278EFA279E3B89
       Priv: 0x3F4FA7D07BE3260FF8
[498.21 MK/s][GPU 498.21 MK/s][Count 2^38.68][Dead 2][17:00 (Avg 14:24)][1033.2MB] 
Key# 2 Pub:  0x02FFB7FD0CDC140827726EF767A200789381B06E43B5607F7415B0902F6E654323
       Priv: 0x6D1F4A0999D1DDE0773
[498.03 MK/s][GPU 498.03 MK/s][Count 2^39.71][Dead 6][34:26 (Avg 14:24)][2096.6MB] 
Key# 3 Pub:  0x03875C234A56573A419FF84DE5F1788D02A363FA540A7E96EFFEF5901C595D1296
       Priv: 0x6E778108CD489F1DD11
[499.74 MK/s][GPU 499.74 MK/s][Count 2^37.33][Dead 0][06:46 (Avg 14:21)][408.5MB] 
Key# 4 Pub:  0x03352919782690011501EB26A6774E3CC55BA150EF91804F6A4BFC2F7005AF3BE2
       Priv: 0x7DD7AD4CB7AAF63A013

Done: Total time 01:35:36


But strangely, I shifted the range (together with the public keys of course) to 0 (deducting the range start) and expected that the 5 keys would be found faster or in the same time, but actually they were found in a longer time.

If you shift the range [a,b] -> [-(b-a)/2, (b-a)/2], each point P in this interval has the property that -P is in the interval too. No other operations are needed.

I believe this shift could make the calculations faster... We easily implement the symmetry (!)

Jean_Luc, is it easy to implement such a shift of the range? Or just make such a shift optional...
The whole range is not a length but a wheel Smiley Let us spin this wheel and set the center of the range to the zero point Wink
staff
Activity: 4284
Merit: 8808
May 01, 2020, 06:53:29 PM
#29
But for our problem it is not suitable; we have to maximize the chance of getting a collision inside an interval.
In a small interval [a*G, ..., b*G] (80 bits) you probably won't find 2 points with the same x or the same y; we don't work in the entire space.

We tried to move [a*G, ..., b*G] to [-(b-a)/2*G, +(b-a)/2*G] because it is precisely the way to have every point together with its opposite in the same subgroup. Then for each x we have 2 points for sure.

I think you are missing the point I am making.

You will find a collision if the solution is in one of the additional ranges opened up by the endomorphism.

First, what is "our problem"? Finding the discrete log k of some point Q = kP with respect to some point P, with k in a specific compact range: [0, 2^80]. But why? Instead you can solve a related problem: find k for Q = kP where k's value is [+/-]1*[0, 2^78]*lambda^[0,2], in less time, even though that 'sparse' range has 1.5x the possible values.

If the reason you are interested in DL is because someone has embedded a puzzle in a [0,2^80] range or something, then the ability to find solutions in the other range isn't interesting.

If, instead, the reason you are interested is because you have a cryptosystem that needs small-range DL solving to recover data (for example decrypting an ElGamal encryption, recovering data from a covert nonce side-channel, or a puzzle-maker that was aware of the endomorphism and used it in their puzzle), then it is potentially very interesting.

I think also that if the interest is just DL-solving bragging rights or the intellectual challenge of exploiting all the structure, the endomorphism is interesting: for that there isn't a particular problem structure, so why not use the one that results in the fastest solver. Smiley In the prime order group essentially all k values are equivalent, which is why you're able to shift the search around to look for that range wherever you want it in the 2^256 interval.
 
It looks the same as if you put 3 times more kangaroos in the same range (with G), but for this method you need fewer jump tables (only one jump table)

You do not need any extra tables to search the additional ranges opened up by the endomorphisms. That's why I pointed out point invariants that hold for all three: x^3 or (is_even(y)?-1:1)*y. If you do all your lookups using one of these canonicalized forms, the table for the six ranges is the same (and you'd have to check after finding a hit to determine which of the ranges the solution was actually in).
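The x^3 invariant gmaxwell mentions follows from beta^3 = 1: the three x-coordinates x, beta·x, beta^2·x of an endomorphism orbit all share the same cube, so one table keyed on x^3 serves all the ranges. A quick check (secp256k1 constants assumed, with the generator's x-coordinate as a sample value):

```python
p = 2**256 - 2**32 - 977   # secp256k1 field prime
beta = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE
gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798

def canonical_key(x):
    """Table key invariant under x -> beta*x and x -> beta^2*x,
    because (beta*x)^3 = beta^3 * x^3 = x^3 (mod p)."""
    return pow(x, 3, p)

assert canonical_key(gx) == canonical_key(beta * gx % p)
assert canonical_key(gx) == canonical_key(beta * beta * gx % p)
```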
legendary
Activity: 1932
Merit: 2077
May 01, 2020, 05:51:13 PM
#28
What is the advantage of these 3 intervals? Actually they are all the same, just copies. They have the same scalar range, but 3 different types of points in every range. Also, in order to implement it you need to generate and store 3 tables of jumps (G, G' and G''). But what is the advantage of those 3 point ranges?

It looks the same as if you put 3 times more kangaroos in the same range (with G), but for this method you need fewer jump tables (only one jump table)

Good point, effectively they are just copies.  Roll Eyes