
Topic: Pollard's kangaroo ECDLP solver - page 143. (Read 58747 times)

sr. member
Activity: 462
Merit: 696
May 02, 2020, 03:36:48 AM
#37
Yes, I think this might be due to the average jump size, which is too large for the wild kangaroos.
I will try other things and what you suggest.
I'll keep you informed.
Thanks ;)
legendary
Activity: 1932
Merit: 2077
May 02, 2020, 03:11:19 AM
#36
Good news !
By changing the spread of the wild kangaroos to [-N/8, N/8], it seems to improve things (using the Steven D. Galbraith and Raminder S. Ruprai method).
I got an average of 2^20.997 ~ 2*sqrt(n) over 1000 trials.

Great!! Now under 2*sqrt(n)!

It would also be nice to have a comparison with the classic approach (no equivalence classes) with these modifications:

T' = [−2N/3, 2N/3 - (m/theta)]  
W' = [k − N, k − N/3 - m/theta] ∪ [k + N/3, k + N - m/theta]

To recap:
the standard 1.4 version takes 2.21*sqrt(n) and the new 'equivalence classes/symmetry' based version takes 2*sqrt(n), roughly -10%! There is still room for improvement!
member
Activity: 873
Merit: 22
May 02, 2020, 02:57:34 AM
#35
Good news !
By changing the spread of the wild kangaroos to [-N/8, N/8], it seems to improve things (using the Steven D. Galbraith and Raminder S. Ruprai method).
I got an average of 2^20.997 ~ 2*sqrt(n) over 1000 trials.


:-)
Beeeeeegest thank you.
Br.
sr. member
Activity: 462
Merit: 696
May 02, 2020, 02:48:52 AM
#34
Good news !
By changing the spread of the wild kangaroos to [-N/8, N/8], it seems to improve things (using the Steven D. Galbraith and Raminder S. Ruprai method).
I got an average of 2^20.997 ~ 2*sqrt(n) over 1000 trials.
legendary
Activity: 1932
Merit: 2077
May 02, 2020, 02:33:25 AM
#33
I think you are missing the point I am making.

You will find a collision if the solution is in one of the additional ranges opened up by the endomorphism.

First what is "our problem"?  Finding that some point Q = kP has a discrete log with respect to some point P in a specific compact range of k -- [0, 2^80].  But why?  Instead you can solve a related problem: find k for Q = kP where k's value is [+/-]1*[0, 2^78]*lambda^[0,2], in less time, even though that 'sparse' range has 1.5x the possible values.

Thanks. I admittedly didn't understand your previous post.

I think also if the interest is just in DL solving bragging rights or just the intellectual challenge in exploiting all the structure, the endomorphism is also interesting-- for that there isn't a particular problem structure, so why not use the one that results in the fastest solver. :)  In the prime order group essentially all k values are equivalent, which is why you're able to shift the search around to look for that range wherever you want it in the 2^256 interval.

I'm trying to solve the classic discrete logarithm problem via a 3-dimensional discrete logarithm problem (https://www.math.auckland.ac.nz/~sgal018/RamCiren.pdf) using the endomorphism; probably not a good idea.

If you solve Q = a*P + b*P' + c*P'' with P' = lambda*P and P'' = lambda^2*P,
then you have Q = k*P, where k = a + b*lambda + c*lambda^2 (mod n).
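A quick way to convince yourself of that identity is the following sketch (plain affine secp256k1 arithmetic in Python, with toy coefficients a, b, c chosen only for illustration; this is not the solver's code):

Code:
# Checks that Q = a*P + b*P' + c*P'' equals k*P with k = a + b*lambda + c*lambda^2 (mod n),
# using plain affine secp256k1 arithmetic. P is taken to be the generator G here.
p   = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n   = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
lam = 0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72
G   = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
       0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None       # P + (-P) = infinity
    if P == Q:
        s = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p             # tangent slope
    else:
        s = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p            # chord slope
    x = (s * s - P[0] - Q[0]) % p
    return (x, (s * (P[0] - x) - P[1]) % p)

def mul(k, P):
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

P1 = mul(lam, G)                       # P'  = lambda   * P
P2 = mul(lam * lam % n, G)             # P'' = lambda^2 * P
a, b, c = 123456789, 987654321, 55555  # arbitrary toy coefficients
Q = add(add(mul(a, G), mul(b, P1)), mul(c, P2))
k = (a + b * lam + c * lam * lam) % n
assert Q == mul(k, G)
print("identity holds, k =", hex(k))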

legendary
Activity: 1932
Merit: 2077
May 02, 2020, 02:04:33 AM
#32
still have lots of collisions in the same herd (using -d 0, which disables distinguished points, so all collisions and cycles are detected). The size of this table seems to have no impact on the number of bad collisions. I will read ref [8] ASAP.

https://www.math.auckland.ac.nz/~sgal018/RamCiren.pdf

Hi, for your tuning problems you can apply the following considerations to the classic version. This paper doesn't deal with equivalence classes.

Quote
“Type 1” bad points: it could happen that some processor never finds a distinguished point.
--> To guard against bad steps of type 1, van Oorschot and Wiener [22] set a maximum number of steps in a walk. They choose this maximum to be 20/θ steps and show that the proportion of bad points of type 1 is at most 5 × 10^−8. If a processor takes this many steps without hitting a distinguished point, it restarts on a new random starting point.
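Just to illustrate the mechanics (an abstract integer walk with made-up parameters, not the GPU kernel), the 20/θ cap and restart could look like this:

Code:
import random

# Toy illustration of the guard against "type 1" bad points: if a walk exceeds
# 20/theta steps without hitting a distinguished point, abandon it and restart
# from a fresh random start. Abstract integer walk, made-up parameters.
random.seed(1)
theta     = 1 / 2**12                                  # DP probability
max_steps = int(20 / theta)                            # cap suggested in the paper
jumps     = [random.randrange(1, 2**20) for _ in range(32)]

def is_distinguished(x):
    return x % 2**12 == 0                              # stand-in for "low bits are zero"

def walk_until_dp(start):
    x, steps = start, 0
    while not is_distinguished(x):
        x += jumps[x % len(jumps)]                     # jump chosen by the current point
        steps += 1
        if steps >= max_steps:                         # type 1 guard: give up
            return None
    return x, steps

result = None
while result is None:                                  # restart on a new random start
    result = walk_until_dp(random.randrange(2**40))
print("distinguished point reached:", result)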


Quote
“type 2” bad points:  it seems to be impossible to design pseudorandom walks which cover T or W uniformly but which do not sometimes overstep the boundaries. Steps outside the regions of interest cannot be included in our probabilistic analysis and so such steps are “wasted”. We call these “type 2” bad points.

We now give the main result of the paper, which is a version of the Gaudry-Schost algorithm with a faster running time. The key observation is that the running time of the Gaudry-Schost algorithm depends on the size of the overlap between the tame and wild sets. If it is possible to make the size of this overlap constant for all possible Q, then the expected running time will be constant for all problem instances. We achieve this by choosing walks which only cover certain subsets of T and W.
Precisely, instead of using the sets T and W of the previous section

T = [-N,N]
W = [k-N,k+N]

we use

T0 = [−2N/3, 2N/3]  ⊆ T  
W0 = [k − N, k − N/3] ∪ [k + N/3, k + N]  ⊆ W (see Figure 1).

Running time: 2.05*sqrt(N)

Considering the DP too:

Quote
In the 1-dimensional case we use a standard pseudorandom walk as used by Pollard [15] and van Oorschot and Wiener [22], i.e., small positive jumps in the exponent. The average distance in the exponent traveled by a walk is m/θ, where m is the mean step size and θ is the probability that a point is distinguished. For example, one may have m = c1√N and θ = c2/√N for some constants 0 < c1 < 1 and 1 < c2. Therefore, to reduce the number of bad steps of type 2 we do not start walks within this distance of the right-hand end of the interval. In other words,

we do not start walks in the following subset of T0

[2N/3 - (m/theta), 2N/3]

we do not start walks in the following subset of W0

[k − N/3 − m/theta, k − N/3] ⊆ W and [k + N − m/theta, k + N] ⊆ W

Let m be the mean step size and max the maximum step size.
The probability that a walk has bad points of type 2 is at most
p = (20·max − m) / (2Nθ/3 − m)

For the values m = c1√N, max = 2m and θ = c2/√N, the value of p in the equation is 39c1/(2c2/3 − c1), which can be made arbitrarily small by taking c2 sufficiently large (i.e., by storing sufficiently many distinguished points). Hence, even making the over-cautious assumption that all points are wasted if a walk contains some bad points of type 2, the expected number of walks in the improved Gaudry-Schost algorithm can be made arbitrarily close to the desired value when N is large. In practice it is reasonable to store at least 2^30 distinguished points, which is quite sufficient to minimise the number of bad points of type 2.
We stress that the choices of m, max and θ are not completely arbitrary. Indeed, if one wants to bound the number of bad points of type 2 as a certain proportion of the total number of steps (e.g., 1%), then for a specific N there may be limits to how large θ and max can be. For smaller N we cannot have too small a probability of an element being distinguished or too large a mean step size.
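To make the bookkeeping concrete, here is a small sketch of the restricted sets and the type 2 bound described above (interval arithmetic only; N, k, c1, c2 are illustrative placeholders, not values taken from the solver):

Code:
from math import sqrt

# Sketch of the restricted tame/wild sets and the "don't start within m/theta
# of the right edge" rule. N, k, c1, c2 are illustrative placeholders.
N      = 2**40                         # interval size
k      = 3 * N                         # unknown key offset (placeholder)
c1, c2 = 0.1, 100.0
m      = c1 * sqrt(N)                  # mean jump size
theta  = c2 / sqrt(N)                  # DP probability
pad    = m / theta                     # average distance covered by one walk

T0 = (-2 * N / 3, 2 * N / 3)
W0 = [(k - N, k - N / 3), (k + N / 3, k + N)]

# Start walks only up to m/theta away from the right-hand end of each piece:
T_start = (T0[0], T0[1] - pad)
W_start = [(lo, hi - pad) for (lo, hi) in W0]

# Bound on the probability that a walk contains type 2 bad points, with max = 2m:
mx    = 2 * m
p_bad = (20 * mx - m) / (2 * N * theta / 3 - m)        # = 39*c1 / (2*c2/3 - c1)
print("tame start set :", T_start)
print("wild start sets:", W_start)
print("type 2 bound   :", round(p_bad, 4))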
sr. member
Activity: 462
Merit: 696
May 02, 2020, 01:02:59 AM
#31
Many thanks to all of you for your interest ;)

I'm sorry, but I didn't manage to get the algorithm in the paper working. At best I get results similar to the classic version.
I tried increasing the random jump table size up to 65536 (32768 positive random jumps and 32768 negative random jumps) but I still have lots of collisions in the same herd (using -d 0, which disables distinguished points, so all collisions and cycles are detected). The size of this table seems to have no impact on the number of bad collisions. I will read ref [8] ASAP.

I tried several averages; sqrt(N) seems to be the best one.
I haven't yet tried different average jump lengths for the tame and wild herds.

I'll try to implement the endomorphism optimizations ASAP...

@MrFreeDragon
You need a large number of tests on keys uniformly distributed in the range to get a correct estimate, at least 1000 trials.
Did you try with a jump table size of 32 or 64? (NB_JUMP in GPUEngine.h:24)

I added the file I use for stats; it contains 1000 uniformly distributed 40-bit keys.
https://github.com/JeanLucPons/Kangaroo/blob/master/VC_CUDA8/in40_1000.txt
sr. member
Activity: 443
Merit: 350
May 01, 2020, 10:53:09 PM
#30
I tested v1.4beta, and it found the keys ~25% faster (in time) and with 3-5% higher speed compared with v1.3.
GeForce GTX 1080 Ti 11Gb

I used 5 keys in a 2^75 range and made 10 tests (5 keys in the original range, and the same 5 keys in an adjusted range shifted to the beginning).

The first 5 keys were found in 1:11:51 hours (14:22 min per key, the same as predicted by the program(!)), and the second 5 keys in 1:35:36 hours (19:06 min per key). The overall average was 16:45 min per key, compared with 20-22 min with version 1.3. Previous tests with version 1.3 and the same 2^75 ranges are here: https://bitcointalksearch.org/topic/m.54302113

Code:
$ ./kangaroo -gpu -t 0 in75.txt
Kangaroo v1.4beta
Start:60B371918170B1249281CB73F3FA28F4747625CCC9E0BD8CCF264AE3CAC74D24
Stop :60B371918170B1249281CB73F3FA28F4747625CCC9E0C58CCF264AE3CAC74D23
Keys :5
Number of CPU thread: 0
Range width: 2^75
Jump Avg distance: 2^37.03
Number of kangaroos: 2^20.81
Suggested DP: 15
Expected operations: 2^38.65
Expected RAM: 1003.7MB
DP size: 15 [0xfffe000000000000]
GPU: GPU #0 GeForce GTX 1080 Ti (28x128 cores) Grid(56x256) (177.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.81 kangaroos in 10013.5ms
[500.05 MK/s][GPU 500.05 MK/s][Count 2^37.16][Dead 0][05:54 (Avg 14:21)][362.5MB] 
Key# 0 Pub:  0x0340CC040279AB29CD2201655D10D29A3986DC0BCA2BFCA8D0E497875CB70276E3
       Priv: 0x60B371918170B1249281CB73F3FA28F4747625CCC9E0C20D914856977B74D6DB
[500.11 MK/s][GPU 500.11 MK/s][Count 2^39.05][Dead 1][21:52 (Avg 14:21)][1331.5MB] 
Key# 1 Pub:  0x03B20790C29B02D2B60C71114212210B2077FF1455A3C8B88D2A3622A833E07575
       Priv: 0x60B371918170B1249281CB73F3FA28F4747625CCC9E0BDCC1ECE1B5FADED5D1C
[500.05 MK/s][GPU 500.05 MK/s][Count 2^38.37][Dead 0][13:39 (Avg 14:21)][831.6MB] 
Key# 2 Pub:  0x0341F04243CAFFD267015DE7AFB7C9EA9A865C370A68F15D651AC3A99965D123E4
       Priv: 0x60B371918170B1249281CB73F3FA28F4747625CCC9E0C45EC3C6E480E8A55497
[500.06 MK/s][GPU 500.06 MK/s][Count 2^36.83][Dead 0][04:48 (Avg 14:21)][289.4MB] 
Key# 3 Pub:  0x03413EB0010FFE249909932574DF7EF46DC2FDF2BE98A95DD935EF25D668694607
       Priv: 0x60B371918170B1249281CB73F3FA28F4747625CCC9E0C4744736D7B854B92A35
[498.01 MK/s][GPU 498.01 MK/s][Count 2^39.23][Dead 4][24:59 (Avg 14:24)][1511.5MB] 
Key# 4 Pub:  0x026DC089D162150F80D761E89D97BF65BA6CC055B1E073B9F163CBC02F19EE8D99
       Priv: 0x60B371918170B1249281CB73F3FA28F4747625CCC9E0C56A49FB165E7A2AED37

Done: Total time 01:11:51

-------------------------------------------------------------------------------------------------------------

$ ./kangaroo -gpu -t 0 in75adj.txt
Kangaroo v1.4beta
Start:0
Stop :7FFFFFFFFFFFFFFFFFF
Keys :5
Number of CPU thread: 0
Range width: 2^75
Jump Avg distance: 2^37.03
Number of kangaroos: 2^20.81
Suggested DP: 15
Expected operations: 2^38.65
Expected RAM: 1003.7MB
DP size: 15 [0xfffe000000000000]
GPU: GPU #0 GeForce GTX 1080 Ti (28x128 cores) Grid(56x256) (177.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.81 kangaroos in 9870.3ms
[500.26 MK/s][GPU 500.26 MK/s][Count 2^37.94][Dead 0][10:17 (Avg 14:20)][619.1MB] 
Key# 0 Pub:  0x039A46F43EDB844F4F54A3FC3AD11F2AAC07F685E02CCB4885702975F43F91C8D2
       Priv: 0x480C2220BB3B0AD89B7
[497.96 MK/s][GPU 497.96 MK/s][Count 2^39.31][Dead 0][26:21 (Avg 14:24)][1594.2MB] 
Key# 1 Pub:  0x023CD0633B267FE3BF2BD43BFEB9101D462D85762447FCA7680D278EFA279E3B89
       Priv: 0x3F4FA7D07BE3260FF8
[498.21 MK/s][GPU 498.21 MK/s][Count 2^38.68][Dead 2][17:00 (Avg 14:24)][1033.2MB] 
Key# 2 Pub:  0x02FFB7FD0CDC140827726EF767A200789381B06E43B5607F7415B0902F6E654323
       Priv: 0x6D1F4A0999D1DDE0773
[498.03 MK/s][GPU 498.03 MK/s][Count 2^39.71][Dead 6][34:26 (Avg 14:24)][2096.6MB] 
Key# 3 Pub:  0x03875C234A56573A419FF84DE5F1788D02A363FA540A7E96EFFEF5901C595D1296
       Priv: 0x6E778108CD489F1DD11
[499.74 MK/s][GPU 499.74 MK/s][Count 2^37.33][Dead 0][06:46 (Avg 14:21)][408.5MB] 
Key# 4 Pub:  0x03352919782690011501EB26A6774E3CC55BA150EF91804F6A4BFC2F7005AF3BE2
       Priv: 0x7DD7AD4CB7AAF63A013

Done: Total time 01:35:36


But strangely, I shifted the range (together with the public keys of course) to 0 (subtracting the range start) and expected that the 5 keys would be found faster or in the same time, but actually they took longer to find.

If you shift the range [a,b] -> [-(b-a)/2, (b-a)/2], each point P in this interval has the property that -P is in the interval too. No other operations are needed.

I believe this shift could make the calculations faster... We could easily implement the symmetry (!)

Jean_Luc, is it easy to implement such a shift of the range? Or just make such a shift optional...
The whole range is not a line segment but a wheel :) Let us spin this wheel and set the center of the range at the zero point ;)
staff
Activity: 4242
Merit: 8672
May 01, 2020, 06:53:29 PM
#29
But for our problem it is not suitable; we have to maximize the chance of getting a collision inside an interval.
In a small interval [a*G, ..., b*G] (80 bits) you probably won't find 2 points with the same x or the same y; we don't work in the entire space.

We tried to move [a*G, ..., b*G] to [-(b-a)/2*G, +(b-a)/2*G] because it is precisely the way to have every point together with its opposite in the same set. Then for each 'x' we have 2 points for sure.

I think you are missing the point I am making.

You will find a collision if the solution is in one of the additional ranges opened up by the endomorphism.

First what is "our problem"?  Finding that some point Q = kP has a discrete log with respect to some point P in a specific compact range of k -- [0, 2^80].  But why?  Instead you can solve a related problem: find k for Q = kP where k's value is [+/-]1*[0, 2^78]*lambda^[0,2], in less time, even though that 'sparse' range has 1.5x the possible values.

If the reason you are interested in DL is because someone has embedded a puzzle in a [0,2^80] range or something, then the ability to find solutions in the other range isn't interesting.

If, instead, the reason you are interested is because you have a cryptosystem that needs small-range DL solving to recover data-- for example decrypting an ElGamal encryption, or recovering data from a covert nonce side-channel, or from a puzzle-maker that was aware of the endomorphism and used it in their puzzle... then it is potentially very interesting.

I think also if the interest is just in DL solving bragging rights or just the intellectual challenge in exploiting all the structure, the endomorphism is also interesting-- for that there isn't a particular problem structure, so why not use the one that results in the fastest solver. :)  In the prime order group essentially all k values are equivalent, which is why you're able to shift the search around to look for that range wherever you want it in the 2^256 interval.
 
It looks the same if you put 3 times more kangaroos in the same range (with G), but for this method you need fewer jump tables (only one jump table)

You do not need any extra tables to search the additional ranges opened up by the endomorphisms.  That's why I pointed out point invariants that hold for all three--- x^3 or (is_even(y)?-1:1)*y.    If you do all your lookups using one of these canonicalized forms the table for six of the ranges is the same (and you'd have to check after finding a hit to determine which of the ranges the solution was actually in).
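A sketch of that canonicalized lookup (field arithmetic only; the table helpers and their names are mine, not the solver's API): since the six related points are (x, ±y), (beta*x, ±y), (beta^2*x, ±y), either x^3 mod p or the "even" choice of ±y is identical for all six and can serve as the DP-table key.

Code:
# Sketch of a DP-table key that collapses P, lambda*P, lambda^2*P and their
# negations into one entry (field arithmetic only; helper names are made up).
p    = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
beta = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE

def canon_key(x, y):
    # x^3 is unchanged by x -> beta*x (since beta^3 = 1 mod p) and by y -> -y.
    return pow(x, 3, p)

def canon_y(y):
    # Alternative invariant: whichever of y, p-y is even (exactly one is).
    return y if y % 2 == 0 else p - y

table = {}
def store_dp(x, y, distance, herd):
    table.setdefault(canon_key(x, y), []).append((distance, herd))

def lookup_dp(x, y):
    # After a hit, the caller still has to determine which of the six
    # equivalent points (hence which range) the stored entry belonged to.
    return table.get(canon_key(x, y), [])

x = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798  # G.x
assert pow(beta, 3, p) == 1
assert canon_key(x, 0) == canon_key(beta * x % p, 0)
assert canon_key(x, 0) == canon_key(beta * beta * x % p, 0)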
legendary
Activity: 1932
Merit: 2077
May 01, 2020, 05:51:13 PM
#28
What is the advantage of these 3 intervals? Actually they are all the same, just copies. They have the same scalar range, but 3 different types of points in each range. Also, in order to implement it you need to generate and store 3 tables of jumps (G, G' and G''). But what is the advantage of those 3 point ranges?

It looks the same if you put 3 times more kangaroos in the same range (with G), but for this method you need fewer jump tables (only one jump table)

Good point, effectively they are just copies. ::)
sr. member
Activity: 443
Merit: 350
May 01, 2020, 05:14:33 PM
#27
-snip-
If you work in 3 spaces of points and you change the generator and the jumps accordingly:

           point                                scalar                                                                             jumps
[a*G,   (a+1)*G, ....,  b*G]     <-> [a,b]   interval 1                                         (1*G, 2*G, 4*G,...., 2^(n-1)*G)
[a*G',  (a+1)*G', ....,  b*G']    <-> [a,b]   interval 2  where G'=lambda*G      (1*G', 2*G', 4*G',...., 2^(n-1)*G')
[a*G'', (a+1)*G'', ...., b*G'']   <-> [a,b]   interval 3  where G''=lambda^2*G    (1*G'', 2*G'', 4*G'',...., 2^(n-1)*G'')

you will have three intervals of consecutive points. In this way any movement in interval 1 has its own "copy" in the other 2 intervals.

What is the advantage of these 3 intervals? Actually they are all the same, just copies. They have the same scalar range, but 3 different types of points in each range. Also, in order to implement it you need to generate and store 3 tables of jumps (G, G' and G''). But what is the advantage of those 3 point ranges?

It looks the same if you put 3 times more kangaroos in the same range (with G), but for this method you need fewer jump tables (only one jump table)
legendary
Activity: 1932
Merit: 2077
May 01, 2020, 04:52:24 PM
#26
If you cube the x coordinate you will extinguish the group of the endomorphism-- all six keys P, lambda*P, lambda^2*P, -P, lambda*-P, and lambda^2*-P all share the same x^3 coordinate, so if the x^3 was used to select the next position, they'd all collapse to the same chain.

Great idea!

But for our problem it is not suitable; we have to maximize the chance of getting a collision inside an interval.
In a small interval [a*G, ..., b*G] (80 bits) you probably won't find 2 points with the same x or the same y; we don't work in the entire space.

We tried to move [a*G, ..., b*G] to [-(b-a)/2*G, +(b-a)/2*G] because it is precisely the way to have every point together with its opposite in the same set. Then for each 'x' we have 2 points for sure.


The endomorphism might be less interesting though, because the cases you are selecting the DL is known to lie in a 'small' range from a specific base point, and the two endomorphism generated points do not.

If you work in 3 spaces of points and you change the generator and the jumps accordingly:

           point                                scalar                                                                             jumps
[a*G,   (a+1)*G, ....,  b*G]     <-> [a,b]   interval 1                                         (1*G, 2*G, 4*G,...., 2^(n-1)*G)
[a*G',  (a+1)*G', ....,  b*G']    <-> [a,b]   interval 2  where G'=lambda*G      (1*G', 2*G', 4*G',...., 2^(n-1)*G')
[a*G'', (a+1)*G'', ...., b*G'']   <-> [a,b]   interval 3  where G''=lambda^2*G    (1*G'', 2*G'', 4*G'',...., 2^(n-1)*G'')

you will have three intervals of consecutive points. In this way any movement in interval 1 has its own "copy" in the other 2 intervals.
staff
Activity: 4242
Merit: 8672
May 01, 2020, 04:07:00 PM
#25
If you cube the x coordinate you will extinguish the group of the endomorphism-- all six keys P, lambda*P, lambda^2*P, -P, lambda*-P, and lambda^2*-P all share the same x^3 coordinate, so if the x^3 was used to select the next position, they'd all collapse to the same chain.

Alternatively, you could choose y or -y, whichever is an even number (only one will be), because negating y MAY be faster than cubing x. I say may because you have to compute y, and I think an optimal addition ladder will usually not need y for all points-- only for points which are non-terminal.

The endomorphism might be less interesting though, because the cases you are selecting the DL is known to lie in a 'small' range from a specific base point, and the two endomorphism generated points do not.

That said, I've had applications-- e.g. using nonces as a side-channel-- where you can choose the range, and [small]*lambda^[0,1,2] would be a viable option.


legendary
Activity: 1932
Merit: 2077
May 01, 2020, 03:52:11 PM
#24
But with both signs (+-) or not?

Could you try with an average of 2^21 - 2^22 instead?

Yes, at the beginning I used the y sign to select a positive or negative jump, then I tried another set of random negative jumps, without success.
It seems that this "brownian motion" is rather tricky to tune.

Tomorrow, I'll try other things...


Only negative jumps? I would try with positive/negative jumps and a greater average (my sensation)
To me it looks promising. 1.50sqrt(N) would be amazing!
Another thing to pay attention to: when we translate our interval, we have only 2^(n-1) x-coordinates, and therefore half the DPs. For the tuning it's as if we worked in a 2^(n-1) space.


I was thinking about another thing:
if you want to overcome/avoid the brownian-motion tuning problem,

go back and put together

1) the linearity (and therefore the simplicity) of the classic approach (that you used in the release 1.3)
2) the power of the symmetry

Exploiting the symmetry means that we use equivalence classes to increase the probability of a collision (same number of tries, half the 'points'); in our case, given a point P(x,y), 'x' decides the width of the jump and 'y' only the direction.
In secp256k1 there are 2 points that share the same 'x': (x,y) and (x,-y).

In an interval of consecutive points, all x-coordinates are different, but if we shift the interval like we did, it becomes symmetric.

The drawback of our approach is that a kangaroo, instead of going in only one direction, keeps going back and forth, creating many loops. Besides, not only do we not know where a wild kangaroo is (obviously we don't know its position / private key), but we don't even control the direction of its motion (and it often goes out of the interval).


But this curve has another symmetry that respects the linearity (unlike the negation map, which overturns the interval and the kangaroo algorithm): the endomorphism.

There are 3 points with the same 'y', which means that this curve has about 2^256 points and
 
1) 2^256 / 2 different x-coordinates
2) 2^256 / 3 different y-coordinates

Why don't we use the symmetry that maintains the linearity of the interval?

We can work on the y-coordinates, and a jump should depend on 'y' and not on 'x'.
We set equivalence classes where [P] = {P, Q, R} if and only if yP = yQ = yR.

What is the relation between the private keys of these points P, Q, R?

If P=k*G, then Q=k*lambda*G and R=k*lambda^2*G (endomorphism),
and P=(x,y) -> Q=(beta*x, y) -> R=(beta^2*x, y)
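A quick sanity check of this relation (field arithmetic only, using the standard beta constant and the generator; not part of the solver):

Code:
# If (x, y) is on y^2 = x^3 + 7, then (beta*x, y) and (beta^2*x, y) are too,
# because beta^3 = 1 mod p leaves x^3 unchanged. Checked here with G.
p    = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
beta = 0x7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE
x    = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798  # G.x
y    = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8  # G.y

def on_curve(px, py):
    return (py * py - px * px * px - 7) % p == 0

assert pow(beta, 3, p) == 1
assert on_curve(x, y)
assert on_curve(beta * x % p, y)
assert on_curve(beta * beta * x % p, y)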

observation:

if P=k*G, i.e. k is the private key of P with respect to the generator G, then
lambda*P = k*(lambda*G), i.e. the private key of P'=lambda*P with respect to the generator G'=lambda*G is still k.

In other words, if we know that k lies in [a,b], solving the problem P=k*G is equivalent to solving the problem P'=k*G'.
The interval is the same (from the scalar's point of view) but there are actually 3 different intervals of points. They are all consecutive from a different generator's point of view:

           point                                scalar
[a*G,   (a+1)*G, ....,  b*G]     <-> [a,b]   interval 1
[a*G',  (a+1)*G', ....,  b*G']    <-> [a,b]   interval 2  where G'=lambda*G
[a*G'', (a+1)*G'', ...., b*G'']   <-> [a,b]   interval 3  where G''=lambda^2*G

We could work in 3 "spaces" simultaneously:

P=k*G, with all points with generator G       -> interval 1   jumps (1*G, 2*G, ..., 2^(n-1)*G)

P'=k*G', with all points with generator G'    -> interval 2   jumps (lambda*G, lambda*2*G, ..., lambda*2^(n-1)*G)

P''=k*G'', with all points with generator G'' -> interval 3   jumps (lambda^2*G, lambda^2*2*G, ..., lambda^2*2^(n-1)*G)

If the movement of a kangaroo depends only on the y-coordinate, that means that if a kangaroo reaches a point T in the first interval and another kangaroo reaches a point W in the second interval with the same y, we will have a collision!
From that point on they will walk together along the same path (across the same equivalence classes) until they reach a DP (in this case a y-coordinate with some zero bits), where we detect the collision.

EDIT:
In this way we work in a space of 3*2^n points (3 intervals) but with only 2^n y-coordinates, so we should get a collision with a higher probability.  WRONG
member
Activity: 873
Merit: 22
May 01, 2020, 03:51:19 PM
#23

Tomorrow, I'll try other things...


This will be great, Luc. Biiiiiiiiiiiiiiiiiiig thanks.
sr. member
Activity: 462
Merit: 696
May 01, 2020, 03:29:21 PM
#22
They also showed in practice that the total number of operations was not 1.36sqrt(N), but 1.46-1.49sqrt(N).

Yes, this small epsilon of ~0.1 (as they call it) is again a strange thing without much explanation. They say that it is an overhead due to the "failure of random walks", where again we do not have much clear information; I have to read reference [8]. Last but not least, they ran their tests on a very small number of experiments, so the error is large...
I have some doubts about a few of their calculations too...

But with both signs (+-) or not?

Could you try with an average of 2^21 - 2^22 instead?

Yes, at the beginning I used the y sign to select a positive or negative jump, then I tried another set of random negative jumps, without success.
It seems that this "brownian motion" is rather tricky to tune.

Tomorrow, I'll try other things...
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
May 01, 2020, 03:14:08 PM
#21
Luc, when will the release be available?


Big thank you.

Br.
legendary
Activity: 1932
Merit: 2077
May 01, 2020, 03:06:41 PM
#20
-snip-
We do have fast inversions. Inversion in an additive group means -P, and -P is practically free.
-snip-

For the whole group with range [0, order], yes, it is free to make -P from P. But for the subgroup within a specified range you need to do scalar operations.

If you shift the range [a,b] -> [-(b-a)/2, (b-a)/2], each point P in this interval has the property that -P is in the interval too. No other operations are needed.
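For what it's worth, a tiny sketch of that translation (scalar bookkeeping only; the corresponding point shift Q' = Q - c*G is only noted in a comment, since no curve code is shown here, and the example range and key are made up):

Code:
# Translate [a, b] to the symmetric interval [-(b-a)/2, (b-a)/2] by subtracting
# the midpoint c = (a+b)/2 from the key (and, on the curve side, Q' = Q - c*G).
# Then k' and -k' are both candidates, and -P costs only a field negation.
def symmetric_range(a, b):
    c = (a + b) // 2                   # midpoint of the original range
    half = (b - a) // 2
    return c, (-half, half)

a, b = 2**79, 2**79 + 2**75            # a 2^75-wide example range
c, (lo, hi) = symmetric_range(a, b)
k  = a + 0xABCDEF0123456789            # some key inside [a, b]
k2 = k - c                             # the key of the shifted problem
assert lo <= k2 <= hi
print("shifted key:", hex(k2), "(recover the original as k2 + c)")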
sr. member
Activity: 443
Merit: 350
May 01, 2020, 02:04:29 PM
#19
-snip-
We do have fast inversions. Inversion in an additive group means -P, and -P is practically free.
-snip-

For the whole group with range [0, order], yes, it is free to make -P from P. But for the subgroup within a specified range you need to do scalar operations.
legendary
Activity: 1932
Merit: 2077
May 01, 2020, 01:24:58 PM
#18
-snip-
I think there's a paper about this already:
https://www.iacr.org/archive/pkc2010/60560372/60560372.pdf

You probably need to detect fruitless cycles with this method (kangaroos stuck in a loop without distinguished points).

Thanks for this reading.
The abstract notes that the method is able "to solve the DLP in an interval of size N with heuristic average case expected running time of close to 1.36√N group operations for groups with fast inversion".

We do not have fast inversions. They also showed in practice that the total number of operations was not 1.36sqrt(N), but 1.46-1.49sqrt(N).

We do have fast inversions. Inversion in an additive group means -P, and -P is practically free.

Quote
we show how to speed up the Gaudry-Schost method in groups with fast inversion (such as elliptic curves ...)
Here fast inversion means that computing u^−1 for any u in the group is much faster than a general group operation.
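On secp256k1 that inversion is just -P = (x, p - y), i.e. one field negation; a tiny check with the generator (field arithmetic only, not the solver's code):

Code:
# -P = (x, p - y): one field negation, and the result is still on the curve.
p  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def neg(P):
    x, y = P
    return (x, (p - y) % p)

def on_curve(P):
    x, y = P
    return (y * y - x * x * x - 7) % p == 0

assert on_curve((Gx, Gy)) and on_curve(neg((Gx, Gy)))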
