
Topic: Pollard's kangaroo ECDLP solver - page 125. (Read 58667 times)

sr. member
Activity: 617
Merit: 312
May 26, 2020, 09:30:25 AM
A reminder: keep nbKangaroo * 2^dpbit as small as possible relative to 2^55.5
So the best choice in this case was to use DP=26?
Code:
range 2^109.0
expected OP 2^55.5
DPbit: 26
total GPU: 383
kangaroos per GPU: 2^21.09
total kangaroos: 2^29.67
Nkangaroo*2^DPbit = 2^55.67
Edit: but when, for example, we want to create a pool to work together, we don't know how many kangaroos there will be in total, because we don't know how many GPUs will be used.
How do we calculate the correct DP in that case?
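
For reference, here is the arithmetic behind the Code block above as a short sketch (numbers taken from this post; the script itself is illustrative, not part of Kangaroo):

Code:
# Overhead rule: keep nbKangaroo * 2^dpbit as small as possible relative to
# the expected 2^55.5 operations (numbers from the post above; illustration only).
from math import log2

gpus = 383
kangaroos_per_gpu_log2 = 21.09                          # default grid on a 2080 Ti
total_kang_log2 = log2(gpus) + kangaroos_per_gpu_log2   # ~2^29.67 kangaroos

for dpbit in (25, 26, 27, 28):
    # DP=26 gives overhead 2^55.67, right around the expected 2^55.5
    print(dpbit, round(total_kang_log2 + dpbit, 2))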
sr. member
Activity: 462
Merit: 696
May 26, 2020, 08:59:01 AM
I like the race between HardWareCollector and zielar :)

To choose the DP mask, you first have to calculate your number of kangaroos.

460000 Mkey/s total, with each 2080 Ti (default settings) at ~1.2 Gkey/s
=> 383 boards * 2^21.09 = 2^29.6 kangaroos
DP28 => overhead = 2^29.6 * 2^28 = 2^57.6

The overhead here is greater than the expected number of iterations.
As said by HC, DP28 is too large for so many kangaroos and a 109-bit range.

A reminder: keep nbKangaroo * 2^dpbit as small as possible relative to 2^55.5

Edit:
And avoid restarting clients: each time you restart a client, you add more kangaroos and stop others (you break paths).

Edit 2:
The suggested DP is a rough approximation and depends on the number of kangaroos, so it differs from one configuration to another.
full member
Activity: 281
Merit: 114
May 26, 2020, 08:57:17 AM
Quote
In all of the experiments that I've done with Jean_Luc's software, I don't recall going that long (~4.6*sqrt(N)) without some sort of a collision with a properly chosen DP mask; something might be wrong there. But what I find even more interesting is your choice of a very high 28-bit mask. How did you decide that this was the proper choice
DP = 28 was suggested by the application, so I did as suggested, because I was not informed enough to determine this value myself at a lower level.


Quote
Dude, with your insane power you just have to wait less than a few days to scan the full range of keys; what more do you want?

Clear assurance that continuing to work in the current mode will eventually produce the key.
sr. member
Activity: 462
Merit: 696
May 26, 2020, 08:40:02 AM
Yes, I switched to Visual Studio 2019 and CUDA 10.2.
Unfortunately I can't do a CUDA 10 build in the next few days.
sr. member
Activity: 617
Merit: 312
May 26, 2020, 08:36:38 AM
I published the 1.6 release.

GPUEngine: CudaGetDeviceCount CUDA driver version is insufficient for CUDA runtime version
Compiled with 10.2?
sr. member
Activity: 661
Merit: 250
May 26, 2020, 08:36:17 AM
I published the 1.6 release.


Many thanks.
I will update the clients and server after I return in about two hours, because I'm out of reach right now. I'm asking for tips and suggestions on how best to set it up: which startup command should I use for each client, and which one for the server?
Dude, with your insane power you just have to wait less than a few days to scan the full range of keys; what more do you want?
member
Activity: 144
Merit: 10
May 26, 2020, 08:35:18 AM
Quote
How many jumps do you have in total at the moment for #110?
This is my -winfo from the fully merged job so far:


Quote
could you share your work file?

I don't see a problem... as soon as I start working on #115 :-)

In all of the experiments that I've done with Jean_Luc's software, I don't recall going that long (~4.6*sqrt(N)) without some sort of a collision with a properly chosen DP mask; something might be wrong there. But what I find even more interesting is your choice of a very high 28-bit mask. How did you decide that this was the proper choice for your setup?

Out of courtesy, I will give you until this coming Sunday to finish. If not, I will enter the race with 128 RTX 2080 Tis and 2 TB of RAM (22-bit DP mask), and with that setup my worst case is five days, based on experimentation with my own implementation with distributed hash tables.

I wish you the best of luck and I hope that you succeed; we can learn a lot from your data on how to choose the optimal DP mask.
full member
Activity: 281
Merit: 114
May 26, 2020, 08:25:47 AM
I published the 1.6 release.


Many thanks.
I will update the clients and server after I return in about two hours, because I'm out of reach right now. I'm asking for tips and suggestions on how best to set it up: which startup command should I use for each client, and which one for the server?
sr. member
Activity: 462
Merit: 696
May 26, 2020, 08:01:01 AM
I published the 1.6 release.
sr. member
Activity: 462
Merit: 696
May 26, 2020, 06:53:38 AM
The total number of jumps is equal to the number of DPs * 2^dpbit => 2^(28.7+28) = 2^56.7 (about 2 times the expected 2^55.5).
But I don't know the total number of kangaroos, which I need in order to evaluate the overhead.
For the seed I will switch to a cryptographically secure RNG.

Kangaroos should collide after ~sqrt(N)/NumKangaroos jumps each. Otherwise something is very wrong.

Right, I meant the total number of jumps.
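
As a small sketch of the estimate above (numbers from the thread; the script is illustrative):

Code:
# Each stored DP stands for ~2^dpbit jumps, so total jumps ~= DPcount * 2^dpbit.
dp_count_log2 = 28.7          # log2 of the DP count in the merged work file
dpbit = 28
total_jumps_log2 = dp_count_log2 + dpbit
print(f"total jumps ~ 2^{total_jumps_log2:.1f}")   # 2^56.7, about 2x the expected 2^55.5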
full member
Activity: 206
Merit: 447
May 26, 2020, 06:51:38 AM
Another thing: when you reach ~sqrt(N) total jumps, kangaroos start to walk their neighbors' paths.

Kangaroos should collide after ~sqrt(N)/NumKangaroos jumps each. Otherwise something is very wrong.
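
Plugging this thread's numbers into that rule, as a rough illustrative estimate (not from the post):

Code:
# Per-kangaroo collision estimate: ~sqrt(N)/NumKangaroos jumps each.
n_log2 = 109                  # range width for #110
kangaroos_log2 = 29.67        # 383 GPUs * 2^21.09 kangaroos each
per_kangaroo_log2 = n_log2 / 2 - kangaroos_log2
print(f"~2^{per_kangaroo_log2:.2f} jumps per kangaroo")   # ~2^24.83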
sr. member
Activity: 443
Merit: 350
May 26, 2020, 06:39:40 AM
Jean_Luc, how many jumps has zielar reached in his work? The screenshot shows the total count as 0 2^-inf; how can we see the real number of jumps?

If 2 clients choose the same starting position, you have dead kangaroos from the beginning.
Here I need to improve the seed, which currently has one-second resolution, so if 2 clients start in the same second they will be identical and the dead count will increase at high speed immediately when they start.

For better randomness, some entropy should be added (like entropy taken from the computer's digital fingerprint). Different machines will have different fingerprints.
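
As an illustration of that idea, a minimal sketch (not Kangaroo's actual seeding code) that mixes a machine fingerprint and other entropy into the seed:

Code:
# Hypothetical client seeding: hash together time, machine fingerprint,
# process id and OS randomness so two clients started in the same second
# still get different seeds (illustration only).
import hashlib, os, time, uuid

def client_seed() -> bytes:
    material = b"".join([
        time.time_ns().to_bytes(8, "little"),   # wall clock, ns resolution
        uuid.getnode().to_bytes(6, "big"),      # MAC address as machine fingerprint
        os.getpid().to_bytes(4, "little"),      # process id
        os.urandom(16),                         # OS CSPRNG
    ])
    return hashlib.sha256(material).digest()    # 256-bit seed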
sr. member
Activity: 462
Merit: 696
May 26, 2020, 06:22:16 AM
No, there is no feedback yet to eliminate dead kangaroos, and when 2 kangaroos collide in the same herd they start to produce a new dead kangaroo at each DP. But this is normal: only one DP is added to the hash table.

Another thing: when you reach ~sqrt(N) total jumps, kangaroos start to walk their neighbors' paths.

As there is no feedback, the number of dead kangaroos will increase a lot. At this stage, it is better to restart the clients.

If 2 clients choose the same starting position, you have dead kangaroos from the beginning.
Here I need to improve the seed, which currently has one-second resolution, so if 2 clients start in the same second they will be identical and the dead count will increase at high speed immediately when they start.

To evaluate the chance of falling into one of the traps above, you need to know the total number of kangaroos, which is also what determines the DP overhead.

I will also add a kangaroo counter to the server to help.

I will publish a release with the server fix, the -o option, an idle-client closure timeout of 1 hour, and the kangaroo counter.

The probability of finding the key after ~2*sqrt(n) jumps is roughly 50%, and should be roughly 80% at 4*sqrt(n) jumps (this does not take the DP overhead into consideration).

Edit: If you change NB_JUMP, you break compatibility with previous work files.
Edit2: If you decrease DPbit, you will still be able to use the previous work file; a DP with 20 leading zero bits is also a DP with 18 leading zero bits...
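
To illustrate Edit2 with a hypothetical helper (not from the Kangaroo sources): a point is a DP when its top DPbit bits are zero, so lowering DPbit only relaxes the test.

Code:
# Hypothetical DP test, assuming a 256-bit x-coordinate (illustration only).
def is_dp(x: int, dpbit: int, width: int = 256) -> bool:
    return (x >> (width - dpbit)) == 0   # true when the top dpbit bits are all zero

x = 0xABC << 200                         # example value with 44 leading zero bits
assert is_dp(x, 20) and is_dp(x, 18)     # a DP for 20 bits is also a DP for 18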

full member
Activity: 281
Merit: 114
May 26, 2020, 05:24:16 AM
Here is the latest state of my work:


And a few questions:
1. Someone mentioned the percentage chance of not finding the correct key. What percentage is it?
2. How can I increase the likelihood of finding the correct key? By reducing "dp"?
3. Does increasing NB_JUMP have any effect on the results? If so, to what value?
4. Does increasing NB_RUN have any effect on the results? If so, to what value?
5. Is there a known minimum DP_COUNT threshold at which I should expect the key, or can it appear at any time?
6. ... or should I not expect it anymore?
7. After analyzing the posts above: is the application actually written so as to eliminate "empty" jumps [i.e. eliminating the possibility that two clients walk the same paths, or repeat jumps that have already taken place]?

I am asking because I want to assess whether further work makes sense without making any changes.

P.S. I start the server without -i save.work, because each subsequent restart is the result of my manually shutting it down to change the progress save file once it exceeds 1 GB; progress is also saved every 600 seconds. Since yesterday it has not disappointed.
sr. member
Activity: 443
Merit: 350
May 26, 2020, 04:50:51 AM
45 million dead kangaroos...

Is it due to "bad" randomness (different machines creating the same kangaroos), or are these the same kangaroos killed several times during synchronization with the server? Since the server does not send back a signal to kill duplicate kangaroos on different machines, those kangaroos continue their paths, and all their subsequent DPs are also equal across machines. I.e. the server kills the kangaroo, but the "dead" kangaroo continues jumping on the client machine. Is that the case?
Agreed, the randomness is not so good: 1 rig with 8x 2080 Ti produces about 35000 dead kangaroos every 2 hours, no matter how many times the rig was restarted. Every time we get another 30-35k dead kangaroos.
I mean when merging files; on the client side there are zero dead kangaroos.

I mean that randomness could be only one part of these "dead kangaroos".
Another thing is that the server kills the kangaroos (while merging files) but does not give feedback to the clients, so the kangaroos continue their paths, and during the next communication with the server they are "killed" again, and so on.

Without feedback from the server these "zombie" kangaroos will never be killed, and will continue their useless jumping on the client machines. If the majority of a client machine's kangaroos become zombies, they will burn GPU resources and the real collision will not happen, or at least the probability of such a collision becomes much lower.
sr. member
Activity: 617
Merit: 312
May 26, 2020, 04:21:56 AM
45 million dead kangaroos...

Is it due to "bad" randomness (different machines creating the same kangaroos), or are these the same kangaroos killed several times during synchronization with the server? Since the server does not send back a signal to kill duplicate kangaroos on different machines, those kangaroos continue their paths, and all their subsequent DPs are also equal across machines. I.e. the server kills the kangaroo, but the "dead" kangaroo continues jumping on the client machine. Is that the case?
Agreed, the randomness is not so good: 1 rig with 8x 2080 Ti produces about 35000 dead kangaroos every 2 hours, no matter how many times the rig was restarted. Every time we get another 30-35k dead kangaroos.
I mean when merging files; on the client side there are zero dead kangaroos.
sr. member
Activity: 443
Merit: 350
May 26, 2020, 04:15:33 AM
Quote
How many jumps do you have in total at the moment for #110?
This is my -winfo from the fully merged job so far:


45 million dead kangaroos...

Is it due to "bad" randomness (different machines creating the same kangaroos), or are these the same kangaroos killed several times during synchronization with the server? Since the server does not send back a signal to kill duplicate kangaroos on different machines, those kangaroos continue their paths, and all their subsequent DPs are also equal across machines. I.e. the server kills the kangaroo, but the "dead" kangaroo continues jumping on the client machine. Is that the case?
sr. member
Activity: 617
Merit: 312
May 25, 2020, 11:59:53 PM
-snip-
Was trying to recompile because my Kangaroo.exe has a different name (I have a few that I use with different configs), but I keep getting errors such as:

Line 35: Constant not found: #STANDARD_RIGHTS_REQUIRED

Is it because I am using the free/demo version of PureBasic?
Which Windows version are you using?
You can delete the constants line #PROCESS_ALL_ACCESS = #STANDARD_RIGHTS_REQUIRED | #SYNCHRONIZE | $FFF
This constant is not used anywhere (probably left over from a previous project).

edit: in the worst case, the running time should be ~2*sqrt(N) group operations. I think that zielar has already done this, no?
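
For scale, a back-of-the-envelope estimate (assuming zielar's ~460000 Mkey/s aggregate rate quoted earlier in the thread; illustrative only):

Code:
# Time to do the expected ~2*sqrt(N) = 2^55.5 group operations for the
# 2^109 range at ~460000 Mkey/s aggregate (rate taken from the thread).
rate = 460_000e6        # keys/s
ops = 2 ** 55.5         # expected group operations
print(ops / rate / 86400, "days")   # ~1.3 days at full speed, ignoring DP overhead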
sr. member
Activity: 462
Merit: 696
May 25, 2020, 10:36:05 PM
Thank you very much for your commitment. Without your immediate action it would not have happened within a month :-) 3 hours have now passed without a restart, with 92 machines loaded, so I can confirm that the problem has been solved!

That's great!

On my side, the server ended correctly after 5h25 on the 80-bit test and the clients were closed correctly.

Code:
Kangaroo v1.5dbg
Start:B60E83280258A40F9CDF1649744D730D6E939DE92A2B00000000000000000000
Stop :B60E83280258A40F9CDF1649744D730D6E939DE92A2BE19BFFFFFFFFFFFFFFFF
Keys :1
Range width: 2^80
Expected operations: 2^41.05
Expected RAM: 344.2MB
DP size: 18 [0xFFFFC00000000000]
Kangaroo server is ready and listening to TCP port 17403 ...
[Client 3][DP Count 2^20.69/2^23.05][Dead 0][46:36][53.5/87.4MB]
New connection from 127.0.0.1:59904
[Client 4][DP Count 2^23.68/2^23.05][Dead 158][05:25:15][412.7/522.3MB]
Key# 0 [1S]Pub:  0x0284A930C243C0C2F67FDE3E0A98CE6DB0DB9AB5570DAD9338CADE6D181A431246
       Priv: 0xB60E83280258A40F9CDF1649744D730D6E939DE92A2BE19B0D19A3D64A1DE032

Closing connection with 172.24.9.18:51060

Closing connection with 172.24.9.18:51058

Closing connection with 127.0.0.1:59813

Closing connection with 127.0.0.1:59904
full member
Activity: 281
Merit: 114
May 25, 2020, 03:37:08 PM
I would be very happy if #110 is solved tomorrow :)

Thank you very much for your commitment. Without your immediate action it would not have happened within a month :-) 3 hours have now passed without a restart, with 92 machines loaded, so I can confirm that the problem has been solved!