Author

Topic: Pollard's kangaroo ECDLP solver - page 123. (Read 55445 times)

member
Activity: 144
Merit: 10
May 26, 2020, 09:35:18 AM
Quote
How many jumps do you have in total at the moment for #110?
This is my -winfo output from the fully merged job so far:


Quote
could you share your work file?

I don't see a problem ... As soon as I start working on #115 :-)

In all of the experiments I have done with Jean_Luc's software, I don't recall going that long (~4.6*sqrt(N)) without some sort of collision given a properly chosen DP mask; something might be wrong there. But what I find even more interesting is your choice of a very high 28-bit mask. How did you decide that this was the proper choice for your setup?

Out of courtesy, I will give you until this coming Sunday to finish. If not, I will enter the race with 128 RTX 2080 Tis and 2 TB of RAM (22-bit DP mask); with that setup, my worst case is five days, based on experimentation with my own implementation using distributed hash tables.

I wish you the best of luck and I hope that you succeed; we can learn a lot from your data about how to choose the optimal DP mask.
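For readers following the DP-mask discussion above, the rough arithmetic can be sketched as follows. This is a back-of-envelope estimate, not code from any of the implementations mentioned in the thread; the 40-bytes-per-DP hash-table entry size is an assumed figure.

```python
import math

def kangaroo_estimates(range_bits, dp_bits, bytes_per_dp=40):
    """Rough back-of-envelope numbers for an interval kangaroo run.

    range_bits  : log2 of the search interval N
    dp_bits     : number of leading zero bits in the DP mask
    bytes_per_dp: assumed hash-table entry size (an assumption, not a
                  value taken from any particular implementation)
    """
    expected_jumps = 2 * math.sqrt(2 ** range_bits)   # ~2*sqrt(N) group ops
    expected_dps = expected_jumps / 2 ** dp_bits      # points actually stored
    ram_bytes = expected_dps * bytes_per_dp
    return expected_jumps, expected_dps, ram_bytes

# Puzzle #110: a 109-bit interval, zielar's 28-bit DP mask
jumps, dps, ram = kangaroo_estimates(109, 28)
print(f"expected jumps: 2^{math.log2(jumps):.1f}")        # 2^55.5
print(f"expected stored DPs: 2^{math.log2(dps):.1f}")     # 2^27.5
print(f"approx hash-table RAM: {ram / 2**30:.1f} GiB")
```

A larger DP mask (more required zero bits) shrinks the hash table but raises the per-kangaroo overhead discussed later in the thread, which is the trade-off being debated here.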
full member
Activity: 277
Merit: 106
May 26, 2020, 09:25:47 AM
I have published 1.6.


Many thanks.
I will update the clients and the server after I return in about two hours, because I'm currently away. I'd appreciate tips and suggestions on how best to solve it: which startup command should I use for each client, and which one for the server?
sr. member
Activity: 462
Merit: 696
May 26, 2020, 09:01:01 AM
I have published 1.6.
sr. member
Activity: 462
Merit: 696
May 26, 2020, 07:53:38 AM
The total number of jumps is equal to the number of DPs * 2^dpbit => 2^(28+28.7) ≈ 2^56.7 (about 2 times more than expected).
But I don't know the total number of kangaroos needed to evaluate the overhead.
For the seed I will switch to a cryptographically secure RNG.

Kangaroos should collide after ~sqrt(n)/NumKangaroos jumps each. Otherwise something is very wrong.

Right, I mean the total number of jumps.
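The relationship Jean_Luc uses above (total jumps ≈ stored DPs × 2^dpbit, because on average one jump in 2^dpbit lands on a distinguished point) can be checked with a one-liner. The DP count 2^28.7 is the figure quoted in the thread.

```python
import math

# Total jumps ≈ (number of stored DPs) * 2^dp_bits, since on average
# one jump in 2^dp_bits lands on a distinguished point.
dp_bits = 28
dp_count = 2 ** 28.7          # DP count as read from -winfo in the thread

total_jumps = dp_count * 2 ** dp_bits
print(f"total jumps ≈ 2^{math.log2(total_jumps):.1f}")   # ≈ 2^56.7
```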
full member
Activity: 204
Merit: 437
May 26, 2020, 07:51:38 AM
Another thing: when you reach ~sqrt(n) jumps, kangaroos start to walk over their neighbors' paths.

Kangaroos should collide after ~sqrt(n)/NumKangaroos jumps each. Otherwise something is very wrong.
sr. member
Activity: 443
Merit: 350
May 26, 2020, 07:39:40 AM
Jean_Luc, how many jumps did zielar reach in his work? The screenshot shows the total count as 0 (2^-inf) - how can the real number of jumps be determined?

If 2 clients choose the same starting position, you have dead kangaroos from the beginning.
Here I need to improve the seed, which has one-second resolution, so if 2 clients start within the same second they will be identical and the dead count will increase at high speed immediately after they start.

For better randomness, some entropy should be added (for example entropy taken from the computer's digital fingerprint). Different machines will have different fingerprints.
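A minimal sketch of the kind of seeding being suggested here: mix the timestamp with per-machine entropy so that two clients started in the same second cannot derive identical kangaroos. This is illustrative only, not taken from the Kangaroo sources; the MAC address stands in for the "digital fingerprint".

```python
import hashlib
import os
import time
import uuid

def kangaroo_seed() -> int:
    """Derive a per-client seed that cannot collide merely because two
    clients start in the same second. Illustrative sketch, not code from
    the Kangaroo project."""
    material = b"".join([
        time.time_ns().to_bytes(8, "little"),   # nanosecond timestamp
        uuid.getnode().to_bytes(6, "little"),   # MAC address as machine fingerprint
        os.getpid().to_bytes(4, "little"),      # process id
        os.urandom(16),                         # OS CSPRNG output
    ])
    return int.from_bytes(hashlib.sha256(material).digest(), "little")

seed_a, seed_b = kangaroo_seed(), kangaroo_seed()
assert seed_a != seed_b   # even back-to-back calls on one machine differ
```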
sr. member
Activity: 462
Merit: 696
May 26, 2020, 07:22:16 AM
No, there is no feedback yet to eliminate dead kangaroos, and when 2 kangaroos collide within the same herd they start to produce a new dead kangaroo at each DP, but this is normal: only one DP is added to the hash table.

Another thing: when you reach ~sqrt(n) jumps, kangaroos start to walk over their neighbors' paths.

As there is no feedback, the number of dead kangaroos will increase a lot. At this stage, it is better to restart the clients.

If 2 clients choose the same starting position, you have dead kangaroos from the beginning.
Here I need to improve the seed, which has one-second resolution, so if 2 clients start within the same second they will be identical and the dead count will increase at high speed immediately after they start.

To evaluate the chance of falling into one of the traps above, you need to know the total number of kangaroos, which is also what determines the DP overhead.

I will also add a kangaroo counter to the server to help with this.

I will publish a release with the server fix, the -o option, an idle-client closure timeout of 1 hour, and the kangaroo counter.

The probability of finding the key within ~2*sqrt(n) jumps is roughly 50%, and should be roughly 80% at 4*sqrt(n) jumps (this does not take the DP overhead into consideration).

Edit: If you change NB_JUMP, you break compatibility with previous work files.
Edit2: If you decrease DPbit, you will still be able to use the previous work file; a DP with 20 leading 0 bits is also a DP with 18 leading 0 bits...
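The DP overhead Jean_Luc refers to can be sketched numerically: once the solving collision has occurred, each kangaroo still walks an average of 2^DPbit further jumps before it emits the distinguished point that reveals the collision, so roughly NumKangaroos * 2^DPbit jumps are wasted. A heuristic estimate, assuming a hypothetical kangaroo count of 2^21 (the real count is exactly what the new server counter is meant to report):

```python
import math

def dp_overhead_fraction(range_bits, dp_bits, nb_kangaroos):
    """Estimate how much the DP method inflates the expected work.

    After the solving collision, each kangaroo walks on average a further
    2^dp_bits jumps before landing on a DP, so about nb_kangaroos * 2^dp_bits
    jumps are wasted. Heuristic sketch based on the discussion above, not
    code from the Kangaroo project.
    """
    expected_jumps = 2 * math.sqrt(2 ** range_bits)
    overhead_jumps = nb_kangaroos * 2 ** dp_bits
    return overhead_jumps / expected_jumps

# Hypothetical setup: 109-bit range, 28-bit DP mask, ~2^21 kangaroos
frac = dp_overhead_fraction(109, 28, 2 ** 21)
print(f"DP overhead ≈ {frac:.1%} of the expected 2*sqrt(N) work")
```

The fraction grows linearly with both the kangaroo count and 2^dp_bits, which is why a high DP mask combined with a large GPU fleet can become significant even though each factor alone looks harmless.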

full member
Activity: 277
Merit: 106
May 26, 2020, 06:24:16 AM
Here is the latest state of my work:


And a few questions:
1. Someone mentioned a percentage chance of not finding the correct key. What percentage is it?
2. How can I increase the likelihood of finding the correct key? By reducing "dp"?
3. Does increasing NB_JUMP have any effect on the results? If so, to what value?
4. Does increasing NB_RUN have any effect on the results? If so, to what value?
5. Is there a known minimum DP_COUNT threshold at which I should expect the key, or can it appear at any time?
6. ... or should I no longer expect it at all?
7. After analyzing the posts above: is the application written so as to actually eliminate "empty" jumps [i.e., ruling out the possibility that two clients make the same jumps, or that jumps which have already taken place are repeated]?

I am asking because I want to assess whether it makes sense to continue the work without making any changes.

P.S. I start the server without -i save.work, because each restart that occurs is the result of my manually shutting it down to change the progress save file once it exceeds 1 GB, and I save progress every 600 seconds. Since yesterday it has not disappointed.
sr. member
Activity: 443
Merit: 350
May 26, 2020, 05:50:51 AM
45 million dead kangaroos...

Is it due to "bad" randomness (different machines creating the same kangaroos), or are these the same kangaroos being killed several times during synchronization with the server? As the server does not send back a signal to kill the duplicate kangaroos on the different machines, those kangaroos continue their paths, and all their subsequent DPs are also equal across machines. I.e., the server kills the kangaroo, but the "dead" kangaroo continues jumping on the client machine. Is that the case?
Agreed, the randomness is not so good: 1 rig with 8x 2080 Ti produces about 35,000 dead kangaroos every 2 hours, no matter how many times the rig is restarted. Every time I get +30-35k dead kangaroos.
I mean when merging files; on the client side there are zero dead kangaroos.

I mean that the randomness could be only one part of these "dead kangaroos".
Another issue is that the server kills the kangaroos (while merging files) but does not give feedback to the clients, so the kangaroos continue their paths; during the next communication with the server they are "killed" again, and so on.

Without feedback from the server these "zombie" kangaroos will never be killed and will continue their useless jumping on the client machines. If the majority of a client machine's kangaroos become zombies, they will burn GPU resources and the real collision will not happen, or at least the probability of such a collision becomes much lower.
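The kill-feedback loop being asked for here could work roughly as follows: clients tag each reported DP with a kangaroo id, and the server's merge step classifies every incoming DP. A hypothetical sketch of the merge logic, not the real app's wire format or code; in the kangaroo method a repeated DP from the same herd means a duplicate walker, while a match across herds is the collision that solves the key.

```python
def merge_dp(table: dict, dp_x: int, herd: str, kid: int) -> str:
    """Classify one reported DP: 'new', 'kill', or 'solved'.

    table maps dp_x -> (herd, kangaroo_id). A repeated DP from the same
    herd marks a dead/duplicate walker the client should restart; a match
    from the opposite herd is the tame/wild collision that solves the key.
    Hypothetical protocol sketch, not the Kangaroo project's actual code.
    """
    if dp_x not in table:
        table[dp_x] = (herd, kid)
        return "new"
    prev_herd, _ = table[dp_x]
    return "solved" if prev_herd != herd else "kill"

table = {}
print(merge_dp(table, 0x3F0000, "tame", 1))   # new
print(merge_dp(table, 0x3F0000, "tame", 7))   # kill: same-herd duplicate
print(merge_dp(table, 0x3F0000, "wild", 2))   # solved: tame/wild collision
```

The server would batch the "kill" ids back in its reply so the client restarts those walkers from fresh positions instead of letting them jump on as zombies.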
sr. member
Activity: 616
Merit: 312
May 26, 2020, 05:21:56 AM
45 million dead kangaroos...

Is it due to "bad" randomness (different machines creating the same kangaroos), or are these the same kangaroos being killed several times during synchronization with the server? As the server does not send back a signal to kill the duplicate kangaroos on the different machines, those kangaroos continue their paths, and all their subsequent DPs are also equal across machines. I.e., the server kills the kangaroo, but the "dead" kangaroo continues jumping on the client machine. Is that the case?
Agreed, the randomness is not so good: 1 rig with 8x 2080 Ti produces about 35,000 dead kangaroos every 2 hours, no matter how many times the rig is restarted. Every time I get +30-35k dead kangaroos.
I mean when merging files; on the client side there are zero dead kangaroos.
sr. member
Activity: 443
Merit: 350
May 26, 2020, 05:15:33 AM
Quote
How many jumps do you have in total at the moment for #110?
This is my -winfo output from the fully merged job so far:


45 million dead kangaroos...

Is it due to "bad" randomness (different machines creating the same kangaroos), or are these the same kangaroos being killed several times during synchronization with the server? As the server does not send back a signal to kill the duplicate kangaroos on the different machines, those kangaroos continue their paths, and all their subsequent DPs are also equal across machines. I.e., the server kills the kangaroo, but the "dead" kangaroo continues jumping on the client machine. Is that the case?
sr. member
Activity: 616
Merit: 312
May 26, 2020, 12:59:53 AM
-snip-
I was trying to recompile because my Kangaroo.exe has a different name (I have a few that I use with different configs), but I keep getting errors such as:

Line 35: Constant not found: #STANDARD_RIGHTS_REQUIRED

Is it because I am using the free/demo version of purebasic?
What Windows version are you using?
You can delete this constants line: #PROCESS_ALL_ACCESS = #STANDARD_RIGHTS_REQUIRED | #SYNCHRONIZE | $FFF
This constant is not used anywhere (probably left over from a previous project).

edit: in the worst case the running time should be 2√N group operations. I think zielar has already done this, no?
sr. member
Activity: 462
Merit: 696
May 25, 2020, 11:36:05 PM
Thank you very much for your commitment. Without your immediate action it would not have happened within a month :-) 3 hours have just passed without a restart with 92 machines loaded, so I can confirm that the problem has been solved!

That's great !

On my side the server ends correctly after 5h25 on the 80bit test and the clients were correctly closed.

Code:
Kangaroo v1.5dbg
Start:B60E83280258A40F9CDF1649744D730D6E939DE92A2B00000000000000000000
Stop :B60E83280258A40F9CDF1649744D730D6E939DE92A2BE19BFFFFFFFFFFFFFFFF
Keys :1
Range width: 2^80
Expected operations: 2^41.05
Expected RAM: 344.2MB
DP size: 18 [0xFFFFC00000000000]
Kangaroo server is ready and listening to TCP port 17403 ...
[Client 3][DP Count 2^20.69/2^23.05][Dead 0][46:36][53.5/87.4MB]
New connection from 127.0.0.1:59904
[Client 4][DP Count 2^23.68/2^23.05][Dead 158][05:25:15][412.7/522.3MB]
Key# 0 [1S]Pub:  0x0284A930C243C0C2F67FDE3E0A98CE6DB0DB9AB5570DAD9338CADE6D181A431246
       Priv: 0xB60E83280258A40F9CDF1649744D730D6E939DE92A2BE19B0D19A3D64A1DE032

Closing connection with 172.24.9.18:51060

Closing connection with 172.24.9.18:51058

Closing connection with 127.0.0.1:59813

Closing connection with 127.0.0.1:59904
full member
Activity: 277
Merit: 106
May 25, 2020, 04:37:08 PM
I would be very happy if #110 is solved tomorrow Smiley

Thank you very much for your commitment. Without your immediate action it would not have happened within a month :-) 3 hours have just passed without a restart with 92 machines loaded, so I can confirm that the problem has been solved!
full member
Activity: 1050
Merit: 219
Shooters Shoot...
May 25, 2020, 04:36:17 PM
This is my server/client app.
https://drive.google.com/open?id=1pnMcVPEV8b-cJszBiQKcZ6_AHIScgUO8
Both app work only on Windows x64!
In the archive, there are both compiled files, source codes and example .bat files.
So you can compile the executable yourself or use the ready-made one.
It is example of bat file to start server:
Code:
REM puzzle #110(109bit)

SET dpsize=31
SET wi=7200
SET beginrange=2000000000000000000000000000
SET endrange=3fffffffffffffffffffffffffff
SET pub=0309976ba5570966bf889196b7fdf5a0f9a1e9ab340556ec29f8bb60599616167d
SET workfile=savework
serverapp.exe -workfile %workfile% -dp %dpsize% -wi %wi% -beginrange %beginrange% -endrange %endrange% -pub %pub%
pause
-workfile  - the filename of your master file, into which all clients' work is merged
-wi        - the job saving interval for the client; 7200 means the client will save its work every 2h and send it to the server.
             Do not set this value too small; the client must have time to send its work before the next save is due.
Note! If you use an already existing master file, work only on a copy of it and keep the original master file in a safe place!!!

It is example of bat file to start client:
Code:
clientapp.exe -name rig1 -pool 127.0.0.1:8000 -t 0 -gpu -gpuId 0
pause
-name   - the name of your rig, just for stats
-pool   - server address:port
-gpuId  - set according to how many GPUs the rig has (e.g. -gpuId 0,1,2,3 for 4 GPUs)

Note! Before using the app, make sure you have good internet bandwidth, because the client will send BIG files (which also contain the kangaroos)!
When a client connects for the first time, it gets the job parameters from the server (dpsize, wi, beginrange, endrange, pub).
You can see the downloaded parameters in the client app console.
After a client sends its work to the server, the server merges it into the master file and checks for collisions during the merge.
If the server or a client solves the key, the server app will create a log file with a dump of the private key (the same as in the server console).
It is also possible to get a Telegram notification when the key is solved, but I don't think that is needed.
Try the server and client apps on a small range first to make sure you're doing everything right.

I was trying to recompile because my Kangaroo.exe has a different name (I have a few that I use with different configs), but I keep getting errors such as:

Line 35: Constant not found: #STANDARD_RIGHTS_REQUIRED

Is it because I am using the free/demo version of purebasic?
sr. member
Activity: 462
Merit: 696
May 25, 2020, 04:18:12 PM
I would be very happy if #110 is solved tomorrow Smiley
full member
Activity: 277
Merit: 106
May 25, 2020, 03:55:51 PM
Quote
How many jumps do you have in total at the moment for #110?
This is my -winfo output from the fully merged job so far:


Quote
could you share your work file?

I don't see a problem ... As soon as I start working on #115 :-)
full member
Activity: 427
Merit: 105
May 25, 2020, 03:42:41 PM
could you share your work file?
 Wink Wink
sr. member
Activity: 443
Merit: 350
May 25, 2020, 03:33:06 PM
Success! 01:40:00 has passed since the server was started and I haven't had a single restart!!!

How many jumps do you have in total at the moment for #110?
full member
Activity: 277
Merit: 106
May 25, 2020, 03:21:59 PM
Success! 01:40:00 has passed since the server was started and I haven't had a single restart!!!