Author

Topic: Pollard's kangaroo ECDLP solver - page 129. (Read 59389 times)

full member
Activity: 282
Merit: 114
May 25, 2020, 02:52:00 AM
This is my server/client app.
https://drive.google.com/open?id=1pnMcVPEV8b-cJszBiQKcZ6_AHIScgUO8
Both apps work only on Windows x64!
The archive contains the compiled binaries, the source code, and example .bat files, so you can compile the executables yourself or use the ready-made ones.
Here is an example .bat file to start the server:
Code:
REM puzzle #110(109bit)

SET dpsize=31
SET wi=7200
SET beginrange=2000000000000000000000000000
SET endrange=3fffffffffffffffffffffffffff
SET pub=0309976ba5570966bf889196b7fdf5a0f9a1e9ab340556ec29f8bb60599616167d
SET workfile=savework
serverapp.exe -workfile %workfile% -dp %dpsize% -wi %wi% -beginrange %beginrange% -endrange %endrange% -pub %pub%
pause
-workfile  - the filename of your master file, into which all clients' work is merged
-wi          - the job-saving interval for the client; 7200 means the client will save its work every 2 hours and send it to the server.
              Do not set this value too small: the client must have time to send its work before the next save is due.
Note! If you use an already existing master file, work only with a copy of it and keep the original master file in a safe place!

Here is an example .bat file to start the client:
Code:
clientapp.exe -name rig1 -pool 127.0.0.1:8000 -t 0 -gpu -gpuId 0
pause
-name  - the name of your rig, used only for stats
-pool    - server address:port
-gpuId  - list the GPUs on your rig (e.g. -gpuId 0,1,2,3 for 4 GPUs)

Note! Before using the app, make sure you have good internet bandwidth, because the client sends BIG files (which also contain the kangaroos)!
When a client connects for the first time, it gets the job parameters from the server (dpsize, wi, beginrange, endrange, pub).
You can see the downloaded parameters in the client app console.
After a client sends its work to the server, the server merges it into the master file and checks for collisions during the merge.
If the server or a client solves the key, the server app creates a log file with a dump of the private key (the same as in the server console).
It would also be possible to send a Telegram notification when the key is solved, but I don't think that is needed.
Try the server and client apps on a small range first to make sure you're doing everything right.
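The merge-and-collision-check step described above can be pictured with a toy model. This is illustrative only: the real app stores DPs in a large binary hashtable, and the names and structures below are not the actual file format.

```python
# Toy model of merging a client's distinguished points (DPs) into a
# master table while checking for a tame/wild collision.  A dict keyed
# by the DP's x-coordinate stands in for the real binary hashtable.

def merge_and_check(master, client_dps):
    """master: {x: (distance, kind)}; client_dps: list of (x, distance, kind)
    where kind is 'tame' or 'wild'.  Returns a collision tuple or None."""
    for x, dist, kind in client_dps:
        if x in master:
            old_dist, old_kind = master[x]
            if old_kind != kind:
                # A tame and a wild kangaroo met at the same point:
                # the private key follows from the two distances.
                return (x, old_dist, dist)
        else:
            master[x] = (dist, kind)
    return None
```

A tame/tame or wild/wild hit at the same point carries no key information, which is why only opposite kinds count as a collision here.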

Looks awesome! Can one also set the grid size with the -g option on the client side?

Thank you very much! I appreciate the help!
Should I understand that this requires new clients to be started?
At this point, it probably wouldn't be the best idea to start a new client run from the beginning :-)
sr. member
Activity: 642
Merit: 316
May 25, 2020, 02:33:54 AM
-snip-

Looks awesome! Can one also set the grid size with the -g option on the client side?
All parameters that you use with kangaroo.exe can also be used with clientapp, such as -g and so on.
The client app passes all these parameters to kangaroo.exe and launches kangaroo.exe as a child process.
Only -d, -w, -wi, -ws, -wt and the input file are set by the server; everything else you can set as you wish.
I use this bat for my local RTX:
Code:
clientapp.exe -name 1x2080ti -pool 127.0.0.1:8000 -t 0 -g 136,256 -gpu -gpuId 0
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 25, 2020, 02:27:41 AM
This is my server/client app.
https://drive.google.com/open?id=1pnMcVPEV8b-cJszBiQKcZ6_AHIScgUO8
Both apps work only on Windows x64!
The archive contains the compiled binaries, the source code, and example .bat files, so you can compile the executables yourself or use the ready-made ones.
Here is an example .bat file to start the server:
Code:
REM puzzle #110(109bit)

SET dpsize=31
SET wi=7200
SET beginrange=2000000000000000000000000000
SET endrange=3fffffffffffffffffffffffffff
SET pub=0309976ba5570966bf889196b7fdf5a0f9a1e9ab340556ec29f8bb60599616167d
SET workfile=savework
serverapp.exe -workfile %workfile% -dp %dpsize% -wi %wi% -beginrange %beginrange% -endrange %endrange% -pub %pub%
pause
-workfile  - the filename of your master file, into which all clients' work is merged
-wi          - the job-saving interval for the client; 7200 means the client will save its work every 2 hours and send it to the server.
              Do not set this value too small: the client must have time to send its work before the next save is due.
Note! If you use an already existing master file, work only with a copy of it and keep the original master file in a safe place!

Here is an example .bat file to start the client:
Code:
clientapp.exe -name rig1 -pool 127.0.0.1:8000 -t 0 -gpu -gpuId 0
pause
-name  - the name of your rig, used only for stats
-pool    - server address:port
-gpuId  - list the GPUs on your rig (e.g. -gpuId 0,1,2,3 for 4 GPUs)

Note! Before using the app, make sure you have good internet bandwidth, because the client sends BIG files (which also contain the kangaroos)!
When a client connects for the first time, it gets the job parameters from the server (dpsize, wi, beginrange, endrange, pub).
You can see the downloaded parameters in the client app console.
After a client sends its work to the server, the server merges it into the master file and checks for collisions during the merge.
If the server or a client solves the key, the server app creates a log file with a dump of the private key (the same as in the server console).
It would also be possible to send a Telegram notification when the key is solved, but I don't think that is needed.
Try the server and client apps on a small range first to make sure you're doing everything right.

Looks awesome! Can one also set the grid size with the -g option on the client side?
sr. member
Activity: 642
Merit: 316
May 25, 2020, 01:33:28 AM
This is my server/client app.
https://drive.google.com/open?id=1pnMcVPEV8b-cJszBiQKcZ6_AHIScgUO8
Both apps work only on Windows x64!
The archive contains the compiled binaries, the source code, and example .bat files, so you can compile the executables yourself or use the ready-made ones.
Here is an example .bat file to start the server:
Code:
REM puzzle #110(109bit)

SET dpsize=31
SET wi=7200
SET beginrange=2000000000000000000000000000
SET endrange=3fffffffffffffffffffffffffff
SET pub=0309976ba5570966bf889196b7fdf5a0f9a1e9ab340556ec29f8bb60599616167d
SET workfile=savework
serverapp.exe -workfile %workfile% -dp %dpsize% -wi %wi% -beginrange %beginrange% -endrange %endrange% -pub %pub%
pause
-workfile  - the filename of your master file, into which all clients' work is merged
-wi          - the job-saving interval for the client; 7200 means the client will save its work every 2 hours and send it to the server.
              Do not set this value too small: the client must have time to send its work before the next save is due.
Note! If you use an already existing master file, work only with a copy of it and keep the original master file in a safe place!

Here is an example .bat file to start the client:
Code:
clientapp.exe -name rig1 -pool 127.0.0.1:8000 -t 0 -gpu -gpuId 0
pause
-name  - the name of your rig, used only for stats
-pool    - server address:port
-gpuId  - list the GPUs on your rig (e.g. -gpuId 0,1,2,3 for 4 GPUs)

Note! Before using the app, make sure you have good internet bandwidth, because the client sends BIG files (which also contain the kangaroos)!
When a client connects for the first time, it gets the job parameters from the server (dpsize, wi, beginrange, endrange, pub).
You can see the downloaded parameters in the client app console.
After a client sends its work to the server, the server merges it into the master file and checks for collisions during the merge.
If the server or a client solves the key, the server app creates a log file with a dump of the private key (the same as in the server console).
It would also be possible to send a Telegram notification when the key is solved, but I don't think that is needed.
Try the server and client apps on a small range first to make sure you're doing everything right.
full member
Activity: 282
Merit: 114
May 24, 2020, 11:46:02 PM
My server is secure. I also checked the logs and I don't have any unknown IPs. Everyone else has the same problem, so that's not the cause.

I don't think the problem comes from the network. When you restart the server, all 92 clients reconnect fine, then the server works for a bit and crashes.
I will run further tests and try to reproduce the issue.

[DP COUNT 2^27.60 / 2^27.55] - I don't understand again ... (?)

Yes, as said above, it is an estimate, and this estimate does not take into consideration the overhead of the DP method, which is linked to the total number of kangaroos. The server does not know how many kangaroos are working.
And if you restart all clients, it is like multiplying the number of kangaroos by 2 and then stopping half of them, which creates an overhead. I will also add support for -ws for clients.


OK. Many thanks for the fast release of a version with improvements!
sr. member
Activity: 462
Merit: 701
May 24, 2020, 11:26:49 PM
My server is secure. I also checked the logs and I don't have any unknown IPs. Everyone else has the same problem, so that's not the cause.

I don't think the problem comes from the network. When you restart the server, all 92 clients reconnect fine, then the server works for a bit and crashes.
I will run further tests and try to reproduce the issue.

[DP COUNT 2^27.60 / 2^27.55] - I don't understand again ... (?)

Yes, as said above, it is an estimate, and this estimate does not take into consideration the overhead of the DP method, which is linked to the total number of kangaroos. The server does not know how many kangaroos are working.
And if you restart all clients, it is like multiplying the number of kangaroos by 2 and then stopping half of them, which creates an overhead. I will also add support for -ws for clients.
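The estimate under discussion can be sketched roughly. This is a hedged approximation based on the classic kangaroo expectation, about 2·sqrt(N) group operations plus a DP overhead of roughly one average walk (2^dp steps) per kangaroo; the exact formula used by the solver may differ.

```python
import math

def expected_ops(range_bits, dp_bits, nb_kangaroos):
    """Rough expected number of group operations for the kangaroo method
    with distinguished points: ~2*sqrt(N) for the search itself plus
    ~2^dp steps of overhead per kangaroo (one average walk each).
    The server cannot refine this, since it does not know nb_kangaroos."""
    n = 2 ** range_bits
    return 2 * math.sqrt(n) + nb_kangaroos * 2 ** dp_bits
```

This also shows why restarting all clients hurts: doubling the kangaroo count doubles the overhead term without advancing the sqrt(N) part.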
jr. member
Activity: 91
Merit: 3
May 24, 2020, 11:06:03 PM
Try merging your saved files, or make a second run from the beginning and then merge.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 10:21:18 PM
Quote
Is your server on a local network or open to the world?
If it is open, try to check all the connections and make sure they are all made from your IPs.

The GPU solver's server has no authentication, and there are a lot of IP/port scanner web tools. So some other users (not you) or scanners may be causing your server to restart.
This is just an assumption; I am not sure.
My server is secure. I also checked the logs and I don't have any unknown IPs. Everyone else has the same problem, so that's not the cause.

----
And now another question:

[DP COUNT 2^27.60 / 2^27.55] - I don't understand again ... (?)


Remember, it's an "expected" value, an estimate... you could solve it in 2^25 or 2^29... or there's the chance of not finding it at all. I can't remember the probability of not finding it at all, but it's in the code somewhere.

This isn't straightforward like Bitcrack.

Hopefully you solve it soon!
full member
Activity: 282
Merit: 114
May 24, 2020, 09:56:36 PM
Quote
Is your server on a local network or open to the world?
If it is open, try to check all the connections and make sure they are all made from your IPs.

The GPU solver's server has no authentication, and there are a lot of IP/port scanner web tools. So some other users (not you) or scanners may be causing your server to restart.
This is just an assumption; I am not sure.
My server is secure. I also checked the logs and I don't have any unknown IPs. Everyone else has the same problem, so that's not the cause.

----
And now another question:

[DP COUNT 2^27.60 / 2^27.55] - I don't understand again ... (?)
sr. member
Activity: 443
Merit: 350
May 24, 2020, 07:33:31 PM
-snip-
- The running server restarts at random times within 3-15 minutes
-snip-

Is your server on a local network or open to the world?
If it is open, try to check all the connections and make sure they are all made from your IPs.

The GPU solver's server has no authentication, and there are a lot of IP/port scanner web tools. So some other users (not you) or scanners may be causing your server to restart.
This is just an assumption; I am not sure.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 06:03:20 PM
It just happens that it restarts BEFORE saving - and that's why this is a PROBLEM. Maybe I don't understand something here, because by my logic, using a new work file means starting the work from the beginning...?

Maybe I don't understand the solver. I thought a DP was a DP regardless of how/where/when it was found (as long as it's the same size, i.e. both work files have a DP size of 28). Whether a DP is found at the end of a 6 GB file or at the beginning of a 1 GB file doesn't matter. That's the point of the merge file: to check all DPs and see if there is a collision.

Think about this. Let's say you have zero issues with the server and clients and they run straight for 50 days, but then a power outage happens. You restart your program, but new DPs found with different kangaroo starting points will go into the saved work file. Do you think those DPs found at new starting points don't count, or make you start from scratch? All the kangaroos will be different, i.e. have different starting points...

I have merged many files into one, with restarts. If this doesn't act the same as one long-running file, I am screwed and will never find the key :)

But to answer you: for smaller-range keys (56-80 bits), I have stopped, started, and merged many files, and it still found the key. So hopefully the same will be true for 110 bits.
full member
Activity: 282
Merit: 114
May 24, 2020, 05:51:07 PM
It just happens that it restarts BEFORE saving - and that's why this is a PROBLEM. Maybe I don't understand something here, because by my logic, using a new work file means starting the work from the beginning...?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 05:31:45 PM
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option resulted in no record of progress, but did not produce any results.

Why don't you just start a new save-work file, so it doesn't take so long to save/read/rewrite, and then just merge your work files? This is your best option.

I don't quite understand...
A total of ~90 separate systems are working on the solution, and they are all connected to one server whose progress-saving interval is set to 5 minutes.
In the current phase:
- The file with saved progress takes 5 GB
- Reading/writing the progress file takes ~30 seconds each way, or one minute in total
- The running server restarts at random times within 3-15 minutes

I don't see how, in my case, any combination of joining files is the best option?

P.S. It just occurred to me :-) - does this problem also exist on LINUX?

Maybe I'm not following you. Earlier you said you had the file saving every 10 minutes and the server restarting more often than that, so it made sense why your work file was not growing with saved DPs. A 1-minute read/write is a lot when you are getting higher up. If you start a new save file and reduce the read/write to ~0s, you get more work done. For example, if you are saving every 5 minutes and it takes 1 minute to read/write, then you are losing 12 minutes of work time every hour, as opposed to less than a minute with a smaller work file. I don't know; again, maybe I'm not tracking.
Also, if your server restarts every 3-15 minutes, who's to say it doesn't restart just prior to saving?
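The 12-minutes-per-hour figure follows from simple arithmetic; a small sketch (illustrative only):

```python
def minutes_lost_per_hour(save_interval_min, rw_min):
    """Minutes per hour spent reading/writing the work file
    instead of searching: one read/write per save interval."""
    saves_per_hour = 60 / save_interval_min
    return saves_per_hour * rw_min
```

With a 5-minute interval and a 1-minute read/write, minutes_lost_per_hour(5, 1) gives 12.0, matching the figure above; starting a fresh, small work file drives rw_min toward zero.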
full member
Activity: 282
Merit: 114
May 24, 2020, 04:53:56 PM
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option resulted in no record of progress, but did not produce any results.

Why don't you just start a new save-work file, so it doesn't take so long to save/read/rewrite, and then just merge your work files? This is your best option.

I don't quite understand...
A total of ~90 separate systems are working on the solution, and they are all connected to one server whose progress-saving interval is set to 5 minutes.
In the current phase:
- The file with saved progress takes 5 GB
- Reading/writing the progress file takes ~30 seconds each way, or one minute in total
- The running server restarts at random times within 3-15 minutes

I don't see how, in my case, any combination of joining files is the best option?

P.S. It just occurred to me :-) - does this problem also exist on LINUX?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 04:42:34 PM
So we all have no chance again;
we can close this thread now. I guess, after all, finding money is a job.

This topic is not about making money, but about the ability to break down a difficult task like finding the key to puzzle #110,
which before this was extremely difficult to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one is forbidden from joining the search for the key. Why should each of us separately spend a bunch of electricity and money, when we can combine the work and do it much more efficiently?
We could create a common pool, solve the problem together, and divide the proceeds in proportion to the work performed.

I like the idea. How would the division of proceeds work?

If nothing else, settle on a DP size to use and share equal-sized work files each day. So if the work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more, say 20 MB, then they would get 20 percent of the prize. The only problem with this method is trust.

But I like the concept and would be willing to join.
Not the size of the work files, but the size of the DP counter:
check the master file DP counter > merge the client's job > check the master file DP counter again > save the difference to the client's account.
When the key is solved, client % = client account dpcounter * 100 / masterfile dpcounter.

Makes sense. What do we need to set it up?
sr. member
Activity: 642
Merit: 316
May 24, 2020, 04:20:24 PM
So we all have no chance again;
we can close this thread now. I guess, after all, finding money is a job.

This topic is not about making money, but about the ability to break down a difficult task like finding the key to puzzle #110,
which before this was extremely difficult to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one is forbidden from joining the search for the key. Why should each of us separately spend a bunch of electricity and money, when we can combine the work and do it much more efficiently?
We could create a common pool, solve the problem together, and divide the proceeds in proportion to the work performed.

I like the idea. How would the division of proceeds work?

If nothing else, settle on a DP size to use and share equal-sized work files each day. So if the work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more, say 20 MB, then they would get 20 percent of the prize. The only problem with this method is trust.

But I like the concept and would be willing to join.
Not the size of the work files, but the size of the DP counter:
check the master file DP counter > merge the client's job > check the master file DP counter again > save the difference to the client's account.
When the key is solved, client % = client account dpcounter * 100 / masterfile dpcounter.
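The proposed accounting could be sketched like this (a hypothetical helper, not part of the actual app):

```python
def client_share_percent(dp_before, dp_after, master_total):
    """Credit a client with the DPs its merge added to the master file
    (master counter before vs. after the merge), then express that as a
    percentage of the master DP count at solve time."""
    added = dp_after - dp_before
    return added * 100 / master_total
```

For example, a client whose merges raised the master counter from 90 to 110 DPs, out of a final total of 200, would get client_share_percent(90, 110, 200) = 10.0 percent.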
jr. member
Activity: 30
Merit: 122
May 24, 2020, 04:18:35 PM
So we all have no chance again;
we can close this thread now. I guess, after all, finding money is a job.

This topic is not about making money, but about the ability to break down a difficult task like finding the key to puzzle #110,
which before this was extremely difficult to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one is forbidden from joining the search for the key. Why should each of us separately spend a bunch of electricity and money, when we can combine the work and do it much more efficiently?
We could create a common pool, solve the problem together, and divide the proceeds in proportion to the work performed.

I like the idea. How would the division of proceeds work?

If nothing else, settle on a DP size to use and share equal-sized work files each day. So if the work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more, say 20 MB, then they would get 20 percent of the prize. The only problem with this method is trust.

But I like the concept and would be willing to join.

A distributed project like this is probably necessary if we want to solve the puzzles in the 130-160 bit range.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 04:10:12 PM
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option resulted in no record of progress, but did not produce any results.

Why don't you just start a new save-work file, so it doesn't take so long to save/read/rewrite, and then just merge your work files? This is your best option.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 04:08:29 PM
So we all have no chance again;
we can close this thread now. I guess, after all, finding money is a job.

This topic is not about making money, but about the ability to break down a difficult task like finding the key to puzzle #110,
which before this was extremely difficult to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one is forbidden from joining the search for the key. Why should each of us separately spend a bunch of electricity and money, when we can combine the work and do it much more efficiently?
We could create a common pool, solve the problem together, and divide the proceeds in proportion to the work performed.

I like the idea. How would the division of proceeds work?

If nothing else, settle on a DP size to use and share equal-sized work files each day. So if the work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more, say 20 MB, then they would get 20 percent of the prize. The only problem with this method is trust.

But I like the concept and would be willing to join.
full member
Activity: 282
Merit: 114
May 24, 2020, 03:39:02 PM
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option resulted in no record of progress, but did not produce any results.