
Topic: Pollard's kangaroo ECDLP solver - page 133. (Read 59389 times)

sr. member
Activity: 642
Merit: 316
May 23, 2020, 07:10:02 AM
Hello,
I am kindly asking for help writing a .bat file that will automatically restart the server process after it shuts down. The problem is so frequent that I would have to watch it around the clock.
I offer a reward for this help if I manage to solve #110.

Server executable name: server.exe
DP value: 28

Referring to earlier guesses: the reason the server shuts down is not RAM, because the machine has 256 GB of RAM, so that cannot be the cause.
You can use my launcher https://drive.google.com/open?id=1esYd5WFHnVjvSOj94rIzyFZZawj2QBRA
Do not forget to test the file with an antivirus.
This launcher is very simple: it just launches kangaroo.exe, waits until it quits, and then restarts it. You can see how many restarts have already been done.
Drop this file into your server folder and replace kangaroo.exe with launcherkangaroo.exe in your .bat file.
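For anyone who would rather script the watchdog themselves, here is a minimal cross-platform sketch in Python rather than a .bat; the command line shown in the usage comment (server.exe and its flags) is a placeholder for your actual launch command:

```python
import subprocess
import time

def keep_alive(cmd, max_restarts=None, delay=1.0):
    """Run cmd and relaunch it whenever it exits.

    max_restarts caps the loop (handy for testing); None runs forever.
    Returns the number of launches performed.
    """
    launches = 0
    while True:
        proc = subprocess.run(cmd)
        launches += 1
        print(f"exit code {proc.returncode}; launch #{launches}")
        if max_restarts is not None and launches >= max_restarts:
            return launches
        time.sleep(delay)  # brief pause so a crash loop doesn't spin

# usage (hypothetical command line):
# keep_alive(["server.exe", "-d", "28", "-w", "saved.work", "-wi", "300"])
```

The same idea in batch is a `:loop` label, a blocking `start /wait server.exe ...`, and a `goto loop`.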
full member
Activity: 282
Merit: 114
May 23, 2020, 07:01:55 AM
Hello,
I am kindly asking for help writing a .bat file that will automatically restart the server process after it shuts down. The problem is so frequent that I would have to watch it around the clock.
I offer a reward for this help if I manage to solve #110.

Server executable name: server.exe
DP value: 28

Referring to earlier guesses: the reason the server shuts down is not RAM, because the machine has 256 GB of RAM, so that cannot be the cause.

---
EDIT: Something is wrong! Look at this dump from a few seconds ago:
Code:
Unexpected wrong collision, reset kangaroo !
Found: Td-73xxxxxxxxx57E8567EB949E04A8
Found: Wd-34xxxxxxxxxE8567EB949E04A8

What does it mean?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 23, 2020, 04:19:16 AM
I'm not saying this solver doesn't work... it does its job and does it well! I am just trying to figure out the trade-off between time and storage. I just didn't see any, and I know that's based on "expected" results. The theory behind time and storage makes sense in general. But if you go with -dp 31, your storage is 1.6 GB, as opposed to, say, -dp 25, which is 60 GB. 2^(55-31) = 2^24 stored points; 2^(55-25) = 2^30. How do I know which one will be faster? Or are they really the same as far as solve time, and one just takes less storage than the other?
Faster comparison of less data. It is not speed that matters, but the probability of a collision. In this table, all jumps are approximately the same size; perhaps for this reason the DP size does not have much effect. I tested with the same settings, but there has never been a solution with less than 50% of the expected ops. You can change the table and compare the results. https://bitcointalksearch.org/topic/pollards-kangaroo-ecdlp-solver-5244940

Ok, I figured out how to see the difference; my NB_JUMP has to be 32 or less, and I can then see about a 2^7 difference in avg jump distance at the 40-bit range. I will test and compare later today.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 23, 2020, 04:03:04 AM
I'm not saying this solver doesn't work... it does its job and does it well! I am just trying to figure out the trade-off between time and storage. I just didn't see any, and I know that's based on "expected" results. The theory behind time and storage makes sense in general. But if you go with -dp 31, your storage is 1.6 GB, as opposed to, say, -dp 25, which is 60 GB. 2^(55-31) = 2^24 stored points; 2^(55-25) = 2^30. How do I know which one will be faster? Or are they really the same as far as solve time, and one just takes less storage than the other?
Faster comparison of less data. It is not speed that matters, but the probability of a collision. In this table, all jumps are approximately the same size; perhaps for this reason the DP size does not have much effect. I tested with the same settings, but there has never been a solution with less than 50% of the expected ops. You can change the table and compare the results. https://bitcointalksearch.org/topic/pollards-kangaroo-ecdlp-solver-5244940

This is what I get from yours:
Quote
Jump: 0 Distance: 1
Jump Avg distance: 2^24.77
Jump: 0 Distance: 1
Jump Avg distance: 2^25.13
Jump: 0 Distance: 1
Jump Avg distance: 2^25.13
Jump: 0 Distance: 1
Jump Avg distance: 2^25.13
Jump: 0 Distance: 1
Jump Avg distance: 2^25.13
Jump: 0 Distance: 1
Jump Avg distance: 2^25.13
Jump: 0 Distance: 1
Jump Avg distance: 2^25.13

which is about 2^1 lower than the original code
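For context, the "Jump Avg distance: 2^x" line is just the base-2 logarithm of the mean jump distance in the table, so a 2^1 drop means the average jump is about half as long. A quick sketch (the distance list in the example is illustrative, not the solver's real jump table):

```python
import math

def avg_jump_bits(distances):
    """Return the mean jump distance expressed as a power of two,
    matching the solver's 'Jump Avg distance: 2^x' printout."""
    mean = sum(distances) / len(distances)
    return math.log2(mean)

# e.g. a table of 32 equal jumps of 2^24 averages to exactly 2^24
```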
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 23, 2020, 03:54:22 AM
I'm not saying this solver doesn't work... it does its job and does it well! I am just trying to figure out the trade-off between time and storage. I just didn't see any, and I know that's based on "expected" results. The theory behind time and storage makes sense in general. But if you go with -dp 31, your storage is 1.6 GB, as opposed to, say, -dp 25, which is 60 GB. 2^(55-31) = 2^24 stored points; 2^(55-25) = 2^30. How do I know which one will be faster? Or are they really the same as far as solve time, and one just takes less storage than the other?
Faster comparison of less data. It is not speed that matters, but the probability of a collision. In this table, all jumps are approximately the same size; perhaps for this reason the DP size does not have much effect. I tested with the same settings, but there has never been a solution with less than 50% of the expected ops. You can change the table and compare the results. https://bitcointalksearch.org/topic/pollards-kangaroo-ecdlp-solver-5244940

I put your code in to compare; how do I know whether it's using your small jumps or the original jumps?

Code:
for (int i = 1; i < NB_JUMP; i++) {

  if (i < small_jump) {

What number has to be less than small_jump to let the code know to use the small jump code?
sr. member
Activity: 661
Merit: 250
May 22, 2020, 11:27:56 PM
In a multi-GPU system, is PCIe bandwidth important?
Will you support an AMD OpenCL version?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 22, 2020, 10:55:05 PM
Okay, I'm not understanding this DP time/memory trade-off.

Maybe I have overthought it. Running several tests with the same configuration and only changing -dp, whether I was at DP 14 or DP 31, the expected time and 2^ops stayed the same; the only thing that changed was the amount of expected RAM.

Can anyone explain it better?

Well, we finally got to the truth, and I expected this when the jump table was changed, the NB_JUMP option was added, and the jump lengths were leveled (clearly, accessing a larger amount of constant GPU memory costs speed). Who got results at 10% or 25% of the expected operations? That is how it should be! Small jumps should be present in the table.

I'm not saying this solver doesn't work... it does its job and does it well! I am just trying to figure out the trade-off between time and storage. I just didn't see any, and I know that's based on "expected" results. The theory behind time and storage makes sense in general. But if you go with -dp 31, your storage is 1.6 GB, as opposed to, say, -dp 25, which is 60 GB. 2^(55-31) = 2^24 stored points; 2^(55-25) = 2^30. How do I know which one will be faster? Or are they really the same as far as solve time, and one just takes less storage than the other?
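To make the trade-off concrete: the expected operation count is set by the kangaroo method itself (roughly 2*sqrt(N) group operations for a range of size N) and does not depend on DP; DP only decides what fraction of points get stored. A back-of-the-envelope sketch, where the 40 bytes per stored point is a guessed record size, not the solver's exact on-disk format:

```python
def dp_tradeoff(range_bits, dp, bytes_per_point=40):
    """Rough expected work and storage for Pollard's kangaroo at a given DP.

    Expected group operations are about 2*sqrt(N) = 2^(range_bits/2 + 1),
    independent of DP; only the ~ops/2^dp distinguished points are stored.
    bytes_per_point is an assumption, not the solver's exact record size.
    """
    ops = 2.0 ** (range_bits / 2 + 1)
    stored = ops / 2.0 ** dp          # points passing the DP mask
    mem_gb = stored * bytes_per_point / 2.0 ** 30
    return ops, stored, mem_gb

# e.g. for a 2^109 range (puzzle #110): dp 25 stores ~2^30.5 points,
# dp 31 stores ~2^24.5 -- same expected ops, about 64x less memory.
```

The catch not shown here is the per-kangaroo overhead of roughly 2^dp extra steps before a collision becomes visible, which is why a very high DP can slow small searches even though the expected op count looks the same.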
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 22, 2020, 05:05:12 PM
Okay, I'm not understanding this DP time/memory trade-off.

Maybe I have overthought it. Running several tests with the same configuration and only changing -dp, whether I was at DP 14 or DP 31, the expected time and 2^ops stayed the same; the only thing that changed was the amount of expected RAM.

Can anyone explain it better?

sr. member
Activity: 642
Merit: 316
May 22, 2020, 02:33:27 PM
-snip-

I am interested in your merge process... what type of program/script/code are you using to auto-merge the clients' files?
look there https://bitcointalksearch.org/topic/m.54425721
It is a simple server app and client app.
The client app gets a job from the server: range, key, saving interval, DP size.
It does the job, saves the work (with kangaroos) at each saving interval, then checks the file and sends it to the server.
The server app checks the file and merges it into the work file with the kangaroo exe.
The server does not keep constant communication with the clients; moreover, the client always initiates the session.
So I don't get any socket errors.
And there are 2 ways to find the key:
- on the client side, in which case the client app sends the key to the server
- or during the file merge.
The only thing that may not be very convenient is that the client files are quite large, 600-1700 MB (if the client has good internet, sending 1.7 GB in 200 s is not a problem).
But this is not such a big problem, since they are only sent every 2 hours.
Once the key is solved, the server app writes a log file and sends a notification to Telegram.
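The merge step described above can be sketched as a thin wrapper that shells out to the Kangaroo binary. This assumes the -wm work-file merge mode of JeanLucPons' Kangaroo (check your build's help output), and the file names are placeholders:

```python
import subprocess

def merge_into_workfile(client_file, master_file, kangaroo="kangaroo.exe",
                        dry_run=False):
    """Merge an uploaded client work file into the server's master work file.

    Assumes Kangaroo's merge mode: kangaroo -wm <in1> <in2> <dest>.
    With dry_run=True the command is returned instead of executed.
    """
    cmd = [kangaroo, "-wm", client_file, master_file, master_file]
    if dry_run:
        return cmd
    # check=True raises if the merge fails, so a corrupt upload is noticed
    return subprocess.run(cmd, check=True)
```

A server loop would call this on each completed upload and only then delete the client's file, so a failed merge never loses work.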
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 22, 2020, 02:21:59 PM
I am wondering whether the current version makes sense for #110. Well:
1. Absolutely: at intervals of 60-120 minutes, the application started in server mode goes offline, suspending the task and forcing a restart, and as you know, any break or obstacle brings unknown consequences.
2. In my case, reducing NB_JUMP to 16 raised the hashrate to 7200 MK/s. I am beginning to wonder whether this code change has actually helped or will have a negative impact.
3. After manually restarting the server with the command
Code:
Kangaroo.exe -nt 36000000 -w saved.work -i saved.work -wi 300 -d 28 -s input.txt
...the number that was in DEAD is counted again from zero. Is that all right?
4. How will the result be dumped if it is found? I implemented a code add-on from a friend's post a few posts before mine.
5. Does the indicator next to the hashrate, 2^XX.XX / 2^YY.YY, mean that the time for X to reach level Y is the maximum time needed for the solution?
1) You can make a simple app that will restart the server if it crashes.
But I gave up on the server built into the program, because it constantly shuts down.
I wrote my own simple server that merges the clients' work files.
2) I don't touch any constants in the file and use the default NB_JUMP=32.
3) Yes, the server kangaroos restart, because the server doesn't save kangaroos in the work file.
4) You already did that.
5) There is no maximum time needed, only an expected time. For your choice of DP=28 you should find around 2^(55.5-28) = 2^27.5 DPs.

I am interested in your merge process... what type of program/script/code are you using to auto-merge the clients' files?
sr. member
Activity: 642
Merit: 316
May 22, 2020, 02:05:31 PM
I am wondering whether the current version makes sense for #110. Well:
1. Absolutely: at intervals of 60-120 minutes, the application started in server mode goes offline, suspending the task and forcing a restart, and as you know, any break or obstacle brings unknown consequences.
2. In my case, reducing NB_JUMP to 16 raised the hashrate to 7200 MK/s. I am beginning to wonder whether this code change has actually helped or will have a negative impact.
3. After manually restarting the server with the command
Code:
Kangaroo.exe -nt 36000000 -w saved.work -i saved.work -wi 300 -d 28 -s input.txt
...the number that was in DEAD is counted again from zero. Is that all right?
4. How will the result be dumped if it is found? I implemented a code add-on from a friend's post a few posts before mine.
5. Does the indicator next to the hashrate, 2^XX.XX / 2^YY.YY, mean that the time for X to reach level Y is the maximum time needed for the solution?
1) You can make a simple app that will restart the server if it crashes.
But I gave up on the server built into the program, because it constantly shuts down.
I wrote my own simple server that merges the clients' work files.
2) I don't touch any constants in the file and use the default NB_JUMP=32.
3) Yes, the server kangaroos restart, because the server doesn't save kangaroos in the work file.
4) You already did that.
5) There is no maximum time needed, only an expected time. For your choice of DP=28 you should find around 2^(55.5-28) = 2^27.5 DPs.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 22, 2020, 02:03:24 PM
I am wondering whether the current version makes sense for #110. Well:
1. Absolutely: at intervals of 60-120 minutes, the application started in server mode goes offline, suspending the task and forcing a restart, and as you know, any break or obstacle brings unknown consequences.
2. In my case, reducing NB_JUMP to 16 raised the hashrate to 7200 MK/s. I am beginning to wonder whether this code change has actually helped or will have a negative impact.
3. After manually restarting the server with the command
Code:
Kangaroo.exe -nt 36000000 -w saved.work -i saved.work -wi 300 -d 28 -s input.txt
...the number that was in DEAD is counted again from zero. Is that all right?
4. How will the result be dumped if it is found? I implemented a code add-on from a friend's post a few posts before mine.
5. Does the indicator next to the hashrate, 2^XX.XX / 2^YY.YY, mean that the time for X to reach level Y is the maximum time needed for the solution?

Also, I haven't ever used the -nt option; try removing it and upping your -wi from 300 to 600, unless RAM on your server is an issue/bottleneck.

As a workaround until the actual server problem is figured out, maybe create a batch file to start and then taskkill the server application every 30 minutes; if you still use -wi 300 (a save every 5 minutes) you will get 6 saves in, and then the batch script will restart it for another 30 minutes. At least this way you don't have to constantly monitor whether it's online. Set the batch file up with a :loop and a taskkill after 30 minutes... the server app will rinse and repeat.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 22, 2020, 01:57:44 PM
I am wondering whether the current version makes sense for #110. Well:
1. Absolutely: at intervals of 60-120 minutes, the application started in server mode goes offline, suspending the task and forcing a restart, and as you know, any break or obstacle brings unknown consequences.
2. In my case, reducing NB_JUMP to 16 raised the hashrate to 7200 MK/s. I am beginning to wonder whether this code change has actually helped or will have a negative impact.
3. After manually restarting the server with the command
Code:
Kangaroo.exe -nt 36000000 -w saved.work -i saved.work -wi 300 -d 28 -s input.txt
...the number that was in DEAD is counted again from zero. Is that all right?
4. How will the result be dumped if it is found? I implemented a code add-on from a friend's post a few posts before mine.
5. Does the indicator next to the hashrate, 2^XX.XX / 2^YY.YY, mean that the time for X to reach level Y is the maximum time needed for the solution?

#5 (if you are talking about the server) the first 2^ is your current DP count; the second 2^ is the number of DPs expected, DP-wise, to find the solution.
#4 Yeah, I added my own print-to-text code for when the key is found.
#3 I had the same thing happen once; it also showed zero when restarted, because I don't think the table/file keeps the dead kangaroos.
#2 Go back through the posts; Jean-Luc did a test on the number of jumps, from 32 to 512. 16 seems low, but who knows.
#1 Hopefully breaks/restarts have no impact, or everybody will be in the same boat, as I am sure everyone has had to stop and restart at some point, intentionally or not.

Did you mess around with the DP number? What DP did you go with? The lower it is, the more it will affect the MKey/s rate.
full member
Activity: 282
Merit: 114
May 22, 2020, 01:34:00 PM
I am wondering whether the current version makes sense for #110. Well:
1. Absolutely: at intervals of 60-120 minutes, the application started in server mode goes offline, suspending the task and forcing a restart, and as you know, any break or obstacle brings unknown consequences.
2. In my case, reducing NB_JUMP to 16 raised the hashrate to 7200 MK/s. I am beginning to wonder whether this code change has actually helped or will have a negative impact.
3. After manually restarting the server with the command
Code:
Kangaroo.exe -nt 36000000 -w saved.work -i saved.work -wi 300 -d 28 -s input.txt
...the number that was in DEAD is counted again from zero. Is that all right?
4. How will the result be dumped if it is found? I implemented a code add-on from a friend's post a few posts before mine.
5. Does the indicator next to the hashrate, 2^XX.XX / 2^YY.YY, mean that the time for X to reach level Y is the maximum time needed for the solution?
full member
Activity: 282
Merit: 114
May 22, 2020, 10:48:19 AM
Quote
Is that 7200 MK/s from 3 v100 GPUs?
From 4 v100s
sr. member
Activity: 642
Merit: 316
May 22, 2020, 10:38:08 AM
Okay... Thanks again for the information.
The server actually shuts down randomly (not every hour as I wrote), and I eventually managed to reach the highest hashrate (7200 MK/s) by changing NB_JUMP to 16 and setting -g 400,512.
It is true that it shows memory consumption of -1687.00 MB, but this does not affect the results, whereas with -g 320,512 it shows 1983.00 MB without the "-" and a hashrate of ~7000.
greetings
Is that 7200 MK/s from 3 v100 GPUs?
full member
Activity: 282
Merit: 114
May 22, 2020, 10:35:40 AM
Okay... Thanks again for the information.
The server actually shuts down randomly (not every hour as I wrote), and I eventually managed to reach the highest hashrate (7200 MK/s) by changing NB_JUMP to 16 and setting -g 400,512.
It is true that it shows memory consumption of -1687.00 MB, but this does not affect the results, whereas with -g 320,512 it shows 1983.00 MB without the "-" and a hashrate of ~7000.
greetings
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 22, 2020, 10:07:17 AM
Thanks for the information. One more thing doesn't give me peace... After changing NB_JUMP to 64 in the code, the total hashrate drops from ~7000 to ~5700. So do I understand correctly that option 32 was better?
And what does it look like with NB_RUN?

I had that problem as well, but only when I bumped NB_JUMP past 128. But I'm not using Teslas, so I imagine the more powerful the card, the more that number affects performance, even with a change as small as moving it to 64.
sr. member
Activity: 642
Merit: 316
May 22, 2020, 09:55:17 AM
Many thanks for the reply and the explanation of "-d".
Unfortunately, I have to watch the server, because it just stops every hour for no reason. What could be the cause?
Can I resume working clients (who are still waiting for the server to reappear) with the -i save.work option? Because on resumption the server theoretically looks like it continued where it left off, and the clients act as if nothing happened :-)
I have the same problem with the server on Windows. It just shuts down without any error or reason.

full member
Activity: 282
Merit: 114
May 22, 2020, 09:40:22 AM
Thanks for the information. One more thing doesn't give me peace... After changing NB_JUMP to 64 in the code, the total hashrate drops from ~7000 to ~5700. So do I understand correctly that option 32 was better?
And what does it look like with NB_RUN?