Topic: Pollard's kangaroo ECDLP solver - page 127. (Read 55445 times)

full member
Activity: 277
Merit: 106
May 24, 2020, 04:39:02 PM
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option means no progress is recorded, but it did not produce any results either.
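As for the numbers in the dump: the DP Count is roughly what the usual kangaroo estimate predicts for this range. The Start/Stop interval above is about 2^109 wide, the expected work is around 2*sqrt(2^109) ≈ 2^55.5 group operations, and with DP=28 that corresponds to about 2^27.5 stored points. A quick back-of-the-envelope sketch of that estimate (plain C++, just the textbook formula, not taken from the Kangaroo code):
Code:
#include <cmath>
#include <cstdio>

// Textbook kangaroo estimate: roughly 2*sqrt(N) group operations for an
// interval of width N; each operation yields a stored DP with probability
// 2^-dpBits, so the expected number of stored DPs is ~2*sqrt(N) / 2^dpBits.
int main() {
    double rangeBits = 109.0; // the Start/Stop range above spans about 2^109 keys
    double dpBits    = 28.0;  // DP bits used in the dump above

    double opsLog2 = 1.0 + rangeBits / 2.0; // log2(2*sqrt(N)) ~ 55.5
    double dpsLog2 = opsLog2 - dpBits;      // log2(expected stored DPs) ~ 27.5

    printf("expected operations : 2^%.2f\n", opsLog2);
    printf("expected stored DPs : 2^%.2f (~%.0f)\n", dpsLog2, std::exp2(dpsLog2));
    return 0;
}

So a DP Count around 2^27.25 is already most of the expected total, at least in expectation.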
sr. member
Activity: 616
Merit: 312
May 24, 2020, 04:38:11 PM
-snip-
That's what I was wondering. Yes, I own all of my clients. They all reside in my house, but none pay rent as of late :)


That is, the problem most likely occurs when the clients are not on the local network. Namely, the problem is caused by a bug in the socket handling.
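If it is the sockets, one classic Windows pitfall is treating recv() as if it always delivers a whole message. On a LAN that almost always works; over the internet TCP data arrives in fragments and connections get reset, and an unhandled short read or WSAECONNRESET can bring the whole server down. A minimal sketch of the kind of defensive receive loop I mean (Winsock C++, just an illustration, not the actual Kangaroo server code):
Code:
#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

// Read exactly 'len' bytes from a connected TCP socket.
// Assumes WSAStartup() has already been called.
// recv() may return fewer bytes than requested (common over WAN links),
// 0 on an orderly shutdown, or SOCKET_ERROR (e.g. WSAECONNRESET).
static bool recvAll(SOCKET s, char* buf, int len) {
    int got = 0;
    while (got < len) {
        int n = recv(s, buf + got, len - got, 0);
        if (n == 0)
            return false;                          // peer closed the connection
        if (n == SOCKET_ERROR) {
            fprintf(stderr, "recv failed: %d\n", WSAGetLastError());
            return false;                          // drop this client, keep the server alive
        }
        got += n;
    }
    return true;
}

The important part is that an error on one client socket is handled and that connection dropped, instead of letting it take down the whole server process.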
full member
Activity: 1050
Merit: 219
Shooters Shoot...
May 24, 2020, 04:32:11 PM
All I use is Windows...using the original server/server code from Jean Luc.
I can say the same... I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
No matter how much RAM or which processor is used, in no case was there any anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets so big, I stop the server, merge, and start a new work file.
Maybe your clients are on the same local network and you do not have any internet issues.
All my clients are in different countries, so even with 4 clients connected I get a server crash from time to time.

That's what I was wondering. Yes, I own all of my clients. They all reside in my house, but none pay rent as of late :)
full member
Activity: 277
Merit: 106
May 24, 2020, 04:23:45 PM
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work save printed a wrong size.
Do not restart the clients; restart the server without the -w option.

I use my own version of the server/client app and the server does not shut down.
I'm not sure it will be useful to you with your DP size.
I use DP=31 and, for example, a rig of 8x2080Ti sends a 1.6GB file every 2h and a rig of 6x2070 around 700MB every 2h;
with your DP=28 the file should be about 8 times bigger.
Anyway, if anybody is interested in the app, I can publish my PureBasic code here (for Windows x64).

I will be grateful for the code, because I don't think I will finish it :-(

Quote
Did you use the original server or a modified version?
Could you also do a -winfo on the save28.work?

Yes, I use the original one. I will add the output of -winfo on save28.work.
sr. member
Activity: 616
Merit: 312
May 24, 2020, 04:23:25 PM
All I use is Windows...using the original server/server code from Jean Luc.
I can say the same... I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
No matter how much RAM or which processor is used, in no case was there any anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets so big, I stop the server, merge, and start a new work file.
Maybe your clients are on the same local network and you do not have any internet issues.
All my clients are in different countries, so even with 4 clients connected I get a server crash from time to time.
full member
Activity: 1050
Merit: 219
Shooters Shoot...
May 24, 2020, 04:22:37 PM
Jean Luc - or anyone:

Precomp Table!

I want to build a precomputed table in which only tame kangaroos are stored.

I tried tinkering with the current source code, but to no avail.

Do you know/can you tell me what I need to change in code or does it require a code overhaul?
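Conceptually, all I think it needs is to keep the tame/wild flag that travels with each DP and then skip the wild entries when the table is written. Something like this hypothetical sketch (all type and field names are made up, not the actual Kangaroo structures):
Code:
#include <cstdint>
#include <vector>

// Hypothetical DP record: the x-coordinate the hash table is keyed on,
// the kangaroo's travelled distance, and which herd produced it.
struct DPEntry {
    uint8_t x[32];     // distinguished point x-coordinate
    uint8_t dist[32];  // distance from the kangaroo's starting point
    bool    isTame;    // true = tame herd
};

// Keep only the tame entries. Tame kangaroos start from fixed, known
// offsets inside the range, so their distances stay valid for any target
// key in that range and can be reused as a precomputed table. Wild
// kangaroos start relative to the unknown key, so their entries are
// useless for precomputation.
std::vector<DPEntry> buildTameTable(const std::vector<DPEntry>& all) {
    std::vector<DPEntry> tame;
    for (const DPEntry& e : all)
        if (e.isTame) tame.push_back(e);
    return tame;
}

What I can't figure out is where in the real code that flag lives and whether the work-file format even keeps it, hence the question.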
full member
Activity: 1050
Merit: 219
Shooters Shoot...
May 24, 2020, 04:20:09 PM
All I use is Windows...using the original server/server code from Jean Luc.
I can say the same... I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
No matter how much RAM or which processor is used, in no case was there any anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets so big, I stop the server, merge, and start a new work file.
sr. member
Activity: 616
Merit: 312
May 24, 2020, 04:15:10 PM
All I use is Windows...using the original server/server code from Jean Luc.
I can say the same... I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
No matter how much RAM or which processor is used, in no case was there any anomaly in resource consumption.
full member
Activity: 1050
Merit: 219
Shooters Shoot...
May 24, 2020, 04:13:59 PM
Yes, the dump file is currently 4.64GB; it takes about 30 seconds to reload when restarting, and about the same time to write.
If I set saving to every three minutes, that still leaves only about 2 minutes of actual work, which significantly increases the total time. I mention this because at higher levels it can become a bothersome problem.
I removed -w from the command line; we'll see what the effect will be.


EDIT:
soooo baaaad...


What config is your server? CPU, RAM, etc?
full member
Activity: 1050
Merit: 219
Shooters Shoot...
May 24, 2020, 04:11:44 PM
The server is fine. I have an 8GB+ file that I am reading from and writing back to.
Not on Windows. On Windows the server crashes without any error, at random times.
Only when there is just 1 connection does the server not crash.

All I use is Windows...using the original server/server code from Jean Luc.
sr. member
Activity: 616
Merit: 312
May 24, 2020, 04:09:59 PM
The server is fine. I have an 8GB+ file that I am reading from and writing back to.
Not on Windows. On Windows the server crashes without any error, at random times.
Only when there is just 1 connection does the server not crash.
full member
Activity: 1050
Merit: 219
Shooters Shoot...
May 24, 2020, 04:07:30 PM
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work save printed a wrong size.
Do not restart the clients; restart the server without the -w option.


The server is fine. I have an 8GB+ file that I am reading from and writing back to.
sr. member
Activity: 462
Merit: 696
May 24, 2020, 03:56:40 PM
@zielar
Did you use the original server or a modified version?
Could you also do a -winfo on the save28.work?
sr. member
Activity: 616
Merit: 312
May 24, 2020, 03:45:49 PM
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work save printed a wrong size.
Do not restart the clients; restart the server without the -w option.

I use my own version of the server/client app and the server does not shut down.
I'm not sure it will be useful to you with your DP size.
I use DP=31 and, for example, a rig of 8x2080Ti sends a 1.6GB file every 2h and a rig of 6x2070 around 700MB every 2h;
with your DP=28 the file should be about 8 times bigger.
Anyway, if anybody is interested in the app, I can publish my PureBasic code here (for Windows x64).
full member
Activity: 277
Merit: 106
May 24, 2020, 03:42:47 PM
Yes, the dump file is currently 4.64GB; it takes about 30 seconds to reload when restarting, and about the same time to write.
If I set saving to every three minutes, that still leaves only about 2 minutes of actual work, which significantly increases the total time. I mention this because at higher levels it can become a bothersome problem.
I removed -w from the command line; we'll see what the effect will be.


EDIT:
soooo baaaad...
sr. member
Activity: 462
Merit: 696
May 24, 2020, 03:36:50 PM
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work save printed a wrong size.
Do not restart the clients; restart the server without the -w option.
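The usual suspect on Windows for files above 4GB is the 32-bit long used by the classic ftell()/fseek() interface: any size computed through it wraps around. A small illustration of the pitfall and the 64-bit MSVC variants (just a sketch of the general issue, not necessarily what the save code actually does):
Code:
#include <cstdio>
#include <cstdint>

// On Windows, 'long' is 32 bits, so ftell() cannot represent offsets of
// 2GB or more and any size derived from it wraps around.
// _fseeki64()/_ftelli64() use a 64-bit offset and report the real size.
int64_t fileSize64(const char* path) {
    FILE* f = fopen(path, "rb");
    if (!f) return -1;
    _fseeki64(f, 0, SEEK_END);   // 64-bit seek (MSVC CRT)
    int64_t size = _ftelli64(f); // 64-bit tell, no 32-bit wrap
    fclose(f);
    return size;
}

int main() {
    printf("size: %lld bytes\n", (long long)fileSize64("save28.work"));
    return 0;
}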
sr. member
Activity: 616
Merit: 312
May 24, 2020, 03:34:20 PM
That's right, but note that progress is saved to the file every 10 minutes, while the server disconnects every 1-5 minutes, so I restart from the last dump - still from the same place.
If the server restarts every 5 minutes, change the write interval to 3 minutes. I think for your server that is not a problem.
full member
Activity: 277
Merit: 106
May 24, 2020, 03:32:35 PM
That's right, but note that progress is saved to the file every 10 minutes, while the server disconnects every 1-5 minutes, so I restart from the last dump - still from the same place.
In addition, I can see that after a restart it starts again at 2^27.22, even though it was higher before the restart.
sr. member
Activity: 616
Merit: 312
May 24, 2020, 03:27:59 PM
Damn, this will drive me crazy soon! The server application is currently restarting every 5 minutes, which means that I start from the same place all the time :/

-snip-
Not from the same place; the server just restarts the Kangaroo program, and all DP points are safe and continue to grow.
The larger the DP counter gets, the more slowly it appears to grow - that is natural.
full member
Activity: 277
Merit: 106
May 24, 2020, 03:13:16 PM
Damn, this will drive me crazy soon! The server application is currently restarting every 5 minutes, which means that I start from the same place all the time :/
