Topic: Pollard's kangaroo ECDLP solver - page 130.

sr. member
Activity: 642
Merit: 316
May 24, 2020, 03:38:11 PM
-snip-
That's what I was wondering. Yes, I own all of my clients. They all reside in my house, but none pay rent as of late :)


That is, the problem most likely occurs when the clients are not on the local network; namely, it is caused by an error in the socket handling.
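
If the crashes really are socket-related, a common culprit over WAN links is treating recv() as if it always delivers a full message. Below is only a hedged sketch (recvAll is a hypothetical helper, not the actual Kangaroo server code) of a Winsock read loop that tolerates partial reads and drops just the failing client instead of crashing:

Code:
// Hypothetical helper, not Kangaroo's actual code: read exactly 'len'
// bytes or fail cleanly. On slow WAN links recv() often returns fewer
// bytes than requested, which a single unchecked call would miss.
#include <winsock2.h>
#include <cstdio>

bool recvAll(SOCKET s, char *buf, int len) {
  int total = 0;
  while (total < len) {
    int n = recv(s, buf + total, len - total, 0);
    if (n == 0) return false;                    // peer closed the connection
    if (n == SOCKET_ERROR) {
      fprintf(stderr, "recv failed: %d\n", WSAGetLastError());
      return false;                              // caller drops this client only
    }
    total += n;
  }
  return true;
}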
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:32:11 PM
All I use is Windows... using the original server code from Jean Luc.
I can say the same. I use Windows 10 with the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It doesn't matter how much RAM or which processor; in no case was there an anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets so big, I stop the server, merge, and start a new work file.
Maybe your clients are on the same local network and you do not have any internet issues.
All my clients are in different countries, so even with only 4 clients connected I get a server crash from time to time.

That's what I was wondering. Yes, I own all of my clients. They all reside in my house, but none pay rent as of late :)
full member
Activity: 282
Merit: 114
May 24, 2020, 03:23:45 PM
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work save printed a wrong size.
Do not restart the clients, and restart the server without the -w option.

I use my own version of the server/client app and do not get server shutdowns.
I'm not sure it will be useful to you with your DP size.
I use DP=31, and for example a rig of 8x2080Ti sends a 1.6GB file every 2 hours, a rig of 6x2070 around 700MB every 2 hours.
With your DP=28 the file should be 8 times larger.
Anyway, if anybody is interested in the app, I can publish my PureBasic code here (for Windows x64).

I will be grateful for the code, because I don't think I will finish it :-(

Quote
Did you use the original server or a modified version ?
Could you also do a -winfo on the save28.work ?

Yes, I use the original one. I will add the output of -winfo work28.save.
sr. member
Activity: 642
Merit: 316
May 24, 2020, 03:23:25 PM
All I use is Windows... using the original server code from Jean Luc.
I can say the same. I use Windows 10 with the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It doesn't matter how much RAM or which processor; in no case was there an anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets so big, I stop the server, merge, and start a new work file.
Maybe your clients are on the same local network and you do not have any internet issues.
All my clients are in different countries, so even with only 4 clients connected I get a server crash from time to time.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:22:37 PM
Jean Luc - or anyone:

Precomp Table!

I want to build a precomputation table in which only tame kangaroos are stored.

I tried tinkering with the current source code, but to no avail.

Do you know / can you tell me what I need to change in the code, or does it require an overhaul?
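
For what it's worth, the filtering itself can be small. This is only a hedged sketch under assumed names (DPEntry, KangarooType, addDP are hypothetical, not the actual Kangaroo sources): tag each distinguished point with its herd and skip wild entries when building the table. Tame walks do not depend on the target key, which is why only tame DPs are reusable across targets:

Code:
// Hypothetical sketch, not JeanLucPons/Kangaroo's real structures:
// keep only tame-kangaroo distinguished points in a precomputed table.
#include <cstdint>
#include <vector>

enum class KangarooType : uint8_t { TAME = 0, WILD = 1 };

struct DPEntry {
  uint64_t xHash;        // hash of the DP's x coordinate
  uint64_t distance[2];  // 128-bit travelled distance
  KangarooType type;     // which herd produced this DP
};

struct PrecompTable {
  std::vector<DPEntry> entries;

  // Drop wild DPs at insertion time so the saved table contains tame
  // points only; wild walks start from the unknown key and cannot be
  // reused for a different target.
  void addDP(const DPEntry &e) {
    if (e.type == KangarooType::TAME)
      entries.push_back(e);
  }
};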
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:20:09 PM
All I use is Windows... using the original server code from Jean Luc.
I can say the same. I use Windows 10 with the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It doesn't matter how much RAM or which processor; in no case was there an anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets so big, I stop the server, merge, and start a new work file.
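
For anyone following the stop/merge/restart workflow: the released Kangaroo binary has options for exactly this, -winfo to inspect a work file and, as I recall from the README, -wm to merge two files (double-check the flags against your build; the file names below are made up):

Code:
:: Inspect a work file, then merge two saved files into a fresh one
Kangaroo.exe -winfo save28.work
Kangaroo.exe -wm part1.work part2.work merged.work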
sr. member
Activity: 642
Merit: 316
May 24, 2020, 03:15:10 PM
All I use is Windows... using the original server code from Jean Luc.
I can say the same. I use Windows 10 with the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It doesn't matter how much RAM or which processor; in no case was there an anomaly in resource consumption.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:13:59 PM
Yes, the dump file is currently 4.64GB; it takes about 30 seconds to load when restarting, and about the same time to write.
If I set saving to every three minutes, that still leaves only about 2 minutes of actual work, which significantly increases the total time. I mention this because at further levels it can become a bothersome problem.
I removed -w from the command line; we'll see what the effect will be.


EDIT:
soooo baaaad...


What config is your server? CPU, RAM, etc?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:11:44 PM
The server is fine. I have a file of over 8GB that I am reading from and rewriting to.
Not on Windows. On Windows the server crashes without any error, at random times.
Only with a single connection does the server not crash.

All I use is Windows... using the original server code from Jean Luc.
sr. member
Activity: 642
Merit: 316
May 24, 2020, 03:09:59 PM
The server is fine. I have a file of over 8GB that I am reading from and rewriting to.
Not on Windows. On Windows the server crashes without any error, at random times.
Only with a single connection does the server not crash.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:07:30 PM
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work save printed a wrong size.
Do not restart the clients, and restart the server without the -w option.


The server is fine. I have a file of over 8GB that I am reading from and rewriting to.
sr. member
Activity: 462
Merit: 701
May 24, 2020, 02:56:40 PM
@zielar
Did you use the original server or a modified version ?
Could you also do a -winfo on the save28.work ?
sr. member
Activity: 642
Merit: 316
May 24, 2020, 02:45:49 PM
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work save printed a wrong size.
Do not restart the clients, and restart the server without the -w option.

I use my own version of the server/client app and do not get server shutdowns.
I'm not sure it will be useful to you with your DP size.
I use DP=31, and for example a rig of 8x2080Ti sends a 1.6GB file every 2 hours, a rig of 6x2070 around 700MB every 2 hours.
With your DP=28 the file should be 8 times larger.
Anyway, if anybody is interested in the app, I can publish my PureBasic code here (for Windows x64).
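
To make the 8x figure concrete: the number of distinguished points, and hence the work-file size, scales as 2^-dp, so dropping dp from 31 to 28 multiplies the DP count by 2^(31-28) = 8. A quick back-of-the-envelope check using the 1.6GB figure above (my arithmetic, not from the thread):

Code:
// Expected work-file growth when lowering the dp mask
#include <cstdio>
#include <cmath>

int main() {
  double fileAtDp31 = 1.6;   // GB per 2h reported for the 8x2080Ti rig at dp=31
  int dpHigh = 31, dpLow = 28;
  double factor = std::pow(2.0, dpHigh - dpLow);   // each dp bit halves the DP rate
  printf("growth: %.0fx -> about %.1f GB per 2h\n", factor, fileAtDp31 * factor);
  return 0;
}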
full member
Activity: 282
Merit: 114
May 24, 2020, 02:42:47 PM
Yes, the dump file is currently 4.64GB; it takes about 30 seconds to load when restarting, and about the same time to write.
If I set saving to every three minutes, that still leaves only about 2 minutes of actual work, which significantly increases the total time. I mention this because at further levels it can become a bothersome problem.
I removed -w from the command line; we'll see what the effect will be.


EDIT:
soooo baaaad...
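
The pain of frequent saves is easy to quantify: if each save cycle costs about a minute (30s load + 30s write, per the figures above) out of every 3-minute interval, roughly a third of the wall time goes to I/O instead of jumps. Illustrative numbers only:

Code:
// Fraction of wall time lost to periodic work-file saves
#include <cstdio>

int main() {
  double saveMin = 1.0;      // ~30s load + ~30s write per cycle (assumed)
  double intervalMin = 3.0;  // save interval under discussion
  printf("overhead: %.0f%%\n", 100.0 * saveMin / intervalMin);   // ~33%
  return 0;
}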
sr. member
Activity: 462
Merit: 701
May 24, 2020, 02:36:50 PM
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work save printed a wrong size.
Do not restart the clients, and restart the server without the -w option.
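
A wrong size printed right around the 4GB boundary usually points to a 32-bit file offset somewhere. As a hedged illustration only (not the actual Kangaroo code): on MSVC, ftell() returns a 32-bit long and wraps past 4GB, while the 64-bit variants do not:

Code:
// Illustration of the 4GB pitfall on Windows, not Kangaroo's actual code:
// ftell()/fseek() use a 32-bit long on MSVC and wrap past 4GB; the
// 64-bit variants size large work files correctly.
#include <cstdio>
#include <cstdint>

int64_t fileSize(FILE *f) {
  _fseeki64(f, 0, SEEK_END);     // 64-bit seek (MSVC-specific)
  int64_t size = _ftelli64(f);   // would wrap here if ftell() were used
  _fseeki64(f, 0, SEEK_SET);
  return size;
}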
sr. member
Activity: 642
Merit: 316
May 24, 2020, 02:34:20 PM
That's right, but note that progress is saved to the file every 10 minutes, and the server disconnects every 1-5 minutes, so I start from the last dump - still from the same place.
If the server restarts every 5 minutes, change the write interval to 3 minutes. I don't think that is a problem for your server.
full member
Activity: 282
Merit: 114
May 24, 2020, 02:32:35 PM
That's right, but note that progress is saved to the file every 10 minutes, and the server disconnects every 1-5 minutes, so I start from the last dump - still from the same place.
In addition, I can see that after a restart it starts again at 27.22, whereas it was higher before restarting.
sr. member
Activity: 642
Merit: 316
May 24, 2020, 02:27:59 PM
Damn, this is going to drive me mad soon! The server application is currently restarting every 5 minutes, which means I start from the same place all the time :/

-snip-
Not from the same place; the server just restarts Kangaroo, all DP points are safe and keep accumulating.
The larger the DP count, the slower it grows - this is natural.
full member
Activity: 282
Merit: 114
May 24, 2020, 02:13:16 PM
Damn, this is going to drive me mad soon! The server application is currently restarting every 5 minutes, which means I start from the same place all the time :/

sr. member
Activity: 462
Merit: 701
May 24, 2020, 12:12:01 PM
Thanks ;)

Yes, my holidays in the mountains were very good :)

Quote
Jean_Luc, in earlier posts I read that the mechanism that suggests the best DP value for a given task requires refinement, yes?
My question is: what DP value would be best for #115 with V100 and 2080Ti cards?
I have ~400GB RAM and 2TB of disk space available.

The best choice should be made taking the total number of kangaroos into account.
You have to choose nbKangaroo*2^(dpbit) as small as possible compared to 2^58.5.
If you have 400GB of RAM, choose dpbit so that the expected RAM usage is 200GB max.
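
As a rough worked example of that rule (my arithmetic and an assumed ~36 bytes per stored hash-table entry, not figures from Jean Luc): #115 needs about 2^58.5 group operations, and one point in 2^dpbit gets stored, so the expected table size per dpbit is easy to tabulate:

Code:
// Rough DP-table RAM estimate for #115 (2^58.5 expected ops from the
// post above; ~36 bytes per stored entry is an assumption)
#include <cstdio>
#include <cmath>

int main() {
  double ops = std::pow(2.0, 58.5);   // expected total jumps for #115
  double bytesPerEntry = 36.0;        // assumed hash-table entry size
  for (int dp = 25; dp <= 31; dp++) {
    double entries = ops / std::pow(2.0, dp);
    printf("dp=%d -> ~2^%.1f DPs, ~%.0f GB\n",
           dp, std::log2(entries), entries * bytesPerEntry / 1e9);
  }
  return 0;
}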