
Topic: Pollard's kangaroo ECDLP solver - page 129. (Read 58706 times)

full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 05:31:45 PM
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option resulted in no record of progress, but did not produce any results.

Why don't you just start a new save work file, so it doesn't take so long to save/read/rewrite to, and then just merge your work files?? This is your best option.

I don't quite understand...
A total of ~90 separate machines are working on the solution, and they are all connected to one server whose save interval is set to 5 minutes.
In the current phase:
- The file with saved progress takes 5GB
- Reading / writing the progress file takes ~30 seconds each, so about one minute in total
- The running server restarts at random times, every 3-15 minutes

I don't see how, in my case, any combination of merging work files would be the best option.

P.S. It just dawned on me :-) - does this problem also occur on LINUX?

Maybe I'm not following you. Earlier you said you had the file saving every 10 minutes and the server restarting in less time than that, so it made sense why your work file was not growing with saved DPs. One minute of read/write is a lot when you are getting higher up. If you start a new save file and reduce the read/write to near 0 seconds, you get more work done. For example, if you are saving every 5 minutes and it takes 1 minute to read/write, then you are losing 12 minutes of work time every hour, as opposed to less than a minute with a smaller work file. I don't know. Again, maybe I'm not tracking.
Also, if your server restarts every 3-15 minutes, who's to say it doesn't restart just prior to saving?
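To put numbers on that trade-off, here is a minimal sketch of the arithmetic, using the figures quoted in this exchange (the 5-minute save interval and ~1 minute of read/write per cycle are the numbers discussed above, not measurements):
Code:
#include <cstdio>

int main() {
    // Figures quoted in the discussion above (assumptions, not measurements):
    const double saveIntervalMin = 5.0;  // server writes the work file every 5 minutes
    const double readWriteMin    = 1.0;  // ~30 s read + ~30 s write per save cycle

    const double cyclesPerHour = 60.0 / saveIntervalMin;        // 12 save cycles per hour
    const double lostPerHour   = cyclesPerHour * readWriteMin;  // 12 minutes of I/O per hour
    const double lostFraction  = lostPerHour / 60.0;            // 20% of wall time

    printf("Lost to save I/O: %.0f min/hour (%.0f%% of wall time)\n",
           lostPerHour, 100.0 * lostFraction);
    return 0;
}
With a small, fresh work file the read/write drops to a few seconds, so the same save interval costs well under a minute per hour, which is the point being made above.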
full member
Activity: 281
Merit: 114
May 24, 2020, 04:53:56 PM
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option resulted in no record of progress, but did not produce any results.

Why don't you just start a new save work file, so it doesn't take so long to save/read/rewrite to, and then just merge your work files?? This is your best option.

I don't quite understand...
A total of ~90 separate machines are working on the solution, and they are all connected to one server whose save interval is set to 5 minutes.
In the current phase:
- The file with saved progress takes 5GB
- Reading / writing the progress file takes ~30 seconds each, so about one minute in total
- The running server restarts at random times, every 3-15 minutes

I don't see how, in my case, any combination of merging work files would be the best option.

P.S. It just dawned on me :-) - does this problem also occur on LINUX?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 04:42:34 PM
so we all got no chance again,
can close this thread now, guess after all finding money is a job.

This topic is not about making money, but about being able to crack a difficult task like finding the key to puzzle 110, which until now was extremely hard to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one forbids joining forces in the search for a key. Why should each of us separately burn a pile of electricity and money when we can combine the work and do it much more efficiently?
We could create a common pool, solve the problems together, and divide the proceeds in proportion to the work performed.

I like the idea. How would the division of the proceeds work?

If nothing else, settle on a DP to use and share equal-sized work files each day. So if work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more than 10 MB, say 20 MB, then they would get 20 percent of the prize. The only problem with this method is trust.

But I like the concept and would be willing to join.
Not the size of the work files, but the size of the DP counter.
Check the master file DP counter > merge the client job > check the master file DP counter again > save the difference to the client's account.
When the key is solved, client % = client account DP count * 100 / master file DP count.

Makes sense. What do we need to set it up?
sr. member
Activity: 617
Merit: 312
May 24, 2020, 04:20:24 PM
so we all got no chance again,
can close this thread now, guess after all finding money is a job.

This topic is not about making money, but about being able to crack a difficult task like finding the key to puzzle 110, which until now was extremely hard to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one forbids joining forces in the search for a key. Why should each of us separately burn a pile of electricity and money when we can combine the work and do it much more efficiently?
We could create a common pool, solve the problems together, and divide the proceeds in proportion to the work performed.

I like the idea. How would the division of the proceeds work?

If nothing else, settle on a DP to use and share equal-sized work files each day. So if work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more than 10 MB, say 20 MB, then they would get 20 percent of the prize. The only problem with this method is trust.

But I like the concept and would be willing to join.
Not the size of the work files, but the size of the DP counter.
Check the master file DP counter > merge the client job > check the master file DP counter again > save the difference to the client's account.
When the key is solved, client % = client account DP count * 100 / master file DP count.
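A minimal sketch of that accounting idea (hypothetical names and layout, not part of the actual Kangaroo server): read the master-file DP counter before and after merging each client's work file, credit the difference to that client, and pay out proportionally once the key is solved.
Code:
#include <cstdint>
#include <cstdio>
#include <map>
#include <string>

// Hypothetical pool accounting, not Jean_Luc's server code.
struct Pool {
    uint64_t masterDpCount = 0;                // DP counter of the master work file
    std::map<std::string, uint64_t> credited;  // DPs credited to each client

    // Call after merging one client work file into the master file.
    // 'masterDpAfterMerge' is the master DP counter read back after the merge,
    // so the difference is exactly the new DPs this client contributed.
    void creditMerge(const std::string& client, uint64_t masterDpAfterMerge) {
        credited[client] += masterDpAfterMerge - masterDpCount;
        masterDpCount = masterDpAfterMerge;
    }

    // client % = client account DP count * 100 / master file DP count
    double sharePercent(const std::string& client) const {
        auto it = credited.find(client);
        if (it == credited.end() || masterDpCount == 0) return 0.0;
        return 100.0 * double(it->second) / double(masterDpCount);
    }
};

int main() {
    Pool pool;
    pool.creditMerge("clientA", 600);   // master counter went 0 -> 600 after A's merge
    pool.creditMerge("clientB", 1000);  // master counter went 600 -> 1000 after B's merge
    printf("A: %.1f%%  B: %.1f%%\n", pool.sharePercent("clientA"), pool.sharePercent("clientB"));
    return 0;
}
What this does not address is the trust problem mentioned above: it assumes the merged DPs are genuine.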
jr. member
Activity: 30
Merit: 122
May 24, 2020, 04:18:35 PM
so we all got no chance again,
can close this thread now, guess after all finding money is a job.

This topic is not about making money, but about being able to crack a difficult task like finding the key to puzzle 110, which until now was extremely hard to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one forbids joining forces in the search for a key. Why should each of us separately burn a pile of electricity and money when we can combine the work and do it much more efficiently?
We could create a common pool, solve the problems together, and divide the proceeds in proportion to the work performed.

I like the idea. How would the division of the proceeds work?

If nothing else, settle on a DP to use and share equal-sized work files each day. So if work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more than 10 MB, say 20 MB, then they would get 20 percent of the prize. The only problem with this method is trust.

But I like the concept and would be willing to join.

A distributed project like this is probably necessary if we want to solve the puzzles in the 130-160 bit range.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 04:10:12 PM
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option resulted in no record of progress, but did not produce any results.

Why don't you just start a new save work file, so it doesn't take so long to save/read/rewrite to, and then just merge your work files?? This is your best option.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 04:08:29 PM
so we all got no chance again,
can close this thread now, guess after all finding money is a job.

This topic is not about making money, but about being able to crack a difficult task like finding the key to puzzle 110, which until now was extremely hard to solve.
Moreover, I'm sure JeanLuc has a couple more tricks in store to improve his brainchild.
After all, no one forbids joining forces in the search for a key. Why should each of us separately burn a pile of electricity and money when we can combine the work and do it much more efficiently?
We could create a common pool, solve the problems together, and divide the proceeds in proportion to the work performed.

I like the idea. How would the division of the proceeds work?

If nothing else, settle on a DP to use and share equal-sized work files each day. So if work files were 10 MB each day and it took 100 MB to solve, then anyone who submitted a 10 MB work file would get 10 percent of the prize. If they submitted more than 10 MB, say 20 MB, then they would get 20 percent of the prize. The only problem with this method is trust.

But I like the concept and would be willing to join.
full member
Activity: 281
Merit: 114
May 24, 2020, 03:39:02 PM
Here is my -winfo dump:
Code:
Kangaroo v1.5
Loading: save28.work
Version   : 0
DP bits   : 28
Start     : 2000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 0309976BA5570966BF889196B7FDF5A0F9A1E9AB340556EC29F8BB60599616167D
Count     : 0 2^-inf
Time      : 00s
DP Size   : 4872.4/6097.0MB
DP Count  : 159593967 2^27.250
HT Max    : 729 [@ 011F52]
HT Min    : 497 [@ 0287B2]
HT Avg    : 608.80
HT SDev   : 24.65
Kangaroos : 0 2^-inf

Disabling the -w option resulted in no record of progress, but did not produce any results.
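As a rough cross-check of those numbers (a sketch, assuming the commonly used ~2*sqrt(N) average-case operation count for the kangaroo method; actual figures vary with the number of kangaroos and the DP overhead):
Code:
#include <cstdio>

int main() {
    const double rangeBits = 109.0;  // 2000...0 to 3FFF...F is an interval of 2^109 keys
    const double dpBits    = 28.0;   // DP bits used here

    // Average-case group operations for Pollard's kangaroo: ~2 * sqrt(range) = 2^(1 + 109/2)
    const double expectedOpsLog2 = 1.0 + rangeBits / 2.0;     // ~2^55.5
    // Each operation yields a stored DP with probability 2^-dpBits
    const double expectedDpLog2  = expectedOpsLog2 - dpBits;  // ~2^27.5 DPs at solve time

    printf("Expected ops ~2^%.1f, expected stored DPs ~2^%.1f\n",
           expectedOpsLog2, expectedDpLog2);
    return 0;
}
The dump above shows 2^27.25 DPs, i.e. close to but still a little below that average-case estimate, so it is plausible the key simply has not been hit yet.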
sr. member
Activity: 617
Merit: 312
May 24, 2020, 03:38:11 PM
-snip-
That's what I was wondering. Yes, I own all of my clients. They all reside in my house, but none pay rent as of late :)


That is, the problem most likely occurs when the clients are not on the local network. Namely, the problem is caused by a bug in the socket handling.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:32:11 PM
All I use is Windows...using the original server code from Jean Luc.
I can say the same... I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It doesn't matter how much RAM or which processor; in no case was there an anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets too big, I stop the server, merge, and start a new work file.
Maybe your clients are on the same local network and you do not have any internet issues.
All my clients are in different countries, so even with only 4 clients connected I get a server crash from time to time.

That's what I was wondering. Yes, I own all of my clients. They all reside in my house, but none pay rent as of late :)
full member
Activity: 281
Merit: 114
May 24, 2020, 03:23:45 PM
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work-file save printed a wrong size.
Do not restart the clients, and restart the server without the -w option.

I use my own version of the server/client app and the server does not shut down.
I'm not sure it will be useful to you with your DP size.
I use DP=31 and, for example, a rig of 8x2080 Ti sends a 1.6 GB file every 2 hours, and a rig of 6x2070 around 700 MB every 2 hours.
With your DP=28 the files should be about 8 times larger.
Anyway, if anybody is interested in the app, I can publish my PureBasic code here (for Windows x64).
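(For reference, the factor of 8 follows directly from the DP setting: each extra DP bit halves the fraction of points that qualify as distinguished, so for the same amount of work DP=28 stores 2^(31-28) = 8 times as many DPs, and therefore produces roughly 8 times larger files, than DP=31.)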

I will be grateful for the code, because I don't think I will finish it :-(

Quote
Did you use the original server or a modified version ?
Could you also do a -winfo on the save28.work ?

Yes, I use the original one. I will attach the output of -winfo on save28.work.
sr. member
Activity: 617
Merit: 312
May 24, 2020, 03:23:25 PM
All I use is Windows...using the original server code from Jean Luc.
I can say the same... I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It doesn't matter how much RAM or which processor; in no case was there an anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets too big, I stop the server, merge, and start a new work file.
Maybe your clients are on the same local network and you do not have any internet issues.
All my clients are in different countries, so even with only 4 clients connected I get a server crash from time to time.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:22:37 PM
Jean Luc - or anyone:

Precomp Table!

I want to build a precomputed table where only tame kangaroos are stored.

I tried tinkering with the current source code, but to no avail.

Do you know/can you tell me what I need to change in the code, or does it require a code overhaul?
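Not an answer from the actual source, but the general shape of the idea (a hypothetical sketch with made-up types; the real Kangaroo hash-table layout is different): when writing the work file, skip every DP whose herd flag marks it as wild, so the saved file holds only tame points. The usual rationale is that tame paths depend only on the search interval, not on the target key, so a tame-only table can in principle be reused against other keys in a range of the same width.
Code:
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical DP record; the real Kangaroo hash-table entry layout differs.
struct DpEntry {
    uint64_t xLow;    // truncated x-coordinate of the distinguished point
    uint64_t dist;    // travelled distance (sign/offset details omitted)
    bool     isTame;  // herd flag: true = tame, false = wild
};

// Keep only tame entries when writing a reusable precomputed table.
std::vector<DpEntry> filterTame(const std::vector<DpEntry>& table) {
    std::vector<DpEntry> tameOnly;
    for (const DpEntry& e : table)
        if (e.isTame) tameOnly.push_back(e);
    return tameOnly;
}

int main() {
    std::vector<DpEntry> table = {{0x1A2Bu, 1000, true}, {0x3C4Du, 2000, false}};
    printf("kept %zu of %zu DPs\n", filterTame(table).size(), table.size());
    return 0;
}
Whether the existing hash table already records enough per-DP information to do this (e.g. a herd bit alongside the distance) is the part that would need checking in the actual source.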
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:20:09 PM
All I use is Windows...using the original server code from Jean Luc.
I can say the same... I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It doesn't matter how much RAM or which processor; in no case was there an anomaly in resource consumption.

Like I said earlier, mine may pause/crash twice a day. But I did catch one instance where one of the clients had a GPU issue, and maybe that caused it to stop.

I have had up to 39 clients working at one time. Now, once the file gets too big, I stop the server, merge, and start a new work file.
sr. member
Activity: 617
Merit: 312
May 24, 2020, 03:15:10 PM
All I use is Windows...using the original server code from Jean Luc.
I can say the same... I use Windows 10 and the release from GitHub compiled by JeanLuc, and the server crashed.
There should be a reason why it works fine for some while it breaks down for others.
It doesn't matter how much RAM or which processor; in no case was there an anomaly in resource consumption.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:13:59 PM
Yes, the dump file is currently 4.64GB; it takes about 30 seconds to load when restarting, and about the same to write.
If I set the save interval to every three minutes, that still leaves only 2 minutes of actual work, which significantly increases the overall time. I am writing about this because at further levels this can become a bothersome problem.
I removed -w from the command line and we'll see what the effect will be.


EDIT:
soooo baaaad...


What config is your server? CPU, RAM, etc?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:11:44 PM
The server is fine. I have an 8GB+ file that I am reading and rewriting to.
Not on Windows. On Windows the server crashes without any error, at random times.
Only when there is a single connection does the server not crash.

All I use is Windows...using the original server code from Jean Luc.
sr. member
Activity: 617
Merit: 312
May 24, 2020, 03:09:59 PM
The server is fine. I have an 8GB+ file that I am reading and rewriting to.
Not on Windows. On Windows the server crashes without any error, at random times.
Only when there is a single connection does the server not crash.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 24, 2020, 03:07:30 PM
Arg,
Hope there is no problem with a file bigger than 4GB!
The last work-file save printed a wrong size.
Do not restart the clients, and restart the server without the -w option.


The server is fine. I have an 8GB+ file that I am reading and rewriting to.
sr. member
Activity: 462
Merit: 696
May 24, 2020, 02:56:40 PM
@zielar
Did you use the original server or a modified version ?
Could you also do a -winfo on the save28.work ?