
Topic: Pollard's kangaroo ECDLP solver - page 123. (Read 59389 times)

full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 27, 2020, 06:35:15 AM
@JeanLuc does the new -wsplit option save the kangaroos in the work file? Because if they are not saved, I don't see how the key can be solved while merging the master file and a small work file.
If the kangaroos are saved and the key can be solved during the merge, that's good.
Another question: if the server restarts, does it start from scratch? I mean, with -wsplit, do I need to load the old master file with -i?
What I want to do:
I chose dp = 31 out of ignorance, and now I want to fix the situation.
I want to download the master file to my home PC, which is much more powerful than the server and has more RAM.
Set a new DP = 28, because I have 2^26.5 kangaroos in total (of course I'll have to search for DPs again, but what can I do; by lowering the DP I will lose the required number of points).
Launch the server from scratch with -wsplit and -d 28 (and here the question arises: will the clients update their configuration, or do they need to be restarted?)
Then, from time to time, download the work files to my PC and merge them locally. Is that the correct way?



Drop your -dp down to 25 and I'll share my master save file with you. It's an 8.6 GB file.
sr. member
Activity: 462
Merit: 701
May 27, 2020, 06:34:13 AM
If you use -wsplit in classic mode (no client/server), then yes, the kangaroos will be included in the file, and in case of a crash you can restart with the last saved file, which will contain the kangaroos.
If you change the config of the server, yes, you need to restart all clients.

To solve a key, the kangaroos are not needed; only the DPs in the HashTable and the work file header are needed.
Saving the kangaroos (-ws) avoids breaking paths and creating new kangaroos.
I forgot to enable -ws for client mode; I'll add this ASAP.
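
To see why the DPs alone are enough: a tame/wild pair landing on the same point gives the key directly from the two stored distances. A toy sketch of that final step (plain 64-bit numbers here; the real solver works with 256-bit values modulo the curve order, and the tame/wild start convention is the classic one, not necessarily this program's exact internals):
Code:
// Toy illustration of the kangaroo collision equation (not the project's
// code). A tame kangaroo walks from b*G with b known; a wild one walks
// from P = k*G. When both hit the same point: b + dT = k + dW,
// so k = b + dT - dW. Only the distances stored with each DP matter.
#include <cstdint>
#include <cstdio>

int main() {
    uint64_t b = 1000;                 // known tame start offset
    uint64_t k = 1234;                 // the secret we pretend not to know
    uint64_t dT = 500;                 // tame distance stored with its DP
    uint64_t dW = b + dT - k;          // wild distance at the collision
    uint64_t recovered = b + dT - dW;  // solved from the two distances alone
    printf("recovered k = %llu\n", (unsigned long long)recovered); // 1234
}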
sr. member
Activity: 642
Merit: 316
May 27, 2020, 06:23:11 AM
@JeanLuc does the new -wsplit option save the kangaroos in the work file? Because if they are not saved, I don't see how the key can be solved while merging the master file and a small work file.
If the kangaroos are saved and the key can be solved during the merge, that's good.
Another question: if the server restarts, does it start from scratch? I mean, with -wsplit, do I need to load the old master file with -i?
What I want to do:
I chose dp = 31 out of ignorance, and now I want to fix the situation.
I want to download the master file to my home PC, which is much more powerful than the server and has more RAM.
Set a new DP = 28, because I have 2^26.5 kangaroos in total (of course I'll have to search for DPs again, but what can I do; by lowering the DP I will lose the required number of points).
Launch the server from scratch with -wsplit and -d 28 (and here the question arises: will the clients update their configuration, or do they need to be restarted?)
Then, from time to time, download the work files to my PC and merge them locally. Is that the correct way?

sr. member
Activity: 462
Merit: 701
May 27, 2020, 05:45:15 AM
Ok, I could read the whole work file successfully. My mistake was that I read 8 bytes per nbItem and 8 bytes per maxItem, but it should be 4 bytes.
nbItem is the number of entries within the block. Why do we need the maxItem value? What does it show?

maxItem is used by the HashTable class for the allocation. It is the same thing as the capacity of a standard C++ vector.
It is not useful information when processing the HashTable.
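
In other words, nbItem and maxItem behave like size() and capacity() on a std::vector (illustration only, not the project's code):
Code:
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> bucket;
    bucket.reserve(16);        // maxItem: room allocated for entries
    bucket.push_back(42);      // nbItem: entries actually stored
    printf("nbItem=%zu maxItem=%zu\n", bucket.size(), bucket.capacity());
}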
sr. member
Activity: 443
Merit: 350
May 27, 2020, 05:19:52 AM
Jean_Luc, I want to look at the table which is saved to the work file. But as I understand it, only 128 bits are saved for the X coordinate and 126 bits for the distance (together with 1 bit for the sign and 1 bit for the kangaroo type).

Anyway, what is the easiest way to get the whole table in txt format? I could easily read the header, dp, start/stop ranges, and the x/y coordinates of the key from the binary file. After that, the hash table is saved with a lot of 0 bytes...
Can you briefly describe the hash table structure which is saved to the binary file?
After the header comes the hash table.
The hash table is stored as blocks; in total there are 262144 blocks.
Each block has a structure like this:
[nbItem = 4 bytes
maxItem = 4 bytes
then an array of nbItem elements:
X coordinate = 16 bytes
d = 16 bytes (bit 127: sign, bit 126: type (TAME = 0, WILD = 1), bits 125-0: distance),
X coordinate = 16 bytes
d = 16 bytes (bit 127: sign, bit 126: type (TAME = 0, WILD = 1), bits 125-0: distance)
......
X coordinate = 16 bytes
d = 16 bytes (bit 127: sign, bit 126: type (TAME = 0, WILD = 1), bits 125-0: distance)]
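
For anyone parsing this, decoding the quoted d field could look like the sketch below (little-endian byte order is my assumption; check HashTable.cpp for the authoritative encoding):
Code:
#include <cstdint>
#include <cstdio>
#include <cstring>

// Decode the 128-bit d field: bit 127 = sign, bit 126 = type,
// bits 125-0 = distance.
struct Dist { bool negative; bool wild; uint64_t distHi, distLo; };

Dist decodeD(const uint8_t d[16]) {
    uint64_t lo, hi;
    memcpy(&lo, d, 8);                 // bits 0-63
    memcpy(&hi, d + 8, 8);             // bits 64-127
    Dist out;
    out.negative = (hi >> 63) & 1;     // bit 127: sign of the distance
    out.wild     = (hi >> 62) & 1;     // bit 126: TAME = 0, WILD = 1
    out.distHi   = hi & 0x3FFFFFFFFFFFFFFFULL;  // bits 125-64 of the distance
    out.distLo   = lo;                 // bits 63-0 of the distance
    return out;
}

int main() {
    uint8_t d[16] = {42};              // toy entry: distance 42, tame, positive
    Dist r = decodeD(d);
    printf("wild=%d neg=%d dist=%llu\n", r.wild, r.negative,
           (unsigned long long)r.distLo);
}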


Ok, I could read the whole work file successfully. My mistake was that I read 8 bytes per nbItem and 8 bytes per maxItem, but it should be 4 bytes.
nbItem is the number of entries within the block. Why do we need the maxItem value? What does it show?
sr. member
Activity: 462
Merit: 701
May 27, 2020, 02:20:14 AM
But when you merge the files offline, you still need a lot of memory to do the merge, no? (The same amount of RAM as without -wsplit.)
Because the app has to load the huge work file and add the new small work file.

That's right, but here, if the system swaps, it will be less of a disaster than swapping on the server host.
zielar said that he has enough RAM for DP22, so it should be OK with DP23.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 27, 2020, 02:18:41 AM
-snip-
This will decrease RAM usage and improve server performance for insertion. The merge can be done offline without stopping the server.

But when you merge the files offline, you still need a lot of memory to do the merge, no? (The same amount of RAM as without -wsplit.)
Because the app has to load the huge work file and add the new small work file.

Same thing I was wondering... the amount of RAM you have, or your server has, may dictate your -dp and expected RAM. For example, if I only have 64 GB of RAM, I may choose my -dp based on that and go with a -dp that is expected to create a 32-40 GB file. That way it will always be less than my overall RAM.
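
A rough way to size that up front (sketch only: the 2·sqrt(range) operation estimate matches the expected DP counts the server prints, and 32 bytes is just the X/d payload per entry from the file layout discussed below; hash-table overhead comes on top):
Code:
// Rough estimator: expected ops ~ 2*sqrt(range), stored DPs ~ ops / 2^dp,
// ~32 bytes of X/d payload per DP (hash-table overhead not included).
#include <cstdio>
#include <cmath>

int main() {
    double rangeBits = 109.0;   // width of the search interval (example value)
    int dp = 25;
    double logOps = 1.0 + rangeBits / 2.0;   // log2(2 * sqrt(2^rangeBits))
    double logDPs = logOps - dp;             // log2(expected DP count)
    printf("expected DPs ~ 2^%.1f, payload ~ %.1f GB\n",
           logDPs, pow(2.0, logDPs + 5.0 - 30.0));  // count * 32 B, in GiB
}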
sr. member
Activity: 642
Merit: 316
May 27, 2020, 02:06:00 AM
-snip-
This will decrease RAM usage and improve server performance for insertion. The merge can be done offline without stopping the server.

But when you merge the files offline, you still need a lot of memory to do the merge, no? (The same amount of RAM as without -wsplit.)
Because the app has to load the huge work file and add the new small work file.
sr. member
Activity: 462
Merit: 701
May 27, 2020, 01:40:49 AM
Yes, try -wsplit.
This will also block clients during the backup and hashtable cleanup. It should solve the overload.
full member
Activity: 282
Merit: 114
May 27, 2020, 01:25:05 AM
The server is very overloaded using this dp. My file is 45 GB after 5 h 🙄 CUDA 10.2 is not the problem, because I used that version in all previous releases. Maybe -wsplit will finally solve my problem and this hard level. Best of all, I set the server up on Google specifically to get the right amount of RAM and performance, yet all 80 processors are at 100% load and the time-since-launch counter resets almost every few minutes.
sr. member
Activity: 462
Merit: 701
May 27, 2020, 12:04:25 AM
I switched to CUDA 10.2 at release 1.6. Could that be the reason for your keyrate issue? Wrong driver?
sr. member
Activity: 462
Merit: 701
May 26, 2020, 11:58:31 PM
I uploaded the new release with the -wsplit option.
IMHO, this is a great option.
It does not prevent solving the key even if the hashtable is reset at each backup, because the paths continue and a collision may still occur in the small hashtable.
Of course, the offline merge should solve it before that.

In the little test I did (reset every 10 seconds, DP10), the server solved the 64-bit key in 1:41.
The merge solved it after 1:12.

Code:
[Client 0][Kang 2^-inf][DP Count 2^-inf/2^23.05][Dead 0][04s][2.0/4.0MB]
New connection from 127.0.0.1:58358
[Client 1][Kang 2^18.58][DP Count 2^-inf/2^23.05][Dead 0][08s][2.0/4.0MB]
New connection from 172.24.9.18:52090
[Client 2][Kang 2^18.61][DP Count 2^16.17/2^23.05][Dead 0][10s][4.2/14.1MB]
SaveWork: save.work_27May20_063455...............done [4.2 MB] [00s] Wed May 27 06:34:55 2020
[Client 2][Kang 2^18.61][DP Count 2^20.25/2^23.05][Dead 0][20s][40.1/73.9MB]
SaveWork: save.work_27May20_063505...............done [40.1 MB] [00s] Wed May 27 06:35:06 2020
[Client 2][Kang 2^18.61][DP Count 2^20.17/2^23.05][Dead 0][30s][37.9/71.5MB]
SaveWork: save.work_27May20_063516...............done [37.9 MB] [00s] Wed May 27 06:35:16 2020
[Client 2][Kang 2^18.61][DP Count 2^20.55/2^23.05][Dead 0][41s][48.9/82.8MB]
SaveWork: save.work_27May20_063526...............done [48.9 MB] [00s] Wed May 27 06:35:27 2020
[Client 2][Kang 2^18.61][DP Count 2^20.29/2^23.05][Dead 0][51s][41.1/74.9MB]
SaveWork: save.work_27May20_063537...............done [41.1 MB] [00s] Wed May 27 06:35:37 2020
[Client 2][Kang 2^18.61][DP Count 2^20.30/2^23.05][Dead 0][01:02][41.5/75.2MB]
SaveWork: save.work_27May20_063547...............done [41.5 MB] [00s] Wed May 27 06:35:48 2020
[Client 2][Kang 2^18.61][DP Count 2^20.28/2^23.05][Dead 0][01:12][40.9/74.6MB]
SaveWork: save.work_27May20_063558...............done [40.9 MB] [00s] Wed May 27 06:35:58 2020  <= offline merge solved there
[Client 2][Kang 2^18.61][DP Count 2^20.19/2^23.05][Dead 0][01:22][38.5/72.2MB]
SaveWork: save.work_27May20_063608...............done [38.5 MB] [00s] Wed May 27 06:36:08 2020
[Client 2][Kang 2^18.61][DP Count 2^20.55/2^23.05][Dead 0][01:33][48.8/82.7MB]
SaveWork: save.work_27May20_063618...............done [48.8 MB] [00s] Wed May 27 06:36:19 2020
[Client 2][Kang 2^18.61][DP Count 2^19.98/2^23.05][Dead 0][01:41][33.5/66.8MB]
Key# 0 [1S]Pub:  0x03BB113592002132E6EF387C3AEBC04667670D4CD40B2103C7D0EE4969E9FF56E4
       Priv: 0x5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EB

Concerning your hashrate problem, I don't see any trivial reason. Do you see a timeout or error message at the client level? Maybe the server is a bit overloaded?
I'm investigating...
full member
Activity: 282
Merit: 114
May 26, 2020, 11:34:22 PM
Many thanks... My problem with the low hashrate is back again. I see that this problem was not present one release before your release from yesterday, so your latest changes must be the reason for the problem.
sr. member
Activity: 462
Merit: 701
May 26, 2020, 11:17:39 PM
Yeah, I turned it off because the work file visibly grows in real time.
I will try your suggestion and give you my opinion later.
I have 2^30.08/2^32.55 now, so it is again a bad moment to make changes in the source, but I will try with -g. On 10 machines my hashrate dropped to 200 Mkeys/s with no workload activity on the GPUs in nvidia-smi... Still connected... What can be the reason? The real hashrate should be ~13000 Mkeys/s, not 200 😁

OK, I'm adding a -wsplit option to the server. It will reset the hashTable at each backup and save to fileName + timestamp, e.g. save39.work_27May20_061427.
This will decrease RAM usage and improve server performance for insertion. The merge can be done offline without stopping the server.
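
For reference, the timestamp suffix in that example can be reproduced with strftime (a sketch; the actual code may format it differently):
Code:
#include <cstdio>
#include <ctime>

int main() {
    char stamp[32], name[64];
    time_t now = time(nullptr);
    // %d%b%y_%H%M%S -> e.g. 27May20_061427, matching the example above
    strftime(stamp, sizeof(stamp), "%d%b%y_%H%M%S", localtime(&now));
    snprintf(name, sizeof(name), "save39.work_%s", stamp);
    printf("%s\n", name);
}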
full member
Activity: 282
Merit: 114
May 26, 2020, 11:16:54 PM
The problem is gone after closing the connection from one machine that had 1251 dead kangaroos after two hours. For two hours now I have seen 0 dead; since then everything works perfectly.
full member
Activity: 282
Merit: 114
May 26, 2020, 11:01:20 PM
Yeah, I turned it off because the work file visibly grows in real time.
I will try your suggestion and give you my opinion later.
I have 2^30.08/2^32.55 now, so it is again a bad moment to make changes in the source, but I will try with -g. On 10 machines my hashrate dropped to 200 Mkeys/s with no workload activity on the GPUs in nvidia-smi... Still connected... What can be the reason? The real hashrate should be ~13000 Mkeys/s, not 200 😁
sr. member
Activity: 462
Merit: 701
May 26, 2020, 10:03:23 PM
I saw the number of kangaroos in the counter (probably 2^33.08), but I do not remember, because after turning off the server to change the save file I again see only 2^inf, so I

Wow, 2^33.08 kangaroos! With DP23, the overhead is still a bit large.
Why do you turn off the server? Is the work file too big?
If I were you, I would reduce the grid size of the GPUs and/or reduce GPU_GRP_SIZE to 64.
By reducing gridx and gridy by 2 and GPU_GRP_SIZE to 64, you will have 2^30 kangaroos, which will be nice with dp23.
You will probably lose some performance. Make a test on a single GPU of each type to see what the performance is with the reduced grid and GPU_GRP_SIZE.
You can also engage fewer machines and see what the best trade-off is.
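
As a sanity check on those numbers (assuming the kangaroo count per GPU is gridx × gridy × GPU_GRP_SIZE, with GPU_GRP_SIZE defaulting to 128; both are my assumptions about the internals):
Code:
#include <cstdio>
#include <cmath>

int main() {
    double logKang = 33.08;                         // current count (log2)
    double cut = std::log2(2.0 * 2.0 * 2.0);        // gridx/2 * gridy/2 * GRP 128->64
    printf("new count     ~ 2^%.2f\n", logKang - cut);   // ~2^30.08
    // DP overhead is roughly nbKangaroo * 2^dp wasted operations:
    printf("dp23 overhead ~ 2^%.2f ops\n", logKang - cut + 23.0);
}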

Yes, if you turn off the server, the kangaroos are not counted at reconnection; I will fix this.

Jean_Luc, I want to look at the table which is saved to the work file. But as I understand it, only 128 bits are saved for the X coordinate and 126 bits for the distance (together with 1 bit for the sign and 1 bit for the kangaroo type).

Anyway, what is the easiest way to get the whole table in txt format? I could easily read the header, dp, start/stop ranges, and the x/y coordinates of the key from the binary file. After that, the hash table is saved with a lot of 0 bytes...
Can you briefly describe the hash table structure which is saved to the binary file?

Yes, the 0s are the HASH entry headers; if you see lots of 0s, the hash table is not very full.
As mentioned above, you can have a look at the HashTable::SaveTable() function to understand the format.
jr. member
Activity: 30
Merit: 122
May 26, 2020, 07:11:28 PM
A dozen out of a few hundred machines? 🙂
full member
Activity: 282
Merit: 114
May 26, 2020, 04:51:59 PM
I switched all clients to the new scan with dp 23.
I saw the number of kangaroos in the counter (probably 2^33.08), but I do not remember, because after turning off the server to change the save file I again see only 2^inf, so I have no idea, but it was something around this value.

Two hours of operation (while connecting more clients) resulted in:
sr. member
Activity: 642
Merit: 316
May 26, 2020, 04:25:31 PM
Jean_Luc, I want to look at the table which is saved to the work file. But as I understand it, only 128 bits are saved for the X coordinate and 126 bits for the distance (together with 1 bit for the sign and 1 bit for the kangaroo type).

Anyway, what is the easiest way to get the whole table in txt format? I could easily read the header, dp, start/stop ranges, and the x/y coordinates of the key from the binary file. After that, the hash table is saved with a lot of 0 bytes...
Can you briefly describe the hash table structure which is saved to the binary file?
After the header comes the hash table.
The hash table is stored as blocks; in total there are 262144 blocks.
Each block has a structure like this:
[nbItem = 4 bytes
maxItem = 4 bytes
then an array of nbItem elements:
X coordinate = 16 bytes
d = 16 bytes (bit 127: sign, bit 126: type (TAME = 0, WILD = 1), bits 125-0: distance),
X coordinate = 16 bytes
d = 16 bytes (bit 127: sign, bit 126: type (TAME = 0, WILD = 1), bits 125-0: distance)
......
X coordinate = 16 bytes
d = 16 bytes (bit 127: sign, bit 126: type (TAME = 0, WILD = 1), bits 125-0: distance)]
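
Putting that layout into code, a minimal txt dumper could look like the sketch below (the header size to skip and the exact field order are assumptions; verify against HashTable::SaveTable() before trusting the output):
Code:
#include <cstdio>
#include <cstdint>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc < 3) { fprintf(stderr, "usage: dump <workfile> <headerSize>\n"); return 1; }
    FILE* f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    fseek(f, atol(argv[2]), SEEK_SET);            // skip the work file header
    for (long block = 0; block < 262144; block++) {
        uint32_t nbItem = 0, maxItem = 0;
        if (fread(&nbItem, 4, 1, f) != 1) break;  // 4 bytes, as noted above
        fread(&maxItem, 4, 1, f);                 // capacity only, ignored
        for (uint32_t i = 0; i < nbItem; i++) {
            uint8_t x[16], d[16];
            fread(x, 16, 1, f);                   // 128-bit X coordinate
            fread(d, 16, 1, f);                   // sign/type/distance field
            for (int j = 15; j >= 0; j--) printf("%02x", x[j]);
            printf(" ");
            for (int j = 15; j >= 0; j--) printf("%02x", d[j]);
            printf("\n");
        }
    }
    fclose(f);
    return 0;
}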