Author

Topic: Pollard's kangaroo ECDLP solver - page 118. (Read 59389 times)

member
Activity: 144
Merit: 10
May 30, 2020, 09:34:43 AM

As relating to the Wild Kangaroos, [working_public_key] = [(original_public_key) - (beginning_range)*(secp256k1_generator_point)].
[distinguished_point] = [(+-traveled_distance)*(secp256k1_generator_point)] + [working_public key]

You will need to add back the (beginning_range) when there’s a collision to solve for the (original_public_key).

OK, so it is adding points, not adding key + distance:
(0xe6dabff2705a80acc23ae121956873c4ff9fd31cb0faca522c33624e23657e04,0x125c04d29ea83874332ea8aef3b3467f22665a4970df415be756bcdf5675e569)
+ (fb12e2e7eba822db7582b91da81c0f1d991a6fec79d170733a1eceb039b3e1f9,ee2e79d5326d178c91ed36ca52f9be4f04c42e3cf7cabb3299e070bc1231bb05)
=(dcbae520622e89bd4c0062bb82400003c628f41e76f5bce8566c3dfa2c3fff0,b6fdc18b5be9048e837759b86efa422511b717ed9e7bc2d7b1936c06a0620cfe)
and the x-coordinate is correct, but a tame will never reach this point anyway, because it is out of the range 0..fffffffffffff.
So not every wild can become a tame. We can tame every wild with sign +, but each wild with sign - must be verified against the range.
Only then can we add the tamed wild to our accumulated work.
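That range check can be sketched in Python (a toy helper for illustration, not the tool's actual code; the distances are the two wild distances from Etar's walkthrough in this thread):

```python
def tameable(key, distance, sign, range_end):
    """A wild DP corresponds to the scalar key + sign*distance; it can be
    recycled as a tame point only if that scalar stays inside [0, range_end]."""
    scalar = key + sign * distance
    return 0 <= scalar <= range_end

# Key 0xa123fe3456, range 0..0xfffffffffffff:
assert tameable(0xa123fe3456, 0x6a6bdf014bd68, +1, 0xfffffffffffff)      # '+' wild: in range
assert not tameable(0xa123fe3456, 0x617445c562205, -1, 0xfffffffffffff)  # '-' wild: scalar < 0
```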


@Etar

I am not sure if that’s the case with Jean_Luc’s software, but with mine, all Wild Kangaroos can be converted to Tame Kangaroos after the DL has been solved. I ran a small test and it held true for all cases:

Code:
iterations = 65536 (2^16)
bmul  size = 22
batch size = 128
total keys = 8388608

main context =   0.01402114
bmul context =  10.28601294

Baseline verification passed
Serial verification passed

Starting...

fffffffffffffffffffffffffffffffebaaebd0cde863851b153c3b3e54b842a
2855b4681021cbc04373860592de141d7e8686574f6edb7de522baf3a8eb0309
000001e66aa3c3ccbbe8735d6af80000078097436d9053b4552d8bb2c6491b15

fffffffffffffffffffffffffffffffebaaebcf57871bba71cababca3cfc6ba5
3c9bf5c08830cbd65ac8c4d551ddd8b77e93b56ba917fe70a6f9a7093f11907c
00000145e24ee029218b666ef7bc00000977a6faaf969829b43524b39c1594ae

fffffffffffffffffffffffffffffffebaaebcfdf0ee232d9145f3fe9d38ff41
85aa570afa81ba04d4a5cb000a90468caf95078cfedd7ca8379c32a9e5f2d615
000000688f0aa57f0950015258ac00000b30373c83ed1236f3fb177730a2117e

fffffffffffffffffffffffffffffffebaaebd0598056b9fc923ecc885a3bc20
c4c7a3af557b96e726a9377f7a37ca9bae18d06e16f11f859a95cb4409780f47
000001820963965f9c34ad90266400000d412f5abe971869764e15e03f8f8ad8

fffffffffffffffffffffffffffffffebaaebd0ada254af6931585f77bafcf4d
dd3d0d37ea59b4bbae97dcef111398a86bfd249dbf0ab536457c23f634d92aaa
0000017bdcfa4edd88fdf04dfd0c00000f16e754bc105657863055dd948487a1

fffffffffffffffffffffffffffffffebaaebcf25b5a4b020e63b0ce136fa282
f4cb6a2aa4fdace911883c0aa733af8447e06b5c56526f43569168b91936cb19
000000a4bf27ad623515dc0de7f8000016c74bbaa3098d7aa92ed4f331f12c95

fffffffffffffffffffffffffffffffebaaebcf176baa77561285fff4e219ad5
1af3f603f6edd7b10349afcfa13bbd63c1bdb99801e491b682184924dfc5c3c4
00000022f465e6f8b16d0f1c1ea000001706ad24024dd9fa0ce11c0adce08018

fffffffffffffffffffffffffffffffebaaebd015682cd2ae25d864d4169574c
d64b5bcb9724ed9a02dae2dac8ec17686f4b6f765c0534bff11888b928d08ec2
000001c2d572bdebd0f8831fd3e800001831690f67a08f7da03346684b117560

fffffffffffffffffffffffffffffffebaaebcf3d017caaa8c6822877b7f8688
13c9f7a86c88bc696fc6da053acf6568110acb4677620fef3bc7d5fd3800c2bb
000000866db879683704712e878400001ba7908f67561bdc006981cbbdcb15b1

fffffffffffffffffffffffffffffffebaaebcf5eb02df76cdf1112b2f85aa76
32be189e1d49409560a3e0273d973965b5742407caba96942af32f0b239a4bdb
000001be084cae57d8293270a5b400001c8467b521322e3635adf277d28d460b

.
.
.
member
Activity: 144
Merit: 10
May 30, 2020, 09:30:07 AM
Hardware, how long are you running your precomp for? Do you have a set # of DPs, or a percentage? 2/3 of the expected ops?
@WanderingPhilospher

The goal is ~512GiB worth of precomputed points (@ 16bytes/point).
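As a quick sanity check on that target, assuming the stated 16 bytes per stored point:

```python
GIB = 2**30
target_bytes = 512 * GIB           # 512 GiB precompute budget
bytes_per_point = 16               # as stated: 16 bytes/point
points = target_bytes // bytes_per_point
# 2^39 bytes / 2^4 bytes per point = 2^35 points, roughly 34 billion DPs
```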
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 30, 2020, 04:42:53 AM
The new merger commit splits and performs the merge block by block,
so it does not need the full table in RAM.
I'm still working on it

Separate tame and wild arrays are not yet supported, but that can be done.
At the moment, I'm working on having a usable and stable program.


Ok.

I think it's pretty stable. The only issues you've had were when someone tried to solve a key with 80 billion * sqrt(160 gajillion) Mkey/s on your first server edition Smiley

You have developed an awesome program...extremely usable and stable.

I am sorry my coding skills are basically zero or I would help you more with different ideas.
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
May 30, 2020, 04:42:39 AM
The new merger commit splits and performs the merge block by block,
so it does not need the full table in RAM.
I'm still working on it

Separate tame and wild arrays are not yet supported, but that can be done.
At the moment, I'm working on having a usable and stable program.



@Jean_Luc, could you please add an option so that clients can connect to the server at any time, rather than all at the same time, and join a task already running on the server?


Jean_Luc, does the server need a GPU, or can the server work fine without GPU computations?
Thanks!
sr. member
Activity: 462
Merit: 701
May 30, 2020, 04:36:12 AM
The new merger commit splits and performs the merge block by block,
so it does not need the full table in RAM.
I'm still working on it

Separate tame and wild arrays are not yet supported, but that can be done.
At the moment, I'm working on having a usable and stable program.

full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 30, 2020, 04:29:46 AM
@MrFreeDragon
Thanks for the link, was funny Wink

@patatasfritas
About the merger: yes, I know. I am currently working on a low-RAM-usage version and on integrating your features.
Concerning subgroups, I haven't had a look yet, but splitting the range is not a good way of parallelization: you only gain sqrt(n), where n is the number of subgroups.

Jean Luc,

all this and that with merging. There has to be a better way...with a comparator that I spoke about earlier.

Break the code down so that it saves multiple tame and wild files and build a comparator. Easy fix!

Currently, one has to merge files or base their DP setting on expected RAM usage. So if I only have 48GB RAM, that will dictate the lowest DP that I can use because now we have one huge master save file.

Let's say my master save file is 20GB and I need to merge it with another 4GB file (now a total of 24GB). My RAM will be nearing the 50 percent threshold that you mentioned.

But now, let's say you save 2 Tame files and 4 Wild files. We then have 6 files, each Tame file is only 6GB, and each Wild file is only 3GB. Now when I compare to look for a solved key, my machine reads the 1st Tame file (6GB) and the 1st Wild file (3GB), so I am only consuming 9GB. The machine compares; if no collision is found, it compares the 1st Tame file with the 2nd Wild file. Repeat until the 1st Tame file has been compared with each Wild file, and likewise for the 2nd Tame file.

Would this not work?
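That pairwise scheme can be sketched in a few lines of Python. The file layout here is hypothetical (one hex x-coordinate and distance per line), since the real binary save format isn't specified in this post; the point is only that peak RAM is one tame table plus one wild table at a time:

```python
def load_xs(path):
    """Map x-coordinate -> distance for one tame or wild file."""
    table = {}
    with open(path) as f:
        for line in f:
            x_hex, d_hex = line.split()
            table[int(x_hex, 16)] = int(d_hex, 16)
    return table

def find_collision(tame_files, wild_files):
    """Compare every tame file against every wild file, one pair at a time."""
    for t_path in tame_files:
        tame = load_xs(t_path)
        for w_path in wild_files:
            wild = load_xs(w_path)
            for x in tame.keys() & wild.keys():
                # Collision found: the key is solved from the two distances.
                return x, tame[x], wild[x]
    return None
```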
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 30, 2020, 03:31:57 AM
True or False?

Let's say for any of the puzzles, #110, 115, 120, etc. people can use various DP settings, meaning people can search for different DPs.

Just for conversation's sake, let's say the DPs range from 14 to 48.

For each of those DPs, 14 to 48, the key can be solved, true or false?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 30, 2020, 03:22:43 AM
If anyone is interested...

Looking for some people to partner up to attack the rest of the puzzles.

We take a range, divide it up (for #115, if there were 4 people, person 1 = 400...4FF..., person 2 = 500...5FF..., etc.) and split BTC when solved.

I have 2^28.5 kangaroos ready for the hunt. Hopefully other people have about the same or we divide ranges up based on kangaroo power?

Etar had a good idea, but I would have no clue how to set it up though.
Splitting the range is a bad idea. Kangaroo never tells you that there is no key in a range; you can run a range even past 4*sqrt(N) operations and then stop,
but that still does not guarantee that there is no key there. And in that case you will lose the chance to get the prize forever.
So attack it like you had mentioned? A central server collects all DPs, and people are paid out based on the percentage of DPs their machines sent to the central server? I said you had a good idea; I just don't know how to set that up so it's (a) efficient and (b) reliable.
sr. member
Activity: 642
Merit: 316
May 30, 2020, 03:19:36 AM
If anyone is interested...

Looking for some people to partner up to attack the rest of the puzzles.

We take a range, divide it up (for #115, if there were 4 people, person 1 = 400...4FF..., person 2 = 500...5FF..., etc.) and split BTC when solved.

I have 2^28.5 kangaroos ready for the hunt. Hopefully other people have about the same or we divide ranges up based on kangaroo power?

Etar had a good idea, but I would have no clue how to set it up though.
Splitting the range is a bad idea. Kangaroo never tells you that there is no key in a range; you can run a range even past 4*sqrt(N) operations and then stop,
but that still does not guarantee that there is no key there. And in that case you will lose the chance to get the prize forever.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 30, 2020, 03:14:45 AM
If anyone is interested...

Looking for some people to partner up to attack the rest of the puzzles.

We take a range, divide it up (for #115, if there were 4 people, person 1 = 400...4FF..., person 2 = 500...5FF..., etc.) and split BTC when solved.

I have 2^28.5 kangaroos ready for the hunt. Hopefully other people have about the same or we divide ranges up based on kangaroo power?

Etar had a good idea, but I would have no clue how to set it up though.
sr. member
Activity: 642
Merit: 316
May 30, 2020, 03:07:23 AM

As relating to the Wild Kangaroos, [working_public_key] = [(original_public_key) - (beginning_range)*(secp256k1_generator_point)].
[distinguished_point] = [(+-traveled_distance)*(secp256k1_generator_point)] + [working_public key]

You will need to add back the (beginning_range) when there’s a collision to solve for the (original_public_key).

Can you explain this to me, please?
Searching key 0xa123fe3456
Searching pub key 0xe6dabff2705a80acc23ae121956873c4ff9fd31cb0faca522c33624e23657e04125c04d29ea83874332ea8aef3b3467f22665a4970df415be756bcdf5675e569
range 0..fffffffffffff  (so there is no shifting, and working pubkey = original pubkey)
Looking at the hashtable:
x=0x7760a4827fcb4d02210c4fb962f48c49
d=0x40000000000000000006a6bdf014bd68
which means the type is wild and the sign is +.
OK, let's verify: (0x0a123fe3456 + 0x6a6bdf014bd68)*G = 0xd4814ad2a48ec5f0f1fdce8832800007760a4827fcb4d02210c4fb962f48c49. The result is correct.
Let's check another DP:
x=0x3c628f41e76f5bce8566c3dfa2c3fff0
d=0xc000000000000000000617445c562205
which means the type is wild and the sign is -.
But 0x617445c562205 > 0x0a123fe3456. So the question is: how can key - distance be out of range???
OK, so it is adding points, not adding key + distance:
(0xe6dabff2705a80acc23ae121956873c4ff9fd31cb0faca522c33624e23657e04,0x125c04d29ea83874332ea8aef3b3467f22665a4970df415be756bcdf5675e569)
+ (fb12e2e7eba822db7582b91da81c0f1d991a6fec79d170733a1eceb039b3e1f9,ee2e79d5326d178c91ed36ca52f9be4f04c42e3cf7cabb3299e070bc1231bb05)
=(dcbae520622e89bd4c0062bb82400003c628f41e76f5bce8566c3dfa2c3fff0,b6fdc18b5be9048e837759b86efa422511b717ed9e7bc2d7b1936c06a0620cfe)
and the x-coordinate is correct, but a tame will never reach this point anyway, because it is out of the range 0..fffffffffffff.
So not every wild can become a tame. We can tame every wild with sign +, but each wild with sign - must be verified against the range.
Only then can we add the tamed wild to our accumulated work.
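Etar's arithmetic above can be reproduced with a bare-bones secp256k1 implementation in Python (a sketch for checking hashtable entries by hand, not production code; big-int modular exponentiation makes it slow but exact):

```python
# secp256k1 field prime, group order, and generator (standard constants)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(A, B):
    """Affine point addition; None represents the point at infinity."""
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % P == 0:
        return None
    if A == B:
        lam = 3 * A[0] * A[0] * pow(2 * A[1], P - 2, P) % P
    else:
        lam = (B[1] - A[1]) * pow(B[0] - A[0], P - 2, P) % P
    x = (lam * lam - A[0] - B[0]) % P
    return (x, (lam * (A[0] - x) - A[1]) % P)

def ec_mul(k, A):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

# The '+' wild DP from the post: (key + distance)*G should land on a point
# whose low 128 x-bits match the hashtable entry.
key, dist = 0xA123FE3456, 0x6A6BDF014BD68
x = ec_mul((key + dist) % N, G)[0]
assert x & ((1 << 128) - 1) == 0x7760A4827FCB4D02210C4FB962F48C49
```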

sr. member
Activity: 462
Merit: 701
May 29, 2020, 11:39:41 PM
@MrFreeDragon
Thanks for the link, was funny Wink

@patatasfritas
About the merger: yes, I know. I am currently working on a low-RAM-usage version and on integrating your features.
Concerning subgroups, I haven't had a look yet, but splitting the range is not a good way of parallelization: you only gain sqrt(n), where n is the number of subgroups.
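That sqrt(n) figure follows from the textbook cost model, sketched below (~2*sqrt(N) expected group operations per run; constants vary by implementation):

```python
import math

def kangaroo_ops(interval_size):
    """Expected group operations for one kangaroo run over an interval."""
    return 2.0 * math.sqrt(interval_size)

def speedup_shared(n):
    """n machines feeding one shared DP table: near-linear, ~n."""
    return float(n)

def speedup_split(interval_size, n):
    """n machines each attacking its own subrange of size N/n:
    wall clock ~2*sqrt(N/n), i.e. only a sqrt(n) speedup overall."""
    return kangaroo_ops(interval_size) / kangaroo_ops(interval_size / n)

# 16 machines on a 2^114 interval: splitting the range gives 4x,
# sharing one DP table gives 16x.
```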


full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 29, 2020, 08:48:04 PM
I really like this race Cheesy
HardwareCollector or Zielar ?

There is a high probability that if we combine HardwareCollector's and Zielar's working files there will be the required collision.

There is also a way to check it right now without sharing the whole files: only the X-coordinates and the kangaroo type need to be shared (excluding distances), and if we have the same X-coordinate with different types (wild and tame), we can confirm that the key is found.

HWCollector and Zielar, do you want me to perform such check for you?  Wink
Thanks for the offer, but I’ve not started yet, as I am giving time for Zielar to solve the challenge first. He has poured a lot of resources into it already, and out of common courtesy, I will begin late Sunday night assuming that he hasn’t solved it yet. But I am working on precomputation for the 115-bit private key challenge.  Grin
Hardware, how long are you running your precomp for? Do you have a set # of DPs, or a percentage? 2/3 of the expected ops?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 29, 2020, 07:38:17 PM
Jean Luc...your Kangaroo is now a fast solver...especially for the lower bits, 90 on down. I know the higher the bits, the longer it takes to solve. But what can you do from this point forward to make solving the higher bits optimized/faster?

In all the articles and research papers I have read, well most of them, they talk about subgroups. Can this not be done?

Example, if we are trying to solve a key for the range 10000:1FFFF, currently we can only use the exact range started with for the same key. Can we not setup hash and jump table for 10000:1FFFF (precomp of sorts) and then attack the range with different starting points?

Example:
1 - 10000:1FFFF
2 - 11000:1FFFF
3-  12000:1FFFF
....
10- 1A000-1FFFF

or attack in smaller bits such as

100FF-1100
101FF-1200
etc?

My PC alone can solve a 64-bit range in 1 minute... what if we randomly generate a 64-bit range (or whatever bit size desired) inside the larger range and use the
-m option to stop the search in that section and move to the next randomly generated 64-bit piece? Or better yet, make it a sequential 64-bit search inside a 110-bit range. With numerous GPUs, you could assign each one a different range so the sequential piece is sped up.
Example:
gpu 1- attacking 10000-11000 in smaller sequential bits
gpu 2- attacking 11001-12000 in smaller sequential bits
etc

Anyone, thoughts?

I think if you find the key in one 64-bit piece, you don't need the next 64-bit piece. But if 3 of the 64-bit pieces come up false and only the 4th is the BINGO 64-bit piece, then I think that is not the 256-bit key, and the fourth 64-bit piece with the private key will not be the 256-bit key either.........

The 64-bit range doesn't need to equal anything...
it could be a sequential 40-, 41-, 42-, 50-, 56-, 72-, 80-bit, etc. range. The object is to check the smaller ranges (subgroups?) inside a larger range for the same key.
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
May 29, 2020, 07:15:09 PM
Jean Luc...your Kangaroo is now a fast solver...especially for the lower bits, 90 on down. I know the higher the bits, the longer it takes to solve. But what can you do from this point forward to make solving the higher bits optimized/faster?

In all the articles and research papers I have read, well most of them, they talk about subgroups. Can this not be done?

Example, if we are trying to solve a key for the range 10000:1FFFF, currently we can only use the exact range started with for the same key. Can we not setup hash and jump table for 10000:1FFFF (precomp of sorts) and then attack the range with different starting points?

Example:
1 - 10000:1FFFF
2 - 11000:1FFFF
3-  12000:1FFFF
....
10- 1A000-1FFFF

or attack in smaller bits such as

100FF-1100
101FF-1200
etc?

My PC alone can solve a 64-bit range in 1 minute... what if we randomly generate a 64-bit range (or whatever bit size desired) inside the larger range and use the
-m option to stop the search in that section and move to the next randomly generated 64-bit piece? Or better yet, make it a sequential 64-bit search inside a 110-bit range. With numerous GPUs, you could assign each one a different range so the sequential piece is sped up.
Example:
gpu 1- attacking 10000-11000 in smaller sequential bits
gpu 2- attacking 11001-12000 in smaller sequential bits
etc

Anyone, thoughts?

I think if you find the key in one 64-bit piece, you don't need the next 64-bit piece. But if 3 of the 64-bit pieces come up false and only the 4th is the BINGO 64-bit piece, then I think that is not the 256-bit key, and the fourth 64-bit piece with the private key will not be the 256-bit key either.........
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 29, 2020, 06:41:23 PM
Jean Luc...your Kangaroo is now a fast solver...especially for the lower bits, 90 on down. I know the higher the bits, the longer it takes to solve. But what can you do from this point forward to make solving the higher bits optimized/faster?

In all the articles and research papers I have read, well most of them, they talk about subgroups. Can this not be done?

Example, if we are trying to solve a key for the range 10000:1FFFF, currently we can only use the exact range started with for the same key. Can we not setup hash and jump table for 10000:1FFFF (precomp of sorts) and then attack the range with different starting points?

Example:
1 - 10000:1FFFF
2 - 11000:1FFFF
3-  12000:1FFFF
....
10- 1A000-1FFFF

or attack in smaller bits such as

100FF-1100
101FF-1200
etc?

My PC alone can solve a 64-bit range in 1 minute... what if we randomly generate a 64-bit range (or whatever bit size desired) inside the larger range and use the
-m option to stop the search in that section and move to the next randomly generated 64-bit piece? Or better yet, make it a sequential 64-bit search inside a 110-bit range. With numerous GPUs, you could assign each one a different range so the sequential piece is sped up.
Example:
gpu 1- attacking 10000-11000 in smaller sequential bits
gpu 2- attacking 11001-12000 in smaller sequential bits
etc

Anyone, thoughts?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
May 29, 2020, 04:16:53 PM
I was testing the "directory merge" function and RAM is quickly exhausted. I thought I had forgotten to free the temp HashTable in each reading iteration, but I changed the code and the problem remains Sad
The merged saveFile is 5GB, and the merge process takes up about 14GB of RAM.

I think the more obvious solution is to sort files from biggest to smallest when they are merged, or to use only small saveFiles.


On the other hand, I think the -ws flag is problematic when using -wsplit, as it generates larger files than necessary. Do you think it would be worthwhile to separate the DPs and the kangaroos into different save files?


As next improvements, I will work on the export of the DPs and on the possibility of modifying the DP bits in a save file, to reduce its size if we have chosen too low a DP value. It could also be interesting to strip the distances from a save file, to share it without gifting the prize.

I tested dir merge on a PC with 24GB RAM and 10 dir files that were probably 500MB apiece, but I didn't check the RAM usage.

Alek76's version is similar to what you are talking about as far as separating files. He has (in the current version) 8 text files that are generated, 4 tame and 4 wild. I modified it a little and used 2 tames and 4 wilds. Then he has a Python comparator that compares all the files to check for a solved key. I have been trying to figure out how to merge that with JP's (this) version, but I can't read the files well enough to understand how to build the Python comparator.
member
Activity: 330
Merit: 34
May 29, 2020, 03:23:32 PM
Can anybody explain why the tame DPs are shifted to zero?
For a test I use pubkey 04e6dabff2705a80acc23ae121956873c4ff9fd31cb0faca522c33624e23657e04125c04d29ea83874332ea8aef3b3467f22665a4970df415be756bcdf5675e569
range ffff...fffffffffffff  -dp 4
When I look at the hashtable I see this:
x: 5311104a8554e94684e07e9d8c0d112f
d: 0000000000000000000589fd3365a64e
Before, I thought the program added the beginning of the range to the tame DPs, but I see now that there is no such addition,
because 0x589fd3365a64e * G gives 0x6b4599cecd305b927a266d311d800005311104a8554e94684e07e9d8c0d112f, and this is our x.
In this case my question is: what good is a distance of e.g. 0x2AA if the range starts from 0xffff...? Huh
OK, so when we start the range from e.g. 2^109, will all distances below that be useless,
because they would produce x-coordinates that lie before the 2^109 range?
I do not understand this part.

Because the Tame Kangaroos are dependent only on the interval size, while the Wild Kangaroos are dependent on the interval size and the public key. We want to keep the algorithm as generic as possible, and also the ability to reuse the Tame Kangaroos for multiple key searches.

As relating to the Wild Kangaroos, [working_public_key] = [(original_public_key) - (beginning_range)*(secp256k1_generator_point)].
[distinguished_point] = [(+-traveled_distance)*(secp256k1_generator_point)] + [working_public key]

You will need to add back the (beginning_range) when there’s a collision to solve for the (original_public_key).
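At the scalar level, the recipe above works out like this (a toy sketch with small numbers; the sign convention is assumed from the quoted formulas):

```python
def solve_key(range_start, tame_dist, wild_dist, wild_sign):
    """Tame DP: tame_dist * G. Wild DP: (working_key + wild_sign*wild_dist) * G,
    where working_key = original_key - range_start. On a collision the two
    scalars are equal, so: original_key = range_start + tame_dist - wild_sign*wild_dist."""
    return range_start + tame_dist - wild_sign * wild_dist

# Toy example: key 137 in a range starting at 100, so working_key = 37.
# A '+' wild that traveled 5 lands on scalar 42; a tame at 42 collides with it.
assert solve_key(100, 42, 5, +1) == 137
```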


" and also the ability to reuse the Tame Kangaroos for multiple key searches. "

Can we expect multi-pubkey support coming? Maybe yes, maybe no; it all depends on the dev's thinking. Maybe he prefers to work on a lot more calculation for the client/server first, or maybe he thinks to add this function and then optimize everything together.
sr. member
Activity: 642
Merit: 316
May 29, 2020, 02:53:08 PM
-snip-
Been seeing this one:
1:30:21  [GETWORK] INVALID CRC32 FILE > NEED:ffffffffd52c4232, GOT:79ffe57. Is this still OK? Thanks man, great piece.
It is not a problem; if the CRC32 is invalid, that file will simply be sent again next time.
full member
Activity: 431
Merit: 105
May 29, 2020, 02:40:12 PM
If someone wants to run a solver with small DPs, but the server’s resources don’t allow it, then you can use the -wsplit option,
which appears in version 1.7 of the solver.
But in any case, you must have a PC that can merge files. I just had such a problem.
Now I can safely merge files on my home PC. In order not to do it all manually, you need a grabber and a merger.
File grabber is launched on the server, merger is launched on the home PC.
The merger communicates with the grabber and requests files from it. The grabber sends them, if any, and then removes them from the server.
The merger, in turn, after receiving a file from the grabber, starts the merge process, during which it is possible to find the key; after the merge, the temp file is deleted.
Grabber gives files only to an authorized merger.
If it helps someone, archive with source codes, compiled programs and example .bat files: https://drive.google.com/file/d/1wQWLCRsYY2s4DH2OZHmyTMMxIPn8kdsz
Edit: fixed little memory leak at grabber side.

As before, the sources under Purebasic.
mergeServer(grabber):
Code:
-pass >password for merger authorization
-port >listening port, where merger will be connect
-ext  >by this extension, the grabber will search for files in the folder,
       for ex. using -ext part, then you should start the server with -w xxxx.part
mergeClient(merger):
Code:
-jobtime 60000>request a file from the grabber every 60s
-name >name of your merger; optional, just for stats
-pass >password for authorization(should be the same as in grabber)
-server >host:port grabber
-workfile >name of your masterfile
-merger >Kangaroo.exe by default
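For illustration, a hypothetical pair of launch lines combining those flags (host, port, password, and the master file name are placeholders, not values from the archive; the executable names follow the mergeServer/mergeClient naming above):

```shell
:: on the solver server (grabber side), watching the folder of -wsplit output
mergeServer.exe -pass mySecret -port 17403 -ext part

:: on the home PC (merger side)
mergeClient.exe -server 203.0.113.5:17403 -pass mySecret -jobtime 60000 -workfile master.work -merger Kangaroo.exe
```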
Been seeing this one:
1:30:21  [GETWORK] INVALID CRC32 FILE > NEED:ffffffffd52c4232, GOT:79ffe57. Is this still OK? Thanks man, great piece.