
Topic: Pollard's kangaroo ECDLP solver - page 4. (Read 59389 times)

jr. member
Activity: 64
Merit: 1
34Sf4DnMt3z6XKKoWmZRw2nGyfGkDgNJZZ
June 27, 2024, 08:53:30 AM
I have a question
It is for people who know the Kangaroo program well.

The Kangaroo program has a multiple-pubkey search feature, but it searches sequentially, that is, key1, key2, key3. It does not search for key2 before key1 is found, and likewise it does not search for key3 unless key2 is found. As an example: we have created 500 pubkeys. Is it possible to search for all of them at the same time? Is anyone working on this? Can you share?
member
Activity: 165
Merit: 26
June 08, 2024, 04:50:47 AM
Can anyone explain why a sloppy modular multiplication is used? The third step is missing, so there is a probability of an incorrect result. Or is it proven that two steps are sufficient (I doubt it)?
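(For readers wondering what the "third step" refers to: below is a generic pure-Python illustration of the secp256k1 fast-reduction pattern, where the high half of a 512-bit product is folded back in twice and a final conditional subtraction makes the result canonical. This is only a sketch of the general technique, not a claim about what the program's code actually does; the contrived input is larger than any real product of two field elements, so it only demonstrates what the final step does, while whether a real product can ever need it after two folds is exactly the question being asked.)

Code:
# Generic sketch of secp256k1 fast reduction, NOT the program's actual code.
# Since 2^256 = 2^32 + 977 (mod p), the high half of a 512-bit value can be
# folded back in. After two folds the value is congruent to the input and
# small, but it may still be >= p, so a final conditional subtraction
# (the "third step") is needed for a canonical result.
P = 2**256 - 2**32 - 977          # secp256k1 field prime
C = 2**32 + 977                   # 2^256 mod p

def fold(x):
    hi, lo = x >> 256, x & (2**256 - 1)
    return lo + hi * C            # congruent to x mod p, but much smaller

def reduce_two_steps(x):          # the "sloppy" version: two folds only
    return fold(fold(x))

def reduce_three_steps(x):        # two folds plus the conditional subtraction
    r = fold(fold(x))
    return r - P if r >= P else r

# Contrived input whose two-fold result lands exactly on p instead of 0
# (it is bigger than any real a*b with a, b < p, so it only shows what the
# third step does, not that the two-step version fails on real products):
x = P << 256
print(hex(reduce_two_steps(x)))   # prints p: congruent, but not canonical
print(hex(reduce_three_steps(x))) # prints 0x0, the canonical residue
assert reduce_three_steps(x) == x % P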
member
Activity: 873
Merit: 22
June 08, 2024, 02:03:45 AM
Does anyone know if pubkeys in the range 2^120-2^121 have been saved somewhere? Even lower ranges like 2^118-2^119, if available. I understand there might be Kangaroo work files already saved in this range. We are trialling something at the college and I remember reading about this but can't find the post anymore. A reward is possible for verified files.

You can subtract every value from 0 to 2^20 from the pubkey and divide each result by 2^20; that gives you 2^20 pubkeys, and one of them will correspond to a ~2^100 key if you are talking about a 2^120 pubkey.
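For anyone trying to follow that, here is a small pure-Python sketch of the subtract-and-divide idea (illustration only: the curve helpers are minimal and slow, and the example key is made up). If k = q*2^20 + r with r < 2^20, then (P - r*G) * (2^20)^-1 mod n equals q*G; trying every r from 0 to 2^20 - 1 gives 2^20 candidate pubkeys, and exactly one of them (r = k mod 2^20) corresponds to a ~100-bit key.

Code:
# Minimal secp256k1 helpers, for illustration only (slow, not constant-time).
p = 2**256 - 2**32 - 977                       # field prime
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(A, B):
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                            # point at infinity
    if A == B:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, A):
    R = None
    while k:
        if k & 1: R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

def ec_neg(A):
    return None if A is None else (A[0], (-A[1]) % p)

# Made-up 121-bit private key, so the claim can be checked end to end.
k = (1 << 120) + 123456789
P = ec_mul(k, G)                               # the 2^120-range pubkey

inv_2_20 = pow(1 << 20, -1, n)                 # 1 / 2^20 modulo the group order
r = k & ((1 << 20) - 1)                        # a real search tries all 2^20 values of r
candidate = ec_mul(inv_2_20, ec_add(P, ec_neg(ec_mul(r, G))))

q = k >> 20                                    # the ~100-bit key the candidate should match
assert candidate == ec_mul(q, G)
print("candidate pubkey has a", q.bit_length(), "bit key")   # 101 bits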
newbie
Activity: 38
Merit: 0
June 07, 2024, 11:49:39 PM
Are there any places where save files for puzzle #130 are shared, either for JeanLuc's kangaroo or Etar's kangaroo?
jr. member
Activity: 44
Merit: 27
May 30, 2024, 10:54:30 AM
I'm having a hard time understanding how distinguished points (DP) work. From what I know, you only save the start point, trail length, and DP per trail, and start a new trail when a DP is found. You repeat this until you find a colliding DP and somehow derive the "actual" collision for the pubkey.

I don't understand how finding and storing only DPs would lead to finding the pubkey collision. I fully understand how the kangaroo method works by creating 2 maps to find a collision using the birthday paradox, but I'm not sure how you can do this without storing the whole set of points.
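Since this question comes up a lot, here is a small self-contained toy (my own sketch, in a tiny multiplicative group instead of secp256k1, with made-up parameters) showing why storing only the DPs is enough: the jump is a deterministic function of the current point, so once a tame walk and a wild walk ever land on the same point they follow the same path and must end at the same distinguished point, and comparing the known distances travelled to that DP reveals the key.

Code:
# Toy distinguished-point kangaroo in a tiny group (illustration only).
import random

p = 10007                      # small prime; we work modulo p
g = 5                          # generator of the multiplicative group (order p-1)
order = p - 1
N = 2048                       # the secret exponent lies in [0, N)
secret = random.randrange(N)
h = pow(g, secret, p)          # the "public key" whose exponent we want

DP_BITS = 3                    # an element is a DP if its low 3 bits are zero
JUMPS = [2**i for i in range(8)]

dp_table = {}                  # DP element -> (kind, start_offset + distance)

def trail(kind, start_offset):
    """One short walk. 'tame' starts at g^offset, 'wild' at h * g^offset.
    Only the DP that ends the trail is ever stored."""
    y = pow(g, start_offset, p)
    if kind == 'wild':
        y = y * h % p
    dist = 0
    for _ in range(50 * (1 << DP_BITS)):       # cap on trail length (toy safeguard)
        if y % (1 << DP_BITS) == 0:            # reached a distinguished point
            if y in dp_table and dp_table[y][0] != kind:
                other_total = dp_table[y][1]
                tame_total = start_offset + dist if kind == 'tame' else other_total
                wild_total = start_offset + dist if kind == 'wild' else other_total
                # same element means g^tame_total == h * g^wild_total
                return (tame_total - wild_total) % order
            dp_table[y] = (kind, start_offset + dist)
            return None
        step = JUMPS[y % len(JUMPS)]           # jump depends only on the current element
        y = y * pow(g, step, p) % p
        dist += step
    return None

found = None
while found is None:
    for kind in ('tame', 'wild'):
        x = trail(kind, random.randrange(N))
        if x is not None and pow(g, x, p) == h:
            found = x
            break
print("tame/wild DP collision gives x =", found, "(secret was", secret, ")")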
newbie
Activity: 9
Merit: 5
May 15, 2024, 05:23:51 AM
Does anyone know if pubkeys in the range 2^120-2^121 have been saved somewhere? Even lower ranges like 2^118-2^119, if available. I understand there might be Kangaroo work files already saved in this range. We are trialling something at the college and I remember reading about this but can't find the post anymore. A reward is possible for verified files.
newbie
Activity: 6
Merit: 0
May 02, 2024, 05:44:40 PM
It is possible to reduce the complexity of the classic kangaroo algorithm a little by spreading the starting kangaroos in a non-uniform manner.
Roughly speaking, if the private key lies in certain areas it will be found faster. As a counterpart, if it lies in certain other areas it will be found more slowly.
But on average, over the whole range, the complexity is clearly reduced.
If you read the reference on the GitHub page carefully, you will find the trick Wink



First, welcome back master @Jean_Luc .. so happy to see your post.

You can't compare your deep knowledge to others', as you developed those tools and have a deep knowledge of the math used.

I consider myself good at programming (I have many years of experience in C++, but I am very new to GPU) .. I have a good math background, but that part, "elliptic curve cryptography", is another world.

So it's not easy .. yes, I have downloaded your source code, but it is not at all easy to understand the magic happening inside.

I am learning a lot, and I must say many thanks to @WanderingPhilospher, as he is a big help answering my doubts ...

I don't know if you do anything on YouTube, but I imagine some simple videos explaining some of these aspects would be great .. maybe I wish for too much ..

Regards
newbie
Activity: 2
Merit: 0
April 23, 2024, 03:39:10 PM
...

I don't think anybody proficient enough for such a port would encounter your issue, because they would take care of the differences between Unix/Linux and Windows. On Linux almost everything is a file, and /dev/urandom is the non-blocking random number generator device (file). Reading from this file gives you non-blocking random numbers.

You would have to port everything that's specific to Linux to the way it's done on Windows. Are you sure you have the knowledge to do this? Based on your question, I highly doubt it.

I solved it and it's working now with VS2022 and CUDA 12.1. I made some mistakes in the linking.

My question was a little bit short, sorry
hero member
Activity: 714
Merit: 1010
Crypto Swap Exchange
April 23, 2024, 02:32:53 PM
...

I don't think anybody proficient enough for such a port would encounter your issue, because they would take care of the differences between Unix/Linux and Windows. On Linux almost everything is a file, and /dev/urandom is the non-blocking random number generator device (file). Reading from this file gives you non-blocking random numbers.

You would have to port everything that's specific to Linux to the way it's done on Windows. Are you sure you have the knowledge to do this? Based on your question, I highly doubt it.
newbie
Activity: 2
Merit: 0
April 22, 2024, 04:43:43 PM
Hi!

I migrated the kangaroo repo to VS2022 (Windows), but I ran into an issue:
"Failed to open /dev/urandom No such file or directory"
Has anyone encountered this problem?
jr. member
Activity: 50
Merit: 7
April 19, 2024, 11:01:36 PM
You talk about crashing SSDs, which is more evidence that you don't know this program or how it works. If an SSD can't handle being written to 12 times a day, then what's the point of even owning one, lol. Yes, with this program you control what is saved to your hard drive and how often the DPs are saved.

Now you claim 130 with DP 32 is magnitudes more than 115 with DP 25, which further shows you don't know how kangaroo or this program works. Magnitudes? lol.

Maybe look at the code of the program you are commenting on before commenting on it.
I believe you are the one who should look into the source code, before making any other statements about it. Maybe check the hash lookup part, you will be surprised?...

Let's say you wait 1 year to fill up your DP cache (in RAM) and decide to write it to disk. Do you think you will have to write the same amount of data you have in RAM? Before answering, maybe create a simple C program that maps a 2 TB file to memory and writes a single byte at offset 10 GB, for example. No seeks, no nothing. I don't think you get the actual difference between offline storage and accessing random addresses of volatile memory, which is O(1). This is why it's called "random access" indexing: any byte you want to read is retrieved in constant time.
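For what it's worth, here is a rough Python version of the experiment described above (my own sketch, with a made-up file name; it assumes a 64-bit OS and a filesystem with sparse-file support, and the on-disk usage line is Unix-specific). The 2 TB size is only logical: a single dirtied page is all that actually reaches the disk.

Code:
# Sketch of the "map a 2 TB file and write one byte at offset 10 GB" experiment.
import mmap, os

FILE = "huge_dp_table.bin"        # made-up name
SIZE = 2 * 1024**4                # 2 TB logical size
OFFSET = 10 * 1024**3             # touch one byte at the 10 GB mark

with open(FILE, "wb") as f:
    f.truncate(SIZE)              # creates a sparse file; nothing is written yet

with open(FILE, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0) # map the whole file (virtual address space only)
    mm[OFFSET] = 0xFF             # dirty a single page; no explicit seek anywhere
    mm.flush()
    mm.close()

st = os.stat(FILE)                # st_blocks is Unix-only
print("logical size:", st.st_size, "bytes")
print("actually allocated on disk:", st.st_blocks * 512, "bytes")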

You are right though about the storage size difference, it's just 2**(0.5) between #115 with DP 25 and #130 with DP 40.
I cannot tell you how many actual storage bytes were required, because I don't care about how the program stored them, just their count.

Quote
This program also solved #115 in 13 days (114 bit key on the Secp256K1 field). It required 2**58.36 group operations using DP25 to complete.
2**58.38 operations = 57.37 bits of jumps for each kangaroo set = 2**(57.37 - 25) stored jumps each, so storage space was around 2*2**32.37 items for the complete merged "central table" / hashmap. Which is almost perfectly equal to my estimates.
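As a sanity check on those figures (my own back-of-the-envelope, assuming roughly 32 bytes per stored hashtable entry as discussed later in this thread, plus the GPU counts and speeds quoted elsewhere for the #115 solve):

Code:
import math

# Storage estimate: one DP kept per 2^25 jumps, ~32 bytes per entry (assumption).
ops_log2 = 58.36                               # total group operations reported for #115
dp_bits = 25
stored = 2 ** (ops_log2 - dp_bits)             # ~2^33.4, about 1.1e10 stored DPs
print("stored DPs ~ %.2e, table ~ %.0f GB" % (stored, stored * 32 / 1e9))   # ~350 GB

# Cross-check of the operation count: 256 GPUs at ~1.5 GKey/s for 13 days.
ops = 256 * 1.5e9 * 13 * 86400
print("2^%.2f group operations" % math.log2(ops))   # ~2^58.6, close to the reported 2^58.36

Both land in the same ballpark as the work-file size and operation count reported for that solve.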
I fully understand the code and how this program works. All aspects.

115 was solved with a little more than 300GB of, wait, checks notes, “files”.

That was your first rant, exabytes lol.

I hope no SSDs were lost solving 110 and 115 with this program. RIP SSDs and all of your exabytes that you stored. 😁

RIP my SSDs on the distributed Kangaroo project xD
member
Activity: 43
Merit: 10
April 19, 2024, 07:36:18 PM
When we restore a search from a saved work file, it only shows the first key in the input file after the end range, so what happens to the other keys we input for the search?

And if we have a speed of 1000 MK/s, it would be divided by the number of keys, right? Meaning half the speed is used to search for the second key if we input 2 keys? If that is the case, then the expected time should double if there is more than 1 key.
Thus, having 4 keys to search for in a 130-bit range could take up to 2**68 operations, correct?

This program cannot search for multiple keys simultaneously.

It will go over each key in the input file, one at a time.

If a solution is found, it then goes on to the next.

Someone correct me if I'm wrong, but otherwise I think you need to use the -m parameter to give up the search and start with the next key, in a scenario where the solution could not be found within the given range.
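On the expected-work question quoted above, a rough estimate (a sketch using the usual ~2*sqrt(N) rule of thumb for a kangaroo search over an interval of width N, ignoring DP overhead; not the program's own accounting):

Code:
import math

interval_bits = 129                 # puzzle #130 range [2^129, 2^130) has width 2^129
keys = 4

per_key = 2 * math.sqrt(2.0 ** interval_bits)   # ~2^65.5 expected group operations per key
total = keys * per_key                          # searching keys one after another adds up

print("per key ~ 2^%.1f, %d keys ~ 2^%.1f" % (math.log2(per_key), keys, math.log2(total)))
# per key ~ 2^65.5, 4 keys ~ 2^67.5 -- i.e. roughly the 2^68 figure mentioned above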





jr. member
Activity: 50
Merit: 3
April 19, 2024, 09:45:09 AM
When we restore a search from a saved work file, it only shows the first key in the input file after the end range, so what happens to the other keys we input for the search?

And if we have a speed of 1000 MK/s, it would be divided by the number of keys, right? Meaning half the speed is used to search for the second key if we input 2 keys? If that is the case, then the expected time should double if there is more than 1 key.
Thus, having 4 keys to search for in a 130-bit range could take up to 2**68 operations, correct?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
April 10, 2024, 11:13:05 PM
Even if, let's say, #130 is barely possible by stretching the limits with a lot of hashpower, we should certainly start thinking about ideas for improving further if we really have to attempt #135 and beyond.
I agree with you. I am currently searching for #130, but how can we improve any of the existing tools? Are we talking about creating mathematical shortcuts such as division/subtraction, or making the current programs faster, or both lol?!

I am definitely interested in all possibilities.

Also, I am a fan of your python coding, 10000%; really great job(s) on your github repos!
member
Activity: 499
Merit: 38
April 10, 2024, 01:55:09 PM
There is a new NVIDIA GPU that was announced, the GB200, which is far more powerful than the current GPUs.

You need a GB200 NVL72 rack to solve 130. Go find out how much it costs.  Grin

I meant when they become available through data centers and GPU servers.

I think there will be a long waiting line.
newbie
Activity: 5
Merit: 0
April 10, 2024, 01:47:04 PM
There is a new NVIDIA GPU that was announced, the GB200, which is far more powerful than the current GPUs.

You need a GB200 NVL72 rack to solve 130. Go find out how much it costs.  Grin

I meant when they become available through data centers and GPU servers.
jr. member
Activity: 36
Merit: 68
April 10, 2024, 01:43:23 PM
Even if, let's say, #130 is barely possible by stretching the limits with a lot of hashpower, we should certainly start thinking about ideas for improving further if we really have to attempt #135 and beyond.
member
Activity: 499
Merit: 38
April 10, 2024, 01:25:37 PM
There is a new NVIDIA GPU that was announced, the GB200, which is far more powerful than the current GPUs.

You need a GB200 NVL72 rack to solve 130. Go find out how much it costs.  Grin
member
Activity: 165
Merit: 26
April 04, 2024, 06:25:30 AM
You can merge once a day, or as often as you like. It's not hard, you just start the program with the merge commands and that's it.

115 was solved with this program/algo, using 256-384 GPUs that were getting 1,500-1,600 MKey/s, with a DP of 25, which created a master work file in excess of 300GB, in 13 days.

So no, I still do not get your point.
My point is that you're simply treating all of this merging and dumping as if it's a no-op.

It's one thing to merge into some final 300 GB file and another to merge into a 600 GB / X TBs file.

Each merge (and I'm talking purely algorithmically) takes O(m + n) operations, where m and n are the numbers of entries in the two files.

So for whatever desired DP throughput:
Higher DP = less storage but a lot of merges and more frequent writes (you want to use new data faster)
Lower DP = more storage but fewer merges (since you have more data in the same amount of time)

At what point do you balance the throughput of all the GPUs against the time it takes to do the merging?

Because one of these two has to be the bottleneck. You either end up with the merging taking up most of the time, or with the GPU DP generation taking up most of the time.

So total merge time = sum of the time of each merge. You can end up with big files taking less time to merge than many small files merged a zillion times.

Maybe to illustrate my point better: are you willing to spend a lot of time searching for DPs, only rarely write to a file, and do the merging with many other small files, or would you just write more data and do fewer merges? There is no good answer here; this is why you would want to look for a balance between the two, and not go to extremes just to prove a point. We already agreed that we can also just solve any puzzle with 2 kangaroos alone, hell, maybe with a DP of 64.

I experimented with directly indexed DP tables and even with DB files that simply store the DPs as keys, to speed up what you are calling a "merge", and it's still slow as hell. By comparison, file operations such as "fread" and "fwrite" are just a sure way to die of old age before a line-by-line comparison performed hundreds of times on multi-hundred-GB files would finish. I will end this debate here; everyone should form their own opinion by experimenting.
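To make the O(m + n) point concrete, here is a toy sketch of a single-pass merge over two sorted DP lists that also flags tame/wild collisions (the entry format here is made up and is not the program's work-file layout):

Code:
import heapq

# (dp_x_fragment, distance, herd)  -- herd 0 = tame, 1 = wild; toy data only
file_a = [(0x11, 1000, 0), (0x2a, 2222, 1), (0x5c,  900, 0)]
file_b = [(0x2a, 1700, 0), (0x31,  555, 1), (0x77, 1234, 1)]

def merge_and_check(a, b):
    """Single pass over both sorted inputs: O(m + n) comparisons."""
    merged, collisions = [], []
    for entry in heapq.merge(sorted(a), sorted(b)):
        if merged and merged[-1][0] == entry[0] and merged[-1][2] != entry[2]:
            collisions.append((merged[-1], entry))   # same DP, different herds
        merged.append(entry)
    return merged, collisions

merged, collisions = merge_and_check(file_a, file_b)
print(len(merged), "entries after merge,", len(collisions), "tame/wild collision(s)")
# A tame/wild pair landing on the same DP is what allows the key to be
# recovered, as in the DP discussion earlier in the thread.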
full member
Activity: 1162
Merit: 237
Shooters Shoot...
April 03, 2024, 02:17:13 PM
The DPs are first loaded into memory. You control how often the DPs are written to the workfile. Once every minute, hour, 2 hours, 12 hours, however often. And this is good depending on how much RAM you have and/or the DP size.
You still didn't get my point. Let's revisit your example:
Keyspace - 129 bits
DP = 32

Let's pretend we do like 500 Mkeys/second. So we find a DP once every 8 seconds or so (one in 4 billion jumps).
Let's also go small and allocate 512 MB of RAM for all the bit gymnastics.

Will you wait 4 years and 3 months to write the DPs in the memory to a file? Because this is how long it will take before your allocated RAM gets filled.

No, you'll want to dump them sooner than that. This gets messy really quickly, no matter how you look at it. Oh, you want to use 8 GB of RAM? Maybe 16? Oh shit, you may not live long enough to even have that first dump started. On the other hand, you also have almost zero chance of a collision inside such a tiny amount of memory (I'm still talking about 16 GB, if not even more), so what do we do? Well, not hard to guess - dump data more often, I guess. And merge and merge and merge... maybe now you get my point.
You really do not understand what you are saying, really, at all.
Are you using a CPU and DDR1 RAM in your current PC?

Please, just use the program in a medium range so you can actually know how it treats DPs and files.

It is nothing for the server (or a single CPU) to write to a file, even with a very low DP. But let's stick with the ongoing DP 32 example.
Let's keep this simple and say each hashtable entry is 256 bits: 128 for the point and 128 for the distance. Now let's calculate 10,000,000 entries:
32 bytes/entry × 10,000,000 entries = 320,000,000 bytes.
This is equivalent to 320,000 kilobytes (KB), 320 megabytes (MB), or 0.32 gigabytes (GB).

However, I do not want to split the hashtable at 0.32 GB; I want a full GB before storing. So we will say I only want to save to file every 30,000,000 DPs found.

With your example of 500 MKey/s, which is really slow for modern GPUs, a DP is found and stored in RAM every 8 seconds. There are 86,400 seconds in a day, so every day there would be 10,800 DPs found and stored in RAM. Now we wait until we have 30,000,000 DPs to write from RAM to the work file; how long will that take? 2,777 days?

But yes, with only 1 card it would take decades and decades to solve, so we run 256 GPUs getting 500 MKey/s. Now every day we should find 2,764,800 DPs, so we would write to file every 11 days. However, like I said, 500 MKey/s is slow, so let's say our GPUs get 1,500 MKey/s, a DP found every 3 seconds. So now each GPU finds 28,800 DPs a day, and 256 x 28,800 = 7,372,800 DPs a day, so every 4 days we are writing to file.

You can adjust to whatever time frame or GB you want to with the program. Maybe you want to save every 12 hours, or every hour; it doesn't matter. Now, when you merge those smaller files, it obviously creates a larger file, so when you go to merge the next batch of smaller files, you merge them with the larger file; you do not have to keep re-merging every file on every merge.
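The same arithmetic in a few lines, for anyone who wants to re-check or adapt the numbers (the rates and counts are the example figures from this post; the post rounds the DP interval to 3 seconds, which is why its daily totals come out slightly lower):

Code:
dp_bits = 32
gpus = 256
keys_per_sec = 1500e6                            # 1,500 MKey/s per GPU
dump_threshold = 30_000_000                      # DPs held in RAM before writing out

dp_per_sec_per_gpu = keys_per_sec / 2**dp_bits   # ~0.35 DP/s, i.e. one every ~2.9 s
dp_per_day = dp_per_sec_per_gpu * 86400 * gpus   # ~7.7 million DPs per day across all GPUs
days_between_dumps = dump_threshold / dp_per_day # ~3.9 days, matching the ~4 days above

print("%.0f DPs/day -> dump every %.1f days" % (dp_per_day, days_between_dumps))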

You act like dumping the data more often is a problem; maybe it is if you have the first PC known to man trying to act as the server. Again, dump as much as your RAM holds into one file, or dump every hour; it does not matter, and it does not get messy. It can be as clean as you want/need it to be, or as "messy" as you want it to be.

You can merge once a day, or as often as you like. It's not hard, you just start the program with the merge commands and that's it.

115 was solved with this program/algo, using 256-384 GPUs that were getting 1,500-1,600 MKey/s, with a DP of 25, which created a master work file in excess of 300GB, in 13 days.

So no, I still do not get your point.