
Topic: Pollard's kangaroo ECDLP solver (Read 55183 times)

member
Activity: 808
Merit: 20
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
Today at 12:21:31 AM
Hello


How do I change the base point in the code?


Thank you very much.

full member
Activity: 1036
Merit: 219
Shooters Shoot...
April 11, 2024, 12:13:05 AM
Even if #130 is barely possible by stretching the limits with a lot of hashpower, we should certainly start thinking of ideas for improving further if we really have to attempt #135 and beyond.
I agree with you. I am currently searching for #130, but how can we improve any of the existing tools? Are we talking about creating mathematical shortcuts such as division/subtraction, making the current programs faster, or both? lol

I am definitely interested in all possibilities.

Also, I am a fan of your python coding, 10000%; really great job(s) on your github repos!
member
Activity: 235
Merit: 12
April 10, 2024, 02:55:09 PM
There is a new NVIDIA GPU that was announced, the GB200 which is exponentially more powerful than the current gpus.

You need a GB200 NVL72 rack to solve 130. Go and find out how much it costs.  Grin

I meant when they become available through data centers and gpu servers.

I think there will be a long waiting line.
newbie
Activity: 3
Merit: 0
April 10, 2024, 02:47:04 PM
There is a new NVIDIA GPU that was announced, the GB200 which is exponentially more powerful than the current gpus.

You need a GB200 NVL72 rack to solve 130. Go and find out how much it costs.  Grin

I meant when they become available through data centers and gpu servers.
jr. member
Activity: 29
Merit: 52
April 10, 2024, 02:43:23 PM
Even if #130 is barely possible by stretching the limits with a lot of hashpower, we should certainly start thinking of ideas for improving further if we really have to attempt #135 and beyond.
member
Activity: 235
Merit: 12
April 10, 2024, 02:25:37 PM
There is a new NVIDIA GPU that was announced, the GB200 which is exponentially more powerful than the current gpus.

You need a GB200 NVL72 rack to solve 130. Go and find out how much it costs.  Grin
newbie
Activity: 29
Merit: 3
April 04, 2024, 07:25:30 AM
You can merge once a day, or as often as you like. It's not hard, you just start the program with the merge commands and that's it.

115 was solved with this program/algo, using 256-384 GPUs that were getting 1,500-1,600 MKey/s, with a DP of 25, which created a master work file in excess of 300GB, in 13 days.

So no, I still do not get your point.
My point is that you're simply treating all of this merging and dumping as if it were a no-op.

It's one thing to merge into some final 300 GB file and another to merge into a 600 GB / X TBs file.

Each merge (and I'm speaking purely algorithmically) costs O(m + n) operations, where m and n are the numbers of entries in the two files.
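To make that cost concrete, here is a minimal Python sketch of such a linear merge. This is not JLP's Hashmap.cpp code, just an illustration of the O(m + n) pass being discussed; the (x, distance, kind) entry layout is made up.
Code:
def merge_dp_lists(a, b):
    # a, b: lists of (x, distance, kind) tuples sorted by x; kind is "tame" or "wild".
    # One pass over both lists: O(m + n) work, which is the cost in question.
    out, collisions = [], []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i][0] < b[j][0]:
            out.append(a[i]); i += 1
        elif a[i][0] > b[j][0]:
            out.append(b[j]); j += 1
        else:
            # Same distinguished X in both inputs; a tame/wild pair is the useful collision.
            if a[i][2] != b[j][2]:
                collisions.append((a[i], b[j]))
            out.append(a[i]); i += 1; j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out, collisions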

So for whatever desired DP throughput:
Higher DP = less storage, but many merges and more frequent writes (you want to use new data faster)
Lower DP = more storage, but fewer merges (since you accumulate more data in the same amount of time)

At what point do you balance the throughput of all the GPUs against the time it takes to do the merging?

Because one of these two has to be the bottleneck: you either end up with the merging taking most of the time, or with the GPU DP generation taking most of the time.

So total merge time = the sum of the time of each merge. You can end up with a few big files taking less total time to merge than many small files merged a zillion times.

Maybe to illustrate my point better: are you willing to spend a lot of time searching for DPs, only rarely write to a file, and then merge it with many other small files, or would you rather write more data and do fewer merges? There is no good answer here, which is why you want to look for a balance between the two and not go to extremes just to prove a point. We already agreed that we could also solve any puzzle with just 2 kangaroos alone, hell, maybe with a DP of 64.

I experimented with direct-indexed DP tables and even with DB files that simply store the DPs as keys, to speed up what you are calling a "merge", and it's still slow as hell. By comparison, file operations such as "fread" and "fwrite" are just a sure way to die of old age before a line-by-line comparison performed hundreds of times on multi-hundred-GB files would finish. I will end this debate here; everyone should form their own opinion by experimenting.
full member
Activity: 1036
Merit: 219
Shooters Shoot...
April 03, 2024, 03:17:13 PM
The DPs are first loaded into memory. You control how often the DPs are written to the workfile. Once every minute, hour, 2 hours, 12 hours, however often. And this is good depending on how much RAM you have and/or the DP size.
You still didn't get my point. Let's revisit your example:
Keyspace - 129 bits
DP = 32

Let's pretend we do like 500 Mkeys/second. So we find a DP once every 8 seconds or so (one in 4 billion jumps).
Let's also go small and allocate 512 MB of RAM for all the bit gymnastics.

Will you wait 4 years and 3 months before writing the DPs in memory to a file? Because that is how long it will take before your allocated RAM fills up.

No, you'll want to dump them sooner than that. This gets messy really quick, no matter how you look at it. Oh, you want to use 8 GB of RAM? Maybe 16? Oh shit, you may not live long enough to have that first dump even started. On the other hand, you also have almost zero chances to have a collision inside such a tiny amount of memory (I'm still talking about 16 GB, if not even more), so what do we do? Well, not hard to guess - dump data more often, I guess. And merge and merge and merge... maybe now you get my point.
You really do not understand what you are saying, really, at all.
Are you using a CPU and DDR1 RAM in your current PC?

Please, just use the program in a medium range so you can actually know how it treats DPs and files.

It is nothing for the server (or single CPU) to write to file, even with very low DP. But let's keep up with the ongoing DP 32.
Let's keep this simple and say each hashtable entry is 256 bits: 128 for the point and 128 for the distance. Now let's calculate 10,000,000 entries.
32 bytes/entry × 10,000,000 entries = 320,000,000 bytes
This is equivalent to 320,000 kilobytes (KB), 320 megabytes (MB), or 0.32 gigabytes (GB)

However, I do not want to split the hashtable at 0.32 GB; I want a full GB before storing. So we will say I only want to save to file every 30,000,000 DPs found.

With your example of 500 MKey/s (which is really slow for modern GPUs), a DP is found and stored in RAM every 8 seconds. There are 86,400 seconds in a day, so every day there would be 10,800 DPs found and stored in RAM. Now we wait until we have 30,000,000 DPs to write from RAM to the work file; how long will that take? Around 2,777 days? But yes, with only 1 card it would take decades and decades to solve, so we run 256 GPUs each getting 500 MKey/s. Now every day we should find 2,764,800 DPs, so we would write to file roughly every 11 days. However, like I said, 500 MKey/s is slow, so let's say our GPUs get 1,500 MKey/s, a DP found every 3 seconds. Now each GPU finds 28,800 DPs a day, and 256 x 28,800 = 7,372,800 DPs a day, so we are writing to file about every 4 days.

You can adjust to whatever time frame or GB size you want with the program. Maybe you want to save every 12 hours, or every hour; it doesn't matter. And when you merge those smaller files, it obviously creates a larger file, so when you go to merge the next batch of smaller files, you merge them into that larger file; you do not have to keep re-merging every file on every merge.
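For anyone who wants to sanity-check those numbers, here is a quick Python sketch using the rounded DP intervals from this post (one DP per GPU every ~8 s at 500 MKey/s, ~3 s at 1,500 MKey/s). It is back-of-the-envelope arithmetic only, not output from the program.
Code:
FLUSH_AT = 30_000_000  # DPs held in RAM before writing (~0.96 GB at 32 bytes/entry)

for dp_interval_s, gpus in [(8, 1), (8, 256), (3, 256)]:
    dp_per_day = 86_400 / dp_interval_s * gpus
    days_per_flush = FLUSH_AT / dp_per_day
    print(f"{gpus} GPU(s), one DP every {dp_interval_s}s: "
          f"{dp_per_day:,.0f} DPs/day, flush every {days_per_flush:,.0f} days")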

You act like dumping the data more often is a problem; maybe it is if you have the first PC known to man, trying to act as the server. Again, dump as much as your RAM holds in 1 file, or dump every hour, it does not matter, it does not get messy. It can be as clean as you want it/need it to be, or as "messy" as you want it to be.

You can merge once a day, or as often as you like. It's not hard, you just start the program with the merge commands and that's it.

115 was solved with this program/algo, using 256-384 GPUs that were getting 1,500-1,600 MKey/s, with a DP of 25, which created a master work file in excess of 300GB, in 13 days.

So no, I still do not get your point.
newbie
Activity: 29
Merit: 3
April 03, 2024, 02:45:22 PM
The DPs are first loaded into memory. You control how often the DPs are written to the workfile. Once every minute, hour, 2 hours, 12 hours, however often. And this is good depending on how much RAM you have and/or the DP size.
You still didn't get my point. Let's revisit your example:
Keyspace - 129 bits
DP = 32

Let's pretend we do like 500 Mkeys/second. So we find a DP once every 8 seconds or so (one in 4 billion jumps).
Let's also go small and allocate 512 MB of RAM for all the bit gymnastics.

Will you wait 4 years and 3 months before writing the DPs in memory to a file? Because that is how long it will take before your allocated RAM fills up.
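A quick Python check of that figure, using only the assumptions already stated above (32-byte entries, 512 MB of RAM, one DP roughly every 8 seconds at 500 MKey/s with DP 32):
Code:
RAM_BYTES = 512 * 2**20          # 512 MB allocated for the DP table
ENTRY_BYTES = 32                 # 128-bit point + 128-bit distance per entry
SECONDS_PER_DP = 8               # exact value would be 2**32 / 500e6, about 8.6 s

entries = RAM_BYTES // ENTRY_BYTES                    # ~16.8 million entries
years = entries * SECONDS_PER_DP / (365.25 * 86_400)
print(f"{entries:,} entries, RAM full after ~{years:.2f} years")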

No, you'll want to dump them sooner than that. This gets messy really quick, no matter how you look at it. Oh, you want to use 8 GB of RAM? Maybe 16? Oh shit, you may not live long enough to have that first dump even started. On the other hand, you also have almost zero chances to have a collision inside such a tiny amount of memory (I'm still talking about 16 GB, if not even more), so what do we do? Well, not hard to guess - dump data more often, I guess. And merge and merge and merge... maybe now you get my point.
full member
Activity: 1036
Merit: 219
Shooters Shoot...
April 02, 2024, 09:13:37 AM
Quote
1. Why is the file loaded into memory? You almost state that you can just read/write the file directly to do DP storage/lookup operations, so why is it read into and operated on in RAM? Smiley
2. Would you say the solution (collision) was found during a file merge from 2 different X GB files, or after a GPU loop? If the latter, that would mean the hashmap file was scanned and written to for each DP, which by all means sounds infeasible, at least for the simple structure it uses.
The DPs are first loaded into memory. You control how often the DPs are written to the workfile. Once every minute, hour, 2 hours, 12 hours, however often. And this is good depending on how much RAM you have and/or the DP size. With a higher DP size, fewer DPs will be found and entered into the hash table, so you can write as often or as seldom as you wish. With a lower DP, many will be found, so you would need to write to file based on RAM constraints. If you are trying to solve a smaller bit range or use a very large DP, then you do not have to use/create workfiles and the solution will be found entirely in RAM, i.e. all points and distances are kept in RAM until the solution is found.

If you are using a server, then all clients send their DPs to it and it stores/writes to file all DPs found by each client, again, as often or as seldom as you wish. This is how Zielar did it, I believe: a server with many clients. And yes, he was merging the workfiles every so often (I am not sure how often) on the server side, to check for a collision/solution.

The only I/O operations should be on the server side. But again, it isn't writing to file every time a DP is found; you control how often or how seldom. And with the wsplit option, you can tell the program to create a new workfile every x minutes/hours/days etc., so you are only writing to disk once every x amount of time. Sometimes a solution is found within a single workfile, but most likely the solution will be found via the merging of these individual workfiles.

Exceeding the upper bounds with JLP's unmodded program is a real issue, the whole 125-bit max. But there are other programs out there that basically mimic this program and have limits of 192-256 bits.

No, I do not get your point. I was trying to show you that one does not need exabytes of RAM or storage to solve 130. One can solve it with GBs or TBs (depending on DP), and there isn't a massive I/O operation constantly going on. Of course it will take many GPUs to solve it within a faster time frame, fewer if one is patient, but the storage requirement is nowhere near 1 exabyte to find a solution.


Quote
Considering those calculations and the comparison with #115, what would be the necessary requirements, in terms of storage and GPU power, to at least "try" to solve #130 with, let's say, DP 27?
In a best-case scenario, going by the algorithm's averages, you would need to store 2^38.55 DPs to solve 130 using DP 27, which roughly equates to storage requirements in the neighborhood of 300+ GB multiplied by 36.5, so almost 11 TB if my math is correct... A lot, lol. There are plenty of hard drives out there that could store all of that; the problem would be trying to merge all of those workfiles. GPU power can be as little or as much as you like; it mainly determines the rough time frame to find a solution and plays into the overall DP count (overhead, which does affect storage size), as JLP discusses in his GitHub readme.
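For reference, here is a small Python sketch that reproduces those two figures. The ~2.08*sqrt(range) average operation count is an assumption taken from JLP's README; the #115 figures (2^33.36 DPs in roughly 300 GB) are the ones quoted earlier in this thread.
Code:
from math import log2, sqrt

ops_130 = 2.08 * sqrt(2**129)   # expected group operations for a 129-bit interval
dps_130 = ops_130 / 2**27       # expected stored DPs at DP 27
dps_115 = 2**33.36              # DPs stored for #115 at DP 25 (~300 GB of workfiles)

print(f"expected DPs for #130 at DP 27: 2^{log2(dps_130):.2f}")
print(f"storage, scaled from 300 GB:    {300 * dps_130 / dps_115 / 1024:.1f} TB")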
newbie
Activity: 29
Merit: 3
April 02, 2024, 05:22:37 AM
Also, the individual jumps aren't stored. All the kangaroos keep jumping, but only the points that land on a DP (leading 0s, with this program) are stored.

Those are the ones I was asking about.

I think for 115 it took a total of 2^58.36 jumps, and the workfile(s) contained 2^33.36 DPs (DP 25); it took right around 13 days to solve. 130 at DP 32 would contain a little more than 115 at DP 25; but even if you doubled it you are looking at 600+ GB, tripled it 1 TB, quadrupled it 1.4 TB, etc. It is still nowhere close to 1 exabyte of storage required.

I took a look into Hashmap.cpp and header.

It is storing 16 bytes for X (out of full 32) and 16 bytes for distance (out of which 3 bits are used for other things as seen in header, so 125 bits).

Under these conditions, yes, you can fit 2^33.36 points in 300 GB. But is there any fine print about the possibility of a hash collision on those missing 129 bits of the DP? You have a missing Y sign bit and 128 bits of lost information about the X coordinate. Are these factored into the probability? I see nothing about this in the README.
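As a rough aside (this says nothing about whether the program accounts for it, only about the size of the effect): a standard birthday-style estimate of accidental matches on the 128 stored bits of X, using the ~2^33.36 DP count quoted above, comes out vanishingly small.
Code:
from math import log2

n_dps = 2**33.36                    # stored DPs for #115, as quoted above
pairs = n_dps * (n_dps - 1) / 2     # DP pairs that could falsely match
false_matches = pairs / 2**128      # expected matches on the 128 kept bits of X
print(f"expected false matches: 2^{log2(false_matches):.1f}")   # about 2^-62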

There is also the 125-bit distance issue. So taking "shortcuts" in whatever is stored affects some of the estimates, and in the #115 case it translated into exceeding the number of operations that were required (2^58.36). This was a trade-off between storage and time which I don't see documented.

Distances do not really increase in size within the same range/group. They are spread out randomly at program start, and then, for say 115, the average jump size is somewhere around 2^57-ish; so each jump is about 2^57, but inside a 2^114 range, so in order for the distance to really grow, each kangaroo would have to make more than 2^57 jumps (which would take a lifetime).

Since the target range gets offset by the range start, we can just think of a distance from 0 to....? And here's the catch: it really depends on how many jumps, on average, a kangaroo needs to make in order to fill the DP table.

Yes, a single kangaroo would basically have to traverse a lot of space to fill the DP table.
Using 2 kangaroos halves that distance, using 256 kangaroos removes 8 bits of the distance, etc.

But yes, a larger range would have more hex characters stored for the distance (which would take up more storage space), compared to a smaller range, but the points (DPs) would all be the same size/length, regardless of the range. So 130 compared to 115 range would equal around 3-4 more hex characters per distance.

Yes, a distance of 0 should take up the same space as a distance of 2^max. So the individual entry size needs to increase to accommodate it.

There is research literature that tells you exactly how much distance, on average, a kangaroo travels (and, by extension, how many jumps you need) if you have N kangaroos running in parallel. And if you look it up, it will exceed your range's upper bound by a good few bits.
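To put some rough numbers on this, using only figures already quoted in this thread (~2^58.36 total jumps for #115 and an average jump size of ~2^57); the kangaroo counts are arbitrary examples:
Code:
from math import log2

total_jumps = 2**58.36   # total jumps reported for #115
avg_jump    = 2**57      # rough average jump size mentioned above

for n_kangaroos in [2, 256, 2**20, 2**26]:
    distance = total_jumps / n_kangaroos * avg_jump   # travelled distance per kangaroo
    print(f"{n_kangaroos:>10} kangaroos: distance ~ 2^{log2(distance):.1f} "
          f"(range size is 2^114)")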

If you look into the Hashmap.cpp (since you're a guru in this software), answer me these:

1. Why is the file loaded into memory? You almost state that you can just read/write the file directly to do DP storage/lookup operations, so why is it read into and operated on in RAM? Smiley
2. Would you say the solution (collision) was found during a file merge from 2 different X GB files, or after a GPU loop? If the latter, that would mean the hashmap file was scanned and written to for each DP, which by all means sounds infeasible, at least for the simple structure it uses.

My best guess: it was found after a file merge, one coming after a LOT of file merges from N different backends, each using its available RAM for DP operations. So a lot of I/O stress there. If you extrapolate to #130, for whatever DP you wish, no matter how small or large your files end up being, you'd need to consider their I/O overhead. It's pointless to use a higher DP to save on storage unless you distribute your load across an enormous number of servers and wait for a magical "master" merge of mini-files at some point to find your collision, which is in general agreement with what JLP stated: that you would need several years on a render farm. Maybe now you somehow get my point.
newbie
Activity: 0
Merit: 0
April 02, 2024, 03:31:56 AM
I fully understand the code and how this program works. All aspects.

115 was solved with a little more than 300GB of, wait, checks notes, “files”.

That was your first rant, exabytes lol.

I hope no SSDs were lost solving 110 and 115 with this program. RIP SSDs and all of your exabytes that you stored. 😁
Cool, but can you also tell me how many jumps were actually stored in those 300 GB?

Was it completed before the expected number of points from kangaroos of each type filled each set, enough to reach even a probabilistic 50%? Without such details, the best I had were estimates, and the estimates derive from the number of operations.
So if you tell me the data storage was 100 kB after 2**58 operations, anything is possible, but that would just mean not a lot of DPs were found, not that the estimates were wrong.

Does the required storage per entry scale linearly when you increase the DP? Because logically, a higher DP in a larger keyspace requires way more jumps to find all the DPs, which means the average travelled distance of any kangaroo increases, which means your storage also increases because you have to store longer distances.


It was close to JLP's program estimates, I want to say within 2.5%.

Also, the individual jumps aren't stored. All the kangaroos keep jumping, but only the points that land on a DP (leading 0s, with this program) are stored.

I think for 115 it took a total of 2^58.36 jumps, and the workfile(s) contained 2^33.36 DPs (DP 25); it took right around 13 days to solve. 130 at DP 32 would contain a little more than 115 at DP 25; but even if you doubled it you are looking at 600+ GB, tripled it 1 TB, quadrupled it 1.4 TB, etc. It is still nowhere close to 1 exabyte of storage required.

Distances do not really increase in size within the same range/group. They are spread out randomly at program start, and then, for say 115, the average jump size is somewhere around 2^57-ish; so each jump is about 2^57, but inside a 2^114 range, so in order for the distance to really grow, each kangaroo would have to make more than 2^57 jumps (which would take a lifetime).

But yes, a larger range would have more hex characters stored for the distance (which would take up more storage space), compared to a smaller range, but the points (DPs) would all be the same size/length, regardless of the range. So 130 compared to 115 range would equal around 3-4 more hex characters per distance.

Nice conversation between you two.
Considering those calculations and the comparison with #115, what would be the necessary requirements, in terms of storage and GPU power, to at least "try" to solve #130 with, let's say, DP 27?
full member
Activity: 1036
Merit: 219
Shooters Shoot...
April 01, 2024, 11:32:52 PM
I fully understand the code and how this program works. All aspects.

115 was solved with a little more than 300GB of, wait, checks notes, “files”.

That was your first rant, exabytes lol.

I hope no SSDs were lost solving 110 and 115 with this program. RIP SSDs and all of your exabytes that you stored. 😁
Cool, but can you also tell me how many jumps were actually stored in those 300 GB?

Was it completed before the expected number of points from kangaroos of each type filled each set, enough to reach even a probabilistic 50%? Without such details, the best I had were estimates, and the estimates derive from the number of operations.
So if you tell me the data storage was 100 kB after 2**58 operations, anything is possible, but that would just mean not a lot of DPs were found, not that the estimates were wrong.

Does the required storage per entry scale linearly when you increase the DP? Because logically, a higher DP in a larger keyspace requires way more jumps to find all the DPs, which means the average travelled distance of any kangaroo increases, which means your storage also increases because you have to store longer distances.


It was close to JLP's program estimates, I want to say within 2.5%.

Also, the individual jumps aren't stored. All the kangaroos keep jumping, but only the points that land on a DP (leading 0s, with this program) are stored.

I think for 115 it took a total of 2^58.36 jumps, and the workfile(s) contained 2^33.36 DPs (DP 25); it took right around 13 days to solve. 130 at DP 32 would contain a little more than 115 at DP 25; but even if you doubled it you are looking at 600+ GB, tripled it 1 TB, quadrupled it 1.4 TB, etc. It is still nowhere close to 1 exabyte of storage required.
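A quick scale check of that comparison, assuming the expected jump count grows with the square root of the range size (15 extra bits of range means about 2^7.5 times more jumps):
Code:
dps_115 = 2**(58.36 - 25)        # ~2^33.36 DPs for #115 at DP 25 (~300 GB)
dps_130 = 2**(58.36 + 7.5 - 32)  # ~2^33.86 DPs expected for #130 at DP 32
print(f"#130 at DP 32 needs about {dps_130 / dps_115:.2f}x the stored DPs of #115 at DP 25")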

Distances do not really increase in size within the same range/group. They are spread out randomly at program start, and then, for say 115, the average jump size is somewhere around 2^57-ish; so each jump is about 2^57, but inside a 2^114 range, so in order for the distance to really grow, each kangaroo would have to make more than 2^57 jumps (which would take a lifetime).

But yes, a larger range would have more hex characters stored for the distance (which would take up more storage space), compared to a smaller range, but the points (DPs) would all be the same size/length, regardless of the range. So 130 compared to 115 range would equal around 3-4 more hex characters per distance.
newbie
Activity: 29
Merit: 3
April 01, 2024, 10:34:19 AM
I fully understand the code and how this program works. All aspects.

115 was solved with a little more than 300GB of, wait, checks notes, “files”.

That was your first rant, exabytes lol.

I hope no SSDs were lost solving 110 and 115 with this program. RIP SSDs and all of your exabytes that you stored. 😁
Cool, but can you also tell me how many jumps were actually stored in those 300 GB?

Was it completed before the expected number of points from kangaroos of each type filled each set, enough to reach even a probabilistic 50%? Without such details, the best I had were estimates, and the estimates derive from the number of operations.
So if you tell me the data storage was 100 kB after 2**58 operations, anything is possible, but that would just mean not a lot of DPs were found, not that the estimates were wrong.

Does the required storage per entry scale linearly when you increase the DP? Because logically, a higher DP in a larger keyspace requires way more jumps to find all the DPs, which means the average travelled distance of any kangaroo increases, which means your storage also increases because you have to store longer distances.

full member
Activity: 1036
Merit: 219
Shooters Shoot...
April 01, 2024, 10:17:14 AM
You talk about crashing SSDs, more info that you don’t know about this program or how it works. If an SSD can’t handle being written to, 12 times a day, then what’s the point of even owning one lol. Yes, this program, you control what and how often DPs are saved to your hard drive.

Now you claim 130 with DP 32 is magnitudes more than 115 with DP 25, which further shows you don’t know how kangaroo or this program works. Magnitudes? lol.

Maybe look at the code for this program that you are commenting on, before commenting on it.
I believe you are the one who should look into the source code, before making any other statements about it. Maybe check the hash lookup part, you will be surprised?...

Let's say you wait 1 year to fill up your DP cache (in RAM) and decide to write it to disk. Do you think you will have to write the same amount of data you have in RAM? Before answering, maybe create a simple C program that maps a 2 TB file to memory and writes a single byte at offset 10 GB, for example. No seeks, no nothing. I don't think you get the actual difference between offline storage and accessing random addresses of volatile memory, which is O(1). This is why it's called "random access": any byte you want to read or write completes in constant time.

You are right though about the storage size difference, it's just 2**(0.5) between #115 with DP 25 and #130 with DP 32.
I cannot tell you how many actual storage bytes were required, because I don't care how the program stored them, only about their count.

Quote
This program also solved #115 in 13 days (114 bit key on the Secp256K1 field). It required 2**58.36 group operations using DP25 to complete.
2**58.36 operations = 57.36 bits of jumps for each kang set = 2**(57.36 - 25) stored jumps each, so storage space was around 2*2**32.36 items for the complete merged "central table" / hashmap. Which is almost perfectly equal to my estimates.
I fully understand the code and how this program works. All aspects.

115 was solved with a little more than 300GB of, wait, checks notes, “files”.

That was your first rant, exabytes lol.

I hope no SSDs were lost solving 110 and 115 with this program. RIP SSDs and all of your exabytes that you stored. 😁
newbie
Activity: 29
Merit: 3
April 01, 2024, 10:08:24 AM
You talk about crashing SSDs, more info that you don’t know about this program or how it works. If an SSD can’t handle being written to, 12 times a day, then what’s the point of even owning one lol. Yes, this program, you control what and how often DPs are saved to your hard drive.

Now you claim 130 with DP 32 is magnitudes more than 115 with DP 25, which further shows you don’t know how kangaroo or this program works. Magnitudes? lol.

Maybe look at the code for this program that you are commenting on, before commenting on it.
I believe you are the one who should look into the source code, before making any other statements about it. Maybe check the hash lookup part, you will be surprised?...

Let's say you wait 1 year to fill up your DP cache (in RAM) and decide to write it to disk. Do you think you will have to write the same amount of data you have in RAM? Before answering, maybe create a simple C program that maps a 2 TB file to memory and writes a single byte at offset 10 GB, for example. No seeks, no nothing. I don't think you get the actual difference between offline storage and accessing random addresses of volatile memory, which is O(1). This is why it's called "random access": any byte you want to read or write completes in constant time.
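For anyone curious, here is a rough Python (mmap) equivalent of that experiment; the file name is made up and the sizes are scaled down from the 2 TB / 10 GB in the text so it is safe to try on a 64-bit machine.
Code:
import mmap

FILE_SIZE = 8 * 2**30      # 8 GiB sparse file (the text uses 2 TB)
WRITE_OFFSET = 1 * 2**30   # write one byte 1 GiB in (the text uses 10 GB)

with open("dp_table.bin", "wb") as f:
    f.truncate(FILE_SIZE)  # sparse on most filesystems: no data blocks written yet

with open("dp_table.bin", "r+b") as f:
    mm = mmap.mmap(f.fileno(), FILE_SIZE)
    mm[WRITE_OFFSET] = 0xFF   # only the touched page actually gets read/written
    mm.flush()
    mm.close()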

You are right though about the storage size difference, it's just 2**(0.5) between #115 with DP 25 and #130 with DP 32.
I cannot tell you how many actual storage bytes were required, because I don't care how the program stored them, only about their count.

Quote
This program also solved #115 in 13 days (114 bit key on the Secp256K1 field). It required 2**58.36 group operations using DP25 to complete.
2**58.36 operations = 57.36 bits of jumps for each kang set = 2**(57.36 - 25) stored jumps each, so storage space was around 2*2**32.36 items for the complete merged "central table" / hashmap. Which is almost perfectly equal to my estimates.
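The same back-calculation in Python, for anyone following along (all figures are the ones quoted above):
Code:
from math import log2

total_ops = 2**58.36            # total group operations reported for #115
per_set = total_ops / 2         # jumps per kangaroo set (tame / wild)
dps_per_set = per_set / 2**25   # DP 25: one stored point per 2^25 jumps
print(f"stored DPs: 2 * 2^{log2(dps_per_set):.2f} = 2^{log2(2 * dps_per_set):.2f}")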
full member
Activity: 1036
Merit: 219
Shooters Shoot...
April 01, 2024, 09:42:42 AM
Just answer the question: how much storage or RAM do you think one would need to solve a 115-bit range key using a DP of 25? According to your previous calculations...
What exactly is wrong with my calculations, to be precise?... You were specifically asking something about #130 with a specific DP. Now you want to know why #115 is a manageable effort? OK...

Keyspace: 114 bits.
T size: 57 bits
W size: 57 bits
DP = 25, so Tdp size = 32 bits
Wdp size = 32 bits

Required memory for hashmap: 2*2**32 entries = ~8.6 billion entries * sizeof(jump entry)
Required steps: 2*2**57 + numKang * 2**25
Probability(success) = 1 - (1 - 2**-57) ** (2**57) = 63%

I do not believe you understand how a hash map needs to work in order to check for a collision.
In simplest terms, you will need some sort of a sorted list. Optimally, a balanced binary search tree. That one has O(log n) lookup and insert complexity. So, for a DP 25 table filled with 2**33 entries, a single lookup requires 33 steps in the AVERAGE case, when your search tree is balanced.

The memory requirement difference between #115 with DP 25 and #130 with DP 32 is orders of magnitude higher!

It doesn't matter if you use a single CPU with a few TB of RAM or many CPUs with less RAM each. You will end up with the same problem of having to merge / query the DP "central table" / search tree / sorted list, call it whatever you want.

Good luck with your "files".
You are in this thread spouting nonsense. That's the issue. You have no clue as to how this program works.

You say solving 130 will require exabytes of RAM/storage, which is false. So I asked how much it would take for 115 with DP 25: how many exabytes or terabytes? You couldn't answer in terms of bytes.

You talk about crashing SSDs, more info that you don’t know about this program or how it works. If an SSD can’t handle being written to, 12 times a day, then what’s the point of even owning one lol. Yes, this program, you control what and how often DPs are saved to your hard drive.

Now you claim 130 with DP 32 is magnitudes more than 115 with DP 25, which further shows you don’t know how kangaroo or this program works. Magnitudes? lol.

Maybe look at the code for this program that you are commenting on, before commenting on it.
newbie
Activity: 29
Merit: 3
April 01, 2024, 09:27:02 AM
Just answer the question: how much storage or RAM do you think one would need to solve a 115-bit range key using a DP of 25? According to your previous calculations...
What exactly is wrong with my calculations, to be precise?... You were specifically asking something about #130 with a specific DP. Now you want to know why #115 is a manageable effort? OK...

Keyspace: 114 bits.
T size: 57 bits
W size: 57 bits
DP = 25, so Tdp size = 32 bits
Wdp size = 32 bits

Required memory for hashmap: 2*2**32 entries = ~8.6 billion entries * sizeof(jump entry)
Required steps: 2*2**57 + numKang * 2**25
Probability(success) = 1 - (1 - 2**-57) ** (2**57) = 63%
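A quick numeric check of those figures (a sketch, not the program's own estimator; the success probability is evaluated with log1p/expm1 because (1 - 2**-57) rounds to exactly 1.0 in plain double precision):
Code:
from math import log1p, expm1

range_bits = 114
dp = 25
set_bits = range_bits / 2        # ~2^57 jumps per kangaroo set
dp_bits = set_bits - dp          # ~2^32 stored DPs per set
entries = 2 * 2**dp_bits         # tame + wild: ~8.6 billion entries

# P(success) = 1 - (1 - 2^-57)^(2^57), roughly 1 - 1/e
p_success = -expm1(2**set_bits * log1p(-(2**-set_bits)))
print(f"hashmap entries ~ {entries:.2e}, P(success) ~ {p_success:.0%}")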

I do not believe you understand how a hash map needs to work in order to check for a collision.
In simplest terms, you will need some sort of a sorted list. Optimally, a balanced binary search tree. That one has O(log n) lookup and insert complexity. So, for a DP 25 table filled with 2**33 entries, a single lookup requires 33 steps in the AVERAGE case, when your search tree is balanced.

The memory requirement difference between #115 with DP 25 and #130 with DP 32 is orders of magnitude higher!

It doesn't matter if you use a single CPU with a few TB of RAM or many CPUs with less RAM each. You will end up with the same problem of having to merge / query the DP "central table" / search tree / sorted list, call it whatever you want.

Good luck with your "files".
full member
Activity: 1036
Merit: 219
Shooters Shoot...
April 01, 2024, 06:32:15 AM
I don't understand in depth how kangaroo works, but @Etar has created a distributed kangaroo.
Here are the links:
- https://github.com/Etayson/Server-Client-apps-for-Etarkangaroo
- https://github.com/Etayson/Etarkangaroo

If someone has the resources needed to run a #130 puzzle operation, we can try to gather some GPU power.

Regarding what @WanderingPhilospher is asking @kTimesG, we can try to run this tool for #115 and see how long it takes and how many resources we need/use.
If so, just PM me here.


No need to test #115, I already know the answer, I just wanted kTimesG to continue to put his foot in his mouth lol.

He really has no clue what he is saying, nor does he understand this program (the program that this thread is about). I like to have a good laugh and was looking forward to his follow-up answer, but maybe he now realizes he is wrong and won't comment again lol.

As for Etar's program, it is good and works up to 192 bits. The only drawback is that it is written in PureBasic, and many people who use Linux struggle to compile it.
newbie
Activity: 0
Merit: 0
April 01, 2024, 05:14:48 AM
I don't understand in depth how kangaroo works, but @Etar has created a distributed kangaroo.
Here are the links:
- https://github.com/Etayson/Server-Client-apps-for-Etarkangaroo
- https://github.com/Etayson/Etarkangaroo

If someone has the resources needed to run a #130 puzzle operation, we can try to gather some GPU power.

Regarding what @WanderingPhilospher is asking @kTimesG, we can try to run this tool for #115 and see how long it takes and how many resources we need/use.
If so, just PM me here.