
Topic: Pollard's kangaroo ECDLP solver - page 2. (Read 56355 times)

full member
Activity: 1050
Merit: 219
Shooters Shoot...
April 01, 2024, 11:32:52 PM
Quote
I fully understand the code and how this program works. All aspects.

115 was solved with a little more than 300GB of, wait, checks notes, “files”.

That was your first rant, exabytes lol.

I hope no SSDs were lost solving 110 and 115 with this program. RIP SSDs and all of your exabytes that you stored. 😁
Cool, but can you also tell me how many jumps were actually stored in those 300 GB?

Was it completed before the expected number of kangaroos of each type filled each set, to reach even a probabilistic 50%? Without such details, the best I have are estimates, and the estimates derive from the number of operations.
So if you tell me the data storage was 100 kB after 2**58 operations, anything is possible, but that would just mean not a lot of DPs were found, not that the estimates were wrong.

Does the required storage per entry scale linearly when you increase the DP? Because logically, a higher DP in a higher keyspace requires way more jumps to find all the DPs, which means the average distance travelled by any kangaroo increases in size, which means your storage also increases in size, because you have to store longer distances.


It was close to JLP's program estimates, I want to say within 2.5%.

Also, the jumps (each jump) aren't stored. All the kangaroos are constantly jumping, but only those jumps that land on a DP (leading 0s with this program) are stored.

I think for 115, it took a total of 2^58.36 jumps and the workfile(s) contained 2^33.36 DPs (DP 25); and it took right around 13 days to solve. 130 at DP 32 would contain a little more than 115 at DP 25; but even if you doubled it, you are looking at 600+ GB; triple it, 1 TB; quadruple it, 1.4 TB; etc. It is still nowhere close to 1 exabyte of storage required.

Distances do not really increase in size within the same range/group. The kangaroos are spread out randomly at program start and then, for say 115, they have an average jump size of somewhere around 2^57ish; so each jump is about 2^57, but inside a 2^114 range, so for the stored distances to really grow, each kangaroo would have to make more than 2^57 jumps (which would take a lifetime).

But yes, a larger range would have more hex characters stored for the distance (which would take up more storage space), compared to a smaller range, but the points (DPs) would all be the same size/length, regardless of the range. So the 130 range, compared to the 115 range, would add around 3-4 more hex characters per distance.
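For a rough sanity check of those numbers: expected stored DPs come out to roughly total jumps divided by 2^DP, and the distance field grows with the range width. A quick back-of-the-envelope sketch (average-case figures of my own, not this program's actual record format):

Code:
import math

# Rough check of the storage comparison above (average-case figures only).
def expected_dp_bits(range_bits, dp):
    ops_bits = range_bits / 2 + 1   # log2(2 * sqrt(N)) jumps on average
    return ops_bits - dp            # log2(expected stored DPs)

print(f"#115 (114-bit range, DP 25): ~2^{expected_dp_bits(114, 25):.2f} DPs")
print(f"#130 (129-bit range, DP 32): ~2^{expected_dp_bits(129, 32):.2f} DPs")

# Distance field width grows with the range: ~range_bits/4 hex characters.
for name, bits in (("#115", 114), ("#130", 129)):
    print(f"{name}: distance ~ {math.ceil(bits / 4)} hex chars")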
jr. member
Activity: 40
Merit: 6
April 01, 2024, 10:34:19 AM
Quote
I fully understand the code and how this program works. All aspects.

115 was solved with a little more than 300GB of, wait, checks notes, “files”.

That was your first rant, exabytes lol.

I hope no SSDs were lost solving 110 and 115 with this program. RIP SSDs and all of your exabytes that you stored. 😁
Cool, but can you also tell me how many jumps were actually stored in those 300 GB?

Was it completed before the expected number of kangaroos of each type filled each set, to reach even a probabilistic 50%? Without such details, the best I have are estimates, and the estimates derive from the number of operations.
So if you tell me the data storage was 100 kB after 2**58 operations, anything is possible, but that would just mean not a lot of DPs were found, not that the estimates were wrong.

Does the required storage per entry scale linearly when you increase the DP? Because logically, a higher DP in a higher keyspace requires way more jumps to find all the DPs, which means the average distance travelled by any kangaroo increases in size, which means your storage also increases in size, because you have to store longer distances.

full member
Activity: 1050
Merit: 219
Shooters Shoot...
April 01, 2024, 10:17:14 AM
Quote
You talk about crashing SSDs, more proof that you don't know about this program or how it works. If an SSD can't handle being written to 12 times a day, then what's the point of even owning one lol. Yes, in this program you control what and how often DPs are saved to your hard drive.

Now you claim 130 with DP 32 is magnitudes more than 115 with DP 25, which further shows you don’t know how kangaroo or this program works. Magnitudes? lol.

Maybe look at the code for this program that you are commenting on, before commenting on it.
I believe you are the one who should look into the source code before making any other statements about it. Maybe check the hash lookup part; you will be surprised...

Let's say you wait 1 year to fill up your DP cache (in RAM) and decide to write it to disk. Do you think you will have to write the same amount of data you have in RAM? Before answering, maybe create a simple C program that maps a 2 TB file to memory and writes a single byte at offset 10 GB, for example. No seeks, no nothing. I don't think you get the actual difference between offline storage and accessing random addresses of volatile memory, which is O(1). This is why it's called "random access": any byte you want to read gets read in constant time.

You are right though about the storage size difference, it's just 2**(0.5) between #115 with DP 25 and #130 with DP 32.
I cannot tell you how many actual storage bytes were required, because I don't care about how the program stored them, just their count.

Quote
This program also solved #115 in 13 days (114 bit key on the Secp256K1 field). It required 2**58.36 group operations using DP25 to complete.
2**58.36 operations = 57.36 bits of jumps for each kang set = 2**(57.36 - 25) stored jumps each, so storage space was around 2*2**32.36 items for the complete merged "central table" / hashmap. Which is almost perfectly equal to my estimates.
I fully understand the code and how this program works. All aspects.

115 was solved with a little more than 300GB of, wait, checks notes, “files”.

That was your first rant, exabytes lol.

I hope no SSDs were lost solving 110 and 115 with this program. RIP SSDs and all of your exabytes that you stored. 😁
jr. member
Activity: 40
Merit: 6
April 01, 2024, 10:08:24 AM
Quote
You talk about crashing SSDs, more proof that you don't know about this program or how it works. If an SSD can't handle being written to 12 times a day, then what's the point of even owning one lol. Yes, in this program you control what and how often DPs are saved to your hard drive.

Now you claim 130 with DP 32 is magnitudes more than 115 with DP 25, which further shows you don’t know how kangaroo or this program works. Magnitudes? lol.

Maybe look at the code for this program that you are commenting on, before commenting on it.
I believe you are the one who should look into the source code before making any other statements about it. Maybe check the hash lookup part; you will be surprised...

Let's say you wait 1 year to fill up your DP cache (in RAM) and decide to write it to disk. Do you think you will have to write the same amount of data you have in RAM? Before answering, maybe create a simple C program that maps a 2 TB file to memory and writes a single byte at offset 10 GB, for example. No seeks, no nothing. I don't think you get the actual difference between offline storage and accessing random addresses of volatile memory, which is O(1). This is why it's called "random access": any byte you want to read gets read in constant time.
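For anyone who wants to try that experiment, here is a minimal Python equivalent of the suggested C program (a sketch, assuming a 64-bit OS and a filesystem with sparse-file support; shrink SIZE if your disk is small):

Code:
import mmap

# Create a sparse 2 TB file, map it, and write one byte at a 10 GB offset.
# The OS still faults in and later flushes a whole page (typically 4 KiB)
# around that single byte -- the overhead being described above.
SIZE = 2 * 1024**4
with open("bigfile.bin", "wb") as f:
    f.truncate(SIZE)

with open("bigfile.bin", "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)    # map the whole file
    mm[10 * 1024**3] = 0xFF          # one-byte write, page-sized I/O underneath
    mm.flush()
    mm.close()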

You are right though about the storage size difference, it's just 2**(0.5) between #115 with DP 25 and #130 with DP 32.
I cannot tell you how many actual storage bytes were required, because I don't care about how the program stored them, just their count.

Quote
This program also solved #115 in 13 days (114 bit key on the Secp256K1 field). It required 2**58.36 group operations using DP25 to complete.
2**58.36 operations = 57.36 bits of jumps for each kang set = 2**(57.36 - 25) stored jumps each, so storage space was around 2*2**32.36 items for the complete merged "central table" / hashmap. Which is almost perfectly equal to my estimates.
full member
Activity: 1050
Merit: 219
Shooters Shoot...
April 01, 2024, 09:42:42 AM
Quote
Just answer the question, how much storage or RAM do you think one would need, solving a 115 bit range key, using a DP of 25? According to your previous calculations...
What exactly is wrong with my calculations, to be precise?... You were specifically asking something about #130 with a specific DP. Now you want to know why #115 is a manageable effort? OK...

Keyspace: 114 bits.
T size: 57 bits
W size: 57 bits
DP = 25, so Tdp size = 32 bits
Wdp size = 32 bits

Required memory for hashmap: 2*2**32 entries = ~8.6 billion entries * sizeof(jump entry)
Required steps: 2*2**57 + numKang * 2**25
Probability(success) = 1 - (1 - 2**-57) ** (2**57) = 63%

I do not believe you understand how a hash map needs to work, in order to check for a collision.
In simplest terms, you will need some sort of a sorted list. Optimally, a balanced binary search tree. That one has O(log n) lookup and insert complexity. So, for a DP 25, that is filled with 2**33 entries, it requires 33 steps for a single lookup, IN THE AVERAGE CASE scenario, when your search tree is balanced.

The memory requirement for #130 with DP 32 is orders of magnitude higher than for #115 with DP 25!

It doesn't matter if you use a single CPU with a few TB of RAM or many CPUs with less RAM each. You will end up with the same problem of having to merge / query the DP "central table" / search tree / sorted list, call it whatever you want.

Good luck with your "files".
You are in this thread spouting nonsense. That's the issue. You have no clue as to how this program works.

You say solving 130 will require exabytes of RAM/storage, which is false. So I asked how many it would take for 115 with DP 25, how many exa- or terabytes? You couldn't answer in terms of bytes.

You talk about crashing SSDs, more proof that you don't know about this program or how it works. If an SSD can't handle being written to 12 times a day, then what's the point of even owning one lol. Yes, in this program you control what and how often DPs are saved to your hard drive.

Now you claim 130 with DP 32 is magnitudes more than 115 with DP 25, which further shows you don’t know how kangaroo or this program works. Magnitudes? lol.

Maybe look at the code for this program that you are commenting on, before commenting on it.
jr. member
Activity: 40
Merit: 6
April 01, 2024, 09:27:02 AM
Quote
Just answer the question, how much storage or RAM do you think one would need, solving a 115 bit range key, using a DP of 25? According to your previous calculations...
What exactly is wrong with my calculations, to be precise?... You were specifically asking something about #130 with a specific DP. Now you want to know why #115 is a manageable effort? OK...

Keyspace: 114 bits.
T size: 57 bits
W size: 57 bits
DP = 25, so Tdp size = 32 bits
Wdp size = 32 bits

Required memory for hashmap: 2*2**32 entries = ~8.6 billion entries * sizeof(jump entry)
Required steps: 2*2**57 + numKang * 2**25
Probability(success) = 1 - (1 - 2**-57) ** (2**57) = 63%

I do not believe you understand how a hash map needs to work, in order to check for a collision.
In simplest terms, you will need some sort of a sorted list. Optimally, a balanced binary search tree. That one has O(log n) lookup and insert complexity. So, for a DP 25, that is filled with 2**33 entries, it requires 33 steps for a single lookup, IN THE AVERAGE CASE scenario, when your search tree is balanced.
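For what it's worth, those numbers are easy to reproduce. A sketch of the same estimate (my own figures, using the assumptions above):

Code:
import math

# Reproducing the #115 estimate above (114-bit keyspace, DP 25).
set_bits = 114 / 2                        # tame / wild set size: 2^57 each
dp_bits = set_bits - 25                   # stored DPs per set: 2^32

entries = 2 * 2**dp_bits                  # hashmap entries, both sets
print(f"entries ~ 2^{math.log2(entries):.0f} = {entries:.2e}")  # ~8.6e9

# P(collision) after 2^57 draws, each hitting with probability 2^-57:
p_fail = math.exp(2**set_bits * math.log1p(-2**-set_bits))
print(f"P(success) ~ {1 - p_fail:.0%}")   # ~63%, i.e. 1 - 1/e

# Balanced BST: ~log2(n) comparisons per lookup over the stored DPs.
print(f"lookup ~ {math.log2(entries):.0f} steps")               # ~33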

The memory requirement for #130 with DP 32 is orders of magnitude higher than for #115 with DP 25!

It doesn't matter if you use a single CPU with a few TB of RAM or many CPUs with less RAM each. You will end up with the same problem of having to merge / query the DP "central table" / search tree / sorted list, call it whatever you want.

Good luck with your "files".
full member
Activity: 1050
Merit: 219
Shooters Shoot...
April 01, 2024, 06:32:15 AM
Quote
I don't understand in depth how kangaroo works, but @Etar has created a distributed kangaroo.
Here are the links:
- https://github.com/Etayson/Server-Client-apps-for-Etarkangaroo
- https://github.com/Etayson/Etarkangaroo

If someone has the resources needed to run a #130 puzzle operation, we can try to gather some GPU power.

Regarding what @WanderingPhilospher is asking @kTimesG, we can try to run this tool for #115 and see how long it takes and how many resources we need/use.
If so just PM me here.


No need to test #115, I already know the answer, I just wanted kTimesG to continue to put his foot in his mouth lol.

He really has no clue what he is saying, nor does he understand this program. This program being the program that this thread is about. I like to have good laughs and was looking forward to his follow-up answer, but maybe he now realizes he is wrong and won't comment again lol.

As far as Etar's program, it is good and works up to 192 bits. The only drawback is that it is written in PureBasic, and many people who use Linux struggle trying to compile it.
newbie
Activity: 6
Merit: 0
April 01, 2024, 05:14:48 AM
I don't understand in depth how kangaroo works, but @Etar has created a distributed kangaroo.
Here are the links:
- https://github.com/Etayson/Server-Client-apps-for-Etarkangaroo
- https://github.com/Etayson/Etarkangaroo

If someone has the resources needed to run a #130 puzzle operation, we can try to gather some GPU power.

Regarding what @WanderingPhilospher is asking @kTimesG, we can try to run this tool for #115 and see how long it takes and how many resources we need/use.
If so just PM me here.
full member
Activity: 1050
Merit: 219
Shooters Shoot...
March 31, 2024, 01:56:55 PM
Quote
Can someone show at least 2 DPs? I'd like to know more about them and what makes them useful in finding a collision.

They can be in many forms; here is a trailing one, with 0s:
824EDCFE76F16D85BB77626BE00000000

Or it could be a leading one, with 0s:
00000000824EDCFE76F16D85BB77626BE

Those are both, at least, DP 32.

With those DPs, you would also have a distance/private key.

So if you get a tame DP to match a wild DP, then you would have a collision and possible solution to whatever public key you are looking for.

Tame:
824EDCFE76F16D85BB77626BE00000000  1000
Wild:
824EDCFE76F16D85BB77626BE00000000  0001

Then subtract wild distance from tame distance and that would be the private key to the public key being searched for. Priv key = 999

This is just an example; those values do not match up on the real curve, I used them for an easy explanation.
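As a toy script version of that matching step (same made-up values as above, not real curve data):

Code:
# Toy version of the tame/wild matching above; the X value and distances
# are the made-up example numbers, not real curve points.
def is_dp(x, dp=32):
    return x & ((1 << dp) - 1) == 0   # trailing-zeros style DP check

tame = {0x824EDCFE76F16D85BB77626BE00000000: 1000}  # DP point -> distance
wild = {0x824EDCFE76F16D85BB77626BE00000000: 1}

for point, t_dist in tame.items():
    if is_dp(point) and point in wild:
        print("collision, priv key =", t_dist - wild[point])  # 999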
jr. member
Activity: 47
Merit: 2
March 31, 2024, 12:19:04 PM
Can someone show at least 2 DPs? I'd like to know more about them and what makes them useful in finding a collision.
full member
Activity: 1050
Merit: 219
Shooters Shoot...
March 31, 2024, 12:02:38 AM
Quote
First thing, I would not be using RAM. All DP info stored in files.

You still think it would require exabytes? To store all of those DPs?
Glad to see some progress. Did you factor in the entire overhead of using non-O(1) lookup and item storage for DPs? Or are you talking in the abstract?

You have only 2 options:
a. Insist on a constant time factor to solve the puzzle, with all the assumptions factored in. One of those requirements is an O(1) DP lookup / storage. You will need a few TB of random access memory at a physical level, not in "files".
b. Actually understand how real-life limitations kick in.

Since the DP points are impossible to predict or compute in "ranges", actual overheads like disk page reads/writes kick in.
Reading from an SSD is thousands of times slower than reading from physical RAM.
Writing to files involves reading entire pages of data, combining them, and writing them back.

Since DP values are uniformly spread out across the entire range, say goodbye to "fast" SSD speeds when reading and writing even a single DP entry: the access is not sequential, and you have almost 0% chance of even having two DP points in a close range.

But sure, if you wait like 5 seconds to compute a DP and store it to a file, and use this as your baseline, then it's 2**65.5 + DP overhead total time. But you'll end up with a broken SSD whose write endurance has been exceeded.

Just answer the question, how much storage or RAM do you think one would need, solving a 115 bit range key, using a DP of 25? According to your previous calculations...
jr. member
Activity: 40
Merit: 6
March 30, 2024, 04:41:09 PM
Quote
First thing, I would not be using RAM. All DP info stored in files.

You still think it would require exabytes? To store all of those DPs?
Glad to see some progress. Did you factor in the entire overhead of using non-O(1) lookup and item storage for DPs? Or are you talking in the abstract?

You have only 2 options:
a. Insist on a constant time factor to solve the puzzle, with all the assumptions factored in. One of those requirements is an O(1) DP lookup / storage. You will need a few TB of random access memory at a physical level, not in "files".
b. Actually understand how real-life limitations kick in.

Since the DP points are impossible to predict or compute in "ranges", actual overheads like disk page reads/writes kick in.
Reading from an SSD is thousands of times slower than reading from physical RAM.
Writing to files involves reading entire pages of data, combining them, and writing them back.

Since DP values are uniformly spread out across the entire range, say goodbye to "fast" SSD speeds when reading and writing even a single DP entry: the access is not sequential, and you have almost 0% chance of even having two DP points in a close range.

But sure, if you wait like 5 seconds to compute a DP and store it to a file, and use this as your baseline, then it's 2**65.5 + DP overhead total time. But you'll end up with a broken SSD whose write endurance has been exceeded.
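To put a rough number on that penalty (both figures are assumptions of mine: a typical 4 KiB page and a ~60-byte DP record):

Code:
# Back-of-the-envelope write amplification for uncached random DP inserts:
# each tiny record dirties a whole page.
PAGE_BYTES = 4096   # typical filesystem / SSD page size (assumption)
ENTRY_BYTES = 60    # rough DP record size (assumption)

print(f"write amplification ~ {PAGE_BYTES // ENTRY_BYTES}x per random insert")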
full member
Activity: 1050
Merit: 219
Shooters Shoot...
March 30, 2024, 11:36:38 AM
Quote
I had zero overflow during tests.

And for 130, I am using the average case scenario and numbers. No exabytes needed. And it’s obvious you don’t understand the difference between a kangaroo and a stored DP.

You do the math yourself, take a DP, we will say DP 32, and you tell me, in your expert opinion, how much storage space is needed, roughly, for solving 130.  

I would refer you to OP's GitHub to read about the time/memory tradeoff, but you've already stated you don't agree with much of what he has said or programmed.

Anyway, let me know storage space required, avg run case, for 130, using DP32
I think you are forgetting something about real-life constraints. Let's go by your example.
I will use log2 (bits) as a base to simplify calculations.

Puzzle #130 - keyspace = N = 129 bits.

We have two sets: T (tame) and W (wild), each having sqrt(N) elements (64.5 bits).

At this point probability of a collision (T and W intersection is not empty) is:
P(success) = 1 - P(failure) = 1 - (1 - 2**(-64.5)) ** (2 ** 64.5) = 63%

Total operations so far: 2 * 2**64.5 = 2 ** 65.5

Adding DP = 32 into the mix. So we store, on average, one T point out of every 2 ** 32 and one W point out of every 2 ** 32.

So size of T = 2 ** (64.5 - 32) = 2 ** 32.5
Size of W = 2 ** (64.5 - 32) = 2 ** 32.5

Remember we didn't change anything about the probability, so these numbers are still for a 63% success probability.

Now, since DP only reduces storage by a factor of 2**DP, then the number of operations until T and W collide increases by:
2 * 2**DP / 2 on average (operations between real collision and reaching the DP point, on average for T and W) = 2 ** DP

So total ops for a 63% chance of success = 2 ** 65.5 + 2 ** DP

Now, you may say: oh, so I should apologize because I stated we need much more storage. Well, let's go back to real-life:

- set of size T with DP 32 = 2**32.5 elements
- set of size W with DP 32 = 2**32.5 elements
- P(success) = 63%

Now, how many kangaroos do we use? The naive answer is, it doesn't matter, because we are counting OPERATIONS total.

But it does matter when having to think how to store the traveled distances.
Let's see what distance a single kangaroo would travel, on average.

Jump distances = [2**0, 2**1, 2**2, ... 2**128]
Avg(jump dist) = (2**129 - 1) / 129 which almost equals 2**(129 - 7) = 2 ** 122

Number of jumps performed by the kangaroo to fill the T or W set is NOT 2**32.5, but 2**64.5 (because we still jumped even if not reaching a DP)

So total traveled distance = 2**64.5 * avgJumpSize = 2 ** (122 + 64.5), i.e. 186.5 bits ≈ 24 bytes

So storing a jump requires:
- jump position / key, let's say 33 bytes (X + Y sign)
- distance traveled (24 bytes) + kang type

Just by doing some simple estimations this would require a lot of TB (terabytes) of RAM.
You will need to increase the number of kangaroos by a factor of 256 to get rid of one byte in the stored DP.
65536 kangaroos to get rid of two bytes. Etc...

So to conclude:

- you need 2**32.5 tame DPs, each around 60+ bytes
- you need 2**32.5 wild DPs, each around 60+ bytes
- your chances after 2**65.5 operations are around 63%
- the more kangaroos you have, the more DP overhead increases: 2**32 * numKangaroos
- the kangaroo jumps and the lookup for stored jumps needs to be in the complexity range of O(1) - e.g. RAM, not some swap file

If you can prove to me that you can fit the two T and W sets in real life, without having to rely on memory swapped to a storage device, then yes, you were right.

So, it goes without saying that maybe a real-life approach would not even get anywhere near storing the DP points, in our lifetime. Simply due to resource constraints.

Why did I say we need exabytes of data? Well, sir, I will leave this as an exercise for the reader.
A lot to unpack here.

First thing, I would not be using RAM. All DP info stored in files.

You still think it would require exabytes? To store all of those DPs?

Before diving deeper, riddle me this: based on your calculations for the 130-bit range and DP 32 (which you say would require exabytes), how much would the 115-bit range at DP 25 need, according to your calculations?
jr. member
Activity: 40
Merit: 6
March 30, 2024, 10:10:21 AM
Quote
I had zero overflow during tests.

And for 130, I am using the average case scenario and numbers. No exabytes needed. And it’s obvious you don’t understand the difference between a kangaroo and a stored DP.

You do the math yourself, take a DP, we will say DP 32, and you tell me, in your expert opinion, how much storage space is needed, roughly, for solving 130.  

I would refer you to OP's GitHub to read about the time/memory tradeoff, but you've already stated you don't agree with much of what he has said or programmed.

Anyway, let me know storage space required, avg run case, for 130, using DP32
I think you are forgetting something about real-life constraints. Let's go by your example.
I will use log2 (bits) as a base to simplify calculations.

Puzzle #130 - keyspace = N = 129 bits.

We have two sets: T (tame) and W (wild), each having sqrt(N) elements (64.5 bits).

At this point probability of a collision (T and W intersection is not empty) is:
P(success) = 1 - P(failure) = 1 - (1 - 2**(-64.5)) ** (2 ** 64.5) = 63%

Total operations so far: 2 * 2**64.5 = 2 ** 65.5

Adding DP = 32 into the mix. So we store, on average, one T point out of every 2 ** 32 and one W point out of every 2 ** 32.

So size of T = 2 ** (64.5 - 32) = 2 ** 32.5
Size of W = 2 ** (64.5 - 32) = 2 ** 32.5

Remember we didn't change anything about the probability, so these numbers are still for a 63% success probability.

Now, since DP only reduces storage by a factor of 2**DP, then the number of operations until T and W collide increases by:
2 * 2**DP / 2 on average (operations between real collision and reaching the DP point, on average for T and W) = 2 ** DP

So total ops for a 63% chance of success = 2 ** 65.5 + 2 ** DP

Now, you may say: oh, so I should apologize because I stated we need much more storage. Well, let's go back to real-life:

- set of size T with DP 32 = 2**32.5 elements
- set of size W with DP 32 = 2**32.5 elements
- P(success) = 63%

Now, how many kangaroos do we use? The naive answer is, it doesn't matter, because we are counting OPERATIONS total.

But it does matter when having to think how to store the traveled distances.
Let's see what distance a single kangaroo would travel, on average.

Jump distances = [2**0, 2**1, 2**2, ... 2**128]
Avg(jump dist) = (2**129 - 1) / 129 which almost equals 2**(129 - 7) = 2 ** 122

Number of jumps performed by the kangaroo to fill the T or W set is NOT 2**32.5, but 2**64.5 (because we still jumped even if not reaching a DP)

So total traveled distance = 2**64.5 * avgJumpSize = 2 ** (122 + 64.5), i.e. 186.5 bits ≈ 24 bytes

So storing a jump requires:
- jump position / key, let's say 33 bytes (X + Y sign)
- distance traveled (24 bytes) + kang type

Just by doing some simple estimations this would require a lot of TB (terabytes) of RAM.
You will need to increase the number of kangaroos by a factor of 256 to get rid of one byte in the stored DP.
65536 kangaroos to get rid of two bytes. Etc...

So to conclude:

- you need 2**32.5 tame DPs, each around 60+ bytes
- you need 2**32.5 wild DPs, each around 60+ bytes
- your chances after 2**65.5 operations are around 63%
- the more kangaroos you have, the more DP overhead increases: 2**32 * numKangaroos
- the kangaroo jumps and the lookup for stored jumps needs to be in the complexity range of O(1) - e.g. RAM, not some swap file

If you can prove to me that you can fit the two T and W sets in real life, without having to rely on memory swapped to a storage device, then yes, you were right.

So, it goes without saying that maybe a real-life approach would not even get anywhere near storing the DP points, in our lifetime. Simply due to resource constraints.

Why did I say we need exabytes of data? Well, sir, I will leave this as an exercise for the reader.
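Condensing the derivation above into a runnable sketch (same assumptions: 33-byte point, distance bytes from the average-case jump size, one kang-type byte; the per-kangaroo overhead called out above is not included):

Code:
import math

# Reproducing the #130 estimate above (keyspace N = 2^129, DP = 32).
range_bits = 129
set_bits = range_bits / 2                  # 64.5: tame / wild set sizes
dp = 32

stored = 2 ** (set_bits - dp)              # 2^32.5 DPs per set
p_ok = 1 - math.exp(2**set_bits * math.log1p(-2**-set_bits))  # ~63%

avg_jump_bits = math.log2((2**range_bits - 1) / range_bits)   # ~122
dist_bits = set_bits + avg_jump_bits       # ~186.5 bits traveled
dist_bytes = math.ceil(dist_bits / 8)      # ~24 bytes per distance

entry_bytes = 33 + dist_bytes + 1          # X + Y sign, distance, kang type
total_bytes = 2 * stored * entry_bytes     # both sets
print(f"P(success) ~ {p_ok:.0%}, entry ~ {entry_bytes} B, "
      f"total ~ {total_bytes / 2**40:.2f} TiB (before per-kangaroo overhead)")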
jr. member
Activity: 31
Merit: 52
March 30, 2024, 01:28:20 AM
Dear @Jean_Luc, what is your opinion about a stride in the kangaroo algo, if probabilistically it can be shown that a particular stride or stride range has a lower total keyspace?

The jump tables are already defined and the paths of the kangaroos are deterministic based on X, so is a stride possible in this algo or not?
jr. member
Activity: 9
Merit: 0
March 29, 2024, 03:45:58 PM
Quote
Dear @Baskentliia,

It may be because English is probably not your first language, but saying to Jean Luc "We expect from you" is pretty rude.
The "prize" for solving puzzle #130 is very high, so don't expect to be able to do it with someone else's tools/software.
If you want to solve one of the remaining puzzles, you have to team up with a lot of people (and share the prize money) or be smarter than everyone else, i.e. come up with a new method or code to do it much faster than is possible with the actual hardware and software.
In fact, showing how secure the bitcoin crypto is was the whole idea behind these puzzles.

And a 256 bit range is ridiculous at the moment, please do the maths and calculate how many resources you'll need to solve a 256 bit number! That's why the puzzles stop at 160 bit...

Sincerely and good luck puzzling!



Thank you buddy. But we are not software developers, we have difficulty understanding and running the programs we use, so it is not possible for us to make changes to these programs. We would like to thank the software developers who share their programs for free without expecting anything in return.
We run the program and wait for luck to strike us.
sr. member
Activity: 462
Merit: 696
March 29, 2024, 03:06:29 PM
It is possible to reduce a little bit the complexity of the classic kangaroo algorithm by spreading the starting kangaroos in a non-uniform manner.
Roughly speaking, if the private key lies in certain areas it will be found faster. As a counterpart, if it lies in certain other areas it will be found slower.
But on average, considering the whole range, the complexity is well reduced.
If you read and understand carefully the reference on the GitHub page, you will find the trick Wink

brand new
Activity: 0
Merit: 0
March 29, 2024, 06:56:53 AM
Dear @Baskentliia,

It may be because English is probably not your first language, but saying to Jean Luc "We expect from you" is pretty rude.
The "prize" for solving puzzle #130 is very high, so don't expect to be able to do it with someone else's tools/software.
If you want to solve one of the remaining puzzles, you have to team up with a lot of people (and share the prize money) or be smarter than everyone else, i.e. come up with a new method or code to do it much faster than is possible with the actual hardware and software.
In fact, showing how secure the bitcoin crypto is was the whole idea behind these puzzles.

And a 256 bit range is ridiculous at the moment, please do the maths and calculate how many resources you'll need to solve a 256 bit number! That's why the puzzles stop at 160 bit...
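For anyone actually doing that math: kangaroo needs about 2*sqrt(N) group operations, so for a 256-bit puzzle (255-bit keyspace) even an absurdly fast rig gets nowhere. A quick sketch (the speed figure is a made-up assumption):

Code:
import math

# ~2*sqrt(N) operations for a 255-bit keyspace, at a hypothetical 10^10 ops/s.
ops = 2 * 2 ** (255 / 2)                   # ~2^128.5 group operations
speed = 1e10                               # assumed jumps per second
years = ops / speed / (3600 * 24 * 365.25)
print(f"~2^{math.log2(ops):.1f} ops -> ~{years:.1e} years")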

Sincerely and good luck puzzling!

jr. member
Activity: 9
Merit: 0
March 27, 2024, 04:08:17 PM
Quote
Are you going to modify the code to be able to search the 130-bit range? There is a new NVIDIA GPU that was announced, the GB200, which is exponentially more powerful than the current GPUs.

I don't think so, unless someone proves to me he has the needed power and is ready to beat the world record  Wink


Please update your program and upgrade to 256 bit range as soon as possible.
Others have updated your program but it wasn't successful enough. We expect from you
newbie
Activity: 3
Merit: 0
March 27, 2024, 03:31:11 PM
Quote
Are you going to modify the code to be able to search the 130-bit range? There is a new NVIDIA GPU that was announced, the GB200, which is exponentially more powerful than the current GPUs.

I don't think so, unless someone proves to me he has the needed power and is ready to beat the world record  Wink

Here's my current work file for 130, I got here in about 3 days.
I need 25.55 DP.
Is what I have wrong, since you said the program can't solve it?
Code:
Kangaroo v2.2
Loading: save.work
Version   : 0
DP bits   : 40
Start     : 200000000000000000000000000000000
Stop      : 3FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
Key       : 03633CBE3EC02B9401C5EFFA144C5B4D22F87940259634858FC7E59B1C09937852
Count     : 0 2^-inf
Time      : 00s
DP Size   : 2.1/4.7MB
DP Count  : 4480 2^12.129
HT Max    : 2 [@ 001ECF]
HT Min    : 0 [@ 000000]
HT Avg    : 0.02
HT SDev   : 0.13
Kangaroos : 0 2^-inf