
Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it - page 68. (Read 245001 times)

member
Activity: 165
Merit: 26
My new public key search system is almost ready. I had to reinvent my binary database system because, although the database was lightweight (https://bitcointalksearch.org/topic/lightweight-database-for-brute-force-using-publickeys-32mk-381mbsecp256k1-5475626), I had efficiency issues with binary search. That is now a thing of the past. I have designed a system that stores 100 million public keys in an 80 KB file (yes, you read that right: 80 KB, and in the future it will be smaller) with maximum efficiency. We would only be limited by the current speed of Secp256k1 when generating the 100 million or more public keys while creating the database. I am finishing the search script; after months of being stuck due to personal issues, I am finally back on track.

Any day I would prefer a database that is as fast as possible and returns or writes results as immediately as possible, rather than a CPU-hungry tiny database. And all databases are binary... It sounds like you're just compressing some bitmap of ranges; what's the actual worst-case update/query/insert efficiency of your database? This is a problem that has already been solved in much faster and smarter ways, e.g. GZIP, LZMA, Deflate, etc.
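For context on the "bitmap of ranges" remark: 80 KB for 100 million keys is far below one bit per key, so exact membership of arbitrary keys is impossible; only contiguous ranges compress that well. A minimal sketch of such a range index (a hypothetical layout of my own, not the poster's actual format):

```python
import bisect

class RangeIndex:
    """Membership test for keys covered by sorted, disjoint ranges.

    Storage is proportional to the number of ranges, not the number of
    keys, which is how millions of contiguous keys can fit in a tiny
    file. Hypothetical layout, for illustration only.
    """

    def __init__(self, ranges):
        # ranges: iterable of (start, length) pairs, assumed disjoint
        self.starts, self.lengths = zip(*sorted(ranges))

    def __contains__(self, key):
        # Find the last range starting at or before `key`.
        i = bisect.bisect_right(self.starts, key) - 1
        return i >= 0 and key < self.starts[i] + self.lengths[i]

# 100 million "keys" stored as one contiguous range: constant storage.
idx = RangeIndex([(2**59, 100_000_000), (2**60, 5)])
print(2**59 + 99_999_999 in idx)   # True
print(2**59 + 100_000_000 in idx)  # False
```

Worst-case query here is O(log R) in the number of ranges; the trade-off the critic is pointing at is that updates and random (non-contiguous) key sets destroy the compression.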
hero member
Activity: 630
Merit: 731
Bitcoin g33k
My new public key search system is almost ready. I had to reinvent my binary database system because, although the database was lightweight (https://bitcointalksearch.org/topic/lightweight-database-for-brute-force-using-publickeys-32mk-381mbsecp256k1-5475626), I had efficiency issues with binary search. That is now a thing of the past. I have designed a system that stores 100 million public keys in an 80 KB file (yes, you read that right: 80 KB, and in the future it will be smaller) with maximum efficiency. We would only be limited by the current speed of Secp256k1 when generating the 100 million or more public keys while creating the database. I am finishing the search script; after months of being stuck due to personal issues, I am finally back on track.

I wish you the best of luck and happy hunting. :)
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
My new public key search system is almost ready. I had to reinvent my binary database system because, although the database was lightweight (https://bitcointalksearch.org/topic/lightweight-database-for-brute-force-using-publickeys-32mk-381mbsecp256k1-5475626), I had efficiency issues with binary search. That is now a thing of the past. I have designed a system that stores 100 million public keys in an 80 KB file (yes, you read that right: 80 KB, and in the future it will be smaller) with maximum efficiency. We would only be limited by the current speed of Secp256k1 when generating the 100 million or more public keys while creating the database. I am finishing the search script; after months of being stuck due to personal issues, I am finally back on track.
member
Activity: 503
Merit: 38
Too much math going on here. :D

jr. member
Activity: 42
Merit: 0
Can we Beat the Square Root Bound for ECDLP over Fp2 via Representations?

https://eprint.iacr.org/2019/800.pdf
full member
Activity: 1232
Merit: 242
Shooters Shoot...
Why use N if the range of the privkey is different?
Why using whatever continuous range of size 2N if you can shift the public key into a self-symmetric interval and solve that instead?


Sorry, I don't know what you are talking about. I saw that you use N in your calculations. Then I ran experiments with a 60-bit privkey, for example: no need to fool with the full N, 60 bits is enough to find 60 bits. If the software uses a big N it makes N operations; a smaller N may take fewer operations, but the privkey doesn't change. Of course, then only N changes...

He's not making a smaller N or a smaller range size, he is using curve symmetry. There is no "fooling" N.

If you are searching for a public key that is originally in a 60-bit range (a manual but easy-to-explain way): take secp256k1's N and subtract 2^59 from it. This is your lower bound / start of the range. Your upper bound / end of the range will be 2^60 - 1. Same size range, but now it uses the curve's symmetry properties.
Now you can start 1 tame and 1 wild on the negative side (lower-bound side) and 1 tame and 1 wild on the positive side (upper-bound side).
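The bound arithmetic above can be sketched directly. A minimal illustration (the constant is the standard secp256k1 group order; the variable names are mine): a "negative" scalar -d is represented mod N as N - d, because (N - d)G equals -dG on the curve.

```python
# secp256k1 group order (standard constant)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def as_scalar(d):
    # Represent any integer scalar (including negatives) mod N.
    return d % N

lower = as_scalar(-(2**59))   # == N - 2**59, the lower bound above
upper = 2**60 - 1             # the upper bound above

print(lower == N - 2**59)        # True
# Sanity check: the lower bound really is "minus 2**59" mod N.
print((lower + 2**59) % N == 0)  # True
```

So the interval straddles zero: the negative-side kangaroos walk scalars near N (i.e. small negative values), the positive-side ones walk small positive values.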
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
Sorry, I don't know what you are talking about. I saw that you use N in your calculations. Then I ran experiments with a 60-bit privkey, for example: no need to fool with the full N, 60 bits is enough to find 60 bits. If the software uses a big N it makes N operations; a smaller N may take fewer operations, but the privkey doesn't change. Of course, then only N changes...
Why are you spamming 3 posts in a row, IDK. And it's clear you didn't understand what I meant, you're also somehow mixing scalar indices with point coordinates and so on... your code also makes no sense.

My post was about reducing ECDLP complexity in an interval of size N from 2*sqrt(N) group operations to ~ 1.05 * sqrt(N) group operations by taking advantage of the group's fast inversion. I suggest maybe reading the exact definitions for every word in that sentence... and what "shifting a problem" between equivalent domains means in math / logic in general.

I made only one reply to you.
member
Activity: 165
Merit: 26
Sorry, I don't know what you are talking about. I saw that you use N in your calculations. Then I ran experiments with a 60-bit privkey, for example: no need to fool with the full N, 60 bits is enough to find 60 bits. If the software uses a big N it makes N operations; a smaller N may take fewer operations, but the privkey doesn't change. Of course, then only N changes...
Why are you spamming 3 posts in a row, IDK. And it's clear you didn't understand what I meant, you're also somehow mixing scalar indices with point coordinates and so on... your code also makes no sense.

My post was about reducing ECDLP complexity in an interval of size N from 2*sqrt(N) group operations to ~ 1.05 * sqrt(N) group operations by taking advantage of the group's fast inversion. I suggest maybe reading the exact definitions for every word in that sentence... and what "shifting a problem" between equivalent domains means in math / logic in general.
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
Why use N if the range of the privkey is different?
Why using whatever continuous range of size 2N if you can shift the public key into a self-symmetric interval and solve that instead?


Sorry, I don't know what you are talking about. I saw that you use N in your calculations. Then I ran experiments with a 60-bit privkey, for example: no need to fool with the full N, 60 bits is enough to find 60 bits. If the software uses a big N it makes N operations; a smaller N may take fewer operations, but the privkey doesn't change. Of course, then only N changes...



Why use N if the range of the privkey is different?
Why using whatever continuous range of size 2N if you can shift the public key into a self-symmetric interval and solve that instead?


What shift do you mean? Divide shifts to the right, multiply shifts to the left, shifting the decimal comma... you mean this kind of shift?

10 / 5 = 2.0

10 / 50 = 0.20

10 / 500 = 0.020

....

No profit.



A little off-topic, but:

Code:
x = 9901 + 100 - 1

i = 1
while x >= -9901:
    x = x - 1000
    print(x, i)
    i = i + 1

output:

9000 1
8000 2
7000 3
6000 4
5000 5
4000 6
3000 7
2000 8
1000 9
0 10
-1000 11
-2000 12
-3000 13
-4000 14
-5000 15
-6000 16
-7000 17
-8000 18
-9000 19
-10000 20

[Program finished]

If you run the script the result reaches 0; this could be the pubkey of privkey 0 if you replace the number x with a pubkey.

Also, at point 0, it becomes known that the pubkey is divisible by 10 without a fractional part.

It needs 2**65 subtractions to find a pubkey divisible by 2**65. To identify which puzzle-130 pubkey is divisible by 2**65, look for where the result becomes pubkey = 0.

So, again, too many operations: 2**65 subtractions and 2**65 searches for the 0 pubkey...


Another example:


Code:
x = 9901

i = 1
while x >= -9901:
    x = x - 1000
    print(x, i)
    i = i + 1

Output:

8901 1
7901 2
6901 3
5901 4
4901 5
3901 6
2901 7
1901 8
901 9
-99 10
-1099 11
-2099 12
-3099 13
-4099 14
-5099 15
-6099 16
-7099 17
-8099 18
-9099 19
-10099 20

[Program finished]

Let's subtract one of the results from x; subtract -5099:

Code:
x = 9901 - -5099

i = 1
while x >= -9901:
    x = x - 1000
    print(x, i)
    i = i + 1

Result:

14000 1
13000 2
12000 3
11000 4
10000 5
9000 6
8000 7
7000 8
6000 9
5000 10
4000 11
3000 12
2000 13
1000 14
0 15
-1000 16
-2000 17
-3000 18
-4000 19
-5000 20
-6000 21
-7000 22
-8000 23
-9000 24
-10000 25

[Program finished]

We now reach point 0, and x divides out exactly at step 15.

Can someone help and show how to find the clean x "going this way"?

member
Activity: 165
Merit: 26
Why use N if the range of the privkey is different?
Why using whatever continuous range of size 2N if you can shift the public key into a self-symmetric interval and solve that instead?
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
Quote
Yes, all forward, same jump rules for all. But the jump function is a function of Y, so they don't jump forward with the same size even if they have the same X. So the ones on the left are moving towards the ones on the right, but the two don't move forward with equal jumps. This makes the covering more uniform.
Can you give an example of what you mean by the jump function is function of Y, with an example? Thanks.

Code:
jump_index = current_element.y % jump_table_length

Let's say we have two points with the same X, they are opposite points.

P = (xp, +yp)
-P = (xp, -yp)

If they use xp as the jump function, they jump forward by the same distance, hence a less random walk. Since yp = ±sqrt(xp**3 + 7), jumping by Y spreads the randomness.

Another way to view this: in a [-N, N] interval we have N unique X values and 2N unique Y values. A larger pool = better pseudo-randomness in an interval that's half the length.

But even jumping by X alone has very good results, it was just that jumping by Y had even better ones.

Why use N if the range of the privkey is different?
member
Activity: 165
Merit: 26
Quote
Yes, all forward, same jump rules for all. But the jump function is a function of Y, so they don't jump forward with the same size even if they have the same X. So the ones on the left are moving towards the ones on the right, but the two don't move forward with equal jumps. This makes the covering more uniform.
Can you give an example of what you mean by the jump function is function of Y, with an example? Thanks.

Code:
jump_index = current_element.y % jump_table_length

Let's say we have two points with the same X, they are opposite points.

P = (xp, +yp)
-P = (xp, -yp)

If they use xp as the jump function, they jump forward by the same distance, hence a less random walk. Since yp = ±sqrt(xp**3 + 7), jumping by Y spreads the randomness.

Another way to view this: in a [-N, N] interval we have N unique X values and 2N unique Y values. A larger pool = better pseudo-randomness in an interval that's half the length.

But even jumping by X alone has very good results, it was just that jumping by Y had even better ones.
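As a toy illustration of the point above (my own sketch, not the poster's code): two mirrored points share x, so an x-keyed jump table gives both walks the same jump, while a y-keyed table usually does not. Here G is the standard secp256k1 generator; JUMPS is an arbitrary example table.

```python
p = 2**256 - 2**32 - 977            # secp256k1 field prime

JUMPS = [2**i for i in range(16)]   # example jump-distance table

def jump_by_x(point):
    x, y = point
    return JUMPS[x % len(JUMPS)]

def jump_by_y(point):
    x, y = point
    return JUMPS[y % len(JUMPS)]

# G and -G: opposite points, same x, with y versus p - y.
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
negG = (G[0], p - G[1])

print(jump_by_x(G) == jump_by_x(negG))   # True: x-keyed jumps are identical
print(jump_by_y(G) == jump_by_y(negG))   # False for this pair: y-keyed jumps differ
```

Since p mod len(JUMPS) is generally nonzero, y and p - y land on different table slots most of the time, which is the decorrelation being described.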
full member
Activity: 1232
Merit: 242
Shooters Shoot...
Prefix: 13zb1hQb
Hash: 20d45a6a76

the effort spent to achieve these didn't seem the same to me.

Prefix: 13zb
Suffix: h5so

Which target is closer to the hash? I'm genuinely asking to find out. The method seemed logical and useful to me, but I could be wrong. Please correct me if I'm mistaken.
It really does not matter what I tell you. People will argue for both sides so I would suggest you do your own key hunting and find out.

I will tell you this, you could have a hash that out of 40 characters, 1 is different.
An example:
20D45A6A762535700CE9E0B216E31994335DB8A5
20D45A6A762535700CE9E0B216E31994335DB8A6

and the private keys of those two could be very far apart: one in a 66-bit range, one in a 255-bit range. Meaning the H160 doesn't signify where the private key is located (bit range), and having two almost identical H160s does not mean the keys are close together, private-key / bit-range wise.
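That near-identical hashes say nothing about key distance follows from hash avalanche behavior. A quick sketch (using SHA-256 alone as a stand-in for Bitcoin's full SHA-256 + RIPEMD-160 pipeline; the principle is the same): hashes of two adjacent keys differ in roughly half their bits.

```python
import hashlib

def h(k: int) -> int:
    # Hash a 32-byte big-endian "private key". SHA-256 stands in for
    # the full hash160 pipeline here, for illustration only.
    return int.from_bytes(hashlib.sha256(k.to_bytes(32, "big")).digest(), "big")

k = 2**65 + 12345              # two keys differing by exactly 1
a, b = h(k), h(k + 1)
hamming = bin(a ^ b).count("1")
print(hamming)                 # ~128 of 256 bits differ on average
```

The converse is the situation described above: two digests that differ in one character can come from keys anywhere in the space.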

The original point I was making is that you can use a variant of VanitySearch, which can use GPUs, to do the same type of search but process many more keys per second. I haven't used it in quite a while, but I believe if you use this:

13zb*h5so; in vanitysearch
it's the same as
Prefix: 13zb
Suffix: h5so
in the program you are referencing.




Quote
Yes, all forward, same jump rules for all. But the jump function is a function of Y, so they don't jump forward with the same size even if they have the same X. So the ones on the left are moving towards the ones on the right, but the two don't move forward with equal jumps. This makes the covering more uniform.
Can you give an example of what you mean by the jump function is function of Y, with an example? Thanks.
newbie
Activity: 3
Merit: 0
Prefix: 13zb1hQb
Hash: 20d45a6a76

the effort spent to achieve these didn't seem the same to me.

Prefix: 13zb
Suffix: h5so

Which target is closer to the hash? I'm genuinely asking to find out. The method seemed logical and useful to me, but I could be wrong. Please correct me if I'm mistaken.
full member
Activity: 1232
Merit: 242
Shooters Shoot...
In its current state, yes, it seems impractical. However, it can be made usable by customizing and narrowing the range. Compared to regular vanity search, it can find RIPEMD-160s with the same Hamming distance using less energy. Just my opinion. :)
What does this mean?

You can use a variant of VanitySearch to search in an exact bit range, or a specified range, with GPUs and/or CPUs, with prefix and suffix. How is this Rust, CPU-only program going to be better or faster?
newbie
Activity: 3
Merit: 0
In its current state, yes, it seems impractical. However, it can be made usable by customizing and narrowing the range. Compared to regular vanity search, it can find RIPEMD-160s with the same Hamming distance using less energy. Just my opinion. :)
hero member
Activity: 862
Merit: 662
https://github.com/nzengi/Dual-Vanity
someone found a somewhat easy way...

This looks useless.
member
Activity: 165
Merit: 26
So your 2 tames and 2 wilds are starting on the exact same points, mirrored of each other? Meaning same x, but different y. And then they both jump forward, positive jumps, of the same size?

Yes, all forward, same jump rules for all. But the jump function is a function of Y, so they don't jump forward with the same size even if they have the same X. So the ones on the left are moving towards the ones on the right, but the two don't move forward with equal jumps. This makes the covering more uniform.
full member
Activity: 1232
Merit: 242
Shooters Shoot...
I'd like to take credit for the below informal conjectures.

Solving ECDLP on an interval in 1.0 sqrt(b) group operations with 1.0 sqrt(b) stored items.

This beats BSGS by a factor of 2x since BSGS requires 2 sqrt(b) operations (worse, I think they are actually scalar multiplications). This also beats any known kangaroo or other algorithm when DP = 0.

Code:
          T1 ->            T2 ->
-N/2     -4N/5     |      4N/5     N/2

Shift interval to [-N/2, N/2) (there's a twist to this though, 0 is not needed**)
Start two tames at -4N/5 and 4N/5
Start two wilds at Q=kG and -Q

Jump alternately T1, W1, T2 and W2, using Y as the jump function. Before each jump, check whether the min(y, p - y) coordinate is mapped to a previous visit (type, distance). If so:

if type = T1 and visit.type = T1: continue walk, kangaroo jumped from left to right through opposite points
if type = T1 or T2 and visit.type = W1 or W2: DLP solved
if type = W1 and visit.type = W1: DLP solved* (it sounds absurd, but we have (-k + d1) == -(-k + d2))
if type = W2 and visit.type = W2: DLP solved* since (k + d1) == -(k + d2)
if type = W1 and visit.type = W2: DLP solved*, since -k + d1 = k + d2
if type = W2 and visit.type = W1: DLP solved* since k + d1 = -k + d2
if dlp not solved, move one of the two kangaroos forward

*: only if deltas don't cancel each other out (rarely but can happen)
**: can get rid of 0 because 8045a5248dae8d1dea3c4eab7caac0b35142851038eea7d1f2556bf08e47ccf0

What happens?

Forward jumps by any kangaroo are equivalent to backward jumps of the same distance. Even if those points are not actually visited and stored.
T1 moving forward == T2 moving backward
W1 is moving forward while also mirroring a W2 walking in the opposite direction. Double win.

If W1 or W2 catch each other, or catch some arbitrary point AND its opposite after passing through the middle, DLP is solved. That is right, we can solve DLP using a single wild kangaroo which sounds insane but is true, when k < 0 or -k < 0 and the walk passes through any -P & P points.

When these details are factored into the usual analysis of catching kangaroos, the probabilities of collision are multiplied because now the chance of hitting a visited point is actually quadrupled or so at each jump of any kangaroo:
- points that are symmetric are "the same point" basically = miss chance is halved
- kangs go in a sort of "meet in the middle" walk = more "visited" spots per average jump size
- kangs also continue their usual forward walk at the same time, of course (the classic analysis)

So now there are 4 walks instead of two, but because of the much higher density of "already visited" probabilities, the number of group operations to solve ECDLP is on average < 1.03 sqrt(b), and the number of scalar multiplications is basically zero (except for the initial setup) in 99.999% of cases.

So now the big question: what happens if we want to decrease storage and use DP?

Well, I don't have an actual proof, but some parts of the above still hold. Depending on the ratio between the interval size and the chosen 2**DP, there will still be an increased chance of collisions between DPs found on both halves by the various pairs of walks, decreasing the expected total number of group operations to anywhere between 1.5 and 1.7 sqrt(b). It is likely that as the interval and DP increase, the complexity stays around the same level or very slightly increases.
So your 2 tames and 2 wilds are starting on the exact same points, mirrored of each other? Meaning same x, but different y. And then they both jump forward, positive jumps, of the same size?
member
Activity: 165
Merit: 26
I'd like to take credit for the below informal conjectures.

Solving ECDLP on an interval in 1.0 sqrt(b) group operations with 1.0 sqrt(b) stored items.

This beats BSGS by a factor of 2x since BSGS requires 2 sqrt(b) operations (worse, I think they are actually scalar multiplications). This also beats any known kangaroo or other algorithm when DP = 0.

Code:
          T1 ->            T2 ->
-N/2     -4N/5     |      4N/5     N/2

Shift interval to [-N/2, N/2) (there's a twist to this though, 0 is not needed**)
Start two tames at -4N/5 and 4N/5
Start two wilds at Q=kG and -Q

Jump alternately T1, W1, T2 and W2, using Y as the jump function. Before each jump, check whether the min(y, p - y) coordinate is mapped to a previous visit (type, distance). If so:

if type = T1 and visit.type = T1: continue walk, kangaroo jumped from left to right through opposite points
if type = T1 or T2 and visit.type = W1 or W2: DLP solved
if type = W1 and visit.type = W1: DLP solved* (it sounds absurd, but we have (-k + d1) == -(-k + d2))
if type = W2 and visit.type = W2: DLP solved* since (k + d1) == -(k + d2)
if type = W1 and visit.type = W2: DLP solved*, since -k + d1 = k + d2
if type = W2 and visit.type = W1: DLP solved* since k + d1 = -k + d2
if dlp not solved, move one of the two kangaroos forward

*: only if deltas don't cancel each other out (rarely but can happen)
**: can get rid of 0 because 8045a5248dae8d1dea3c4eab7caac0b35142851038eea7d1f2556bf08e47ccf0

What happens?

Forward jumps by any kangaroo are equivalent to backward jumps of the same distance. Even if those points are not actually visited and stored.
T1 moving forward == T2 moving backward
W1 is moving forward while also mirroring a W2 walking in the opposite direction. Double win.

If W1 or W2 catch each other, or catch some arbitrary point AND its opposite after passing through the middle, DLP is solved. That is right, we can solve DLP using a single wild kangaroo which sounds insane but is true, when k < 0 or -k < 0 and the walk passes through any -P & P points.
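The wild-wild collision algebra can be checked numerically. A toy verification of my own over secp256k1 (standard curve constants; not the poster's implementation): construct a W2/W2-style collision where (k + d1) == -(k + d2) mod n, confirm the two visits share a canonical min(y, p - y) key, and recover k as -(d1 + d2)/2 mod n.

```python
# secp256k1 parameters (standard constants)
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    # Affine point addition; None is the point at infinity.
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    k %= n
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

def canon(P):
    # Canonical key that treats P and -P as the same visit.
    return (P[0], min(P[1], p - P[1]))

k  = 0x3FEDCBA987654321     # the secret
d1 = 0x1234567              # distance walked by one wild kangaroo
d2 = (-2 * k - d1) % n      # chosen so (k + d2) == -(k + d1) mod n

A = ec_mul(k + d1, G)
B = ec_mul(k + d2, G)
print(canon(A) == canon(B))   # True: mirrored visits collide

recovered = (-(d1 + d2) * pow(2, -1, n)) % n
print(recovered == k)         # True: k = -(d1 + d2) / 2 mod n
```

This is only the algebraic core of a single collision case, not the full four-walk schedule; the "*" caveat above corresponds to d1 + d2 == -2k having no solution when the deltas cancel.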

When these details are factored into the usual analysis of catching kangaroos, the probabilities of collision are multiplied because now the chance of hitting a visited point is actually quadrupled or so at each jump of any kangaroo:
- points that are symmetric are "the same point" basically = miss chance is halved
- kangs go in a sort of "meet in the middle" walk = more "visited" spots per average jump size
- kangs also continue their usual forward walk at the same time, of course (the classic analysis)

So now there are 4 walks instead of two, but because of the much higher density of "already visited" probabilities, the number of group operations to solve ECDLP is on average < 1.03 sqrt(b), and the number of scalar multiplications is basically zero (except for the initial setup) in 99.999% of cases.

So now the big question: what happens if we want to decrease storage and use DP?

Well, I don't have an actual proof, but some parts of the above still hold. Depending on the ratio between the interval size and the chosen 2**DP, there will still be an increased chance of collisions between DPs found on both halves by the various pairs of walks, decreasing the expected total number of group operations to anywhere between 1.5 and 1.7 sqrt(b). It is likely that as the interval and DP increase, the complexity stays around the same level or very slightly increases.