
Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it - page 30. (Read 245663 times)

newbie
Activity: 20
Merit: 0
Hello everyone, maybe someone will be interested:
if this puzzle is a deterministic wallet,

Hi,

As far as I have studied Bitcoin, it's highly unlikely that these addresses came from one deterministic wallet. Please correct me if I am wrong. When creating a deterministic wallet you have no influence on the final shape of the private keys, so you cannot make them 1-bit, 2-bit, 3-bit, etc.

BR
Damian

You can have a deterministic wallet and get the first 160 addresses. Take those private keys, mask them as you need (flip ones and zeros as you wish), and create addresses from the modified private keys.
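A minimal sketch of that masking idea in plain Python (the starting keys here are random stand-ins, not real wallet-derived keys, and turning the masked keys into addresses would need a separate library):
Code:
import os

def mask_to_bits(priv_key: int, n_bits: int) -> int:
    # Keep only the low n_bits of the derived key and force the top bit,
    # so the result lands in [2^(n_bits-1), 2^n_bits - 1] like puzzle #n_bits.
    masked = priv_key & ((1 << n_bits) - 1)
    return masked | (1 << (n_bits - 1))

# Stand-ins for the first 160 deterministic-wallet private keys (hypothetical).
derived_keys = [int.from_bytes(os.urandom(32), 'big') for _ in range(160)]

for i, key in enumerate(derived_keys, start=1):
    puzzle_key = mask_to_bits(key, i)
    assert puzzle_key.bit_length() == i
    # puzzle_key would then be turned into the address for puzzle #i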
newbie
Activity: 4
Merit: 0
Hello everyone, maybe someone will be interested:
if this puzzle is a deterministic wallet,

Hi,

As far as I have studied Bitcoin, it's highly unlikely that these addresses came from one deterministic wallet. Please correct me if I am wrong. When creating a deterministic wallet you have no influence on the final shape of the private keys, so you cannot make them 1-bit, 2-bit, 3-bit, etc.

BR
Damian
?
Activity: -
Merit: -
Hello everyone, maybe someone will be interested:
if this puzzle is a deterministic wallet, and each subsequent private key becomes 2x harder while the bitcoin at that address also doubles, could its seed phrase then be 1 2 4 8 16 32 64 128 256 512 1024 2048? It might just be nonsense.
      
2^0 2^1 2^2 2^3 2^4 … 2^11
1.2.4.8.16.32.64.128.256.512.1024.2048.
abandon ability about abstract acid advance among avocado cable divide lend zoo
      
If it turns out as I said, I think that the person who finds the solution will give me a refund.
sorry for bad english

Hello everyone, maybe someone will be interested:
if this puzzle is a deterministic wallet, and each subsequent private key becomes 2x harder while the bitcoin at that address also doubles, could its seed phrase then be 1 2 4 8 16 32 64 128 256 512 1024 2048? It might just be nonsense.
      
2^0 2^1 2^2 2^3 2^4 … 2^11
1.2.4.8.16.32.64.128.256.512.1024.2048.
abandon ability about abstract acid advance among avocado cable divide lend zoo
      
If it turns out as I said, I think that the person who finds the solution will give me a refund.
sorry for bad english
I know that the creator of the puzzle said that there is no pattern.
What does the creator say to this?
Can you comment?
?
Activity: -
Merit: -
Powers of two, and adjust the last element to the value needed to have the desired average.
This jump set was proven optimal (i.e. it minimizes the total number of operations) here:
Kangaroos, Monopoly and Discrete Logarithms, J. M. Pollard, 2000

Well, OK. But I use a fixed-length table for all tests; it's more practical to implement, and I also get better results with a longer list than with powers of two.
I will try your approach and see the results.
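For reference, a rough sketch of the jump set described in the quote above (powers of two, with the last entry adjusted so the mean hits a chosen target such as sqrt(b)); the names and the target value are mine, not from either poster:
Code:
import math

def build_jump_table(range_width: int, num_jumps: int) -> list:
    # powers of two: 1, 2, 4, ..., plus one adjustable last entry
    jumps = [1 << i for i in range(num_jumps - 1)]
    target_avg = math.isqrt(range_width)          # desired average jump ~ sqrt(b)
    # pick the last element so that sum(jumps) / num_jumps == target_avg
    jumps.append(max(1, target_avg * num_jumps - sum(jumps)))
    return jumps

table = build_jump_table(1 << 80, 32)
print(math.log2(sum(table) / len(table)))         # ~40.0 for an 80-bit range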

Question for you: do you prefer fewer kangaroos that jump faster, or lots of kangaroos that jump slower?

I prefer faster kangs because of the high DP bits I have to use to solve the higher puzzles, in order to keep the overhead small. But even so, the number of kangs is huge, because there are many GPUs and every GPU runs a lot of kangs anyway.

How can you explain case #3 (the awful case with a runtime of 172·sqrt(b))? When the Tame and Wild are separated by a distance of b/2 and the average jump size is much too small, it will take a lot of jumps for them to ever meet. In the random case it's a little better than that, but still far from the optimal case (i.e. a correctly chosen, larger average jump size).

The main question here is: how many times do you solve a point to calculate the average result? In my tests I solve it at least 1000 times.
newbie
Activity: 38
Merit: 0
I might be thinking about this wrong, but if you had a DB of DPs for, say, the 70-bit range, could you then use Albert's ecctools to divide #135 down into that range and check for collisions? Would that be faster than running BSGS over and over on that range for each pubkey?
?
Activity: -
Merit: -
The average jump distance is the one that needs to be multiplied by m, not every jump distance!
The average jump distance = sum(jump_distances) / len(jump_distances)
...
Tame = random between b/2 and b
Wild = random between 1 and b / 2
alpha = sqrt(b) : 21 jump points. Results:

If I need to increase the average distance I generate larger jumps, but I don't change the number of jumps in the jump table.
But it seems you do: you added 8 more points to somehow get a x256 average distance. So my next question is: how do you generate the jump table?
sr. member
Activity: 652
Merit: 316
The optimal jump distance is the one where the kangaroos do not run further than 2 range widths during the solve:
Code:
# range = interval width, Nkangaroo = number of kangaroos
expected_op = 2.076 * math.sqrt(range) + Nkangaroo
optimal_jmp_distance = range * 2 // expected_op

or simply math.sqrt(range)

But the trick probably won't work for the puzzles...
For example, say you solved the 120-bit range and accumulated points. The optimal jump distance for this range is 2^59.95.
When you expand these points to the next range, 125 bits, they will all be in the 2^125 range, but the average jump distance gets multiplied by 32 and becomes 2^64.95.
At the same time, for a 125-bit range the optimal jump distance is 2^62.44. This means that by the time you accumulate the required number of points for the solution, the kangaroos will have travelled 5.7 times further out of range. It's not a problem that they go out of range (kangaroos always run forward), but there will be no benefit from the points that remain inside the 2^125 range.
Most likely the trick works well when you have a huge base of accumulated points and can use it to expand the range, knowing that the number of points is enough for the solution.
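The arithmetic in that example can be reproduced directly from the formula in the snippet above (kangaroo count ignored here as negligible):
Code:
import math

def optimal_jump(range_bits):
    width = 2.0 ** range_bits
    expected_op = 2.076 * math.sqrt(width)        # + Nkangaroo, negligible here
    return width * 2 / expected_op

old_jump = optimal_jump(120)                      # ~2^59.95
new_opt  = optimal_jump(125)                      # ~2^62.44
expanded = old_jump * 32                          # old table scaled by 2^5 -> ~2^64.95
print(round(expanded / new_opt, 1))               # ~5.7x larger than optimal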
?
Activity: -
Merit: -
If we have some optimal average jump distance = m * sqrt(b) / 4 where m = number of kangaroos

Why do you think so? In my experience, the optimal average jump distance does not depend on the number of kangs (at least if you have many of them); it's always about sqrt(range).
sr. member
Activity: 652
Merit: 316
So the trick is to spread out existing DPs into the higher interval, by changing the perspective on the generator...
But because the jump rules need to stay identical, the jumps are multiplied as well, to keep making the same jumps, like this:
You are absolutely right, the jump table should be multiplied. I can't say anything about the optimal distance. I just wanted to try again to see whether it works, because 4 years ago I missed the point about the jump multiplier.
I'm not sure about the practical benefit of the method, but a boost of about 12% can be obtained when moving from puzzle to puzzle.
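A rough sketch of only what these posts state explicitly: when an 80-bit jump table and its DPs are reused in an 85-bit range, every jump distance (and every travelled distance stored with a DP) gets multiplied by 2^(85-80) = 32, while the jump-selection rule itself stays the same; the rest of the method is not shown here, and the inputs below are toy stand-ins:
Code:
# toy stand-ins for a real 80-bit jump table and the distances stored with old DPs
jumps_80 = [1 << i for i in range(32)]
dp_dists_80 = [123456789, 987654321]

def scale_to_wider_range(jump_table, dp_distances, old_bits, new_bits):
    # multiply jumps and stored travel distances by the range ratio
    factor = 1 << (new_bits - old_bits)
    return ([j * factor for j in jump_table],
            [d * factor for d in dp_distances])

jumps_85, dp_dists_85 = scale_to_wider_range(jumps_80, dp_dists_80, 80, 85)
print(jumps_85[0], dp_dists_85[0])   # 32, 3950617248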
?
Activity: -
Merit: -
Not only the next one... 80-bit DPs extended to the 90-bit range:
But of course there is a limitation here and it depends on the number of accumulated DPs.
Let's say, I will not be able to solve 95 bits with the number of DPs that I have.

Exactly. If you solved #80, the DB of DPs for it will only help something like 5-10% in solving #85. But if you have a huge x10 DB for #80, you can solve the 80-bit range very quickly, solve #85 much faster, and even #90 a bit faster. But for the high-numbered puzzles you probably don't have a x10 DB.
sr. member
Activity: 652
Merit: 316
What trick is that?
https://bitcointalksearch.org/topic/m.54629413

Yes, old DPs can be reused if you keep the old jump table, but they don't help much.
It's a good idea to use them for the next puzzle to improve the chances a bit, but that's all.

Not only the next one... 80-bit DPs extended to the 90-bit range:
Code:
Start:0
Stop :3FFFFFFFFFFFFFFFFFFFFFF
Keys :1
KeyX :672DDE17A8F345C04D6C0B5C53750E107907313ECF5B3FFDB122868515ECD171
KeyY :B151B8521AA206F51FE685A231C5FAED10D903DA70F15EDDE3C09CB2AC41AA08
LoadWork: [HashTable 7449.0/9315.7MB] [35s]
Number of CPU thread: 0
NB_RUN: 64
GPU_GRP_SIZE: 128
NB_JUMP: 32
Range width: 2^90
Jump Avg distance: 2^40.03
Number of kangaroos: 2^20.46
Suggested DP: 24
Expected operations: 2^46.08
Expected RAM: 2725.0MB
DP size: 20 [0xFFFFF00000000000]
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [10.5s]
[711.66 MK/s][GPU 711.66 MK/s][Count 2^49.63][Dead 0][32:00 (Avg 1.2d)][7483.8/9361.2MB]  Cur td: 8F8EDB4B5B318F159493
Mult td: 23E3B6D2D6CC63C56524C00

Key# 0 [1S]Pub:  0x02672DDE17A8F345C04D6C0B5C53750E107907313ECF5B3FFDB122868515ECD171
       Priv: 0x387599B938E25FF1A07C51E

Done: Total time 32:40
But of course there is a limitation here and it depends on the number of accumulated DPs.
Let's say, I will not be able to solve 95 bits with the number of DPs that I have.
?
Activity: -
Merit: -
Yesterday I finally decided to take down from the dusty shelf the trick that @arulbero came up with 4 years ago. I decided to try expanding 80-bit points to the 85-bit range. The expected solution time was 5 hours 35 minutes on a GTX 1660s. 5 hours passed and nothing... I thought this trick really wouldn't work. But today I changed the jump table (as you hinted) and here is the result: on the first attempt the key in the 85-bit range was found in 6 minutes, and the second in 14 minutes.

Yes, old DPs can be reused if you keep the old jump table, but they don't help much.
It's a good idea to use them for the next puzzle to improve the chances a bit, but that's all.
sr. member
Activity: 652
Merit: 316
-snip-
It was already discussed why I do not believe DPs can be reused for larger intervals: they will only ever get hit if the deterministic jump rules stay the same. The proportion of "found DPs" to "total existing DPs" is extremely low, so the sets of found DPs will likely not intersect when the jump rules change.
Yesterday I finally decided to take down from the dusty shelf the trick that @arulbero came up with 4 years ago. I decided to try expanding 80-bit points to the 85-bit range. The expected solution time was 5 hours 35 minutes on a GTX 1660s. 5 hours passed and nothing... I thought this trick really wouldn't work. But today I changed the jump table (as you hinted) and here is the result: on the first attempt the key in the 85-bit range was found in 6 minutes, and the second in 14 minutes.
Code:
KeyX :1080678DD05E815CEB501F2B839F1CBBEE619F9FB00455F7B2D78AC192A3583F
KeyY :D7B0FCB2F91693B51D6B1A7A51142A91AC338D114BD990A3B425D61850C7A15B
LoadWork: [HashTable 7449.0/9315.7MB] [36s]
Number of CPU thread: 0
NB_RUN: 64
GPU_GRP_SIZE: 128
NB_JUMP: 32
Range width: 2^85
Jump Avg distance: 2^40.03
Number of kangaroos: 2^20.46
Suggested DP: 22
Expected operations: 2^43.71
Expected RAM: 536.9MB
DP size: 20 [0xFFFFF00000000000]
GPU: GPU #0 NVIDIA GeForce GTX 1660 SUPER (22x64 cores) Grid(88x128) (141.0 MB used)
SolveKeyGPU Thread GPU#0: creating kangaroos...
SolveKeyGPU Thread GPU#0: 2^20.46 kangaroos [10.8s]
[697.63 MK/s][GPU 697.63 MK/s][Count 2^49.63][Dead 0][14:29 (Avg 05:44:40)][7464.5/9336.8MB]  Cur td: CAD97661AD8E4C0EBB8D
Mult td: 195B2ECC35B1C981D771A0

Key# 0 [1S]Pub:  0x031080678DD05E815CEB501F2B839F1CBBEE619F9FB00455F7B2D78AC192A3583F
       Priv: 0x15A640BAD94A621F9A02F0
member
Activity: 348
Merit: 34
Dear Friends
I got back home after 6 days, on 1st Nov; immediately I had to start traveling 2000 miles away to attend a close relative's funeral.
During my 2 days of travel there I saw the messages on my mobile phone, but with my desktop system unavailable I couldn't participate.
Back at my system, I'm writing this post with the easiest method for the kTimesG challenge game: 3 steps in total to find the missing part, and 2 steps back to the private key.

Raw work:
maxkey (hex) - minkey (hex): see how many zeros appear on the right side (90 in this puzzle)
convert minkey into mod N, and then into a pubkey
address pubkey - minkey pubkey
divide the resulting pubkey by 16 (the zeros are counted as hex digits of minkey); since we counted 90 zeros, loop the check 90 times
at the end we have the 5aad97e6e197ddf3d014 pubkey

GPU processes that pubkey for the private key, found (5aad97e6e197ddf3d014)

now reverse:
5aad97e6e197ddf3d014 x 16, 90 times, to fill back the 90 zeros
then add minkey (hex) to the result
Private Key !!!!!!!!!!!!
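A minimal integer-only sketch (mine, not the poster's script) of this reverse step, taking the raw minkey and missing part from the breakdown below; the final mod-N reduction mirrors the "convert minkey into mod N" step above:
Code:
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

minkey_raw = int(input('raw minkey (hex, value A in the breakdown): '), 16)
missing    = 0x5aad97e6e197ddf3d014     # part recovered by the GPU step

# "x 16, 90 times" is a shift left by 90 hex digits; then add minkey and reduce mod N
priv = (minkey_raw + (missing << (4 * 90))) % N
print(hex(priv))    # should match the Hex value listed under B in the breakdown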

Breakdown below


A  minkey
0x659756abf6c17ca70e0000000000000000000140be6ddd93e441f8d4b4a85653b20b4cdcc5c74 8207a0daa16191d07a425d8080c276f9412472e0429e61bc355
Dec: 378910740179897432693357321750481224493231606889794828290705977402364066269
Hex: d674b47af96587841f00471e5106277467a472a4fe97b8a5ce8f152e24f9dd
Pubkey: 021e07dada5c10fe81d5780bf3c1b772915dd6db98044bf77216bbc5dda283955b

B  pubkey
0x659756abf6c17ca70e5aad97e6e197ddf3d01540be6ddd93e441f8d4b4a85653b20b4cdcc5c74 8207a0daa16191d07a425d8080c276f9412472e0429e61bc355
Dec: 81441912728144611542033516536424891594486853195135222765184684141394073615958
Hex: b40e7d34265ab9533a64622bd1a188fb8abb8829af545169abad49b46be5fe56
Pubkey: 03a61fc84b6429f07fc0edf25265ef7a0ced3cd9a0edea85e9f58b50b5d73f66e7

C maxkey
Dec: 62120226893276139655396064408353610522500686847644094598223336080741851159646
Hex: 8956cd6cbf12c663969a46006e11acfebf411f2e930704ee38d4fdeac9bf5c5e

B-A
0x5aad97e6e197ddf3d014000000000000000000000000000000000000000000000000000000000 000000000000000000000000000000000
Dec : 81063001987964714109340159214674410369993621588245427936893978163991709549689
Hex : b338087fab6153cbb64561e4b35082d41653e3b70a55b9b105deba9f3dc10479
Pubkey: 03634641685eca3f8284bcd4ddf233dac92a551bb5ff74a0b3fd587d4da7c13eea



(B-A)/16 loop 90

5aad97e6e197ddf3d014
Pubkey: 03a1708bbb4e9b81a738eaca200d2b06a8f1d351aa3b07c8e255850500bef5ec2b


Raw homework for B-A at the pubkey level:
>>> 0x659756abf6c17ca70e5aad97e6e197ddf3d01540be6ddd93e441f8d4b4a85653b20b4cdcc5c74 8207a0daa16191d07a425d8080c276f9412472e0429e61bc355-0x659756abf6c17ca70e0000000000000000000140be6ddd93e441f8d4b4a85653b20b4cdcc5c748207a0daa16191d07a425d8080c276f9412472e0429e61bc355
1005681669055354146613222747685626267813986635935225180005598248700045377814988 002421710664308410663717703967808552865612110780956672
>>> hex(1005681669055354146613222747685626267813986635935225180005598248700045377814988 002421710664308410663717703967808552865612110780956672)
'0x5aad97e6e197ddf3d014000000000000000000000000000000000000000000000000000000000 000000000000000000000000000000000'
>>>


Any questions?
Who will buy me a coffee?



ktg.py
It's for the B-A pubkeys.

Code:
from fastecdsa.curve import secp256k1
from fastecdsa.point import Point
from fastecdsa import keys, curve
import gmpy2
import random
import numpy as np
p1 = 115792089237316195423570985008687907852837564279074904382605163141518161494337
def c2ux(point):

 x_hex = point[2:66]

 return x_hex



def c2uy(point):

 p_hex = 'FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F'
 p = int(p_hex, 16)
 compressed_key_hex = point
 x_hex = compressed_key_hex[2:66]
 x = int(x_hex, 16)
 prefix = compressed_key_hex[0:2]

 y_square = (gmpy2.powmod(x, 3, p)  + 7) % p
 #y_square_square_root = gmpy2.powmod(y_square, (p+1)/4, p)
 y_square_square_root = gmpy2.powmod(y_square, (p+1) * gmpy2.powmod(4, p - 2, p) % p , p)
 if (prefix == "02" and y_square_square_root & 1) or (prefix == "03" and not y_square_square_root & 1):
     y = (-y_square_square_root) % p
 else:
     y = y_square_square_root

 computed_y_hex = format(y, '064x')


 return computed_y_hex


def cpub(x,y):
 prefix = '02' if y % 2 == 0 else '03'
 c = prefix+ hex(x)[2:].zfill(64)
 return c


p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F

def decompress_pubkey(pk):
    x = int.from_bytes(pk[1:33], byteorder='big')
    y_sq = (pow(x, 3, p) + 7) % p
    y = pow(y_sq, (p + 1) // 4, p)
    if y % 2 != pk[0] % 2:
        y = p - y
    y = y.to_bytes(32, byteorder='big')
    return b'\x04' + pk[1:33] + y

# G point
xsorg = 0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798
ysorg = 0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8
Sorg = Point(xsorg, ysorg, curve=secp256k1)

# address pubkey point
line= '03a61fc84b6429f07fc0edf25265ef7a0ced3cd9a0edea85e9f58b50b5d73f66e7'    # main-pubkey
xs = int(c2ux(line),16)
ys = int(c2uy(line),16)
S = Point(xs, ys, curve=secp256k1)

# minkey Pubkey point
line1= '021e07dada5c10fe81d5780bf3c1b772915dd6db98044bf77216bbc5dda283955b' #   minkey-pubkey
xs1 = int(c2ux(line1),16)
ys1 = int(c2uy(line1),16)
S1 = Point(xs1, ys1, curve=secp256k1)

R1 = S-S1
xx=R1.x
yy=R1.y
R1pubkey=cpub(xx,yy)

print (R1pubkey)



ktg1.py
It's for the (B-A) result pubkey; it removes the 90 zeros.
That's all, you then have the missing-part pubkey.

Code:
from fastecdsa.curve import secp256k1
from fastecdsa.point import Point
from fastecdsa import keys, curve
import gmpy2
import random
import numpy as np
p1 = 115792089237316195423570985008687907852837564279074904382605163141518161494337
def c2ux(point):

 x_hex = point[2:66]

 return x_hex



def c2uy(point):

 p_hex = 'FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F'
 p = int(p_hex, 16)
 compressed_key_hex = point
 x_hex = compressed_key_hex[2:66]
 x = int(x_hex, 16)
 prefix = compressed_key_hex[0:2]

 y_square = (gmpy2.powmod(x, 3, p)  + 7) % p
 #y_square_square_root = gmpy2.powmod(y_square, (p+1)/4, p)
 y_square_square_root = gmpy2.powmod(y_square, (p+1) * gmpy2.powmod(4, p - 2, p) % p , p)
 if (prefix == "02" and y_square_square_root & 1) or (prefix == "03" and not y_square_square_root & 1):
     y = (-y_square_square_root) % p
 else:
     y = y_square_square_root

 computed_y_hex = format(y, '064x')


 return computed_y_hex


def cpub(x,y):
 prefix = '02' if y % 2 == 0 else '03'
 c = prefix+ hex(x)[2:].zfill(64)
 return c


p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F

# G point
xsorg = 0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798
ysorg = 0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8
Sorg = Point(xsorg, ysorg, curve=secp256k1)


# B-A pubkey to remove ending 0's
line= '03634641685eca3f8284bcd4ddf233dac92a551bb5ff74a0b3fd587d4da7c13eea'    # B-A-pubkey
xs = int(c2ux(line),16)
ys = int(c2uy(line),16)
S = Point(xs, ys, curve=secp256k1)

# we found 90 zeros, so "divide" by 16 that many times in the loop below
# (the long constant is the modular inverse of 16 mod N, the secp256k1 group order)
for i in range(0,90):
  S = S*108555083659983933209597798445644913612035216511632722858692340445173276400941
  xx=S.x
  yy=S.y
  Spubkey=cpub(xx,yy)
print (Spubkey)

sr. member
Activity: 652
Merit: 316
Hi,
Thank you for posting the link and description. Have you gentlemen ever thought about creating 110-bit DPs instead of 80-bit like in the recent challenge? Is it even possible? How big would the work file for -m 3 have to be in that case? It would take approx. 15 years with a 10k-CUDA-core card. Crazy idea, but afterwards there would be ~33,000,000 possible ranges for #135 to check. Economically there is no point in such a move, but I think there are some romantics in here ;) In theory, how long does it take to check a new public key against a precomputed 110-bit work file? @Etar, how long does it take you to find a new public key with the 80-bit precomputed work file?
BR
Damian
The key was found in 158 seconds, of which 34 seconds were spent loading the hashtable with DP=20.
You don't need to divide the 135-bit range into 110-bit subranges. There's no point in doing so: you'll find the key faster in one 135-bit range than in 33M 110-bit ranges.
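A rough back-of-the-envelope comparison behind that statement, using only the square-root cost model (constants and DP overhead ignored):
Code:
import math

full_bits = 134                          # width of the #135 interval [2^134, 2^135)
sub_bits  = 110
num_sub   = 2 ** (full_bits - sub_bits)  # ~16.7M subranges (the quote's ~33M counts 2^135/2^110)

direct_ops = math.sqrt(2.0 ** full_bits)                 # ~2^67 group operations
split_ops  = (num_sub / 2) * math.sqrt(2.0 ** sub_bits)  # half the subranges on average
print(f'direct ~2^{math.log2(direct_ops):.0f}, split ~2^{math.log2(split_ops):.0f}')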
member
Activity: 165
Merit: 26
Hi,
Thank you for posting the link and description. Have you gentlemen ever thought about creating 110-bit DPs instead of 80-bit like in the recent challenge? Is it even possible? How big would the work file for -m 3 have to be in that case? It would take approx. 15 years with a 10k-CUDA-core card. Crazy idea, but afterwards there would be ~33,000,000 possible ranges for #135 to check. Economically there is no point in such a move, but I think there are some romantics in here ;) In theory, how long does it take to check a new public key against a precomputed 110-bit work file? @Etar, how long does it take you to find a new public key with the 80-bit precomputed work file?

https://cr.yp.to/dlog/cuberoot-20120919.pdf

Once you read and understand what is written in that paper, you should understand the purpose and usefulness of precomputed DPs. Basically, they are the result of storing the output of many solves on top of each other to help with future solves; it is not a magical way to simplify the first solve, which is actually slower than usual if the purpose is building the DP database. And splitting the range into smaller ranges to solve just increases the overall complexity; it does not reduce anything, on the contrary.

So you can't really precompute #135 DPs if you can't first solve #135 in a reasonable time, and repeat that process many times.
sr. member
Activity: 652
Merit: 316
@kTimesG, how many DPs did you manage to accumulate for the 80-bit range?
Have you tried expanding these points to the 110-130 bit range?
newbie
Activity: 4
Merit: 0

P.S. Here is a script that changes the public key and range in the work file.
Code:
import argparse
from math import log2
import os

HEADW = 0xFA6A8001  # work file
HEADERSIZE=156

def bytes_to_num(byte_data):   
    return int.from_bytes(byte_data, byteorder='little')
   

def checkhead(wf):
    head = bytes_to_num(wf.read(4))
    if head==HEADW:
        return head
    else:
        print('HEADER ERROR %08x %08x' % (head, HEADW))
        return False

def workinfo(workfile):   
    wf = open(workfile, 'rb')
    head = checkhead(wf)
    if not head:
        print('Invalid WorkFile Header')
        return False
    version = bytes_to_num(bytes(wf.read(4)))
    dp1 = bytes_to_num(bytes(wf.read(4)))
    RangeStart = bytes_to_num(bytes(wf.read(32)))
    RangeEnd = bytes_to_num(bytes(wf.read(32)))
    x = bytes_to_num(bytes(wf.read(32)))
    y = bytes_to_num(bytes(wf.read(32)))
    count = bytes_to_num(bytes(wf.read(8)))
    time = bytes_to_num(bytes(wf.read(8)))
    print(
        f'Header     : {head:08x}'
        f'\nVersion    : {version:d}'
        f'\nDP Bits    : {dp1}'
        f'\nStart      : {RangeStart:064x}'
        f'\nStop       : {RangeEnd:064x}'
        f'\nPubKey X   : {x:064x}'
        f'\nPubKey Y   : {y:064x}'
        f'\nCount      : 2^{log2(count):.3f}'       
        )     
    wf.close()
    return True
       
def getuncompressedpub(compressed_key):   
    p=2**256 - 2**32 - 977
    y_parity = int(compressed_key[:2],16) - 2
    if y_parity>1:     
      x = int(compressed_key[2:66], 16)
      y = int(compressed_key[66:130], 16)
      return (x,y)     
    x = int(compressed_key[2:], 16)
    a = (pow(x, 3, p) + 7) % p
    y = pow(a, (p+1)//4, p)   
    if y % 2 != y_parity:
        y = -y % p       
    return (x,y)
   
def MakeChanges(workfile, NewPubCompressed, NewRB, NewRE):
    if os.path.exists(f'{workfile}'):
        print('Old header:')
        if workinfo(workfile):       
            #make some changes
            (x,y)= getuncompressedpub(NewPubCompressed)   
            with open(workfile, 'r+b') as wf:           
                try:
                    wf.seek(12)
                    wf.write(NewRB.to_bytes(32, byteorder='little'))
                    wf.write(NewRE.to_bytes(32, byteorder='little'))
                    wf.write(x.to_bytes(32, byteorder='little'))
                    wf.write(y.to_bytes(32, byteorder='little')) 
                except Exception as e:
                    print(f'File error {e}')       
            print('New header:')
            workinfo(workfile)       
    else:
        print(f'File {workfile} does not exist')
        return


       
if __name__ == '__main__':   
    parser = argparse.ArgumentParser()
    parser.add_argument('-f', dest='workfile', type=str, required=True, help='WorkFile path')
    parser.add_argument('-pub', dest='pub', required=True, type=str, help='Compressed public key')
    parser.add_argument('-rb', dest='rb', required=True, type=str, help='Begin range')
    parser.add_argument('-re', dest='re', required=True, type=str, help='End range')
    args = parser.parse_args()

    if args.workfile:
        print(f'Workfile {args.workfile}')
        MakeChanges(args.workfile, args.pub, int(args.rb,16), int(args.re,16))

   

Hi,
Thank you for posting the link and description. Have you gentlemen ever thought about creating 110-bit DPs instead of 80-bit like in the recent challenge? Is it even possible? How big would the work file for -m 3 have to be in that case? It would take approx. 15 years with a 10k-CUDA-core card. Crazy idea, but afterwards there would be ~33,000,000 possible ranges for #135 to check. Economically there is no point in such a move, but I think there are some romantics in here ;) In theory, how long does it take to check a new public key against a precomputed 110-bit work file? @Etar, how long does it take you to find a new public key with the 80-bit precomputed work file?

BR
Damian
member
Activity: 873
Merit: 22
priv = P_x / G_x


Quote
import hashlib
import ecdsa

# Base point G of the SECP256K1 curve
G_x = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
G_y = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

# Generate the secret key k
k = 0x1234567890abcdef

# Generate the point P on the SECP256K1 curve
P_x = (k * G_x) % (2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1)
P_y = (k * G_y) % (2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1)

# Compute the inverse of G_x modulo 2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1
G_x_inv = pow(G_x, -1, 2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1)

# Check the formula k = P_x / G_x
if k == (P_x * G_x_inv) % (2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1):
    print("The formula k = P_x / G_x is correct")
else:
    print("The formula k = P_x / G_x is not correct")

# Print the point P
print("Point P:", (P_x, P_y))

# Print the secret key k
print("Secret key k:", hex(k), hex((P_x * G_x_inv) % (2**256 - 2**32 - 2**9 - 2**8 - 2**7 - 2**6 - 2**4 - 1)))



member
Activity: 165
Merit: 26
Code:
import os
from getpass import getpass

# Note: S, Group, Point, TrueRandom and private_key_to_addr come from the
# author's own helper code, which is not included in this post.

class Challenge:
    def __init__(self, num_bits: int, raw_key: bytes = None):
        mask = (1 << num_bits) - 1

        if raw_key is None:
            raw_key = os.urandom(32)

        print(f'rawKey = {raw_key.hex():032}')
        self.private_key = int.from_bytes(raw_key)
        self.private_key += S.N * TrueRandom(2 ** 512 // S.N, 256, 1).get_next()
        self.public_key = Group.mul(Group.G, self.private_key)

        self.shift = TrueRandom(self.private_key.bit_length() - num_bits, 8).get_next()

        mask_2 = mask << self.shift
        self.min_key = self.private_key & ~mask_2
        self.max_key = self.private_key | mask_2
        print(
            f'minKey = {self.min_key:064x}'
            f'\nmaxKey = {self.max_key:064x}'
            f'\nprvKey = {self.private_key:064x}'
            f'\nkShift = {self.shift}'
            f'\nPubKey:'
            f'\n\tX: {self.public_key.x:032x}'
            f'\n\tY: {self.public_key.y:032x}'
        )


def create_challenge():
    input_key = getpass('Private Key:')

    if not len(input_key):
        raw_private_key = None
    else:
        raw_private_key = int(input_key, 16).to_bytes(32)

    num_bits = 80

    challenge = Challenge(num_bits, raw_private_key)
    addr = private_key_to_addr(int.to_bytes(challenge.private_key % S.N, 32))
    print(f'BTC Address(c): {addr}')

    return challenge


class Validator:
    def __init__(self, min_key: int, max_key: int, public_key: Point):
        self.min_key = min_key
        self.max_key = max_key
        self.public_key = public_key

        self.range_mask, self.shift = self.extract_mask()
        print(f'Computed shift: {self.shift}')

        # subtract min_key bits: P = P - minKey*G
        subtractor_key = Group.mul(Group.G, -self.min_key)
        # todo - handle case pub_key == -sub_key -> middleKey == 0
        self.subtracted_key = Group.add(self.public_key, subtractor_key)
        # print(f'Subtracted PubKey:'
        #       f'\n\tX: {hex(self.subtracted_key.x):032}'
        #       f'\n\tY: {hex(self.subtracted_key.y):032}'
        #       )

        # right-shift key: ("divide" by the nth power of two)
        self.shift_inv = pow(1 << self.shift, -1, S.N)
        self.translated_key = Group.mul(self.subtracted_key, self.shift_inv)

        # print(f'Translated PubKey:'
        #       f'\n\tX: {hex(self.translated_key.x):032}'
        #       f'\n\tY: {hex(self.translated_key.y):032}'
        #       )

    def extract_mask(self, slow_count: bool = False):
        range_mask = self.min_key ^ self.max_key

        if slow_count:
            shift = 0
            while range_mask % 2 == 0:
                shift += 1
                range_mask >>= 1
        else:
            # if any '1' gets shifted, a '0' is left over
            shift = range_mask.bit_length() - range_mask.bit_count()
            range_mask >>= shift

        if range_mask.bit_count() != range_mask.bit_length():
            raise ValueError('Invalid mask')

        return range_mask, shift

    def validate(self, tries: int = 1000000, sequential: bool = False):
        range_mask, shift = self.extract_mask()
        mask_len = range_mask.bit_length()
        print(f'Range size: {mask_len} bits; shift: {shift}')

        tr = TrueRandom(range_mask, mask_len, 1)

        for i in range(tries):
            key_idx = (i + 1) if sequential else tr.get_next()
            expected_key = self.min_key | (key_idx << shift)

            subtracted_key = expected_key - self.min_key

            # in the scalar field, shifting and "division" are equivalent
            # because we know the parity before each division (shift) step
            shifted_key = subtracted_key >> shift
            divided_key = S.mul(subtracted_key, self.shift_inv)
            if shifted_key != divided_key:
                raise Exception(f'Failed!\n\t{hex(divided_key)}\n\t{hex(shifted_key)}')

            pub_key = Group.mul(Group.G, expected_key)
            print(f'Checking privKey: {expected_key % S.N:064x}')
            priv = Validator(self.min_key, self.max_key, pub_key).validate_private_key(divided_key)
            if priv != expected_key:
                raise Exception('Validation failed')

    def validate_private_key(self, private_key: int):
        print(f'Checking translated private key: {hex(private_key)}')
        if private_key != 0:
            sk = private_key << self.shift
            sk = Group.mul(Group.G, sk)
            # print(f'Shifted PubKey:'
            #       f'\n\tX: {hex(sk.x):032}'
            #       f'\n\tY: {hex(sk.y):032}'
            #       )

        private_key = self.min_key | (private_key << self.shift)
        public_key = Group.mul(Group.G, private_key)

        if public_key.x == self.public_key.x and public_key.y == self.public_key.y:
            return private_key

        print('Key validation failed')


def main():
    challenge = create_challenge()

    validator = Validator(challenge.min_key, challenge.max_key, challenge.public_key)
    validator.validate(tries=10)

    translated_private_key = (challenge.private_key - challenge.min_key) >> challenge.shift
    private_key = validator.validate_private_key(translated_private_key)

    if private_key == challenge.private_key:
        print(f'Private Key OK: {private_key % S.N:032x}')
    else:
        raise Exception('Key validation failed')


if __name__ == '__main__':
    main()

Can anyone tell me if there is a difference (result-wise) between the following two snippets:

Code:
def inverse(x, p):
  ...

Code:
def inverse(x, p):
    return pow(x, p - 2, p)

XGCD is faster than FLT when implemented with machine-level instructions, but "pow(x, -1, N)" is the fastest in Python because it uses native big integers internally. Unless you go with gmpy2.invert()...
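For completeness, a small snippet contrasting the three approaches mentioned (Fermat exponentiation, Python's built-in modular inverse, and gmpy2); the relative-speed claim above is the poster's, not re-verified here:
Code:
from timeit import timeit
import gmpy2

p = 2**256 - 2**32 - 977   # secp256k1 field prime
x = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798

# all three give the same modular inverse of x mod p
assert pow(x, p - 2, p) == pow(x, -1, p) == int(gmpy2.invert(x, p))

for stmt in ('pow(x, p - 2, p)', 'pow(x, -1, p)', 'gmpy2.invert(x, p)'):
    print(stmt, timeit(stmt, globals=globals(), number=1000))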