
Topic: lightweight database for brute force using publickeys - 32M keys = 3.81 MB (secp256k1) - page 11. (Read 2283 times)

copper member
Activity: 1330
Merit: 899
🖤😏
Told you not to share anything public, no actually working script. ☠

So it only takes 1 bit to store a key? I can't imagine what that would be like with 100 TB of space to store, and with enough firepower + BSGS or keyhunt it should be easy to solve large keys.

Can you also change n to secp256k1's n (the value with the leading f's)? To check if you can figure something out of it. I'd say there will be no floats if you do that.
copper member
Activity: 188
Merit: 0
You already deleted the message, but I saw it in the mail.
Here are other options.
But I don't see any difference.

Code:
num = 1024
Pk: 0x49552b0d0
Pk: 0x497fddd50
Pk: 0x499a7d5d0
Pk: 0x4a000b210
Pk: 0x4a5f30078
Pk: 0x4ae3ea090
Pk: 0x49ad6b998
num = 4096
Pk: 0x4a090c6e0
Pk: 0x49a1b7a78
Pk: 0x49fd6a730
Pk: 0x4ab095b80
Pk: 0x4a6b69d50
Pk: 0x498660e38
Pk: 0x4a4416ed8
num = 8192
Pk: 0x4a2fc3bb8
Pk: 0x499b576b8
Pk: 0x4a125b9a0
Pk: 0x4a9f3a5b0
Pk: 0x4985b0678

Still, knowing the first digit and 50% of the second greatly facilitates further work.
Thanks for the work done, I will continue testing.
copper member
Activity: 188
Merit: 0
By the way, for the 40-bit range there are no collisions at all.
What database size is required for higher ranges? How can one calculate the size for a specific range?
Is it possible to parallelize work across physical processor cores?
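On the size question, a back-of-the-envelope calculation (just arithmetic on the 1-bit-per-key encoding, not anything from the thread) is straightforward; the function names here are only for illustration:

```python
# Each stored key contributes a single parity bit, so the database
# grows as num_keys / 8 bytes regardless of the bit range.
def db_size_bytes(num_keys: int) -> float:
    return num_keys / 8

def db_size_mib(num_keys: int) -> float:
    return db_size_bytes(num_keys) / (1024 * 1024)

# 32M keys -> ~3.81 MiB, matching the figure in the topic title.
print(round(db_size_mib(32_000_000), 2))  # 3.81
```

So the size depends only on how many keys you store, not on the range they span; covering a fixed fraction of a larger range means proportionally more keys and bytes.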
copper member
Activity: 188
Merit: 0
What is that? I don't understand, are they false positives?
If so, you should increase the collision margin from 64 to 128 or higher and the problem is solved.
It is still random + scalar, so you can increase the precision margin thanks to scalar multiplication.

Code:
import secp256k1 as ice

print("Making Binary Data-Base")

target_public_key = "02f6a8148a62320e149cb15c544fe8a25ab483a0095d2280d03b8a00a7feada13d"
target = ice.pub2upub(target_public_key)

num = 50000000  # number of keys.
sustract = 10   # amount to subtract each time.

sustract_pub = ice.scalar_multiplication(sustract)
res = ice.point_loop_subtraction(num, target, sustract_pub)

# Each point is 65 bytes (uncompressed). Its parity bit is whether the
# integer behind it is even (0) or odd (1).
binary = ''
for t in range(num):
    h = res[t * 65:t * 65 + 65].hex()
    hc = int(h[2:], 16)
    binary += '0' if hc % 2 == 0 else '1'

# Write the whole sequence once instead of reopening the file per bit.
with open("dat-bin35.txt", "a") as data:
    data.write(binary)

Code:
#@mcdouglasx
import secp256k1 as ice
import random
from bitstring import BitArray

print("Scanning Binary Sequence")

#range
start = 1
end = 34359738366

num = 32768    # number of times.
sustract = 10  # amount to subtract each time.
sustract_pub = ice.scalar_multiplication(sustract)

# Read the database once, outside the loop.
with open("data-base35.bin", "rb") as file:
    dat = file.read()

while True:
    pk = random.randint(start, end)
    target = ice.scalar_multiplication(pk)
    res = ice.point_loop_subtraction(num, target, sustract_pub)

    binary = ''
    for t in range(num):
        h = res[t * 65:t * 65 + 65].hex()
        hc = int(h[2:], 16)
        binary += '0' if hc % 2 == 0 else '1'

    b = bytes(BitArray(bin=binary))

    if b in dat:
        inx = dat.find(b)
        # inx is a byte offset; inx + inx*7 == inx*8 converts it to bits.
        Pk = (int(pk) + int(inx)) + int(inx) * 7
        with open("win.txt", "a") as data:
            data.write("Pk:" + " " + hex(Pk) + "\n")
member
Activity: 234
Merit: 51
New ideas will be criticized and then admired.
Generating sequences 01001....
The possibility of finding an identical sequence is very low.

The traditional way limits you in space, and this works the same. If you choose 64 as the collision margin you divide your computing power by 64, but with a huge database you would still find the key faster than using xpoint.

Tested for the 35-bit range, 50 million keys.
You write that the probability of false collisions is small, but I see something else.
Code:
Pk: 0x4a6af41c8
Pk: 0x4ab282db0
Pk: 0x4aa361888
Pk: 0x4995442d8
Pk: 0x4ac360410
Pk: 0x4aae069e0
Pk: 0x4990d2b70
Pk: 0x4a8293880
Pk: 0x49f888a98
Pk: 0x4a561b330
Pk: 0x4941be9a8
Pk: 0x49f676c08
Pk: 0x4ac3f4a18
Pk: 0x4abf25b48
Pk: 0x49cd58448
Pk: 0x4a3d172c0
Pk: 0x4a61e9c68
Pk: 0x497caefb0
Pk: 0x49ad43d20
Pk: 0x49b2be970
Pk: 0x4a2da0e08
Pk: 0x49daf14c8
Pk: 0x4a703a6d8
Pk: 0x4ac0ab410
Pk: 0x4ad48cd78
Pk: 0x49e381bf8
Pk: 0x4a8ce39b8
Pk: 0x4a2dbdcb0
Pk: 0x4a4b02ed0
Pk: 0x496075af8
Pk: 0x49df04bc8
Pk: 0x4968e4c58
Pk: 0x49b84c1d8
Pk: 0x4a7274c00
Pk: 0x4a103c310
Pk: 0x49dbc9168
Pk: 0x4980b4718
Pk: 0x4ac09a4a8
Pk: 0x496513eb8
Pk: 0x4a7fde400
Pk: 0x49e0a30d8
Pk: 0x4a0ca7000
Pk: 0x4a8ce5788
Pk: 0x4a29828a8
Pk: 0x4aa33ffe8
Pk: 0x49a7d8070
Pk: 0x4a7761668
Pk: 0x4ab04fb48
Pk: 0x49d3b4878
Pk: 0x49f91b288
Pk: 0x495b36470
Pk: 0x4a5b73060
Pk: 0x4a73533c8
Pk: 0x4a069d880
Pk: 0x495857b48
What is that? I don't understand, are they false positives?
If so, you should increase the collision margin from 64 to 128 or higher and the problem is solved.
It is still random + scalar, so you can increase the precision margin thanks to scalar multiplication.

Edit:

I tested it with a collision margin of 64 and did not get false positives. Maybe you are doing something wrong; if you modified the code, share details and we will find a solution.
copper member
Activity: 188
Merit: 0
Generating sequences 01001....
The possibility of finding an identical sequence is very low.

The traditional way limits you in space, and this works the same. If you choose 64 as the collision margin you divide your computing power by 64, but with a huge database you would still find the key faster than using xpoint.

Tested for the 35-bit range, 50 million keys.
You write that the probability of false collisions is small, but I see something else.
Code: (the same list of 55 Pk values as quoted in the reply above)
member
Activity: 234
Merit: 51
New ideas will be criticized and then admired.
If you sort and store partial keys you can search using binary search, cutting search time from linear to logarithmic. For larger numbers of keys, this becomes much faster than your single-bit storage. To reduce space requirements, only store public keys that fit a criterion (like the six lowest bits being zero) when building. When searching, subtract until you hit the bit criterion, then look up with binary search.

Also, the fact that the public keys you "store" have to be sequential makes these methods much less efficient than kangaroo/BSGS etc.

You only store binary sequences in large spaces if you want to spread your database over the entire range rather than one consecutive range, using jump addition and subtraction on your target.
full member
Activity: 161
Merit: 230
If you sort and store partial keys you can search using binary search, cutting search time from linear to logarithmic. For larger numbers of keys, this becomes much faster than your single-bit storage. To reduce space requirements, only store public keys that fit a criterion (like the six lowest bits being zero) when building. When searching, subtract until you hit the bit criterion, then look up with binary search.

Also, the fact that the public keys you "store" have to be sequential makes these methods much less efficient than kangaroo/BSGS etc.
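The sorted-lookup idea above can be sketched with plain integers standing in for truncated public keys; `build_db` and `lookup` are hypothetical names, not part of any library:

```python
import bisect

# Sketch: store only values whose six lowest bits are zero, keep them
# sorted, then look up with binary search instead of a linear scan.
MASK = 0x3F  # six lowest bits

def build_db(points):
    """Keep only values ending in six zero bits, sorted for bisect."""
    return sorted(p for p in points if p & MASK == 0)

def lookup(db, value):
    """Binary search: O(log n) membership test."""
    i = bisect.bisect_left(db, value)
    return i < len(db) and db[i] == value

db = build_db(range(0, 10_000, 7))  # toy stand-ins for x-coordinates
print(lookup(db, 448))  # 448 = 7*64, six lowest bits zero -> True
print(lookup(db, 449))  # False
```

When searching you would subtract from the target until the current point meets the same bit criterion, then do one `lookup` call.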
member
Activity: 234
Merit: 51
New ideas will be criticized and then admired.
What's the advantage of this over storing full public keys at specific intervals?

Also, as you target a larger number of keys, your time spent scanning for the bit sequence increases, unless you use smarter data structures (which would not be as small as 3.81 MB anymore)

Generating sequences 01001....
The possibility of finding an identical sequence is very low.

The traditional way limits you in space, and this works the same. If you choose 64 as the collision margin you divide your computing power by 64, but with a huge database you would still find the key faster than using xpoint.
full member
Activity: 161
Merit: 230
What's the advantage of this over storing full public keys at specific intervals?

Also, as you target a larger number of keys, your time spent scanning for the bit sequence increases, unless you use smarter data structures (which would not be as small as 3.81 MB anymore)
member
Activity: 234
Merit: 51
New ideas will be criticized and then admired.
Creating the lightweight database, billions of keys.

last edit: 12/06/2023

1 - We generate a bin file with the public keys represented as 1s and 0s:
0 = even,
1 = odd.

single publickey

Code:
#@mcdouglasx
import secp256k1 as ice
from bitstring import BitArray

print("Making Binary Data-Base")

target_public_key = "030d282cf2ff536d2c42f105d0b8588821a915dc3f9a05bd98bb23af67a2e92a5b"
target = ice.pub2upub(target_public_key)

num = 10000   # number of keys.
sustract = 1  # amount to subtract each time. If you modify this, use the same amount to scan.
Low_m = 10    # number of chunks; num must divide evenly by Low_m.

lm = num // Low_m  # keys per chunk.
print(lm)

sustract_pub = ice.scalar_multiplication(sustract)

def parity_bits(res, count):
    # One bit per point: 0 if even, 1 if odd.
    binary = ''
    for t in range(count):
        hc = int(res[t * 65:t * 65 + 65].hex()[2:], 16)
        binary += '0' if hc % 2 == 0 else '1'
    return binary

def append_bits(binary, path='data-base.bin'):
    with open(path, 'ab') as binary_file:
        binary_file.write(bytes(BitArray(bin=binary)))

# First chunk starts at the target itself.
res = ice.point_loop_subtraction(lm, target, sustract_pub)
append_bits(parity_bits(res, lm))

# Each later chunk starts (lm * i) * sustract below the target.
for i in range(1, Low_m):
    lm_upub = ice.scalar_multiplication((lm * i) * sustract)
    A1 = ice.point_subtraction(target, lm_upub)
    res = ice.point_loop_subtraction(lm, A1, sustract_pub)
    append_bits(parity_bits(res, lm))


If you want to create a database larger than your memory supports, for example 1000 million keys with a memory limit of 100 million, divide the work by 10 by changing this variable:

Code:
Low_m= 10


- You should use numbers where num divides evenly by Low_m, like this:
Code:
num = 10000000000000
Low_m= 10000

Code:
num = 20000000000000
Low_m= 2000

Avoid combinations like this, where num does not divide evenly by Low_m, or you will find private keys with the last digits off:

Code:
num = 14678976447
Low_m= 23
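The chunking arithmetic can be checked with plain integers (no EC math, just the offsets the builder walks); `covered` is an illustrative name, not part of the scripts above:

```python
# Chunk i starts at offset lm*i and covers lm offsets, where
# lm = num // Low_m.  Coverage is exactly 0..num-1 only when Low_m
# divides num evenly; otherwise the tail of the range is dropped and
# later offsets no longer line up with the stored bits.
def covered(num: int, low_m: int) -> int:
    lm = num // low_m
    return lm * low_m  # total offsets actually written

print(covered(10_000_000_000_000, 10_000) == 10_000_000_000_000)  # True
print(covered(14_678_976_447, 23))  # 14678976441: 6 offsets lost
```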


We did it!

We have a huge database, with little disk space.

Because the even/odd pattern of consecutive pubkeys is effectively random, we can set a collision margin of 64, 128, 256....
By this I mean that as you increase the collision margin, the probability of finding an identical binary sequence by chance decreases.
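As a rough sanity check (assuming the bit stream behaves like uniform random bits, which is not guaranteed), the expected number of chance matches of an m-bit pattern in an n-bit database is about (n - m + 1) / 2^m:

```python
# Each of the ~n starting offsets matches a random m-bit pattern with
# probability 2**-m, so expected chance matches ~ (n - m + 1) / 2**m.
def expected_false_hits(db_bits: int, margin: int) -> float:
    return (db_bits - margin + 1) / 2**margin

# A 32M-bit database (~3.81 MiB):
print(expected_false_hits(32_000_000, 64))  # ~1.7e-12: margin 64 is safe
print(expected_false_hits(32_000_000, 16))  # ~488: margin 16 would flood false hits
```

This is why a bigger database eventually needs a bigger margin: the expected false hits grow linearly with database size but shrink exponentially with the margin.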


What's the use of this?

We just need to randomly generate binary sequences and check whether they exist in the database.

searching single pubkey

Code:
#@mcdouglasx
import secp256k1 as ice
import random
from bitstring import BitArray

print("Scanning Binary Sequence")

#Pk: 1033162084
#cPub: 030d282cf2ff536d2c42f105d0b8588821a915dc3f9a05bd98bb23af67a2e92a5b

#range
start = 1033100000
end = 1033200000

num = 64      # collision margin.
sustract = 1  # amount to subtract each time; must match the database build.
sustract_pub = ice.scalar_multiplication(sustract)

# Read the database once.
with open("data-base.bin", "rb") as file:
    dat = file.read()

while True:
    pk = random.randint(start, end)
    target = ice.scalar_multiplication(pk)
    res = ice.point_loop_subtraction(num, target, sustract_pub)

    binary = ''
    for t in range(num):
        hc = int(res[t * 65:t * 65 + 65].hex()[2:], 16)
        binary += '0' if hc % 2 == 0 else '1'

    b = bytes(BitArray(bin=binary))

    if b in dat:
        inx = dat.find(b) * sustract
        # find() gives a byte offset; inx + inx*7 == inx*8 converts it
        # to the key offset within the database.
        Pk = (int(pk) + int(inx)) + int(inx) * 7
        with open("win.txt", "a") as data:
            data.write("Pk:" + " " + str(Pk) + "\n")
        break



For multiple pubkeys

- First we create the database by making random subtractions in a range we specify; you can create your own list at your discretion.

Code:
#@mcdouglasx
import secp256k1 as ice
import random

print("Making random sustract Data-Base")

target_public_key = "0209c58240e50e3ba3f833c82655e8725c037a2294e14cf5d73a5df8d56159de69"
target = ice.pub2upub(target_public_key)

targets_num = 1000

start = 0
end = 1000000

# Each line: compressed pubkey of (target - A0), then "#" and the offset A0.
with open("rand_subtract.txt", "a") as data:
    for i in range(targets_num):
        A0 = random.randint(start, end)
        A1 = ice.scalar_multiplication(A0)
        A2 = ice.point_subtraction(target, A1).hex()
        A3 = ice.to_cpub(A2)
        data.write(str(A3) + " " + "#" + str(A0) + "\n")

DB multi-pubkeys

- We create our database for multiple keys.

Code:
#@mcdouglasx
import secp256k1 as ice
from bitstring import BitArray

print("Making Binary Data-Base")

target__multi_public_keys = "rand_subtract.txt"

num = 1024    # number of keys per pub.
subtract = 1  # amount to subtract each time.
subtract_pub = ice.scalar_multiplication(subtract)

with open(target__multi_public_keys, 'r') as f:
    lines = f.readlines()

binary_file = open('multi-data-base.bin', 'ab')

for line in lines:
    mk = ice.pub2upub(str(line.strip()[:66]))
    res = ice.point_loop_subtraction(num, mk, subtract_pub)

    binary = ''
    for t in range(num):
        hc = int(res[t * 65:t * 65 + 65].hex()[2:], 16)
        binary += '0' if hc % 2 == 0 else '1'

    # num is a multiple of 8, so each target contributes whole bytes.
    BitArray(bin=binary).tofile(binary_file)

binary_file.close()

scan multiple publickeys, bit version

Code:
#@mcdouglasx
import secp256k1 as ice
import random
from bitstring import BitArray

print("Scanning Binary Sequence")

#range
start = 3090000000
end = 3093472814

X = 1024      # total number of sequential pubkeys per target in the database.
num = 64      # collision margin.
subtract = 1  # amount to subtract each time.
subtract_pub = ice.scalar_multiplication(subtract)

# Load the database and the target list once.
with open("multi-data-base.bin", "rb") as file:
    dat = BitArray(file.read())

with open("rand_subtract.txt", "r") as Mk:
    targets = [line.strip() for line in Mk]

while True:
    pk = random.randint(start, end)
    target = ice.scalar_multiplication(pk)
    res = ice.point_loop_subtraction(num, target, subtract_pub)

    binary = ''
    for t in range(num):
        hc = int(res[t * 65:t * 65 + 65].hex()[2:], 16)
        binary += '0' if hc % 2 == 0 else '1'

    b = BitArray(bin=binary)

    found = dat.find(b)  # bitwise search; returns a tuple like (bit_pos,)
    if found:
        inx = found[0]
        Pk = int(pk) + ((inx % X) * subtract)
        cpub = ice.to_cpub(ice.scalar_multiplication(Pk).hex())
        for line in targets:
            mk = int(line[68:])   # offset A0 stored after the "#"
            mk2 = line[:66]       # compressed pubkey
            if mk2 in cpub:
                print("found")
                cpub2 = ice.to_cpub(ice.scalar_multiplication(Pk + mk).hex())
                with open("win.txt", "a") as data:
                    data.write("Pk:" + " " + str(Pk + mk) + "\n")
                    data.write("cpub:" + " " + str(cpub2) + "\n")
                break
        break

scan multiple publickeys, byte version (faster).


Code:
#@mcdouglasx
import secp256k1 as ice
import random
from bitstring import BitArray
import numpy as np

print("Scanning Binary Sequence")

#range
start = 3000000000
end = 3095000000

X = 1024      # number of sequential pubkeys for each target.
num = 64      # collision margin.
subtract = 1  # amount to subtract each time.
subtract_pub = ice.scalar_multiplication(subtract)

# Read the database as raw bytes. dtype must be uint8: the default
# (float64) would misread the file and fail on sizes not divisible by 8.
dat = np.fromfile("multi-data-base.bin", dtype=np.uint8).tobytes()

while True:
    pk = random.randint(start, end)
    target = ice.scalar_multiplication(pk)
    res = ice.point_loop_subtraction(num, target, subtract_pub)

    binary = ''
    for t in range(num):
        hc = int(res[t * 65:t * 65 + 65].hex()[2:], 16)
        binary += '0' if hc % 2 == 0 else '1'

    b = BitArray(bin=binary)
    c = bytes(b)

    # Fast byte-level pre-check, then a bitwise search for the position.
    if c in dat:
        found = BitArray(dat).find(c)
        inx = found[0]
        Pk = int(pk) + ((inx % X) * subtract)
        cpub = ice.to_cpub(ice.scalar_multiplication(Pk).hex())
        with open("rand_subtract.txt", "r") as Mk:
            for line in Mk:
                line = line.strip()
                mk = int(line[68:])   # offset A0 stored after the "#"
                mk2 = line[:66]       # compressed pubkey
                if mk2 in cpub:
                    print("found")
                    cpub2 = ice.to_cpub(ice.scalar_multiplication(Pk + mk).hex())
                    with open("win.txt", "a") as data:
                        data.write("Pk:" + " " + str(Pk + mk) + "\n")
                        data.write("cpub:" + " " + str(cpub2) + "\n")
                    break
        break


The difference between the bit version and the byte version is that a byte search only matches byte-aligned sequences; bits at non-aligned offsets are masked inside a byte, so some collisions are not detected.
This does not happen in the bit version.
--------------------------------------------------------------------------


This code is just a demonstration and may not be properly optimized; preferably make your own version in C.


Don't forget to say thank you, to motivate me to contribute other ideas.
