Topic: lightweight database, for brute force using publickeys-32Mk =3.81MB(secp256k1) - page 6.

member
Activity: 122
Merit: 36
Just convert the decimal output to hex before printing.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Quote
I thought I would try them out with a couple of format tweaks.
Can't help without knowing the tweaks.
member
Activity: 122
Merit: 36
Very interesting scripts, thank you for publishing them.
I thought I would try them out with a couple of format tweaks.

Using a key in the 30-bit range: 3d94cd64

The result,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the public key: 0d282cf2ff536d2c42f105d0b8588821a915dc3f9a05bd98bb23af67a2e92a5b
I need to find this private key: 0x3d94cd64

Private key found!!!
0x594cd64
a152cbd0f4fdb5caf73c1ef6469896a760461bb0d297075962b40b7f5e93d2fd
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

0x5 94cd64: the last 6 hex digits are correct, but the leading 5 is of course wrong, so it produces the wrong public key.
Not sure why.

I thought it might be a false positive, but I ran through the whole database and no other string was found.
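
Whatever caused the bad hit, a cheap guard is to recompute the full public key from any candidate before accepting it; a minimal sketch, assuming the same ice API used by the scripts in this thread:

Code:
# sketch: reject false hits by recomputing the full x-coordinate
import secp256k1 as ice

def verify_candidate(candidate_pk, target_pub_x):
    # target_pub_x is the 32-byte x-coordinate, e.g. P[1:33] from ice
    P = ice.scalar_multiplication(candidate_pk)
    return P[1:33] == target_pub_x

# the 0x594cd64 hit above would fail this check, while 0x3d94cd64 passes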
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
I wrote both scripts:

create_database_arulbero.py
search_key_arulbero.py


parameters:

private key interval: from 1 to 2^32
keys stored: 2^32 / 2^7 = 2^25 (1 every 128)
bits per key = 64
size of database: 257 MB

time to create the database: 3 min 55 s

time to find the private key of a single random public key in the db: 0.1 - 0.3 s
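
The database size follows directly from these parameters; a quick sanity check (my arithmetic, taking 1 MB as 2^20 bytes):

Code:
keys_stored   = 2**32 // 2**7                 # one key every 128 -> 2^25 keys
bytes_per_key = 64 // 8                       # 64 bits stored per key
print(keys_stored * bytes_per_key / 2**20)    # 256.0 MB, matching the ~257 MB reported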




create_database_arulbero.py

Code:
#!/usr/bin/env python3
# 2023/Dec/03, create_database_arulbero.py
import secp256k1 as ice

#############################################################################
# Set the number of public keys to generate
#############################################################################
start_key = 1
num_public_keys = 2**32 #(about 4 billion keys)
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 128
rate_of_key_to_store = rate_of_key_to_generate

interval_to_generate = range(start_key, start_key + num_public_keys, rate_of_key_to_store)
interval_to_store = range(start_key, start_key + num_public_keys, rate_of_key_to_store)

binary_mode = 1

#############################################################################


if (binary_mode == 1):

    f = open("public_keys_database.bin", "wb") #binary mode

    ########################################generation#############################################
    public_keys=[]

    public_keys_complete = list(map(ice.scalar_multiplication, interval_to_generate)) #generates the complete public keys
    public_keys = list(map(lambda w: int(w[33-bytes_to_store:33].hex(),16), public_keys_complete)) #extracts only the last bytes_to_store

    ########################################writing the db##########################################
    for i in range(0, len(interval_to_store)):
        f.write(public_keys[i].to_bytes(bytes_to_store, byteorder='big'))  #writes each key

    f.close()

else:

    f = open("public_keys_database.txt", "w")

    #generation
    public_keys=[]

    for i in interval_to_generate:
        P4 = ice.scalar_multiplication(i)
        public_keys.append(P4[33-bytes_to_store:33].hex())

    #writing the db
    for i in range(0, len(interval_to_store)):
        f.write(public_keys[i])

    f.close()

search_key_arulbero.py
Code:
# 2023/Dec/03, search_key_arulbero.py
import secp256k1 as ice
import random
import sys

#############################################################################
# Set the number of public keys to generate
#############################################################################

start_key = 1
num_public_keys = 2**32
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 128
rate_of_key_to_store = rate_of_key_to_generate

split_database = 16 #read only a fraction of the database at a time to speed up finding the string

interval_to_generate = range(start_key, start_key + num_public_keys, rate_of_key_to_store)

interval_to_store = range(start_key, start_key + num_public_keys, rate_of_key_to_store)

binary_mode = 1



#########################################################################################

#pk = 0x3243 = 12867
#P = ice.scalar_multiplication(12867)
#P="0x6800b#b8a9dffe1709ceac95d7d06646188c2cb656c09cd2e717ec67487ce1be3"


#############generates random private key and public key#################################
pk = random.randint(start_key, start_key + num_public_keys)
P = ice.scalar_multiplication(pk)
print()
print("This is the public key: " + P[1:33].hex())
print("I need to find this private key: "+str(pk))


###################subtraction of : P - 1G,  P - 2G, ..., P - rate_of_key*G################
substract_pub = ice.scalar_multiplication(1)
complete_pub = ice.point_loop_subtraction(rate_of_key_to_generate, P, substract_pub)


partial_pub=[] #need only the last bytes_to_store
P2 = int(P[33-bytes_to_store:33].hex(),16).to_bytes(bytes_to_store, byteorder='big')
partial_pub.append(P2)

for i in range(1, rate_of_key_to_store+1):
    partial_pub.append(int(complete_pub[(i-1)*65+33-bytes_to_store:(i-1)*65+33].hex(),16).to_bytes(bytes_to_store, byteorder='big'))


################search in database##########################################################
with open("public_keys_database.bin", 'r+b') as f:

    s = f.read()
    l = len(s)

    for k in range(0, l, l//split_database):
        j = 1
        for i in partial_pub:
            n = s[k:k+l//split_database].find(i)
            if n > -1:
                print()
                print("Private key found!!!")
                private_key = (n+k)//bytes_to_store*rate_of_key_to_generate + j
                print(private_key)
                P3 = ice.scalar_multiplication(private_key)
                pub_key = P3[1:33].hex()
                print(pub_key)
                sys.exit(0)
            j = j+1


print("string not found")


Do you store fewer keys and more bits? When we use bytes we search in byte packets, so you are closer to traditional databases than to the idea presented here.
That way you can take any pub and store its first bytes, without going through bit counting, and it would give the same result, because you have to find an explicit pk that meets certain parameters, which was the main problem.
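
Since each stored key occupies exactly bytes_to_store bytes, a find() hit that starts off a key boundary straddles two neighbouring keys and silently maps to a wrong private key. A minimal sketch of a byte-aligned search, keeping the same buffer layout as the scripts above:

Code:
# sketch: only accept hits that start on a key boundary
def aligned_find(buf, needle, width=8):
    n = buf.find(needle)
    while n != -1 and n % width != 0:
        n = buf.find(needle, n + 1)   # skip unaligned hits, keep scanning
    return n                          # -1 if no aligned occurrence exists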
legendary
Activity: 1932
Merit: 2077
I wrote the both scripts:

create_database_arulbero.py
search_key_arulbero.py

Both scripts appear identical or is it me?  Grin

You're right, I copied the same file twice. Now there are 2 different scripts.
member
Activity: 122
Merit: 36
I wrote the both scripts:

create_database_arulbero.py
search_key_arulbero.py

Both scripts appear identical or is it me?  Grin
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
To take a 2^30 key out of 2^130 you will need about 2^100 "operations", whether the operation is a subtract or a subtract-and-divide; either way about 2^100 will be needed.

To take a key in 2^65 you will need about 2^65; for the pub, 2^230.


Sad(((((
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
Quote
You are right, no match.
It must be a problem with the reading of bytes.
You definitely found a bug, because the key is in the DB but the search does not get it.
I'll fix it.

Edit:
The curious part is that if we start at 6 we get 3092000006,
and it matches.
Any update? I've been trying tweaks to the search script, but I have not been successful.


Sorry, I've been busy. Basically the problem is when we search in bytes; if we stop using bytes, we find it.


Edit: for incremental:
Code:
#@mcdouglasx
import secp256k1 as ice
import random
from bitstring import BitArray



print("Scanning Binary Sequence")



start=0
end= 4000000000

#1:4000000000
for i in range(start, end,4000000):
   
    target = ice.scalar_multiplication(i)

    num = 64 # collision margin.

    sustract= 1 # #amount to subtract each time.

    sustract_pub= ice.scalar_multiplication(sustract)

    res= ice.point_loop_subtraction(num, target, sustract_pub)
       
    binary = ''
       
    for t in range (num):
           
        h= (res[t*65:t*65+65]).hex()
        hc= int(h[2:], 16)
           
           
        if str(hc).endswith(('0','2','4','6','8')):
            A="0"
            binary+= ''.join(str(A))
               
        if str(hc).endswith(('1','3','5','7','9')):
            A="1"
            binary+= ''.join(str(A))
       
           
    my_str = binary

    b = BitArray(bin=my_str)
    c = bytes(b)

    file = open("data-base.bin", "rb")
    dat = BitArray(file.read())
    file.close()  #close the handle (it was left open on every loop iteration)
   

    if b  in dat:
       
        print("found")
        s = c
        f = dat
        inx = f.find(s)
        inx_1=str(inx).replace(",", "")
        inx_0=str(inx_1).replace("(", "")
        inx_2=str(inx_0).replace(")", "")
       
        Pk = (int(i) + int(inx_2))
           
        data = open("win.txt","a")
        data.write("Pk:"+" "+str(Pk)+"\n")
        data.close()
        break
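
A side note on the lookup above: bitstring's BitArray.find already returns a tuple of bit offsets, so the replace() gymnastics on its string form can be avoided. A minimal equivalent, under the same bitstring API:

Code:
# sketch: BitArray.find returns e.g. (1234,) on success, () on failure
found = dat.find(c)
if found:
    Pk = i + found[0]   # same arithmetic as the win.txt branch above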
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Quote

Activity: 1863
Merit: 2039


View Profile Personal Message (Online)

Ignore
   
   
Re: lightweight database, for brute force using publickeys-32Mk =3.81MB(secp256k1)
Today at 12:39:09 AM
   
Reply with quote  +Merit  #97
I wrote both scripts:

create_database_arulbero.py
search_key_arulbero.py

parameters:

private key interval: from 1 to 2^32
keys stored: 2^32 / 2^7 = 2^25 (1 every 128)
bits per key = 64
size of database: 257 MB

time to create the database: 3 min 55 s

time to find the private key of a single random public key in the db: 0.1 - 0.3 s

This is for a known public key; it could be unknown like in your script (I would just need to generate the random pk/pub like you did, and the time results would be the same), but here are my script results for a key within the 2^32 range.

Generated 2,000,000 keys in less than 4 seconds.
Size of database = 977 kB (less than 1 MB).
Time to find the private key in the db: less than 2 seconds.

Total time = less than 6 seconds.

I could adjust the number of keys generated, but there's no need in such a small range. With more keys generated, the search time would go down significantly.

I will run it with 2^25 keys and report back generation and search times.

UPDATE:
I generated 2^25 keys in 65 seconds.
Search took less than a second.
Total time = 66 seconds.
legendary
Activity: 1932
Merit: 2077
You say 4GB but you aren't comparing apples to apples.

In your scenario of only saving every 128th key out of 2^36 keys, your DB does not contain 2^36 keys, but 2^36 / 2^7 = 2^29 keys.

If I spread out my keys to every 128 inside a 2^36 range and only store 2^29 keys, my DB size would be about 63MB. Big difference, huge.

If the goal is finding the private key of a public key that is in an interval for which you have precomputed the public keys, AND at the same time the goal is to keep the size of the db as small as possible, why don't you reduce your db size to 63 MB then?
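
For reference, the two figures in this exchange are consistent; the gap is just 64 bits versus 1 bit stored per key (my arithmetic):

Code:
keys = 2**36 // 2**7          # one key kept every 128 -> 2^29 keys
print(keys * 8 / 2**30)       # 4.0 GB at 64 bits (8 bytes) per key
print(keys / 8 / 2**20)       # 64.0 MB at 1 bit per key -> the "about 63MB"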
legendary
Activity: 1932
Merit: 2077
I wrote both scripts:

create_database_arulbero.py
search_key_arulbero.py


parameters:

private key interval: from 1 to 2^32
keys stored: 2^32 / 2^7 = 2^25 (1 every 128)
bits per key = 64
size of database: 257 MB

time to create the database: 3 min 55 s

time to find the private key of a single random public key in the db: 0.1 - 0.3 s




create_database_arulbero.py

Code:
#!/usr/bin/env python3
# 2023/Dec/03, create_database_arulbero.py
import secp256k1 as ice
import sys

#############################################################################
# Set the number of public keys to generate
#############################################################################
start_key = 1
num_public_keys = 2**32 #(about 4 billion keys)
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 2**20
rate_of_key_to_store = rate_of_key_to_generate

interval_to_generate = range(start_key, start_key + num_public_keys, rate_of_key_to_store)
interval_to_store = range(start_key, start_key + num_public_keys, rate_of_key_to_store)

binary_mode = 1

#############################################################################


if (binary_mode == 1):

    f = open("public_keys_database.bin", "wb") #binary mode

    ########################################generation#############################################
    public_keys=[]

    public_keys_complete = list(map(ice.scalar_multiplication, interval_to_generate)) #generates the complete public keys
    public_keys = list(map(lambda w: int(w[33-bytes_to_store:33].hex(),16), public_keys_complete)) #extracts only the last bytes_to_store

    ########################################writing the db##########################################
    for i in range(0, len(interval_to_store)):
        f.write(public_keys[i].to_bytes(bytes_to_store, sys.byteorder))  #writes each key

    f.close()

else:

    f = open("public_keys_database.txt", "w")

    #generation
    public_keys=[]

    for i in interval_to_generate:
        P4 = ice.scalar_multiplication(i)
        public_keys.append(P4[33-bytes_to_store:33].hex())

    #writing the db
    for i in range(0, len(interval_to_store)):
        f.write(public_keys[i])

    f.close()

search_key_arulbero.py
Code:
# 2023/Dec/03, search_key_arulbero.py
import secp256k1 as ice
import random
import sys

#############################################################################
# Set the number of public keys to generate
#############################################################################

start_key = 1
num_public_keys = 2**32
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 2**20
rate_of_key_to_store = rate_of_key_to_generate

split_database = 256 #read only a fraction of the database at a time to speed up finding the string

interval_to_generate = range(start_key, start_key + num_public_keys, rate_of_key_to_store)

interval_to_store = range(start_key, start_key + num_public_keys, rate_of_key_to_store)

binary_mode = 1

#########################################################################################

#pk = 0x3243 = 12867
#P = ice.scalar_multiplication(12867)
#P="0x6800b#b8a9dffe1709ceac95d7d06646188c2cb656c09cd2e717ec67487ce1be3"


#############generates random private key and public key#################################
pk = random.randint(start_key, start_key + num_public_keys)
#pk = start_key + num_public_keys - 1
P = ice.scalar_multiplication(pk)
print()
print("This is the public key: " + P[1:33].hex())
print("I need to find this private key: "+str(pk))


###################subtraction of : P - 1G,  P - 2G, ..., P - rate_of_key*G################
substract_pub = ice.scalar_multiplication(1)
complete_pub = ice.point_loop_subtraction(rate_of_key_to_generate, P, substract_pub)


partial_pub=[] #need only the last bytes_to_store
P2 = int(P[33-bytes_to_store:33].hex(),16).to_bytes(bytes_to_store, sys.byteorder)
partial_pub.append(P2)

for i in range(1, rate_of_key_to_store+1):
    partial_pub.append(int(complete_pub[(i-1)*65+33-bytes_to_store:(i-1)*65+33].hex(),16).to_bytes(bytes_to_store, sys.byteorder))


################search in database##########################################################
num_bytes = num_public_keys*bytes_to_store//rate_of_key_to_store
size = num_bytes//split_database
s_partial_pub = set(partial_pub)


with open("public_keys_database.bin", 'r+b') as f:

    #s=f.read()

    for k in range(0, num_bytes, num_bytes//split_database):

        f.seek(0,1)
        partial_db = f.read(num_bytes//split_database)

        l_partial_db = [partial_db[i:i + bytes_to_store] for i in range(0, size, bytes_to_store)]
        s_partial_db = set(l_partial_db)

        a = list(s_partial_db & s_partial_pub)
        if (len(a)>0):

            n = partial_db.find(a[0])

            if n > -1:
                print()
                print("Private key found!!!")
                private_key = (n+k)//bytes_to_store*rate_of_key_to_generate + partial_pub.index(a[0])+1
                if (pk == private_key):
                    print("It is correct!!!")
                else:
                    print("Collision!")
                print(private_key)
                P3 = ice.scalar_multiplication(private_key)
                pub_key = P3[1:33].hex()
                print(pub_key)
                sys.exit(0)


print("string not found")
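
Two details of this version are worth spelling out, as a toy sketch (my illustration, with made-up values, not part of the script): slicing each chunk on fixed 8-byte boundaries makes every comparison key-aligned, and a single set intersection replaces one find() scan per probe string. The recovery arithmetic at the end is the same as before:

Code:
# toy illustration of the set-based lookup above
chunk  = b"AAAAAAAABBBBBBBBCCCCCCCC"               # three 8-byte stored keys
probes = [b"BBBBBBBB", b"ZZZZZZZZ"]                # stand-ins for P - j*G fragments
stored = {chunk[i:i+8] for i in range(0, len(chunk), 8)}
print(stored & set(probes))                        # {b'BBBBBBBB'}

# worked example of the recovery formula with this script's parameters:
# stored key m sits at byte offset m*8 and has private key 1 + m*rate,
# and partial_pub[idx] = P - idx*G, so pk = m*rate + idx + 1
rate = 2**20
n, k, idx = 24, 0, 4                               # hit at byte 24 of chunk 0, probe P - 4*G
m = (n + k) // 8                                   # m = 3
print(m*rate + idx + 1)                            # 3145733 = pk, since pk - 4 = 1 + 3*rate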
 
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Quote
Then you can reduce the space and increase the cost of the search computations.

With my script a db of 2^36 keys uses about 4GB. In that case I store 1 key every 128. It takes about 1 hour.
And 128 computations are very fast for the 'search part'.

And searching 64 bits --> shift 64 bits --> next 64 bits --> shift 64 bits should be faster than
64 bits -> shift 1 bit -> 64 bits (if I understand your script)

You say 4GB but you aren't comparing apples to apples.

In your scenario of only saving every 128th key out of 2^36 keys, your DB does not contain 2^36 keys, but 2^36 / 2^7 = 2^29 keys.

If I spread out my keys to every 128 inside a 2^36 range and only store 2^29 keys, my DB size would be about 63MB. Big difference, huge.
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
and if the database gets larger and larger -> then it creates too many false positives.

Then you simply have to store more bits per key;

if you want to have 0 collisions, store 256 bits per key. That always works.

And if you want a smaller database, you can store 1 key every 1024 keys.

I agree with you Alberto, you are more experienced than the others in this discussion.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
and if the database gets larger and larger -> then it creates too many false positives.

Then you simply have to store more bits per key;

if you want to have 0 collisions, store 256 bits per key. That always works.

And if you want a smaller database, you can store 1 key every 1024 keys.
I don’t agree.

The DB, when checked, is checked against 64 or 128 or 256 bits, or however high you want to go.

If you were checking the entire 2^256 key space, then yes, but when checking smaller ranges (up to 160 bits) I don't think a larger DB creates more false positives.
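
For what it's worth, the expected number of chance matches can be estimated directly (a back-of-envelope sketch, assuming the stored 64-bit fragments behave like uniform random values):

Code:
stored = 2**25                    # keys in the db (the 2^32 range, 1 every 128)
probes = 128                      # P - j*G strings tried per lookup
print(stored * probes / 2**64)    # 2^-32, about 2.3e-10 false hits per lookup

A larger DB raises this proportionally, but at these sizes it stays negligible either way.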
legendary
Activity: 1932
Merit: 2077
and if the database gets larger and larger -> then it creates too many false positives.

Then you simply have to store more bits per key;

if you want to have 0 collisions, store 256 bits per key. That always works.

And if you want a smaller database, you can store 1 key every 1024 keys.
legendary
Activity: 1932
Merit: 2077

What would be the false positives for the way you store keys? Mine is almost 0%. I've run hundreds of tests now with zero false positives.

Also, why can't you store keys as a 0 or a 1? To make your DB even smaller?

My search script is like the OP's original but I added incremental.

In random mode, it lands on a random point, does 64 comps, goes to the next random point, does 64 comps, etc.

Incremental is the same way, except if you start at 0 and your stride/jump size is 2^20, it lands on point 2^20, does 64 comps, jumps to point 2^20+2^20, does 64 comps, etc.

The script can also spread the keys out like you are saying: save only every 128th key; the keys do not have to be sequential.


I store 64 bits for each key, you store 64 bits for 64 keys; in my database there are fewer 64-bit strings to compare against, so I think my probability is almost zero too.

For smaller databases (like 2^30 keys) I could use even fewer than 64 bits per key (like 48 bits) without problems. If I implemented the "first x bits = 0" idea, then the probability of collisions (at the same database size) would be even lower (but it would take much longer to build the database).
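
The 48-bit idea checks out under the same back-of-envelope estimate (again assuming uniform fragments):

Code:
stored = 2**30 // 128             # 2^23 keys stored
probes = 128
print(stored * probes / 2**48)    # 2^-18, about 3.8e-6 false hits per lookup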


I'm curious about your false positives and any search script you would create.
Me too.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Understood.

I have a database with 63,570,000,000 keys stored in it, with a size of only 7.7GB. This one was created using the BitArray function, but it has flaws when doing a search, so I am not using it at the moment.

But when I run the search script, it uses the equivalent in RAM: roughly 7.7GB of RAM is used during the search.

But I can make jumps the size of 63,570,000,000 (almost 2^36), do 64 computations, and know within a second if the key is in that range or not. So in less than a second it jumps 2^36 keys and checks for a match. But this is using Python; it could be much faster in C or C++, and a lot faster with GPU support.

Then you can reduce the space and increase the cost of the search computations.

With my script a db of 2^36 keys uses about 4GB. In that case I store 1 key every 128. It takes about 1 hour.
And 128 computations are very fast for the 'search part'.

And searching 64 bits --> shift 64 bits --> next 64 bits --> shift 64 bits should be faster than
64 bits -> shift 1 bit -> 64 bits (if I understand your script)

What would be the false positives for the way you store keys? Mine is almost 0%. I've run hundreds of tests now with zero false positives.

Also, why can't you store keys as a 0 or a 1? To make your DB even smaller?

My search script is like the OP's original but I added incremental.

In random mode, it lands on a random point, does 64 comps, goes to the next random point, does 64 comps, etc.

Incremental is the same way, except if you start at 0 and your stride/jump size is 2^20, it lands on point 2^20, does 64 comps, jumps to point 2^20+2^20, does 64 comps, etc.

The script can also spread the keys out like you are saying: save only every 128th key; the keys do not have to be sequential.

I'm curious about your false positives and any search script you would create.
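
A quick cross-check of the 7.7GB figure above (my arithmetic, assuming roughly 1 bit stored per key):

Code:
keys = 63_570_000_000
print(keys / 8 / 2**30)   # ~7.4 GiB of raw bits, close to the 7.7GB observed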
copper member
Activity: 1330
Merit: 899
🖤😏
@arulbero, I get an "int too big to convert" error; can your script work on mobile?
copper member
Activity: 205
Merit: 1
This script generates a database of 32 million keys (3.9 MB) in 3.5 seconds;

it stores only 1 key every 64 keys, and only 8 bytes per key


Code:
#!/usr/bin/env python3
# 2023/Dec/03, arulbero_pub_key.py
import secp256k1 as ice

#############################################################################
# Set the number of public keys to generate and other parameters
#############################################################################
start_key = 1
num_public_keys = 32000000
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 64
rate_of_key_to_store = rate_of_key_to_generate

interval_to_generate = range(start_key, num_public_keys+1, rate_of_key_to_store)

interval_to_store = range(start_key, num_public_keys+1, rate_of_key_to_store)

binary_mode = 1

#private_keys = list(interval_to_generate)

#############################################################################


if (binary_mode == 1):

    f = open("public_keys_database.bin", "wb") #binary mode

    ###########generation#################
    public_keys=[]

    for i in interval_to_generate:                 #generate the other keys
        P = ice.scalar_multiplication(i)
        public_keys.append(P[33-bytes_to_store:33].hex())

    ###########writing the db###############
    for i in range(0, len(interval_to_store)):
        h = int(public_keys[i],16)
        f.write(h.to_bytes(bytes_to_store, byteorder='big'))

    f.close()

else:

    f = open("public_keys_database.txt", "w")

    ###########generation#################
    public_keys=[]

    for i in interval_to_generate:
        P = ice.scalar_multiplication(i)
        public_keys.append(P[33-bytes_to_store:33].hex())

    ###########writing the db###############
    for i in range(0, len(interval_to_store)):
        h = public_keys[i]
        f.write(h)
    f.close()

If you want the data in readable form, switch to "binary_mode = 0".

With 2^32 keys (4 billion), it takes about 4 minutes to generate a db of 257 MB (1 key every 128 keys, 64 bits per key).

This is quite interesting, but how can we use it, for example, to find puzzle 65?
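
One way to aim this kind of database at a higher window, sketched under two assumptions: that the ice module exposes point_subtraction (as in iceland2k14's secp256k1 library), and that the window is still small enough to precompute. Shift the target down by the window's low edge and search the offset; note that puzzle 65's full 2^64-wide range is far beyond a 2^32 database, so this only illustrates the mechanics:

Code:
# sketch: search a window [base, base + 2^32) with the same database
import secp256k1 as ice

base = 2**64                              # e.g. the low edge of puzzle 65's range
pk_demo = base + 12345                    # pretend-unknown key, for demo only
P = ice.scalar_multiplication(pk_demo)    # the target public key

shift = ice.scalar_multiplication(base)   # base*G
Q = ice.point_subtraction(P, shift)       # Q = (pk_demo - base)*G = 12345*G
# running the search on Q against a db built over [1, 2^32] finds 12345;
# the real key is then base + 12345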
legendary
Activity: 1932
Merit: 2077
Understood.

I have a database with 63,570,000,000 keys stored in it, with a size of only 7.7GB. This one was created using the BitArray function, but it has flaws when doing a search, so I am not using it at the moment.

But when I run the search script, it uses the equivalent in RAM: roughly 7.7GB of RAM is used during the search.

But I can make jumps the size of 63,570,000,000 (almost 2^36), do 64 computations, and know within a second if the key is in that range or not. So in less than a second it jumps 2^36 keys and checks for a match. But this is using Python; it could be much faster in C or C++, and a lot faster with GPU support.

Then you can reduce the space and increase the cost of the search computations.

With my script a db of 2^36 keys uses about 4GB. In that case I store 1 key every 128. It takes about 1 hour.
And 128 computations are very fast for the 'search part'.

And searching 64 bits --> shift 64 bits --> next 64 bits --> shift 64 bits should be faster than
64 bits -> shift 1 bit -> 64 bits (if I understand your script)