Author

Topic: Ultra-Lightweight Database with Public Keys (for puzzle btc) (Read 144 times)

member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
Well, I do what I can with what I have at my disposal.

Then we are on the "search way": we do work that doesn't get results right away, but maybe something has been found. For example, my result is 2^30 in 12,191,606 operations, but this is much slower than BSGS. Probably you found something similar too... have a good day.

hero member
Activity: 862
Merit: 662
Well, I do what I can with what I have at my disposal.
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
@albertobsd


how many pubkeys have you stored?


Quote
I am implementing this database (1 bit per public key). I am grouping those bits into 64-bit items and storing those 64-bit integers in a Bloom filter for fast search; those need about 29 bits per each group of 64 keys. I think this will give more speed to the BSGS algorithm.

If the

2^64? Is 2^64 enough to find a privkey in a 2^130-bit range?


I found the answer: you store 2^43 pubkeys. That is not enough to get the privkey; you still need to brute-force a 2^87 range (2^130 / 2^43 = 2^87). I think you know this, yes?
hero member
Activity: 862
Merit: 662
However, since I don’t think this PC will ever handle a 10MB database, I haven’t given it much thought.

I am storing up to 1 TB of binary data: 8.7 trillion keys (8,796,093,022,208).

That amount of bits, grouped and stored in a Bloom filter, may need only about 480 GB.

But stored in a XOR filter it would use only around 256 GB, so I am trying everything to compact it further and to use better methods for searching collisions and for finding the offset of the keys.
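
Roughly, those sizes check out, taking the figures quoted above (1 bit per key, about 29 bits per Bloom-filter item and 16 bits per XOR-filter item) as given:

Code:
# Rough size check for the figures above (the per-item bit counts are the ones
# quoted in this thread, not measured values).
keys        = 8_796_093_022_208     # 2**43 public keys, 1 bit each
raw_bytes   = keys // 8             # raw bit string: 1 TiB
items       = keys // 64            # 64 keys packed per 64-bit item: 2**37 items
bloom_bytes = items * 29 // 8       # about 464 GiB, in the ballpark of the 480 GB above
xor_bytes   = items * 16 // 8       # exactly 256 GiB
print(raw_bytes // 2**30, bloom_bytes // 2**30, xor_bytes // 2**30)   # sizes in GiB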
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
Have you considered using XOR filters?

I am implementing this database (1 bit per public key). I am grouping those bits into 64-bit items and storing those 64-bit integers in a Bloom filter for fast search; those need about 29 bits per each group of 64 keys. I think this will give more speed to the BSGS algorithm.

If the speed is boosted by this method, I am going to change the Bloom filter for a XOR filter, because I believe it may need only 16 bits per 64-bit item (this means each group of 64 public keys can be stored in 16 bits).

The idea behind this database (1 bit per key) might be somewhat dubious, because you need some extra processing to perform a search in the database. For example, in my case, before this approach you only needed to do a single subtraction (public key subtraction) and then compare it against the database; now you need to do the original subtraction and then, with those keys, construct an item to compare against your database (Bloom filter / XOR filter / list / whatever). So you need to calculate another set of keys in order to be able to compare against your current database, which means more processing. And not only 64 times more processing: you also need more items to compare, because you don't know whether your target public key is aligned with your current groups of keys.

So I am testing this just to validate what speed I am going to get. I am working at a very low bit level with bit-shift operations, and I think my current approach is the most efficient from the CPU point of view. I am not using any string conversions, so I think there should be a point where this database becomes useful.
I hope for good results! In the case of this specific database, I would create a custom XOR filter. Since I only need the total-bits (Tb) value of each line in the database, I would configure the XOR filter to return the necessary value according to the line found. However, since I don’t think this PC will ever handle a 10MB database, I haven’t given it much thought.
hero member
Activity: 862
Merit: 662
Have you considered using XOR filters?

I am implementing this database (1 bit per public key). I am grouping those bits into 64-bit items and storing those 64-bit integers in a Bloom filter for fast search; those need about 29 bits per each group of 64 keys. I think this will give more speed to the BSGS algorithm.

If the speed is boosted by this method, I am going to change the Bloom filter for a XOR filter, because I believe it may need only 16 bits per 64-bit item (this means each group of 64 public keys can be stored in 16 bits).

The idea behind this database (1 bit per key) might be somewhat dubious, because you need some extra processing to perform a search in the database. For example, in my case, before this approach you only needed to do a single subtraction (public key subtraction) and then compare it against the database; now you need to do the original subtraction and then, with those keys, construct an item to compare against your database (Bloom filter / XOR filter / list / whatever). So you need to calculate another set of keys in order to be able to compare against your current database, which means more processing. And not only 64 times more processing: you also need more items to compare, because you don't know whether your target public key is aligned with your current groups of keys.

So I am testing this just to validate what speed I am going to get. I am working at a very low bit level with bit-shift operations, and I think my current approach is the most efficient from the CPU point of view. I am not using any string conversions, so I think there should be a point where this database becomes useful.
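
A minimal sketch of the grouping idea, in Python (illustrative only, not the actual BSGS implementation): one parity bit per public key gets packed into a 64-bit word, and that word is what would go into the Bloom or XOR filter.

Code:
def pack_group(parity_bits):
    """parity_bits: 64 values (0 or 1), one per consecutive public key."""
    word = 0
    for i, b in enumerate(parity_bits):
        word |= (b & 1) << i
    return word   # the 64-bit item that would be inserted into the Bloom/XOR filter

# During the search you still have to rebuild such an item around each candidate
# key before querying the filter, and because the target may not be aligned with
# your 64-key groups, several shifted packings have to be tried.
print(hex(pack_group([1, 0] * 32)))   # -> 0x5555555555555555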
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
Bro, are you talking about the first bits of pubkeys in your patterns, or about patterns in privkeys?

It is one bit per public key, based on even/odd public keys; that is, the patterns are sequences of pubkeys. This can be seen in the code.
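
A minimal sketch of that parity bit, assuming the iceland2k14 secp256k1 module used by the scripts later in this thread:

Code:
import secp256k1 as ice

def parity_bit(priv: int) -> int:
    upub = ice.scalar_multiplication(priv)   # 65-byte uncompressed pubkey: 04 || x || y
    y = int(upub[33:].hex(), 16)             # last 32 bytes are the y-coordinate
    return y & 1                             # 0 = even (02 prefix), 1 = odd (03 prefix)

# Consecutive private keys give a pseudo-random bit sequence; the database only
# records where long alternating runs (0101... / 1010...) occur in it.
print([parity_bit(k) for k in range(1, 17)])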
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
Great, I'm also trying to use bit patterns.

Why only these patterns?

Bi: 13806, Pp: 010101010101010, Tb: 13821
Bi: 10889, Pp: 101010101010101, Tb: 24725
Bi: 10637, Pp: 101010101010101, Tb: 35377
Bi: 186843, Pp: 010101010101010, Tb: 222235

Where are

011011011
1001001

?

Bro, how do you calculate the full privkey range from so few examples of bits (only 4 patterns)?

Thanks

I chose the patterns like this to guarantee a balance between the size of the DB and the search speed, but you can experiment with it. As long as the same changes are made in both scripts, there will be no problems with the PK calculations.

Here: pk = (Rand - total_bits + Tb_in_t) - len(pattern)
Then a check is made, to avoid false positives.

Have you considered using XOR filters?

I really publish my ideas in their most basic form, so they are better understood.



Bro, are you talking about the first bits of pubkeys in your patterns, or about patterns in privkeys?
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
Great, I'm also trying to use bit patterns.

Why only these patterns?

Bi: 13806, Pp: 010101010101010, Tb: 13821
Bi: 10889, Pp: 101010101010101, Tb: 24725
Bi: 10637, Pp: 101010101010101, Tb: 35377
Bi: 186843, Pp: 010101010101010, Tb: 222235

Where are

011011011
1001001

?

Bro, how do you calculate the full privkey range from so few examples of bits (only 4 patterns)?

Thanks

I chose the patterns like this to guarantee a balance between the size of the DB and the search speed, but you can experiment with it. As long as the same changes are made in both scripts, there will be no problems with the PK calculations.

Here: pk = (Rand - total_bits + Tb_in_t) - len(pattern)
Then a check is made, to avoid false positives.
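
For concreteness, a hedged sketch of what that line plus the check boil down to, using the same names as the search script further down in the thread (recover_pk is a hypothetical helper, not part of the scripts):

Code:
import secp256k1 as ice   # iceland2k14 module, as used by the scripts in this thread

def recover_pk(Rand, total_bits, pattern, Tb_in_t, Target_pub):
    """Offset arithmetic from the quote above, plus the false-positive check."""
    candidate_pk = (Rand - total_bits + Tb_in_t) - len(pattern)
    cpub = ice.to_cpub(ice.scalar_multiplication(candidate_pk).hex())
    return candidate_pk if cpub in Target_pub else None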

Have you considered using XOR filters?

I really publish my ideas in their most basic form, so they are better understood.

newbie
Activity: 7
Merit: 0
Have you considered using XOR filters?
member
Activity: 873
Merit: 22
$$P2P BTC BRUTE.JOIN NOW ! https://uclck.me/SQPJk
Great, I'm also trying to use bit patterns.

Why only these patterns?

Bi: 13806, Pp: 010101010101010, Tb: 13821
Bi: 10889, Pp: 101010101010101, Tb: 24725
Bi: 10637, Pp: 101010101010101, Tb: 35377
Bi: 186843, Pp: 010101010101010, Tb: 222235

Where are

011011011
1001001

?

Bro, how do you calculate the full privkey range from so few examples of bits (only 4 patterns)?

Thanks
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
Are you sure it's a lightweight database? More like a Bloom filter, but secp256k1 is used instead of xxhash/murmurhash/etc.

Yes. A Bloom filter is a probabilistic data structure used to test whether an element is present in a set. However, it does not store the elements themselves; instead, it uses a series of hash functions to determine the presence of an element. Because of this, it is not possible to extract specific data, such as numbers, directly from a Bloom filter.
That's why I say this is a database: you can access and reuse the data.
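
As a toy illustration (not the filter any of the tools here actually use), a minimal Bloom filter shows the difference: you can ask whether something was added, but nothing can be read back out of it.

Code:
import hashlib

class TinyBloom:
    """Toy Bloom filter: membership queries only, nothing can be extracted."""
    def __init__(self, m_bits=1 << 20, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = TinyBloom()
bf.add(b"0339d694...")        # membership can be tested (with a false-positive rate)...
print(b"0339d694..." in bf)   # True
# ...but the stored items cannot be enumerated or recovered, unlike the Bi/Pp/Tb
# lines of the pattern database, which can be read back and reused.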
newbie
Activity: 5
Merit: 1
Are you sure it's a lightweight database? More like a Bloom filter, but secp256k1 is used instead of xxhash/murmurhash/etc.
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
https://github.com/Mcdouglas-X/Private-Key-Search-and-Ultra-Lightweight-Database-with-Public-Keys


This project implements a database designed to store interleaved bit patterns (010101...) of 15 bits or more in length. These patterns (Pp) are stored along with the number of public keys between patterns (Bi) and the total bits traversed up to the end of each pattern (Tb).


Requirements

secp256k1

https://github.com/iceland2k14/secp256k1

Database Structure

The database stores data in the following format:

Code:
Bi: 13806, Pp: 010101010101010, Tb: 13821
Bi: 10889, Pp: 101010101010101, Tb: 24725
Bi: 10637, Pp: 101010101010101, Tb: 35377
Bi: 186843, Pp: 010101010101010, Tb: 222235

This format allows the representation of thousands of public keys to be stored in just a few lines, making the database lightweight and easy to manage.

With my previous implementation 120 million keys fit into a 14 MB file and now the same keys are represented in a 165 KB file. Therefore, the file has been reduced by approximately 98.82%.
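
For illustration, here is how one stored line decodes (a hypothetical parsing snippet, following the format above; note that Bi plus the pattern length accumulates into Tb, e.g. 13806 + 15 = 13821):

Code:
line = "Bi: 13806, Pp: 010101010101010, Tb: 13821\n"
bi = int(line.split("Bi: ")[1].split(",")[0])   # public keys since the previous pattern
pp = line.split("Pp: ")[1].split(",")[0]        # the alternating bit pattern itself
tb = int(line.split("Tb: ")[1])                 # running total of bits (one per key) up to the end of the pattern
print(bi, pp, tb)   # 13806 010101010101010 13821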

Code:
#@mcdouglasx
import secp256k1 as ice
import random
import regex as re
from bitarray import bitarray
import sys

target_public_key = "0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21"

num_keys = 120000000            # total public keys to index below the target
subtract = 1                    # step between consecutive keys (1*G subtractions)
low_m = num_keys // 1000000     # number of chunks (120)
lm = num_keys // low_m          # keys processed per chunk (1,000,000)

db_name = "patt_db.txt"

patternx = re.compile(r'((10)+1|(01)+0)')   # alternating-bit runs: 1010...1 or 0101...0

def process_res(res, lm, prev_bits=None):
    # res holds lm concatenated 65-byte uncompressed public keys; each key
    # contributes one bit: the parity of its y-coordinate (even = 0, odd = 1).
    binary = bitarray()
    if prev_bits:
        binary.extend(prev_bits)

    for t in range(lm):
        segment = res[t*65:t*65+65]
        bit = '0' if int(segment.hex()[2:], 16) % 2 == 0 else '1'
        binary.append(bit == '1')

    return binary

def count_patterns(binary_bits, total_bits):
    # Record every alternating pattern of length >= 15 as one database line:
    # bits since the previous pattern (Bi), the pattern itself (Pp) and the
    # running total of bits up to the end of the pattern (Tb).
    matches = patternx.finditer(binary_bits.to01())
    last_end = 0
    for match in matches:
        pattern = match.group()
        if len(pattern) >= 15:
            bits_between = match.start() - last_end
            total_bits += bits_between + len(pattern)
            last_end = match.end()
            with open(db_name, 'a') as f:
                f.write(f"Bi: {bits_between}, Pp: {pattern}, Tb: {total_bits}\n")

    # Carry the unprocessed tail over to the next chunk so patterns that straddle
    # chunk boundaries are not lost (slicing from last_end also avoids re-counting
    # bits when the last pattern ends exactly at the end of the chunk).
    next_prev_bits = binary_bits[last_end:]

    return total_bits, next_prev_bits

print("Making DataBase")

target = ice.pub2upub(target_public_key)            # 65-byte uncompressed target pubkey
subtract_pub = ice.scalar_multiplication(subtract)  # point subtracted at every step (1*G)
prev_bits = None
total_bits = 0

for i in range(low_m):
    sys.stdout.write(f"\rprogress: {i + 1}/{low_m}")
    sys.stdout.flush()
    lm_i = lm * i
    lm_upub = ice.scalar_multiplication(lm_i)
    A1 = ice.point_subtraction(target, lm_upub)             # shift the window down by lm*i keys
    res = ice.point_loop_subtraction(lm, A1, subtract_pub)  # the lm consecutive keys below A1
    binary_bits = process_res(res, lm, prev_bits)           # carry over the tail of the last chunk
    total_bits, prev_bits = count_patterns(binary_bits, total_bits)

print("\nDone!")

Search Functionality

To find matches, the search script processes between 10,000 and 250,000 public keys per cycle (low_m). You can configure this value at your discretion;
100,000 is recommended, as it is the average number of keys between patterns.

For example, if there are 50,000 public keys between pattern A and pattern B, starting at an intermediate point between both patterns will easily lead you
to the next pattern, and the script will calculate your private key.

Code:
#@mcdouglasx
import secp256k1 as ice
import random
import regex as re
from bitarray import bitarray
import time

print("Searching Binary Patterns")

#Pk: 10056435896
#0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21

Target_pub ="0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21"
#range
start = 10000000000
end = 12000000000


with open('patt_db.txt', 'r') as Db:
    target = Db.readlines()

patternx = re.compile(r'((10)+1|(01)+0)')   # alternating-bit runs: 1010...1 or 0101...0

def process_res(res, low_m):
    # One bit per key: parity of the y-coordinate of each 65-byte uncompressed pubkey
    binary = bitarray()

    for t in range(low_m):
        segment = res[t*65:t*65+65]
        bit = '0' if int(segment.hex()[2:], 16) % 2 == 0 else '1'
        binary.append(bit == '1')

    return binary

def count_patterns(binary_bits, Rand, start_time):
    # Look for alternating patterns in the freshly generated bits and try to match
    # them against the database lines loaded into 'target'.
    matches = patternx.finditer(binary_bits.to01())
    last_end = 0
    last_match_info = None
    found_pk = None

    for match in matches:
        pattern = match.group()
        if len(pattern) >= 15:
            bits_between = match.start() - last_end
            total_bits = match.start()

            last_end = match.end()

            # A database line matches if it has the same gap (Bi) and pattern (Pp)
            X = f"Bi: {bits_between}, Pp: {pattern}"
            for t in target:
                if X in t:
                    Tb_in_t = int(t.split(", Tb: ")[1].split(",")[0])
                    # Offset arithmetic: map the database position (Tb) back to a
                    # candidate private key relative to the random starting point
                    pk = (Rand - total_bits + Tb_in_t) - len(pattern)
                    pk_f = ice.scalar_multiplication(pk).hex()
                    cpub = ice.to_cpub(pk_f)
                    if cpub in Target_pub:  # false-positive check against the target pubkey
                        found_pk = pk
                        last_match_info = f"Rand: {Rand} Bi: {bits_between}, Pp: {pattern}, Tb: {total_bits}, T: {t.strip()}, pk: {pk}"

    if last_match_info:
        pk = found_pk  # use the verified candidate, not whatever 'pk' the loop last computed
        pk_f = ice.scalar_multiplication(pk).hex()
        cpub = ice.to_cpub(pk_f)
        elapsed_time = time.time() - start_time
        print("pk:", pk)
        print("cpub:", cpub)
        print("Elapsed time:", elapsed_time, "seconds")

        with open('found.txt', 'a') as f:
            f.write(f"pk: {pk}\n")
            f.write(f"cpub: {cpub}\n")
            f.write(f"Elapsed time: {elapsed_time} seconds\n")
        exit()

low_m = 100000                                     # keys generated and scanned per cycle
subtract = 1
subtract_pub = ice.scalar_multiplication(subtract)
start_time = time.time()

while True:

    Rand = random.randint(start, end)              # random starting private key in the range
    pk = ice.scalar_multiplication(Rand)
    res = ice.point_loop_subtraction(low_m, pk, subtract_pub)  # low_m consecutive keys below Rand
    binary_bits = process_res(res, low_m)
    count_patterns(binary_bits, Rand, start_time)
    

Performance

The speed of the search depends on the size of your database. For instance, if you have a database of 100 million keys and your computer processes 1 million keys per second, you would effectively be covering around 1 billion keys per second, because each cycle of low_m operations is checked against every key represented in the database (see the rough calculation below).
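
A rough back-of-the-envelope version of that claim (illustrative numbers only):

Code:
db_keys  = 100_000_000   # keys represented in the database
low_m    = 100_000       # keys generated and scanned per search cycle
raw_rate = 1_000_000     # keys per second the machine can actually compute
# One cycle of low_m operations tests whether the random starting point falls
# anywhere inside the db_keys window, so coverage scales by db_keys / low_m.
effective = raw_rate * db_keys // low_m
print(f"{effective:,} keys per second of effective coverage")   # 1,000,000,000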

Implementation Notes

This project is an implementation of an idea and is not optimized for speed or efficiency. You can create your own implementation in C.

You can reduce the minimum pattern length in the database, which in turn lets you reduce low_m, resulting in faster search cycles at the cost of a larger database.

I have left the test database on GitHub, in case you just want to try the script.

At the moment, I don't have the computational resources to move into the world of GPU and CPU implementations in C. :(