Private Key Search and Ultra-Lightweight Database with Public Keys
https://github.com/Mcdouglas-X/Private-Key-Search-and-Ultra-Lightweight-Database-with-Public-Keys

This project implements a database designed to store interleaved bit patterns (010101...) of 15 bits or more in length. These patterns (Pp) are stored along with the number of public keys between patterns (Bi) and the total number of bits traversed up to the end of each pattern (Tb).
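As a quick illustration, here is a minimal sketch (the 24-bit stream is made up for the example) that extracts one entry from a bit stream using the same regex the scripts use:

import regex as re

# Odd-length alternating runs: 1010...1 or 0101...0, as in both scripts.
patternx = re.compile(r'((10)+1|(01)+0)')

bits = "110" + "010101010101010" + "011011"  # hypothetical 24-bit stream

last_end = 0
for m in patternx.finditer(bits):
    if len(m.group()) >= 15:          # minimum pattern length
        Bi = m.start() - last_end     # bits (keys) since the previous pattern
        Pp = m.group()                # the pattern itself
        Tb = m.end()                  # bits traversed to the end of the pattern
                                      # (cumulative across cycles in the real script)
        last_end = m.end()
        print(f"Bi: {Bi}, Pp: {Pp}, Tb: {Tb}")
# -> Bi: 3, Pp: 010101010101010, Tb: 18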
Requirements

secp256k1: https://github.com/iceland2k14/secp256k1
The scripts also use the bitarray and regex packages from PyPI (pip install bitarray regex).

Database Structure

The database stores data in the following format:
Bi: 13806, Pp: 010101010101010, Tb: 13821
Bi: 10889, Pp: 101010101010101, Tb: 24725
Bi: 10637, Pp: 101010101010101, Tb: 35377
Bi: 186843, Pp: 010101010101010, Tb: 222235
This format allows thousands of public keys to be represented in just a few lines, making the database lightweight and easy to manage.
With my previous implementation, 120 million keys occupied a 14 MB file; the same keys are now represented in a 165 KB file, a reduction of approximately 98.82%.
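Because each line is plain text with fixed field labels, reading the database back is straightforward. A minimal parsing sketch (field names as in the format above):

def parse_db_line(line):
    # "Bi: 13806, Pp: 010101010101010, Tb: 13821" -> (13806, "0101...", 13821)
    fields = dict(part.split(": ") for part in line.strip().split(", "))
    return int(fields["Bi"]), fields["Pp"], int(fields["Tb"])

with open("patt_db.txt") as db:
    entries = [parse_db_line(line) for line in db]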
#@mcdouglasx
import secp256k1 as ice
import regex as re
from bitarray import bitarray
import sys

target_public_key = "0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21"
num_keys = 120000000          # total keys to encode into the database
subtract = 1                  # step between consecutive keys
low_m = num_keys // 1000000   # number of cycles
lm = num_keys // low_m        # keys processed per cycle
db_name = "patt_db.txt"

# Odd-length alternating runs: 1010...1 or 0101...0
patternx = re.compile(r'((10)+1|(01)+0)')

def process_res(res, lm, prev_bits=None):
    # Turn lm concatenated 65-byte uncompressed points into one bit each,
    # prepending the unconsumed tail bits from the previous cycle.
    binary = bitarray()
    if prev_bits:
        binary.extend(prev_bits)
    for t in range(lm):
        segment = res[t*65:t*65+65]
        # Bit = parity of the point with its 0x04 prefix byte stripped
        # (i.e., the low bit of the y coordinate).
        bit = '0' if int(segment.hex()[2:], 16) % 2 == 0 else '1'
        binary.append(bit == '1')
    return binary

def count_patterns(binary_bits, total_bits):
    # Append every alternating pattern of >= 15 bits to the database and
    # return the running bit count plus the unconsumed tail bits.
    matches = patternx.finditer(binary_bits.to01())
    last_end = 0
    for match in matches:
        pattern = match.group()
        if len(pattern) >= 15:
            bits_between = match.start() - last_end
            total_bits += bits_between + len(pattern)
            last_end = match.end()
            with open(db_name, 'a') as f:
                f.write(f"Bi: {bits_between}, Pp: {pattern}, Tb: {total_bits}\n")
    # Carry the tail over to the next cycle so patterns that straddle a
    # cycle boundary are not lost (slicing from last_end also handles the
    # case where a pattern ends exactly at the last bit).
    next_prev_bits = binary_bits[last_end:]
    return total_bits, next_prev_bits

print("Making DataBase")
target = ice.pub2upub(target_public_key)
subtract_pub = ice.scalar_multiplication(subtract)
prev_bits = None
total_bits = 0

for i in range(low_m):
    sys.stdout.write(f"\rprogress: {i + 1}/{low_m}")
    sys.stdout.flush()
    lm_i = lm * i
    lm_upub = ice.scalar_multiplication(lm_i)
    # Start of this cycle's window: target minus the keys already covered.
    A1 = ice.point_subtraction(target, lm_upub)
    # lm consecutive points, each one subtraction of subtract_pub apart.
    res = ice.point_loop_subtraction(lm, A1, subtract_pub)
    binary_bits = process_res(res, lm, prev_bits)
    total_bits, prev_bits = count_patterns(binary_bits, total_bits)

print("\nDone!")
Search Functionality

To find matches, the search script processes between 10,000 and 250,000 public keys per cycle (low_m). You can configure this value at your discretion; 100,000 is recommended, as it is the average number of keys between patterns. For example, if there are 50,000 public keys between pattern A and pattern B, a random starting point that lands between them will walk into the next pattern within one cycle, and the script will calculate your private key.
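The private-key recovery in the search script is pure bookkeeping: both streams index keys the same way, so the shared offset cancels and pk = Rand - total_bits + Tb - len(pattern). A worked check with made-up numbers (every value below is hypothetical):

# Hypothetical values for illustration only.
x    = 1_000_000   # the private key the database implicitly encodes
Tb   = 13_821      # cumulative bits at the end of the matched pattern
plen = 15          # pattern length

# In the database stream, the pattern starts Tb - plen bits in; with the
# convention that bit j encodes key x - j - 1, its first bit encodes:
pattern_start_key = x - (Tb - plen) - 1    # 986_193

# A search cycle starting at Rand sees bit s encode key Rand - s - 1,
# so the same pattern is found at offset:
Rand = 990_000
total_bits = Rand - pattern_start_key - 1  # 3_806

pk = Rand - total_bits + Tb - plen
assert pk == x  # 990_000 - 3_806 + 13_821 - 15 == 1_000_000

The "- 1" in the bit-to-key convention is an assumption about point_loop_subtraction's indexing, but it cancels out of the final formula because both scripts use the same call.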
#@mcdouglasx
import secp256k1 as ice
import random
import regex as re
from bitarray import bitarray
import time
import sys

print("Searching Binary Patterns")

# Test key for the example database:
# Pk: 10056435896
# 0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21
Target_pub = "0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21"

# Search range
start = 10000000000
end = 12000000000

with open('patt_db.txt', 'r') as Db:
    target = Db.readlines()

patternx = re.compile(r'((10)+1|(01)+0)')

def process_res(res, low_m):
    # Turn low_m concatenated 65-byte uncompressed points into one parity bit each.
    binary = bitarray()
    for t in range(low_m):
        segment = res[t*65:t*65+65]
        bit = '0' if int(segment.hex()[2:], 16) % 2 == 0 else '1'
        binary.append(bit == '1')
    return binary

def count_patterns(binary_bits, Rand, start_time):
    matches = patternx.finditer(binary_bits.to01())
    last_end = 0
    found = None
    for match in matches:
        pattern = match.group()
        if len(pattern) >= 15:
            bits_between = match.start() - last_end
            total_bits = match.start()
            last_end = match.end()
            X = f"Bi: {bits_between}, Pp: {pattern}"
            for t in target:
                if X in t:
                    Tb_in_t = int(t.split(", Tb: ")[1].split(",")[0])
                    # Align the database stream with the search stream.
                    pk = (Rand - total_bits + Tb_in_t) - len(pattern)
                    cpub = ice.to_cpub(ice.scalar_multiplication(pk).hex())
                    # Verify the candidate key; (Bi, Pp) pairs can collide.
                    if cpub == Target_pub:
                        found = pk
    if found is not None:
        cpub = ice.to_cpub(ice.scalar_multiplication(found).hex())
        elapsed_time = time.time() - start_time
        print("pk:", found)
        print("cpub:", cpub)
        print("Elapsed time:", elapsed_time, "seconds")
        with open('found.txt', 'a') as f:
            f.write(f"pk: {found}\n")
            f.write(f"cpub: {cpub}\n")
            f.write(f"Elapsed time: {elapsed_time} seconds\n")
        sys.exit()

low_m = 100000
subtract = 1
subtract_pub = ice.scalar_multiplication(subtract)
start_time = time.time()

while True:
    Rand = random.randint(start, end)
    pk_point = ice.scalar_multiplication(Rand)
    res = ice.point_loop_subtraction(low_m, pk_point, subtract_pub)
    binary_bits = process_res(res, low_m)
    count_patterns(binary_bits, Rand, start_time)
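With the test database from the repository, the commented test key (Pk: 10056435896) lies inside the start/end range above, so the script should eventually draw a Rand whose window overlaps a stored pattern, print pk, cpub, and the elapsed time, append the same details to found.txt, and exit.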
Performance

The speed of the search depends on the size of your database. For instance, if you have a database representing 100 million keys and your computer processes 1 million keys per second, you are effectively covering around 1 billion keys per second.
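One way to read that figure (my interpretation, since the text does not spell it out): each cycle scans low_m consecutive keys, but a hit can align the window against any key the database covers, so the effective coverage scales by db_keys / low_m:

db_keys  = 100_000_000  # keys represented by the database
raw_rate = 1_000_000    # keys your machine processes per second
low_m    = 100_000      # keys scanned per search cycle

effective_rate = raw_rate * db_keys // low_m
print(effective_rate)   # 1_000_000_000 -> the ~1 billion keys/s in the text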
Implementation Notes

This project is an implementation of an idea and is not optimized for speed or efficiency. You can create your own implementation in C.
You can reduce the minimum pattern length in the database, which in turn lets you reduce low_m, resulting in faster search cycles at the cost of a larger database; see the sketch below.
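To see why a shorter minimum length densifies the database: in an idealized stream of uniformly random bits, the frequency of alternating runs of length at least L falls by half for every extra required bit, so dropping from 15 to 13 bits yields roughly 4x as many patterns (treating real key parities as uniform is an assumption). A quick empirical check:

import random
import regex as re

patternx = re.compile(r'((10)+1|(01)+0)')

# Idealized model: uniformly random bits stand in for key parities.
bits = "".join(random.choice("01") for _ in range(2_000_000))
runs = [len(m.group()) for m in patternx.finditer(bits)]

for L in (15, 13):
    count = sum(1 for r in runs if r >= L)
    print(L, count)  # expect the L=13 count to be roughly 4x the L=15 count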
A test database is included on GitHub if you just want to try the script.
At the moment, I don't have the computational resources to move this into the world of GPUs and CPUs in C.