Topic: lightweight database, for brute force using publickeys-32Mk =3.81MB(secp256k1) - page 4.

copper member
Activity: 205
Merit: 1

Code:
This is the public key: f5c03157f4a489ed71c0df0f37799431b3f432654a9bd6efdbb79ba767b62e81
I need to find this private key: 821480591424854

Private key found!!!
Collision!
258530638003543
604085cce892dbb8bf9e70959ef178bc78fb123caaa4e9b1f3521139542da57f

For example, I got a collision; how do I calculate the private key?

The key 821480591424854 is in the range 2^49 - 2^50

Try with these parameters:

Code:
start_key = 1
num_public_keys = 2**50
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 2**24

Did you use bits_to_store = 64? And the other parameters?



Code:
start_key = 2**49
num_public_keys = 2**50
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 2**20

In general, I already figured it out: the difference is 2**49.
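A quick check with the numbers quoted above (treating the collision value as an index relative to start_key is my interpretation, not something the scripts state):

Code:
# The gap between the wanted key and the collision value is exactly 2**49 - 1,
# i.e. start_key - 1 for start_key = 2**49 (interpretation, not taken from the scripts).
target_priv   = 821480591424854
collision_key = 258530638003543
start_key     = 2**49

offset = target_priv - collision_key
print(offset, offset == start_key - 1)                 # 562949953421311 True
print(collision_key + start_key - 1 == target_priv)    # True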

By the way, how do you calculate this parameter? You have to find the golden mean:
When you set it above 2**26, the search script eats up all the RAM.
If you decrease it, the size of the database increases.
So I use rate_of_key_to_generate = 2**20.
legendary
Activity: 1932
Merit: 2077
Ah, I uploaded a version of create_database that doesn't match with that version of the search_pk script
Now it should work:


It works for me up to 0x81 and after that it does not seem to find any keys.
I tried deleting and recreating the database from 0x80 but still nothing found.



0x80 = 128

What are your parameters?

Like these?
Code:
start_key = 1
num_public_keys = 2**32
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 2**20
legendary
Activity: 1932
Merit: 2077

Code:
This is the public key: f5c03157f4a489ed71c0df0f37799431b3f432654a9bd6efdbb79ba767b62e81
I need to find this private key: 821480591424854

Private key found!!!
Collision!
258530638003543
604085cce892dbb8bf9e70959ef178bc78fb123caaa4e9b1f3521139542da57f

For example, I got a collision; how do I calculate the private key?

The key 821480591424854 is in the range 2^49 - 2^50

Try with these parameters:

Code:
start_key = 1
num_public_keys = 2**50
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 2**24

Did you use bits_to_store = 64? And the other parameters?
copper member
Activity: 205
Merit: 1

The first version of the script worked, the second does not find matches.
There may be an error in the code.

On my PC, 100 keys found out of 100, ....  Huh

Did you set the same parameters in both scripts?

About an hour and a half ago, I copied both scripts, created a database and launched the search script. I didn't change anything in the parameters.

Ah, I uploaded a version of create_database that doesn't match with that version of the search_pk script

Now it should work:


To perform multiple searches, on Linux:

Code:
time for i in {1..100}; do python3 search_pk_arulbero.py; done



Code:
This is the public key: f5c03157f4a489ed71c0df0f37799431b3f432654a9bd6efdbb79ba767b62e81
I need to find this private key: 821480591424854

Private key found!!!
Collision!
258530638003543
604085cce892dbb8bf9e70959ef178bc78fb123caaa4e9b1f3521139542da57f

For example, I got a collision; how do I calculate the private key?
member
Activity: 873
Merit: 22
There are many options when we work with bits:
we could use this with bits.

PUBKEY - RANDOM NUMBER = create 1024 pubs, as in my post (1 bit per key).

Repeat 1 million times.

We save a database that we will call "A",
where all the pubs and the random numbers from the random subtractions are saved.

We save a database that we will call "B",
where only the bits obtained are saved.

We scan with a 64-bit collision margin.

If there is a match,
we identify the pub found by counting the bits in the DB and using % 1024 to calculate the PK found (because each operation is different and 1024 was the number of sequential keys for each pub).

We would basically have 1 million mini-DBs of 1024 keys stored in one 1,000,000 x 1024-bit database,
with the difference that they would not be in sequence and could cover the target range randomly.

Then we look that matching pub up in database "A" and add its RANDOM NUMBER back to obtain the pk of the main target.
Did you ever tinker or mess with your multi-pub binary and search scripts?
That would work as well.
Generate pubs and offsets (the random number subtracted from the original pubkey), save them in a file, load the newly generated pubs into another, and run the multi-pub script against the pub-only file.

added binary version.

Thank you very much, Mister ;!)
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
There are many options when we work with bits:
we could use this with bits.

PUBKEY - RANDOM NUMBER = create 1024 pubs, as in my post (1 bit per key).

Repeat 1 million times.

We save a database that we will call "A",
where all the pubs and the random numbers from the random subtractions are saved.

We save a database that we will call "B",
where only the bits obtained are saved.

We scan with a 64-bit collision margin.

If there is a match,
we identify the pub found by counting the bits in the DB and using % 1024 to calculate the PK found (because each operation is different and 1024 was the number of sequential keys for each pub).

We would basically have 1 million mini-DBs of 1024 keys stored in one 1,000,000 x 1024-bit database,
with the difference that they would not be in sequence and could cover the target range randomly.

Then we look that matching pub up in database "A" and add its RANDOM NUMBER back to obtain the pk of the main target.
Did you ever tinker or mess with your multi-pub binary and search scripts?
That would work as well.
Generate pubs and offsets (the random number subtracted from the original pubkey), save them in a file, load the newly generated pubs into another, and run the multi-pub script against the pub-only file.

added binary version.
member
Activity: 122
Merit: 36
Ah, I uploaded a version of create_database that doesn't match with that version of the search_pk script
Now it should work:


It works for me up to 0x81 and after that it does not seem to find any keys.
I tried deleting and recreating the database from 0x80 but still nothing found.

full member
Activity: 1162
Merit: 237
Shooters Shoot...
There are many options when we work with bits:
we could use this with bits.

PUBKEY - RANDOM NUMBER = create 1024 pubs, as in my post (1 bit per key).

Repeat 1 million times.

We save a database that we will call "A",
where all the pubs and the random numbers from the random subtractions are saved.

We save a database that we will call "B",
where only the bits obtained are saved.

We scan with a 64-bit collision margin.

If there is a match,
we identify the pub found by counting the bits in the DB and using % 1024 to calculate the PK found (because each operation is different and 1024 was the number of sequential keys for each pub).

We would basically have 1 million mini-DBs of 1024 keys stored in one 1,000,000 x 1024-bit database,
with the difference that they would not be in sequence and could cover the target range randomly.

Then we look that matching pub up in database "A" and add its RANDOM NUMBER back to obtain the pk of the main target.
Did you ever tinker or mess with your multi-pub binary and search scripts?
That would work as well.
Generate pubs and offsets (the random number subtracted from the original pubkey), save them in a file, load the newly generated pubs into another, and run the multi-pub script against the pub-only file.
member
Activity: 239
Merit: 53
New ideas will be criticized and then admired.
There are many options when we work with bits:
we could use this with bits.

PUBKEY - RANDOM NUMBER = create 1024 pubs, as in my post (1 bit per key).

Repeat 1 million times.

We save a database that we will call "A",
where all the pubs and the random numbers from the random subtractions are saved.

We save a database that we will call "B",
where only the bits obtained are saved.

We scan with a 64-bit collision margin.

If there is a match,
we identify the pub found by counting the bits in the DB and using % 1024 to calculate the PK found (because each operation is different and 1024 was the number of sequential keys for each pub).

We would basically have 1 million mini-DBs of 1024 keys stored in one 1,000,000 x 1024-bit database,
with the difference that they would not be in sequence and could cover the target range randomly.

Then we look that matching pub up in database "A" and add its RANDOM NUMBER back to obtain the pk of the main target.
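Below is a minimal, self-contained sketch (not the thread's actual scripts) of the "A"/"B" bookkeeping described above. Assumptions: integer private keys stand in for EC points (subtracting a random number from the pubkey corresponds to subtracting it from the private key), the 1 bit per key is simulated with a hash where the real scheme would derive it from the public key, and the counts are scaled down so it runs in seconds.

Code:
# Simulation of the "A"/"B" database idea, scaled down.
# NOTE: integer private keys stand in for EC points, and key_bit() is a
# stand-in for the 1 bit that would really be taken from the public key.
import hashlib
import random

N_SUBTRACTIONS = 1_000      # "repeat 1 million times" in the post, reduced here
KEYS_PER_SUB   = 1024       # 1024 sequential keys per random subtraction
WINDOW         = 64         # the 64-bit collision margin
RANGE_BITS     = 40         # toy target range

def key_bit(k: int) -> int:
    # Pseudo-random bit per key (simulates e.g. one bit of the x coordinate of k*G).
    return hashlib.sha256(k.to_bytes(32, "big")).digest()[0] & 1

target_priv = random.randrange(2**(RANGE_BITS - 1), 2**RANGE_BITS)  # the unknown key being modelled

db_a = []   # database "A": the random number used for each subtraction
db_b = []   # database "B": 1024 bits per subtraction, 1 bit per key
for _ in range(N_SUBTRACTIONS):
    r = random.randrange(1, 2**(RANGE_BITS - 2))
    start = target_priv - r                      # "PUBKEY - RANDOM NUMBER", done on privkeys here
    db_a.append(r)
    db_b.append([key_bit(start + i) for i in range(KEYS_PER_SUB)])

# Scan phase: take WINDOW consecutive bits generated from a *known* starting key
# and look for them in "B".  The known start is placed inside mini-DB 0 on purpose,
# purely to exercise the bookkeeping and the "% 1024" recovery step.
known_start = (target_priv - db_a[0]) + 100
window = [key_bit(known_start + i) for i in range(WINDOW)]

for row, bits in enumerate(db_b):
    for j in range(KEYS_PER_SUB - WINDOW + 1):
        if bits[j:j + WINDOW] == window:
            # offset j inside the mini-DB is the "% 1024" step:
            # row start + j == known_start, and row start == target - db_a[row],
            # so adding the random number back recovers the target.
            recovered = known_start - j + db_a[row]
            print(f"match in mini-DB {row} at offset {j}: recovered {recovered}")
            assert recovered == target_priv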
full member
Activity: 1162
Merit: 237
Shooters Shoot...
@WanderingPhilospher:


I think it is time to publish your modified scripts for creating the database and for searching in it.

I think that to optimise it better you need to use the mmap library.

With 2 GB files it takes me up to 2 seconds to find the string.
Mine only differs from the OP's in that I don't use the BitArray; I stick to bytes.
Trade-off: my DB is 4 times as large, but the search is faster.

Does mmap increase search time?

With a 2GB DB file, I could find a key in less than a second (in the smaller ranges we've been discussing here).
I have a DB that is 488MB and contains 1,000,000,000 keys; that means I can check an entire 2^29.89 range in less than a second.
So at 2GB, roughly 4 times that coverage, a 2^31.89 range could be checked in less than a second.
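A small sketch of the mmap suggestion quoted above, under explicit assumptions: the file name, the fixed 8-byte record layout and the helper are made up for illustration, not taken from the actual scripts in this thread.

Code:
# Search a raw fixed-size-record database file with mmap, without loading it all into RAM.
import mmap

DB_FILE     = "database_pk.bin"   # hypothetical file of fixed-size records
RECORD_SIZE = 8                   # 64 bits stored per key, as with bytes_to_store = 8

def find_record(needle: bytes, path: str = DB_FILE) -> int:
    """Return the record index of `needle` in the DB, or -1 if it is absent."""
    assert len(needle) == RECORD_SIZE
    with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        pos = 0
        while True:
            pos = mm.find(needle, pos)
            if pos == -1:
                return -1
            if pos % RECORD_SIZE == 0:   # accept only hits aligned to a record boundary
                return pos // RECORD_SIZE
            pos += 1                     # unaligned substring hit, keep looking

# Example with a hypothetical 64-bit key fingerprint:
# idx = find_record(bytes.fromhex("604085cce892dbb8"))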
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Then:

you have a DB full of DP 0 bits (no DP), wilds, equally spaced

when you run tames, do you want to check for collisions at every step using the idea of this thread?
No, not necessarily. I have other ideas for checking collisions that don't require a check at every step, but it would work.

And if the wild DPs (DP bits > 0) are not equally spaced, how can you perform a check against the wild DB using 1 bit per key?

During creation of the DBs:
pubkey - create
pubkey - some random number - create
pubkey - some random number again - create
..........
pubkey - some random number again - create

While each creation is evenly spaced, we have several creations (could be hundreds or thousands; it's unlimited).

I am trying to think of ways to lessen the space trade-off of traditional Kangaroo, that's all. I'm sure you have ideas.
legendary
Activity: 1932
Merit: 2077
Quote
You (not me) are comparing two different types of sets, but you can't apply the idea of this thread to the DP keys, because they are not equally spaced.

Lol, ok.

With Kangaroo, it does not matter if they are or are not equally spaced.

However, if you think the keys stored in a Kangaroo DB must NOT be equally spaced, then that is also achievable with this script.

You just mimic different starting keys, like Kangaroo does.

To me, I have a DB full of DP 0 bits, wilds.

Now I run only tames, and check for collisions. In my mind, it does not matter if the keys in the DB are equally spaced or not, but that is easily achieved by offsetting the target pubkey before running DB creation.

Then:

you have a DB full of DP 0 bits (no DP), wilds, equally spaced

when you run tames, do you want to check for collisions at every step using the idea of this thread?

And if the wild DPs (DP bits > 0) are not equally spaced, how can you perform a check against the wild DB using 1 bit per key?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Quote
You (not me) are comparing two different types of sets, but you can't apply the idea of this thread to the DP keys, because they are not equally spaced.

Lol, ok.

With Kangaroo, it does not matter if they are or are not equally spaced.

However, if you think the keys stored in a Kangaroo DB must NOT be equally spaced, then that is also achievable with this script.

You just mimic different starting keys, like Kangaroo does.

To me, I have a DB full of DP 0 bits, wilds.

Now I run only tames, and check for collisions. In my mind, it does not matter if the keys in the DB are equally spaced or not, but that is easily achieved by offsetting the target pubkey before running DB creation.

legendary
Activity: 1932
Merit: 2077
Quote

But the distance between 2 keys must be the same, otherwise I don't understand how you can know if a key is in your database.
And the DP keys in the DP database are not equally spaced.

I don’t understand.
The keys in DB are/can be equally spaced.
DP keys aren’t equally spaced because points with DPs are not equally spread.

Ok, we agree on these facts.


But you said:


For #115, with a DP size of 25, I had 2^33.5 points; a file size of over 350GB.

I have a file with 2^36 keys stored, but at only 8GB. Do you see the difference?

You (not me) are comparing two different types of sets, but you can't apply the idea of this thread to the DP keys, because they are not equally spaced.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Quote

But the distance between 2 keys must be the same, otherwise I don't understand how you can know if a key is in your database.
And the DP keys in the DP database are not equally spaced.

I don’t understand.
The keys in DB are/can be equally spaced.
DP keys aren’t equally spaced because points with DPs are not equally spread.

Imagine running Kangaroo with DP 0 and space wasn’t an issue.
legendary
Activity: 1932
Merit: 2077
Quote
Yes, if you are trying to reduce the size of a DP database, the task is different.

You want to know if a certain DP key is in the DP database as fast as possible, right? Then you need a small-size and fast search algorithm that works with a group of non-consecutive keys.

Your 2^33.5 keys are spread in a 2^115 space, not in a 2^34 space. The task is very different.

Who said my keys or this script's keys are limited to a 2^34 space?

I can spread them out as little or as much as wanted.

Example:
I can generate 2^34 keys spread out every 1000000000 keys
or
I can mimic Kangaroo jumps; spread the keys out every (for #130) 130 / 2 + 1 = 66; so I could spread the keys out every 2^66 key.

For the DB that has 2^36 keys stored in it, the search script generates 64 subs and checks the entire DB in 1 second. I'm not concerned about speed when a DB has over 68 billion keys in it. It's a lot faster than the merging of files to check for a collision (JLP's Kangaroo version).


But the distance between 2 keys must be the same, otherwise I don't understand how you can know if a key is in your database.
And the DP keys in the DP database are not equally spaced.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Quote
Yes, if you are trying to reduce the size of a DP database, the task is different.

You want to know if a certain DP key is in the DP database as fast as possible, right? Then you need a small-size and fast search algorithm that works with a group of non-consecutive keys.

Your 2^33.5 keys are spread in a 2^115 space, not in a 2^34 space. The task is very different.

Who said my keys or this script's keys are limited to a 2^34 space?

I can spread them out as little or as much as wanted.

Example:
I can generate 2^34 keys spread out every 1000000000 keys
or
I can mimic Kangaroo jumps; spread the keys out every (for #130) 130 / 2 + 1 = 66; so I could spread the keys out every 2^66 key.

For the DB that has 2^36 keys stored in it, the search script generates 64 subs and checks the entire DB in 1 second. I'm not concerned about speed when a DB has over 68 billion keys in it. It's a lot faster than the merging of files to check for a collision (JLP's Kangaroo version).
legendary
Activity: 1932
Merit: 2077
Quote
1 bit for 2^32 keys is better than 64 bits for 2^32/2^20 = 2^12 keys.

Less size, less search time, 100% of keys found.

To speed up the search, I need to store more information per byte, more density of information/work per byte stored.

For example, if I generated all 2^44 keys and stored only the keys whose first 20 bits are 0, I would get a database of the same size as a database created with 1 key per million.

But the search would be faster, because I would need to check only 1 key against the database instead of 1 million.

I think you are looking at it from a small interval size. I am looking at it from a larger standpoint.

For #115, with a DP size of 25, I had 2^33.5 points; a file size of over 350GB.

I have a file with 2^36 keys stored, but at only 8GB. Do you see the difference?

We are looking at using a smaller DB size in different ways.

If you ran a DP of 20 in a 2^44 range, you would wind up with roughly 2^24 keys stored.

As far as finding DPs, I have the fastest script, not on the market lol. GPU script. I can find all DPs (of any size), in a 2^44 range in a little less than 16 minutes.

Yes, if you are trying to reduce the size of a DP database, the task is different.

You want to know if a certain DP key is in the DP database as fast as possible, right? Then you need a small-size and fast search algorithm that works with a group of non-consecutive keys.

Your 2^33.5 keys are spread in a 2^115 space, not in a 2^34 space. The task is very different.

You don't need the private key; you only need to know if you have already hit a DP in the database, right?
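Worked numbers for the trade-off arulbero describes in the quote earlier in this post (a rough sketch; the entry and lookup counts just restate the quoted reasoning):

Code:
# Both databases end up with about the same number of stored entries,
# but the lookup cost per target differs.
range_size = 2**44      # keys generated
dp_bits    = 20         # variant 1: store only keys whose first 20 bits are 0
step       = 2**20      # variant 2: store 1 key per ~1 million (equally spaced)

dp_entries     = range_size // 2**dp_bits   # ~2^24 stored keys
spaced_entries = range_size // step         # ~2^24 stored keys

lookups_dp     = 1      # a candidate key either is a DP or it is not
lookups_spaced = step   # up to 2^20 candidates must be derived to hit a stored key

print(dp_entries, spaced_entries, lookups_dp, lookups_spaced)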
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Quote
1 bit for 2^32 keys is better than 64 bits for 2^32/2^20 = 2^12 keys.

Less size, less search time, 100% of keys found.

To speed up the search, I need to store more information per byte, more density of information/work per byte stored.

For example, if I generated all 2^44 keys and stored only the keys whose first 20 bits are 0, I would get a database of the same size as a database created with 1 key per million.

But the search would be faster, because I would need to check only 1 key against the database instead of 1 million.

I think you are looking at it from a small interval size. I am looking at it from a larger standpoint.

For #115, with a DP size of 25, I had 2^33.5 points; a file size of over 350GB.

I have a file with 2^36 keys stored, but at only 8GB. Do you see the difference?

We are looking at using a smaller DB size in different ways.

If you ran a DP of 20 in a 2^44 range, you would wind up with roughly 2^24 keys stored.

As far as finding DPs, I have the fastest script, not on the market lol. GPU script. I can find all DPs (of any size), in a 2^44 range in a little less than 16 minutes.
legendary
Activity: 1932
Merit: 2077

The first version of the script worked, the second does not find matches.
There may be an error in the code.

On my PC, 100 keys found out of 100, ....  Huh

Did you set the same parameters in both scripts?

About an hour and a half ago, I copied both scripts, created a database and launched the search script. I didn't change anything in the parameters.

Ah, I uploaded a version of create_database that doesn't match with that version of the search_pk script

Now it should work:


To perform multiple searches, on Linux:

Code:
time for i in {1..100}; do python3 search_pk_arulbero.py; done