
Topic: BitCrack - A tool for brute-forcing private keys - page 8. (Read 76581 times)

full member
Activity: 1162
Merit: 237
Shooters Shoot...
I have been testing on my own modified versions of VS and KeyHunt.

Yeah, Bitcrack just froze with 10mil addresses loaded...

Is this your version?

https://github.com/WanderingPhilosopher/VanBitCrakcenS
Yes, that is an older version of VBC. However, I did not use it when testing address limits.
I have another VS version that will load a binary file straight into a bloom filter.
Keyhunt, by albert, will also accept 200M keys; it is CPU only, but its random function is great. You can also run it sequentially.
newbie
Activity: 33
Merit: 0
I have been testing on my own modified versions of VS and KeyHunt.

Yeah, Bitcrack just froze with 10mil addresses loaded...

Is this your version?

https://github.com/WanderingPhilosopher/VanBitCrakcenS
full member
Activity: 1162
Merit: 237
Shooters Shoot...
It eats up 4,300 MB (4.3 GB) of RAM. Binary to Bloom; I am sure that if it were loaded via a text file, it would eat up a lot more. Also, I doubt this will work with Bitcrack or any VanitySearch forks.

Why wouldn't it work? Because those are set to use .txt files instead? What would you recommend using instead of Bitcrack & co.?


I remember (long ago) trying to load something like 32 million addresses in the original VS or a fork of it, and it would load them but then not run. I think the same was true for Bitcrack. I would say try them and see; maybe something has changed.

I have been testing on my own modified versions of VS and KeyHunt.
newbie
Activity: 33
Merit: 0
It eats up 4,300 MB (4.3 GB) of RAM. Binary to Bloom; I am sure that if it were loaded via a text file, it would eat up a lot more. Also, I doubt this will work with Bitcrack or any VanitySearch forks.

Why wouldn't it work? Because those are set to use .txt files instead? What would you recommend using instead of Bitcrack & co.?

legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
How does the GPU memory play a part?

If you want to load 1 billion addresses and work on them at the same time (in parallel), there isn't enough device memory for everything on current GPUs, according to my calculations above. So you'd have to break the set into parts and issue more copy calls, which will affect performance.

Every time NVIDIA increases GPU memory it brings two kinds of speed benefits: you can process more data on the GPU at the same time, and it will be faster (as CUDA cores usually become faster in the same generation); and, as a direct consequence, there's less latency from excessive CopyHostToDevice calls and vice versa.
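
To make that arithmetic concrete, here is a minimal sketch (the 8 GB card and the 25% usable share are illustrative assumptions, with 20 bytes per hash160 target) of how device memory caps the resident target count and forces chunking:

Code:
# Back-of-the-envelope check: how many hash160 targets fit in device memory,
# and how many copy passes a larger target set would force.
TARGET_BYTES = 20          # one hash160 digest per address
VRAM_BYTES = 8 * 1024**3   # assumed 8 GB card
USABLE = 0.25              # assume most VRAM is reserved for kernels/buffers

targets = 1_000_000_000
need = targets * TARGET_BYTES
budget = int(VRAM_BYTES * USABLE)

print(f"needed : {need / 1024**3:.1f} GiB for {targets:,} targets")
print(f"budget : {budget / 1024**3:.1f} GiB usable on device")

chunks = -(-need // budget)  # ceiling division
print(f"=> split into {chunks} chunks, each needing its own copy pass")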
full member
Activity: 1162
Merit: 237
Shooters Shoot...
200 million addresses loaded and working with CPU; GPU untested.
It eats up 4,300 MB (4.3 GB) of RAM. Binary to Bloom; I am sure that if it were loaded via a text file, it would eat up a lot more. Also, I doubt this will work with Bitcrack or any VanitySearch forks.

Very impressive stats, but I highly doubt you will get any higher than this if you test on GPU, because of the GPU's architecture: its memory is separate from the rest of the system RAM.

So you could have a 96 GB system (e.g. a very recent MacBook Pro with an external NVIDIA GPU; does that even exist??), but the GPU will only have 8 or 16 gigs total, which puts a ceiling on the number of addresses. That's kinda sad, as these suckers can easily do 500x the search performance of a single CPU socket.
How does the GPU memory play a part?
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
200 million addresses loaded and working with CPU; GPU untested.
It eats up 4,300 MB (4.3 GB) of RAM. Binary to Bloom; I am sure that if it were loaded via a text file, it would eat up a lot more. Also, I doubt this will work with Bitcrack or any VanitySearch forks.

Very impressive stats, but I highly doubt you will get any higher than this if you test on GPU, because of the GPU's architecture: its memory is separate from the rest of the system RAM.

So you could have a 96 GB system (e.g. a very recent MacBook Pro with an external NVIDIA GPU; does that even exist??), but the GPU will only have 8 or 16 gigs total, which puts a ceiling on the number of addresses. That's kinda sad, as these suckers can easily do 500x the search performance of a single CPU socket.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Legacy Bitcoin addresses are about 34 characters long (bech32 ones run up to 62), so if you're dealing with 1 billion of them you are looking at a 35-63 GB text file (including the newline, and if you're using Windows there's also a carriage return before the newline). As long as you are not opening that thing in Notepad and you've got 64 GB of RAM to spare, you should not have any problems with memory.

Thanks, will give it a try in a few days.
My largest test to date:

Code:
Loading      : 100 %
Loaded       : 100,000,001 Bitcoin addresses

Edit: It would work and run with CPU only, but not with GPU...
Edit 2: It works with 100 million addresses on CPU and GPU...

Edit 3:
Code:
Loading      : 100 %
Loaded       : 200,000,001 Bitcoin addresses
200 million addresses loaded and working with CPU; tested on GPU, it loads and runs with 200 million addresses.
It eats up 4,300 MB (4.3 GB) of RAM. Binary to Bloom; I am sure that if it were loaded via a text file, it would eat up a lot more. Also, I doubt this will work with Bitcrack or any VanitySearch forks.
newbie
Activity: 33
Merit: 0
Legacy Bitcoin addresses are about 34 characters long (bech32 ones run up to 62), so if you're dealing with 1 billion of them you are looking at a 35-63 GB text file (including the newline, and if you're using Windows there's also a carriage return before the newline). As long as you are not opening that thing in Notepad and you've got 64 GB of RAM to spare, you should not have any problems with memory.

Thanks, will give it a try in a few days.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
I want to test maybe 10-15x more, close to 1B addresses. Machine specs: 50 TB SSDs, 128 GB RAM. How should I proceed, just generate a large .txt file and see if it works? I doubt .txt files can handle that much data...
Yes, small key sizes, as in 2^35 - 2^40. How much of an impact does the size of the key have? As in, the smaller the key, the larger the amount of xpoints or addresses the tool can work with? Or even if the key is really small, like 123*G, is it still going to search forever within a large dataset of addresses...

Legacy Bitcoin addresses are about 34 characters long (bech32 ones run up to 62), so if you're dealing with 1 billion of them you are looking at a 35-63 GB text file (including the newline, and if you're using Windows there's also a carriage return before the newline). As long as you are not opening that thing in Notepad and you've got 64 GB of RAM to spare, you should not have any problems with memory.
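
A quick sanity check of that estimate (34 and 62 characters are the typical legacy and bech32 lengths; everything else as assumed below):

Code:
# Rough size of a text file of Bitcoin addresses, one per line.
def file_size_gb(n, addr_len, windows=False):
    newline = 2 if windows else 1      # CR+LF on Windows, LF elsewhere
    return n * (addr_len + newline) / 1e9

for length in (34, 62):
    print(f"{length}-char addresses, 1e9 lines: "
          f"{file_size_gb(1_000_000_000, length):.0f} GB "
          f"({file_size_gb(1_000_000_000, length, windows=True):.0f} GB on Windows)")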
newbie
Activity: 33
Merit: 0

You would have to give me concrete examples.

work with = I've tested with 60 million xpoints, so I know it can do that many; how many do you want to run/test?
small key sizes = what do you mean? Searching a 40-bit range, a 48-bit range, etc.? Are you talking about ranges, or small key sizes as in private key sizes?

addresses/xpoints = addresses would be converted to hash160 (20 bytes), which is smaller than xpoints (32 bytes), so I would imagine that, run on the same system with the same amount of RAM, you could load more addresses/hash160s than xpoints.


I want to test maybe 10-15x more, close to 1B addresses. Machine specs: 50 TB SSDs, 128 GB RAM. How should I proceed, just generate a large .txt file and see if it works? I doubt .txt files can handle that much data...
Yes, small key sizes, as in 2^35 - 2^40. How much of an impact does the size of the key have? As in, the smaller the key, the larger the amount of xpoints or addresses the tool can work with? Or even if the key is really small, like 123*G, is it still going to search forever within a large dataset of addresses...
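
For a rough feel of how the range size dominates search time: with a bloom filter, checking millions of targets costs about the same per key as checking one, so it's the range that matters. A minimal sketch (the 1 Gkey/s figure is just an assumed single-GPU speed):

Code:
# Time to sweep an entire range at an assumed speed. The number of
# target addresses barely changes these figures when a bloom filter
# makes each membership test ~O(1).
SPEED = 1_000_000_000  # assumed keys/second on one GPU

for bits in (35, 40, 64, 69):
    seconds = 2**bits / SPEED
    print(f"2^{bits}: {seconds:,.0f} s (~{seconds / 86400:,.0f} days)")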
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Understood... I posted a quite unrealistic scenario. Let's dial it back down to earth and try again.

1. How many xpoints could BitCrack / KeyHunt, etc. work with, saved in a dataset, to find small keys within a reasonable amount of time (less than 24 hours)? We can assume that the application is running on a high-end computer with a large amount of RAM and disk space. Does anyone have any experience with this? Would you be able to provide the results and the specifications of the machine used?

2. Same question for using addresses instead of points.

Thank you!!

You would have to give me concrete examples.

work with = I've tested with 60 million xpoints, so I know it can do that many; how many do you want to run/test?
small key sizes = what do you mean? Searching a 40-bit range, a 48-bit range, etc.? Are you talking about ranges, or small key sizes as in private key sizes?

addresses/xpoints = addresses would be converted to hash160 (20 bytes), which is smaller than xpoints (32 bytes), so I would imagine that, run on the same system with the same amount of RAM, you could load more addresses/hash160s than xpoints.
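
As a concrete illustration of the hash160 point, here is a minimal, self-contained sketch (standard base58check decoding; the example is the well-known genesis address):

Code:
# Decode a base58check address to its 20-byte hash160. This is why an
# address list stores more compactly than xpoints: 20 bytes per target
# instead of a 32-byte x coordinate.
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def address_to_hash160(addr):
    num = 0
    for ch in addr:
        num = num * 58 + B58.index(ch)
    raw = num.to_bytes(25, "big")   # version(1) + hash160(20) + checksum(4)
    payload, checksum = raw[:-4], raw[-4:]
    assert hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum
    return payload[1:]              # strip the version byte

h160 = address_to_hash160("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa")
print(h160.hex(), len(h160), "bytes")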
newbie
Activity: 33
Merit: 0
Understood... I posted a quite unrealistic scenario. Let's dial it back down to earth and try again.

1. How many xpoints could BitCrack / KeyHunt, etc. work with, saved in a dataset, to find small keys within a reasonable amount of time (less than 24 hours)? We can assume that the application is running on a high-end computer with a large amount of RAM and disk space. Does anyone have any experience with this? Would you be able to provide the results and the specifications of the machine used?

2. Same question for using addresses instead of points.

Thank you!!
copper member
Activity: 42
Merit: 67
But for the example above:  2^69 / 55,246,870 = 10,684,692,370,060; now take that and multiply by 1.72 GB = 18,377,670,876,503 GB; double all numbers for 2^70. That's a lot of GBs :)
Right, that's about 18 ZB, roughly in line with the ~24 ZB I get for 2^70 using 20 bytes per address (i.e. just the HASH160, no prefix, checksum or private key).
In other words, about twice as much as all the world's storage capacity (HDD, flash, tape, optical) in 2023, and it would cost something like $1000 billion [1].


[1] Using data from IDC and their "Worldwide Global StorageSphere" metric (not to be confused with the "DataSphere", which is the amount created and is some 10x bigger).
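
A quick check of those magnitudes (entry sizes as stated above; 1 ZB = 10^21 bytes):

Code:
# Verify the storage estimates: 2^69 xpoints at 32 bytes each vs.
# 2^70 hash160 entries at 20 bytes each, in zettabytes.
ZB = 1e21

xpoints = 2**69 * 32   # 32-byte x coordinates
hashes  = 2**70 * 20   # 20-byte hash160 per address

print(f"2^69 xpoints : {xpoints / ZB:.1f} ZB")
print(f"2^70 hash160 : {hashes / ZB:.1f} ZB")
print(f"50 TB drives needed: {hashes / 50e12:,.0f}")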


full member
Activity: 1162
Merit: 237
Shooters Shoot...
It is a random dataset of around 2^70 addresses
Wait until April 1, then by all means
just add them line by line in a .txt file

Edit:

Because of
The files are not yet created due to the lack of knowledge on how to create a dataset that would work with tools such as bitCrack, etc...
...is this even possible?
I just realised that maybe you're actually serious, in which case I'm sorry to predict that the answer is no, most likely for several years.

You'll not even be able to "create the files":

- Even if you could somehow create and store addresses as fast as the combined mining community could try hashes in 2023, you'd have to wait 10,000+ years.

 - Then there's the matter of storage. Luckily, Seagate expects to start selling 50 TB hard drives in 2026, but you'd still need around 500 million of them.

Yeah, I think people underestimate how large 2^69 is...
For Robert_BIT, to help further the example and to give you concrete evidence:
Code:
Loading      : 100 %
Loaded       : 55,246,870 Bitcoin xpoints
Those are xpoints, which will be a little larger (file-wise) than addresses. For 55,246,870 xpoints, the file size in binary is 1.72 GB; in regular text format, it is 3.5 GB.

Now, the other thing to consider if one is creating addresses: you will have to save, at minimum, the private key and the address, or else you would not know how to map an address back to its key. Alternatively, you can pick a starting point and generate sequential keys, in which case you only need to keep the starting private key.

But for the example above:  2^69 / 55,246,870 = 10,684,692,370,060; now take that and multiply by 1.72 GB = 18,377,670,876,503 GB; double all numbers for 2^70. That's a lot of GBs :)
You could probably trim some data and use a hash table, maybe saving some of the GBs needed, but then you would need to create or modify an existing program to search via the hash table. That is doable, but you would have to chunk the addresses in order to search them. There is no way you could store 2^69 addresses in memory or in a hash table and search them. The 55,246,870 xpoints eat up around 1,200 MB of RAM in my program.
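
For a sense of the in-memory side, here is a minimal sketch of the standard bloom filter sizing formula (the target counts and the 1-in-a-million false-positive rate are illustrative; real loaders keep extra bookkeeping, so actual RAM use will be higher):

Code:
# Bloom filter sizing: m = -n * ln(p) / (ln 2)^2 bits and
# k = (m / n) * ln 2 hash functions, for n items at false-positive rate p.
import math

def bloom_size(n, p):
    m_bits = -n * math.log(p) / math.log(2) ** 2
    k = m_bits / n * math.log(2)
    return m_bits / 8 / 1024**2, round(k)   # (MiB, number of hashes)

for n in (55_246_870, 200_000_000, 1_000_000_000):
    mib, k = bloom_size(n, 1e-6)
    print(f"{n:>13,} targets: {mib:8.0f} MiB, k = {k}")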
copper member
Activity: 42
Merit: 67
It is a random dataset of around 2^70 addresses
Wait until April 1, then by all means
just add them line by line in a .txt file

Edit:

Because of
The files are not yet created due to the lack of knowledge on how to create a dataset that would work with tools such as bitCrack, etc...
...is this even possible?
I just realised that maybe you're actually serious, in which case I'm sorry to predict that the answer is no, most likely for several years.

You'll not even be able to "create the files":

- Even if you could somehow create and store addresses as fast as the combined mining community could try hashes in 2023, you'd have to wait 10,000+ years.

 - Then there's the matter of storage. Luckily, Seagate expects to start selling 50 TB hard drives in 2026, but you'd still need around 500 million of them.


newbie
Activity: 33
Merit: 0
Does anyone know how to best search for multiple addresses at once?
As is often the case, the problem as stated is massively underspecified. -----I know, sorry...-----


For instance:
- Are your target addresses correlated, perhaps even strictly sequential (in which case the solution is trivial)? -----no link between the addresses-----
- How large is the dataset; -----2^69 - 2^70 addresses----- is it feasible to transfer it to several kernels for parallelisation? -----no idea, perhaps-----
- Is the dataset mutable, or is it feasible to perform an initial, heavy transformation (in which case perfect hashing might outperform probabilistic/Bloom)? -----It is a random dataset of around 2^70 addresses specifically created so that only a few of them have small keys in range 2^40-2^41. The files are not yet created due to the lack of knowledge on how to create a dataset that would work with tools such as bitCrack, etc... But essentially the task at hand will be to find those small-range keys while searching the huge dataset mentioned...-----

Et cetera. A general database engine is almost certainly not optimal in either case. ----- Rather than optimal, my question hints at: is this even possible? -----


copper member
Activity: 42
Merit: 67
Does anyone know how to best search for multiple addresses at once?
As is often the case, the problem as stated is massively underspecified.

For instance:
- Are your target addresses correlated, perhaps even strictly sequential (in which case the solution is trivial)?
- How large is the dataset; is it feasible to transfer it to several kernels for parallelisation?
- Is the dataset mutable, or is it feasible to perform an initial, heavy transformation (in which case perfect hashing might outperform probabilistic/Bloom)?

Et cetera. A general database engine is almost certainly not optimal in either case.
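
On the perfect-hashing-versus-Bloom point, one common exact-membership alternative (used in spirit by several of these tools) is a sorted binary file of fixed-width hash160 records searched by bisection. A minimal sketch, with all file names and sizes illustrative:

Code:
# Exact membership test over a sorted, fixed-width binary target file:
# sort the 20-byte hash160 records once, then binary-search in O(log n).
import bisect, os

RECORD = 20  # bytes per hash160 record

class SortedTargets:
    def __init__(self, path):
        self.data = open(path, "rb").read()   # use mmap for huge files
        self.n = len(self.data) // RECORD

    def __len__(self):
        return self.n

    def __getitem__(self, i):
        return self.data[i * RECORD:(i + 1) * RECORD]

    def __contains__(self, h160):
        i = bisect.bisect_left(self, h160)
        return i < self.n and self[i] == h160

# Tiny demo: build a sorted file of random records, then query it.
recs = sorted(os.urandom(RECORD) for _ in range(1000))
with open("targets.bin", "wb") as f:
    f.write(b"".join(recs))

t = SortedTargets("targets.bin")
print(recs[42] in t, os.urandom(RECORD) in t)   # True, almost surely False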

newbie
Activity: 33
Merit: 0
Hello,

Does anyone know how best to search for multiple addresses at once? Do we just add them line by line in a .txt file? What if we want to search for a large number of addresses; can we use a database / bloom filter of some sort?

I am not sure of the limitations of searching a large number of addresses at once... I guess even for a small range like 2^40, the search gets harder as you use more addresses. But I haven't found any data on this on GitHub...

 

legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
Linux:
1) sudo su - superuser/admin rights
2) then run the commands.
You must also enter your system password and press Enter before using the console.

I don't know why cryptoxploit wrote that instruction there, but you don't need to be root to run GPU BitCrack or any of the commands listed. It just so happens that the GPU systems you can rent online are automatically provisioned with a root account, so that is what most people end up using to run it.