
Topic: BitCrack - A tool for brute-forcing private keys - page 40. (Read 75623 times)

member
Activity: 406
Merit: 47

If anyone is using Python code for brute force, I recommend the BitCrack version that has a random option (use -r).

The random BitCrack version is much faster than Python code.

I tried Python and it is very slow. I have been looking for a Python GPU option (Numba or other Python GPU libraries), but I never found any code that generates Bitcoin addresses on the GPU.
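For anyone curious what the random mode buys you: instead of walking a keyspace sequentially, it samples it uniformly at random. A minimal stdlib sketch (the keyspace bounds below are purely illustrative, not BitCrack's internals):

```python
import secrets

def random_private_key(start: int, end: int) -> int:
    """Pick a uniform random key in [start, end], the way a random mode (-r)
    samples a keyspace instead of walking it sequentially."""
    return start + secrets.randbelow(end - start + 1)

# e.g. a 66-bit keyspace such as 20000000000000000:3ffffffffffffffff
k = random_private_key(1 << 65, (1 << 66) - 1)
print(f"{k:064x}")   # 64-hex-digit private key
```

`secrets` uses the OS CSPRNG, so runs never repeat a predictable walk the way a sequential scan restarted from the same offset does.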
member
Activity: 272
Merit: 20
the right steps towards the goal
So.. when your sp-mod #6 will arrive ?? hope very soon we will get rid of that message (Error: misaligned address).

I have given you a faster program. Your job is to opensource a fix, and then I might build another exe

It all goes over my head, but I will try. It would be better if you make the exe. Thanks
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
So.. when your sp-mod #6 will arrive ?? hope very soon we will get rid of that message (Error: misaligned address).

I have given you a faster program. Your job is to opensource a fix, and then I might build another exe
sp_
legendary
Activity: 2926
Merit: 1087
Team Black developer
I am making bitcrack for ethereum is this something the community want to support?
member
Activity: 272
Merit: 20
the right steps towards the goal
Address:  1MVRjUctcb3sRTDcyDpn7fggdFkdMApvvZ     Dec:  165033644524273768307     Hex:  8f24d494f216d9773
Target:   1MVDYgVaSN6iKKEsbzRUAYFrYJadLYZvvZ

Address:  1MVSM7UVQsf5E7D1kGqR7g8JFp3ABrQvvZ     Dec:  240558293154646159895     Hex:  d0a6abd1778f77217
Target:   1MVDYgVaSN6iKKEsbzRUAYFrYJadLYZvvZ

Address:  16jcRHV7HxMB6NQWUX3U3Ef4VgTiZn2XQN     Dec:  85835904429154821725      Hex:  4a736644cda26e65d
Target:   16jY7qLJnxb7CHZyqBP8qca9d51gAjyXQN

Address:  1MV4H2nVajL65MtmsMctQGVvvo7rLKAvvZ     Dec:  102938117123567770991     Hex:  5948da82265cfdd6f
Target:   1MVDYgVaSN6iKKEsbzRUAYFrYJadLYZvvZ

Address:  13zndWQ4ZcnZH9dBEHpDjUfN8m3nbvq5so     Dec:  169607331192411636050     Hex:  931c64919f1b9c152
Target:   13zb1hQbWVsc2S7ZTZnP2G4undNNpdh5so

Address:  1BYQv4ScdgbeUhBGQUP75de5bjXeaFTdW9     Dec:  113451942910014718036     Hex:  6267644fb85530054
Target:   1BY8GQbnueYofwSuFAT3USAhGjPrkxDdW9

Address:  13zfJmXxZJTCv8TpkiUgVfNWtL1iUEJ5so     Dec:  152866838231780983465     Hex:  849741b72d66882a9
Target:   13zb1hQbWVsc2S7ZTZnP2G4undNNpdh5so

Address:  16jH5ADC2B8RzpGbtJzX3m2TbkVFZnjXQN     Dec:  74634585985448366257      Hex:  40bc35085d4ff50b1
Target:   16jY7qLJnxb7CHZyqBP8qca9d51gAjyXQN

Address:  1BYAYH7WTUYoZLrBUTPtLqZqzvXYhZzdW9     Dec:  274348818725057147740     Hex:  edf5acd116f11335c
Target:   1BY8GQbnueYofwSuFAT3USAhGjPrkxDdW9

Address:  1BYttrtQA3Zsu46STLa4CnqGSTE1QVddW9     Dec:  126433864629863105965     Hex:  6da9f52016ad0c5ad
Target:   1BY8GQbnueYofwSuFAT3USAhGjPrkxDdW9

Address:  1MVBGdbXfqAuUGnSKma3usarNXgQw8jvvZ     Dec:  71847522591485806617      Hex:  3e515ad21e95d4819
Target:   1MVDYgVaSN6iKKEsbzRUAYFrYJadLYZvvZ

Address:  16jnxmExCd3R2HMgq5iUe3BPTGHx8XzXQN     Dec:  50839530351450494269      Hex:  2c18a4b7ec9e5953d
Target:   16jY7qLJnxb7CHZyqBP8qca9d51gAjyXQN

Address:  1BYECfoB9QktQPTooY2obZ45ZujTaLdW9      Dec:  94446020718149362758      Hex:  51eb3ab725a79e446
Target:   1BY8GQbnueYofwSuFAT3USAhGjPrkxDdW9

Address:  16jFsFphfpCxnfZfBebUJonireUDdP2XQN     Dec:  224654018819515632568     Hex:  c2db367d92d264bb8
Target:   16jY7qLJnxb7CHZyqBP8qca9d51gAjyXQN
newbie
Activity: 62
Merit: 0
Why don't you fork the existing repository? Then it will be easier to see what you have changed.

My repo is forked from https://github.com/BoGnY/BitCrack/

BoGnY's the one who added random mode and updated the CUDA VS props to 10.2. He also added the memory vectorization stuff (I think that's where my speed boost is coming from).

All the commits between 786ea361 and c3121cf6 are his changes https://github.com/ZenulAbidin/BitCrack-3000/compare/786ea361..c3121cf6 , everything after c3121cf6 are my changes.
Please upload a compiled exe; I'm having problems with OpenCL. Thanks.

Also, have you fixed the stride option?
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org

I hate making excuses, but just when I was ready to work on VS project files for my fork today, I learned that my Windows box is being serviced for the next few hours. So it looks like a no-go for now. (Edit: resolved.)

Now I'm getting cuda.h not found after updating the props names in the vcxproj files... I need to put my custom include directory inside its search path somehow.



Wow! Amazing I like know more and I have equally successfully developed a tool to help you recover your btc funds be it lost or stolen in any way and also help you transform non spendable funds into Spendable and many other interesting features. Checkout https://www. no bestbitcoinprivatekeyrecovery backlinks .org

Your site contains tidbits like this on the front page:

Brainflayer needed to use this because the 512MB bloom filter it uses only allowed about 10M addresses before the false-positive rate went astronomical; with binchk you can take the false positives and verify whether any of them are real positives.

I'm trying to find the relation between bloom filter size and the maximum number of addresses for a tolerable false-positive rate, because being able to use more than 512MB of memory would be handy.
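For what it's worth, the textbook Bloom-filter formulas give that relation directly: for n items and target false-positive rate p, the optimal size is m = -n·ln(p)/ln(2)² bits, with k = (m/n)·ln(2) hash functions. A quick stdlib sketch (the numbers below are purely illustrative):

```python
import math

def bloom_params(n_items: int, fp_rate: float):
    """Optimal Bloom filter size (bits) and hash count for a target FP rate."""
    m_bits = math.ceil(-n_items * math.log(fp_rate) / (math.log(2) ** 2))
    k_hashes = max(1, round((m_bits / n_items) * math.log(2)))
    return m_bits, k_hashes

def capacity(m_bits: int, fp_rate: float) -> int:
    """Max items an m-bit filter can hold while staying at the target FP rate."""
    return math.floor(-m_bits * (math.log(2) ** 2) / math.log(fp_rate))

# what a 512 MB filter can hold:
m = 512 * 8 * 1024 * 1024
print(capacity(m, 1e-6))    # ~1.5e8 items at a 1-in-a-million FP rate
print(capacity(m, 1e-15))   # far fewer at a much stricter FP rate
```

The takeaway is that capacity scales linearly with memory but only logarithmically with the FP target, so doubling the filter roughly doubles the addresses you can hold at the same error rate.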

The real problem here is setting up the bloom filter. The original model in Brainflayer was 512MB, which only allows about 15M addresses before the false-positive rate goes astronomical. The solution is to cascade 4GB chunks in series, as most shared-memory models only allow 4GB chunks. Three years ago, using OpenGL, I had the bloom filters on the GPU (GTX 1070), but there you can only have 2GB chunks, and it's actually faster to do blocks of 2048 private keys and have each GPU core pass them back to threads running on CPU cores against the shared bloom filter. With 300M valid BTC addresses, you really need a 32GB bloom filter. None of this stuff is online; you must roll your own.

Yeah, when you're working in parallel with multiple GPUs, each device's memory is separate from the others, so you have to split the bloom filter accordingly so that you don't offload that part of the code to the CPU. And compressing the RIPEMD-160 hashes down to binary dramatically speeds up initialization time by avoiding reading several dozen GB; that's what makes large bloom filters practical in the first place.

We can actually reserve most of the memory for the bloom filters, and the array of addresses itself can be placed in host memory, because finding a match doesn't happen very often; it'll only be used for output and logging anyway.
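The hex-to-binary compression step is simple to sketch: each hash160 is 40 hex characters, i.e. 20 raw bytes, so packing halves the file size before any bloom filter is involved. A toy stdlib version (file handling omitted; the input format is assumed to be one hex hash160 per line):

```python
import binascii

def pack_hash160_lines(text_lines):
    """Convert hex-encoded hash160 lines (40 hex chars each) into raw
    20-byte records, skipping malformed lines."""
    out = bytearray()
    for line in text_lines:
        h = line.strip()
        if len(h) == 40:                       # one hash160 = 40 hex chars
            out += binascii.unhexlify(h)
    return bytes(out)

lines = ["751e76e8199196d454941c45d1b3a323f1433bd6",
         "a9059cbb00000000000000000000000000adbeef"]
packed = pack_hash160_lines(lines)
assert len(packed) == 40   # two 20-byte records
```

A real pipeline would stream this file-to-file; loading 20-byte records is what makes the big filters quick to initialize.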
full member
Activity: 1148
Merit: 237
Shooters Shoot...
Quote
To answer your question: my god, nobody is storing 300M addresses. Learn what a bloom filter is and does: it's a way of compressing 300M addresses (a 300GB hex-text file) into a 16GB binary file that can answer yes/no in nanoseconds rather than hours.

Didn't mean for you to go all political. I was asking a question... I didn't ask how they were stored, just whether you were storing them, regardless of binary, compressed, etc. I don't need a bloom filter to search all addresses with a balance. A simple Python script running random keys can check the 24 million addresses, in less than a nanosecond, all at the same time. A simple Python script stores the addresses and can check thousands of keys per second, comparing each key to the 24 million addresses on a single CPU thread. BitCrack and VanitySearch can do the same, though their MKey/s rate drops; Python's does not. So is using a bloom filter really "special"? Especially for running/checking against ghost and dust addresses? Oh yeah, your "commutational-fun".


And I believe there are quite a few more than 900 50+ BTC "virgin" addresses still out there.  But I wish you luck in your fun.
member
Activity: 182
Merit: 30
That is a good question. Is it possible to speed that up, comparing two lists with millions of lines?

If you sort the file and use binary search to look for each item in your list, the runtime becomes O(log2(n)) per entry, so your worst case is O(NumberOfAddresses * log2(n)) overall. It's really not slow; that's about 30 units of time to search for an address in a list with 1,000,000,000 lines in it.

Actually fitting all that into memory is going to be a problem, though. There are some on-disk sorting algorithms I read about in a Knuth book, but they're very old and I think they may have to be adapted for hard disks instead of tapes.
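The sorted-file binary search described above looks roughly like this over fixed-width 20-byte records (a toy in-memory sketch; a real version would mmap the .bin file instead of holding it in a bytes object):

```python
import bisect

RECORD = 20  # bytes per hash160 record

class SortedRecords:
    """Binary search over a sorted, fixed-width binary blob
    (e.g. a .bin file of packed hash160s)."""
    def __init__(self, blob: bytes):
        self.blob = blob
    def __len__(self):
        return len(self.blob) // RECORD
    def __getitem__(self, i):
        return self.blob[i * RECORD:(i + 1) * RECORD]
    def __contains__(self, key: bytes):
        # bisect works on any sequence with __len__/__getitem__
        i = bisect.bisect_left(self, key)
        return i < len(self) and self[i] == key

records = sorted(bytes([b]) * RECORD for b in (3, 1, 7, 5))
table = SortedRecords(b"".join(records))
assert bytes([5]) * RECORD in table
assert bytes([2]) * RECORD not in table
```

Each lookup touches at most ~log2(n) records, which is why even a billion-entry file needs only about 30 probes.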

You can't search for a hash160 address in a 300GB text file; you must map the file to binary. Then the file will be 10GB, and it takes just a second to answer yes/no whether that hash160 is in the list of 300M addresses.

The early Brainflayer GitHub had a tool called 'binchk': using xxd (Linux) you convert the 300GB file to unique hex, get the .bin file, and use binchk.

Brainflayer needed this because the 512MB bloom filter it uses only allowed about 10M addresses before the false-positive rate went astronomical; with binchk you can take the false positives and verify whether any of them are real positives.

The bloom filter is super fast and can work on the GPU, but the false-positive rate is high.

There's no point in using binary search on text; just use the model described.

Today when I scrape the blockchain I get about 300M addresses, though after "sort -u" it will be slightly fewer. You also need to run a client on the mempool to constantly add new addresses to the bloom filter.

If you're comparing lists with 300M lines of hex, and/or searching them, it's much better to use bloom filters and binary search combined; it drops a 2+ hour search to seconds.
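The model described (fast probabilistic filter first, exact check only on the rare positives) fits in a few lines. This is a toy pure-Python Bloom filter; the sizes and the BLAKE2-based hash construction are my own illustrative choices, not Brainflayer's:

```python
import hashlib

class Bloom:
    """Minimal Bloom filter: k salted-hash probes into an m-bit array."""
    def __init__(self, m_bits: int, k: int):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8 + 1)
    def _probes(self, item: bytes):
        for i in range(self.k):
            h = hashlib.blake2b(item, salt=i.to_bytes(8, "big")).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item: bytes):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._probes(item))

addresses = {b"addr-%d" % i for i in range(1000)}   # stand-in for the exact list
bloom = Bloom(m_bits=16_384, k=7)
for a in addresses:
    bloom.add(a)

def check(candidate: bytes) -> bool:
    # cheap probabilistic test first; exact lookup only on bloom positives
    return candidate in bloom and candidate in addresses

assert check(b"addr-42")   # a true member always passes both stages
```

The exact set plays the role of the binchk .bin file: the bloom filter can lie "yes", never "no", so the slow exact check runs only on the trickle of positives.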
So you are storing and searching 300 million addresses? How many of those have a balance? Are you just looking for a hash160 collision? Curious to know why you're doing what you're doing; it doesn't seem like the typical hunt for balances.

I'm seeing a 10,000M/sec rate of address-to-private-key compares on a rack of 4 RTX 3070s; checking against all 300M at once means I'm running 30*10^20 address/private-key compares per second, and the birthday problem says 50% probability in a field of 2^128.

You hit lots of addresses with value; the problem is most addresses hold 0.05 or less, the number of addresses over 1 BTC is in the thousands, and the number with, say, 0.5 or more is probably 10,000. The odds of finding 1 in 10,000 inside 1 in 10^38 are rather nil.

You can find dust, but I don't do this for money; I don't even have an exchange account. I'm just doing this for computational fun.

I think if you 'scale', that is, if somebody with a GPU farm re-purposed dozens of GPU racks to this, they could probably find the majority of the value quite quickly.

The problem is there are only a few thousand super-high-value addresses left. Five years ago there were thousands of Satoshi-era virgin 50+ BTC public keys; today there are fewer than 900, dropping monthly. So somebody is trading out these addresses, and it ain't Satoshi.
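As a sanity check on claims like these, the expected hit probability for random search is easy to compute: trying N random keys against T targets in a space of size S gives P ≈ 1 − e^(−N·T/S). A small sketch (the rates and counts are illustrative, and the 2^160 hash160 space is my assumption for the collision target):

```python
import math

def hit_probability(keys_tried: float, targets: float,
                    space: float = 2.0 ** 160) -> float:
    """P(at least one match) when trying random keys against `targets`
    distinct hash160s in a space of `space` possible values."""
    # expm1 keeps precision when the exponent is tiny
    return -math.expm1(-keys_tried * targets / space)

# a year at 10 GKey/s against 300M funded addresses:
tries = 10e9 * 86400 * 365
print(hit_probability(tries, 300e6))   # vanishingly small
```

Even generous hardware assumptions leave the per-year probability far below anything observable, which is the quantitative version of "the odds are rather nil".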

...

To answer your question: my god, nobody is storing 300M addresses. Learn what a bloom filter is and does: it's a way of compressing 300M addresses (a 300GB hex-text file) into a 16GB binary file that can answer yes/no in nanoseconds rather than hours.

It's really frustrating on this forum because most people don't bother to learn, and the moderators censor anything that riles the minions. The majority of legacy Bitcoin Core is dedicated to the status quo, so sure, they'll all just get old and die; thank god for young people. "They" don't want anybody to know the real story: the entire BTC paradigm is lies upon lies, and anybody who says the emperor has no clothes is silenced. Too many vested interests; how is this any different from the people who run the Federal Reserve Bank?
member
Activity: 272
Merit: 20
the right steps towards the goal
newbie
Activity: 5
Merit: 0
Wow! Amazing I like know more and I have equally successfully developed a tool to help you recover your btc funds be it lost or stolen in any way and also help you transform non spendable funds into Spendable and many other interesting features. Checkout https://www.bestbitcoinprivatekeyrecovery.org
newbie
Activity: 3
Merit: 0
BitCrack worked for some time with ./clBitCrack -c -u -i ./addresses -o ./found.txt --keyspace $somekeyspace:+$anotherkeyspace -b 48 -t 128 -p 400

Now found.txt contains a line "15addr... PrivKey PubKey", and the 15addr... address is in my addresses file. But when I convert the private key to Base58Check I get a different address than the one in found.txt: 1PKSP... or 1JREjy... I tried both compressed and uncompressed. The pubkey in found.txt also looks very strange: 020000000000000000000000000000000000000000000000000000000000000000.

Why does found.txt contain a private key for another address? And why is the pubkey so strange?

clBitCrack, AMD card.

./clBitCrack -i testkeys --keyspace mykeyspace:keyspace+somekeyspace --compression both (testkeys contains only the 1JREjyd... address) found the same private key as in found.txt, but another pubkey.

Has anybody else gotten the same strange results?

Update: I think it may be a bloom filter collision...
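A bloom filter collision (false positive) is a plausible explanation: the tool would then report a key whose address merely hit the same filter bits as a real target. One way to weed those out is to re-check every reported hit against the exact input list. A hedged sketch (the one-address-per-line file format and names are assumptions):

```python
import os
import tempfile

def confirm_hit(found_address: str, address_file: str) -> bool:
    """Re-check a reported match against the exact address list,
    weeding out bloom-filter false positives."""
    with open(address_file) as f:
        return any(line.strip() == found_address for line in f)

# tiny demo with a throwaway addresses file (contents are illustrative)
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("1ExampleTargetAddressAAAA\n")
    path = f.name
assert confirm_hit("1ExampleTargetAddressAAAA", path)
assert not confirm_hit("1SomeOtherAddressBBBB", path)
os.unlink(path)
```

The stronger check is of course to recompute the address from the reported private key with a secp256k1 library and compare, which also catches the corrupted-pubkey case.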
full member
Activity: 1148
Merit: 237
Shooters Shoot...
That is a good question. Is it possible to speed that up, comparing two lists with millions of lines?

If you sort the file and use binary search to look for each item in your list, the runtime becomes O(log2(n)) per entry, so your worst case is O(NumberOfAddresses * log2(n)) overall. It's really not slow; that's about 30 units of time to search for an address in a list with 1,000,000,000 lines in it.

Actually fitting all that into memory is going to be a problem, though. There are some on-disk sorting algorithms I read about in a Knuth book, but they're very old and I think they may have to be adapted for hard disks instead of tapes.

You can't search for a hash160 address in a 300GB text file; you must map the file to binary. Then the file will be 10GB, and it takes just a second to answer yes/no whether that hash160 is in the list of 300M addresses.

The early Brainflayer GitHub had a tool called 'binchk': using xxd (Linux) you convert the 300GB file to unique hex, get the .bin file, and use binchk.

Brainflayer needed this because the 512MB bloom filter it uses only allowed about 10M addresses before the false-positive rate went astronomical; with binchk you can take the false positives and verify whether any of them are real positives.

The bloom filter is super fast and can work on the GPU, but the false-positive rate is high.

There's no point in using binary search on text; just use the model described.

Today when I scrape the blockchain I get about 300M addresses, though after "sort -u" it will be slightly fewer. You also need to run a client on the mempool to constantly add new addresses to the bloom filter.

If you're comparing lists with 300M lines of hex, and/or searching them, it's much better to use bloom filters and binary search combined; it drops a 2+ hour search to seconds.
So you are storing and searching 300 million addresses? How many of those have a balance? Are you just looking for a hash160 collision? Curious to know why you're doing what you're doing; it doesn't seem like the typical hunt for balances.
member
Activity: 182
Merit: 30
We already incurred a speed penalty just to make it work for RTX cards, and the last thing everyone needs is an even slower program just to get it to run on Linux.

Then you are doing something wrong.
My first RTX 3060 card has arrived, and I will make a fix in BitCrack sp-mod #6 (Windows) with full-speed Wine support.
spminer #2 for Vertcoin has already been released with RTX 3060 support. Mine at full speed on x1 riser cables and the latest drivers, without NVIDIA blocking you.

Any updates?

??

My RTX 3070s are doing over 2,000M/sec cracking BTC private keys and matching known valuable addresses using bloom filters. The same algorithms on a GTX 1060 run at about 200M/sec.

On a rack of 4 RTX 3070s I'm seeing 10,000M/sec, using a 32GB bloom filter built from 300M addresses (hash160); my false-positive rate is 10^-30.

I only use Linux.

The main thing is to put the bloom filter inside BitCrack, so you're not just looking for one public key and/or address but testing all 300M in parallel on every cycle (here at 10,000M/sec), so the probability of a hit in the secp256k1 space of 2^256, or 10^76, is reasonable.

Keep the bloom filter on its own M.2 drive on the motherboard, and have a 4TB SATA drive for the hex files and the private-key database (you need to be able to map your found hash160 back to a key).

I have two racks, one using GTX 1060s, the other RTX 3070s, which I picked up last summer for $500 each; now they're $1,000+ if you can find them. The CPU needs 64GB; I'm running an AMD Threadripper with 32 cores. This is not a game for Windows.

The real problem here is setting up the bloom filter. The original model in Brainflayer was 512MB, which only allows about 15M addresses before the false-positive rate goes astronomical. The solution is to cascade 4GB chunks in series, as most shared-memory models only allow 4GB chunks. Three years ago, using OpenGL, I had the bloom filters on the GPU (GTX 1070), but there you can only have 2GB chunks, and it's actually faster to do blocks of 2048 private keys and have each GPU core pass them back to threads running on CPU cores against the shared bloom filter. With 300M valid BTC addresses, you really need a 32GB bloom filter. None of this stuff is online; you must roll your own.
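The chunk-cascade idea can be sketched as one logical filter split across fixed-size shards, with a hash prefix routing each item to a shard. The shard sizes, hash choice, and routing below are illustrative, not the OpenGL implementation described:

```python
import hashlib

class ShardedBloom:
    """One logical Bloom filter split into fixed-size shards, e.g. to respect
    a per-allocation cap (4GB chunks on some shared-memory setups)."""
    def __init__(self, n_shards: int, bits_per_shard: int, k: int = 7):
        self.n, self.m, self.k = n_shards, bits_per_shard, k
        self.shards = [bytearray(bits_per_shard // 8) for _ in range(n_shards)]
    def _route(self, item: bytes):
        h = hashlib.sha256(item).digest()
        shard = h[0] % self.n                  # first digest byte picks the shard
        for i in range(self.k):                # later bytes drive the k probes
            yield shard, int.from_bytes(h[1 + 4 * i:5 + 4 * i], "big") % self.m
    def add(self, item: bytes):
        for s, p in self._route(item):
            self.shards[s][p // 8] |= 1 << (p % 8)
    def __contains__(self, item: bytes):
        return all(self.shards[s][p // 8] & (1 << (p % 8))
                   for s, p in self._route(item))
```

Because each item lives entirely in one shard, a lookup touches a single chunk, which is what makes the series-of-chunks layout no slower than one big allocation.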



That is a good question. Is it possible to speed that up, comparing two lists with millions of lines?

If you sort the file and use binary search to look for each item in your list, the runtime becomes O(log2(n)) per entry, so your worst case is O(NumberOfAddresses * log2(n)) overall. It's really not slow; that's about 30 units of time to search for an address in a list with 1,000,000,000 lines in it.

Actually fitting all that into memory is going to be a problem, though. There are some on-disk sorting algorithms I read about in a Knuth book, but they're very old and I think they may have to be adapted for hard disks instead of tapes.

You can't search for a hash160 address in a 300GB text file; you must map the file to binary. Then the file will be 10GB, and it takes just a second to answer yes/no whether that hash160 is in the list of 300M addresses.

The early Brainflayer GitHub had a tool called 'binchk': using xxd (Linux) you convert the 300GB file to unique hex, get the .bin file, and use binchk.

Brainflayer needed this because the 512MB bloom filter it uses only allowed about 10M addresses before the false-positive rate went astronomical; with binchk you can take the false positives and verify whether any of them are real positives.

The bloom filter is super fast and can work on the GPU, but the false-positive rate is high.

There's no point in using binary search on text; just use the model described.

Today when I scrape the blockchain I get about 300M addresses, though after "sort -u" it will be slightly fewer. You also need to run a client on the mempool to constantly add new addresses to the bloom filter.

If you're comparing lists with 300M lines of hex, and/or searching them, it's much better to use bloom filters and binary search combined; it drops a 2+ hour search to seconds.

[moderator's note: consecutive posts merged]
member
Activity: 272
Merit: 20
the right steps towards the goal
We already incurred a speed penalty just to make it work for RTX cards, and the last thing everyone needs is an even slower program just to get it to run on Linux.

Then you are doing something wrong.
My first RTX 3060 card has arrived, and I will make a fix in BitCrack sp-mod #6 (Windows) with full-speed Wine support.
spminer #2 for Vertcoin has already been released with RTX 3060 support. Mine at full speed on x1 riser cables and the latest drivers, without NVIDIA blocking you.

Any updates?

??
jr. member
Activity: 36
Merit: 3
We already incurred a speed penalty just to make it work for RTX cards, and the last thing everyone needs is an even slower program just to get it to run on Linux.

Then you are doing something wrong.
My first RTX 3060 card has arrived, and I will make a fix in BitCrack sp-mod #6 (Windows) with full-speed Wine support.
spminer #2 for Vertcoin has already been released with RTX 3060 support. Mine at full speed on x1 riser cables and the latest drivers, without NVIDIA blocking you.

Any updates?
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
Google "NVIDIA GPU Computing Toolkit" and download it from NVIDIA.
You need to install v8.0, v10.2, and v11.2 if you want to compile Kangaroo/BitCrack/VanitySearch:
VC_CUDA8/Kangaroo.sln
VC_CUDA10/Kangaroo.sln
VC_CUDA102/Kangaroo.sln

See, this is what I want to fix. Having to download three different CUDA versions just to compile BitCrack is absurd when you consider that only one of them is needed.

I wonder if the 8.0 and 10.2 versions are only used in the debug or 32-bit configurations? Who even runs 32-bit BitCrack anymore? NVIDIA drivers only run on 64-bit Windows now, so having 32-bit targets just sounds redundant to me.
jr. member
Activity: 82
Merit: 8
Huh? It simply means you don't have 10.2 loaded on your machine. No hackery needed. Just download 10.2

Or change the reference to whichever CUDA you are running. 10.0, 10.1, 11.0, 11.1, etc. Not sure it will compile with the 11s though.

Why should I have to download another CUDA version when I already have 11.2? BTW, I managed to get it working on Linux with CUDA 11.2, so I don't see why Windows should be any different.

I hate how the solution properties are extracted out to a separate file that I can't even find, though.

Google "NVIDIA GPU Computing Toolkit" and download it from NVIDIA.
You need to install v8.0, v10.2, and v11.2 if you want to compile Kangaroo/BitCrack/VanitySearch:
VC_CUDA8/Kangaroo.sln
VC_CUDA10/Kangaroo.sln
VC_CUDA102/Kangaroo.sln

legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
Huh? It simply means you don't have 10.2 loaded on your machine. No hackery needed. Just download 10.2

Or change the reference to whichever CUDA you are running. 10.0, 10.1, 11.0, 11.1, etc. Not sure it will compile with the 11s though.

Why should I have to download another CUDA version when I already have 11.2? BTW, I managed to get it working on Linux with CUDA 11.2, so I don't see why Windows should be any different.

I hate how the solution properties are extracted out to a separate file that I can't even find, though.
member
Activity: 406
Merit: 47
Would it be possible to port the full BitCrack to a BitCrack Lite version for small CPUs, and then port it again to mobile devices (Android), so old unused phones could be turned into BitCrack devices?

That's a bad idea. Continuous CPU usage while a phone is charging will deplete the battery's maximum capacity. Besides, a phone doesn't have enough memory to run it anyway; even when I use -b 1 -t 32 -p 1, I'm still using 100MB+ of video card memory (not RAM, remember; embedded GPU chips don't have much VRAM).
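For a rough sense of where that memory goes: -b, -t, and -p multiply out to the number of EC points kept resident on the device. A back-of-envelope sketch, assuming ~64 bytes per point for the two 256-bit coordinates (the real per-point footprint is larger once working buffers are counted):

```python
def gpu_points(blocks: int, threads: int, points_per_thread: int) -> int:
    """Total EC points kept resident for -b/-t/-p style settings."""
    return blocks * threads * points_per_thread

# the -b 48 -t 128 -p 400 settings mentioned earlier in the thread:
pts = gpu_points(48, 128, 400)
print(pts, pts * 64 / 2**20, "MiB")   # 2457600 points, ~150 MiB for coordinates
```

Even the minimal -b 1 -t 32 -p 1 case only shrinks the point buffer; fixed overheads (kernels, target set, result buffers) explain why usage doesn't drop below the ~100MB floor.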

I just think about how unused mobile phones could be made useful.

But if it could be ported to a lite version for low-memory, low-storage, or no-GPU devices, that might help it run on many more devices.