We already incurred a speed penalty just to make it work for RTX cards, and the last thing everyone needs is an even slower program just to get it to run on Linux.
Then you are doing something wrong.
My first RTX 3060 card has arrived, and I will make a fix in BitCrack sp-mod #6 (Windows) with full-speed Wine support.
spminer #2 for Vertcoin has already been released with RTX 3060 support. Mine at full speed on x1 riser cables with the latest drivers, without NVIDIA blocking you.
Any updates?
??
My RTX 3070s are doing over 2,000M/sec cracking BTC private keys and matching known valuable addresses using bloom filters. The same algorithms on a GTX 1060 run at about 200M/sec.
On a rack of 4 RTX 3070s I'm seeing 10,000M/sec, using a 32GB bloom filter built from 300M addresses (hash160); my false-positive rate is about 10^-30.
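For anyone who wants to sanity-check that false-positive figure: the standard bloom-filter estimate is p = (1 - e^(-kn/m))^k for a filter of m bits holding n entries with k hash functions. Here's a minimal sketch that sweeps a few k values (the choice of k is my assumption; the poster doesn't say how many hashes they use):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Standard bloom-filter false-positive estimate: p = (1 - e^(-k*n/m))^k
    const double m = 32.0 * 1024 * 1024 * 1024 * 8;  // 32GB filter, in bits
    const double n = 300e6;                          // 300M inserted hash160s
    for (int k = 8; k <= 24; k += 4) {               // k = number of hash functions
        double p = std::pow(1.0 - std::exp(-k * n / m), k);
        std::printf("k=%2d  p=%.3g\n", k, p);
    }
}
```

The sweep runs from roughly 10^-17 at k=8 down to about 10^-38 at k=24, so the quoted 10^-30 is plausible for k in the high teens.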
I only use Linux.
The main thing is to put the bloom filter inside BitCrack, so you're not just looking for one public key and/or address but testing all 300M in parallel on every cycle (here 10,000M/sec); that way the probability of a hit in the secp256k1 space of 2^256, or roughly 10^77, becomes reasonable.
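To make the "test all 300M on every cycle" idea concrete, here is a minimal CPU-side bloom filter in C++. The double-hashing trick (deriving k bit positions from two words of the hash160) and the toy 128MB size are my own illustration, not BitCrack's or brainflayer's actual code:

```cpp
#include <array>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

using H160 = std::array<std::uint8_t, 20>;  // one raw hash160

struct Bloom {
    std::vector<std::uint64_t> bits;
    std::uint64_t nbits;
    int k;
    Bloom(std::uint64_t nbits_, int k_) : bits(nbits_ / 64 + 1), nbits(nbits_), k(k_) {}

    // Derive the i-th bit position from the hash160 itself; since it is
    // already a cryptographic hash, cheap mixing of its words is enough.
    std::uint64_t pos(const H160& h, int i) const {
        std::uint64_t a, b;
        std::memcpy(&a, h.data(), 8);
        std::memcpy(&b, h.data() + 8, 8);
        b |= 1;  // avoid a degenerate step in the double hashing
        return (a + static_cast<std::uint64_t>(i) * b) % nbits;
    }
    void add(const H160& h) {
        for (int i = 0; i < k; ++i) {
            std::uint64_t p = pos(h, i);
            bits[p / 64] |= 1ULL << (p % 64);
        }
    }
    bool maybe(const H160& h) const {
        for (int i = 0; i < k; ++i) {
            std::uint64_t p = pos(h, i);
            if (!(bits[p / 64] & (1ULL << (p % 64)))) return false;
        }
        return true;  // "maybe": could still be a false positive
    }
};

int main() {
    Bloom bf(1ULL << 30, 20);  // 128MB toy filter; the real one is 32GB
    H160 target{};             // stand-in for one scraped address
    target[0] = 0x42;
    bf.add(target);
    std::cout << bf.maybe(target) << ' ' << bf.maybe(H160{}) << '\n';  // 1 0
}
```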
Keep the bloom filter on its own M.2 drive on the MOBO; have 4TB of SATA for hex files and the private-key database (you need to be able to map a found hash160 back to its private key).
I have two racks, one using GTX 1060s and the other RTX 3070s, which I picked up last summer for $500 each; now they're $1,000+ if you can find them. The CPU side needs 64GB of RAM; I'm running an AMD Threadripper with 32 cores. This is not a game for Windows.
The real problem here is setting up the bloom filter. The original model in brainflayer was 512MB, which only allows about 15M addresses before the false-positive rate goes astronomical. The solution is to cascade 4GB chunks in series, as most shared-memory models only allow 4GB chunks. Three years ago, using OpenGL, I had the bloom filters on the GPU (GTX 1070), but there you can only have 2GB chunks, and it's actually faster to do blocks of 2048 private keys and have each GPU core pass them back to threads running on CPU cores against the shared bloom filter. With 300M valid BTC addresses, you really need a 32GB bloom filter. None of this stuff is online; you must roll your own.
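Here's a rough sketch of that GPU-to-CPU handoff, with plain code standing in for the GPU side: blocks of 2048 candidate hash160s are queued to a pool of CPU threads that screen them against the shared read-only filter. The queue, thread count, and stub bloom_maybe are all illustrative assumptions:

```cpp
#include <array>
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using H160 = std::array<std::uint8_t, 20>;
using Block = std::vector<H160>;   // 2048 candidates per block

std::queue<Block> q;
std::mutex m;
std::condition_variable cv;
bool done = false;

// Stand-in for the shared 32GB bloom filter; read-only, so no lock needed.
bool bloom_maybe(const H160& h) { return h[0] == 0x42; }

void worker(int id) {
    for (;;) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !q.empty() || done; });
        if (q.empty()) return;          // done and drained
        Block blk = std::move(q.front());
        q.pop();
        lk.unlock();
        for (const auto& h : blk)       // screen the whole block
            if (bloom_maybe(h))
                std::cout << "worker " << id << ": candidate hit\n";
    }
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 8; ++i) pool.emplace_back(worker, i);

    for (int b = 0; b < 4; ++b) {       // pretend the GPU produced these
        Block blk(2048);
        blk[0][0] = 0x42;               // plant one "hit" per block
        { std::lock_guard<std::mutex> lg(m); q.push(std::move(blk)); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lg(m); done = true; }
    cv.notify_all();
    for (auto& t : pool) t.join();
}
```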
That is a good question. Is it possible to speed that up, comparing two lists with millions of lines?
If you sort the file and use binary search to look for each item in your list, then the runtime becomes O(log2(n)) per entry, so your worst case is O(NumberOfAddresses*log2(n)). It's really not slow; that's about 30 units of time to search for an address in a list with 1,000,000,000 lines in it.
Actually fitting all that into memory is going to be a problem, though. There are some on-disk sorting algorithms I read about in a Knuth book, but they're very old, and I think they may have to be adapted for hard disks instead of tapes.
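As a toy illustration of the sort-then-binary-search idea (placeholder strings standing in for real address lines):

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Placeholder lines standing in for the real multi-million-line file.
    std::vector<std::string> addrs = {"addr_charlie", "addr_alpha", "addr_bravo"};

    // One-time sort: O(n log n). Every lookup afterwards is O(log2 n),
    // i.e. about 30 comparisons even for a 1,000,000,000-line list.
    std::sort(addrs.begin(), addrs.end());

    std::string needle = "addr_bravo";
    bool found = std::binary_search(addrs.begin(), addrs.end(), needle);
    std::cout << needle << (found ? " is" : " is not") << " in the list\n";
}
```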
You can't search for a hash160 address in a 300GB text file; you must map the file to binary. Then the file will be about 10GB, and it takes just a second to get a yes/no on whether that hash160 is in the list of 300M addresses.
Early brainflayer GitHub had a tool called 'binchk'. Using xxd (Linux) you convert the 300GB file to unique hex, get the .bin file, and then use binchk.
Brainflayer needed this because the 512MB bloom filter it uses only allowed about 10M addresses before the false-positive rate went astronomical; with binchk you can take the false positives and verify whether any of them are real positives.
The bloom filter is super fast and can run on the GPU, but the false-positive rate is high.
No point in using binary search on text; just use the model described.
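A rough sketch of that binary model in C++: fixed 20-byte hash160 records in a .bin file, loaded and binary-searched. The file name addrs.bin and the assumption that the records are already sorted and deduplicated are mine; this is not the actual binchk source:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

using H160 = std::array<std::uint8_t, 20>;  // one raw hash160 record

int main() {
    // Assumed layout: addrs.bin is sorted, deduplicated 20-byte hash160
    // records laid end to end (hypothetical file name).
    std::ifstream in("addrs.bin", std::ios::binary);
    std::vector<H160> list;
    H160 rec;
    while (in.read(reinterpret_cast<char*>(rec.data()), rec.size()))
        list.push_back(rec);

    // 300M records * 20 bytes fits in RAM on the 64GB boxes described above.
    H160 needle{};  // fill with the hash160 you want to test
    bool hit = std::binary_search(list.begin(), list.end(), needle);
    std::cout << (hit ? "in list" : "not in list") << '\n';
}
```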
Today when I scrape the blockchain I get about 300M addresses; after you do the "sort -u" it will be slightly fewer. You also need to run a client on the mempool to constantly add new addresses to the bloom filter.
If you're comparing lists with 300M lines of hex, and/or searching them, it's much better to use bloom filters and binary search combined; it drops a 2+ hour search to seconds.
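Putting the two pieces together, the combined check looks something like this: the bloom filter rejects almost everything in O(1), and only the rare "maybe" falls through to the exact binary search. The stub filter and tiny list here are placeholders:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <functional>
#include <iostream>
#include <vector>

using H160 = std::array<std::uint8_t, 20>;

// Combined check: the bloom filter answers "definitely not" in O(1);
// only the rare positives fall through to the exact O(log n) search.
bool confirmed_hit(const H160& h,
                   const std::function<bool(const H160&)>& bloom_maybe,
                   const std::vector<H160>& sorted_list) {
    if (!bloom_maybe(h)) return false;              // fast path, almost all keys
    return std::binary_search(sorted_list.begin(),  // exact verify
                              sorted_list.end(), h);
}

int main() {
    // Tiny stand-ins for the real 32GB filter and 300M-record list.
    std::vector<H160> sorted_list{H160{}};
    auto bloom_maybe = [](const H160&) { return true; };  // placeholder filter
    std::cout << confirmed_hit(H160{}, bloom_maybe, sorted_list) << '\n';  // 1
}
```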
[moderator's note: consecutive posts merged]