Author

Topic: Obsolete gtx1060-3gb GPUS can find valuable BTC keys 250 MH/Sec, 1000TH (Read 1043 times)

full member
Activity: 1162
Merit: 237
Shooters Shoot...
Just tinkering around with small changes to original VanitySearch, here are some numbers:

Code:
[ Combination Speed 61864.26 TH/s ][Combinations Checked 2^66.29] [Found 0]

So with 5 cards and 39 million addresses (with no bloom filter yet), I am getting almost 62 Petahashes per second.

Program is using right at 6.5GB RAM, constant, so here is my question to @btc-room101, with your configuration, how much constant RAM is used?

GPUs obviously lost speed with that many addresses.  If I only wanted a constant set number of addresses (not update and sync with chain), what would be the easiest most efficient way to set it up with bloom filter?

You got a windows version of it?
It's just JLPs original VanitySearch; I tweaked the text and math to show what it's actually doing.

Going to try to tweak code to include prebuilt h160s inside the program versus reading an input file.  And then figure out a way where the GPU doesn't lose speed.
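
A quick back-of-the-envelope check of the displayed figure, assuming the tweaked output simply multiplies the raw key rate by the number of loaded addresses (which is how the math is described in this thread); the numbers below are just the ones from this post.

Code:
# combination speed = keys/sec * addresses checked per key (per the poster's description)
combination_speed = 61_864.26e12   # 61864.26 TH/s from the output above
n_addresses = 39_000_000           # 39 million loaded addresses
n_gpus = 5

keys_per_sec = combination_speed / n_addresses
print(f"implied aggregate key rate: {keys_per_sec/1e6:.1f} MKey/s")
print(f"per GPU:                    {keys_per_sec/n_gpus/1e6:.1f} MKey/s")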
full member
Activity: 706
Merit: 111
Just tinkering around with small changes to original VanitySearch, here are some numbers:

Code:
[ Combination Speed 61864.26 TH/s ][Combinations Checked 2^66.29] [Found 0]

So with 5 cards and 39 million addresses (with no bloom filter yet), I am getting almost 62 Petahashes per second.

Program is using right at 6.5GB RAM, constant, so here is my question to @btc-room101, with your configuration, how much constant RAM is used?

GPUs obviously lost speed with that many addresses.  If I only wanted a constant set number of addresses (not update and sync with chain), what would be the easiest most efficient way to set it up with bloom filter?

You got a windows version of it?
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Just tinkering around with small changes to original VanitySearch, here are some numbers:

Code:
[ Combination Speed 61864.26 TH/s ][Combinations Checked 2^66.29] [Found 0]

So with 5 cards and 39 million addresses (with no bloom filter yet), I am getting almost 62 Petahashes per second.

Program is using right at 6.5GB RAM, constant, so here is my question to @btc-room101, with your configuration, how much constant RAM is used?

GPUs obviously lost speed with that many addresses.  If I only wanted a constant set number of addresses (not update and sync with chain), what would be the easiest most efficient way to set it up with bloom filter?
full member
Activity: 1179
Merit: 131
I am getting the following error when running all-blocks.  Any idea what might be causing this?

Code:
ubuntu@ip-172-31-44-174:/data/all-blocks$ python3 all-blocks.py
getblockcount =  686852
height =  686841
Traceback (most recent call last):
  File "all-blocks.py", line 176, in
    al=addrlistVout( txid )
  File "all-blocks.py", line 113, in addrlistVout
    raw = rpc_connection.getrawtransaction(txid)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/bitcoinrpc/authproxy.py", line 141, in __call__
    raise JSONRPCException(response['error'])
bitcoinrpc.authproxy.JSONRPCException: -5: No such mempool transaction. Use -txindex or provide a block hash to enable blockchain transaction queries. Use gettransaction for wallet transactions.

Nevermind, I didn't realize txindex was a setting that needed to be enabled.  Like you said, stop thinking and start tinkering  Grin
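
For anyone hitting the same -5 error: bitcoind only looks up arbitrary historical transactions by txid if it was started with txindex=1 (which triggers a reindex the first time), or if you pass the hash of the block containing the transaction. A minimal illustration with python-bitcoinrpc, with made-up credentials and the block height from the log above as a placeholder:

Code:
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://rpcuser:rpcpass@127.0.0.1:8332")  # hypothetical credentials
height = 686841                                                  # height from the log above

# Option 1: run bitcoind with txindex=1 in bitcoin.conf, then a bare txid lookup works:
#   raw = rpc.getrawtransaction(txid)

# Option 2: without txindex, supply the block hash explicitly:
blockhash = rpc.getblockhash(height)
txid = rpc.getblock(blockhash)["tx"][0]          # e.g. the coinbase tx of that block
tx = rpc.getrawtransaction(txid, 1, blockhash)   # verbosity=1 returns the decoded tx
print(tx["txid"], len(tx["vout"]), "outputs")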
full member
Activity: 1179
Merit: 131
I am getting the following error when running all-blocks.  Any idea what might be causing this?

Code:
ubuntu@ip-172-31-44-174:/data/all-blocks$ python3 all-blocks.py
getblockcount =  686852
height =  686841
Traceback (most recent call last):
  File "all-blocks.py", line 176, in
    al=addrlistVout( txid )
  File "all-blocks.py", line 113, in addrlistVout
    raw = rpc_connection.getrawtransaction(txid)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/bitcoinrpc/authproxy.py", line 141, in __call__
    raise JSONRPCException(response['error'])
bitcoinrpc.authproxy.JSONRPCException: -5: No such mempool transaction. Use -txindex or provide a block hash to enable blockchain transaction queries. Use gettransaction for wallet transactions.
full member
Activity: 1179
Merit: 131
I've been reading this for a few hours tonight.  Quick question, and this is probably stupid, but once the bloom-filter is created, is the new-block routine high cpu and storage intensive?  Reading over the requirements of creating the 8 GB bloom-filter to begin with, I am wondering if this is something I can generate in AWS and then download and host locally.


F*CK AWS, do you realize it would be $5 to download the blf file every 15 minutes? Are you going to have a full node of bitcoin & electrum server also running on AWS? So now you're paying AWS what, $14,500/month (just for file downloads), then another $1k/month for a super-computer server on AWS, for something you can do on old free computers from the junk store? R U serious, does nobody actually do anything at home anymore?


I get that, but if I read what you said correctly, the super computer is only needed once, to generate the bloom filter.  After that the mining rigs can run the new-block script.  Is that correct?  My thinking was to use AWS to generate the initial 8 GB bloom file, download it, then have a mining rig and bitcoind/electrum server locally.  And then, like you said, the mining rig updates the bloom filter every 12 hours.
member
Activity: 182
Merit: 30
I've been reading this for a few hours tonight.  Quick question, and this is probably stupid, but once the bloom-filter is created, is the new-block routine high cpu and storage intensive?  Reading over the requirements of creating the 8 GB bloom-filter to begin with, I am wondering if this is something I can generate in AWS and then download and host locally.


You really need to quit thinking and start tinkering.

There are two sides to all this stuff. You need a super computer (thread-ripper, 32+gb ram, and lots of NVME's) to make your bloom filters, and to collect your bitcoin addresses and fill those bloom filters.

Then you need mining rigs, one card or a set, with at least 8gb. The bloom-filter on the mining rigs is kept on an NVME drive, but it gets updated every 12 hours; I keep a task that harvests new bitcoin addresses from the mempool and every 12 hours adds them to the bloom filter. You could do that on the mother system that made the initial bloom filter, but I find it better to ...

1.) Have one old system that runs bitcoin-core and an electrum-server that hands off bitcoin addresses from the pool. The electrum server is there to add found priv-keys to the wallets on each mining rig. The 'loop2.sh' mining script has a section where any found bitcoins are swept into the wallet. Note there are also binchk routines that kick out false positives from the bloom filter. (Make sure your old computer has a +1TB SSD for the bitcoin blockchain, otherwise the RPC will be too slow for the mining rigs to harvest addresses & priv-keys.)

2.) A super computer to make the bloom-filters and manage the source code (C++/python). I define a super-computer as +32gb RAM, a +12 core ripper, and +2 NVME 4x16 drives (128gb is ok for the bloom, but 1TB for processing).

3.) Mining rigs: 4-core i3s are ok, +8gb. While the 'mining' process only uses 900mb on gpu & cpu, when you update the bloom-filter (which is why I only do it every 12 hours) the cpu needs 8gb because the code uses shared memory, so 12gb would be best on each miner. (Every 12 hours, because adding the 300k new addresses to the bloom-filter takes about 30min on the mining rig, and during that time the process speed drops from 400MH/sec to 60MH/sec, so I don't want to do it too often.)

4.) For 'testing' you could just have any old gpu card on the 'super-computer' for validation.

The 'new-block' routine, the way I run it right now, runs on each miner. I tried running it on the bitcoin-core server before, but the bitcoin & electrum servers ground to a halt during the bloom-filter update (which requires +8gb RAM, so the system becomes cache locked). The thing is I don't want my bitcoin & electrum servers to ever stop, otherwise the entire system fails. (The mining machines running new-block expect a good RPC connection, and the mining routine also expects a good electrum-rpc to sweep priv-keys.)

So now I have 'new-block', which, for those who haven't READ, is a batch-file that calls python to collect all the new addresses from the mining pool every 15min (new-block sleep); about 4,000-12,000 new addresses. They're concatenated and 'sort uniq'd, of which I see about a 10% reduction, so most are new addresses and not duplicated.

Adding 300,000 new addresses to the bloom-filter on the 'super-computer' (32gb, 24core) is a 5 minute task, but on a mining-rig it can take an hour. Even so I do it on the rigs, because if you do it on the bitcoin-core server then all the RPC calls halt and the entire system suspends.

Probably the best way would be to have the super computer update the bloom-filter every 15 minutes and then transfer the new bloom by "SSH-COPY" to the NVME's on all the miners. But I don't do this now; I used to do it years ago, and now my super-computer has been re-tasked to more interesting projects.

So in summary: a normal mining rig with 1+ gpus running the miner against the bloom-filter. On a gtx-1060-3gb I'm now seeing about 400MH/sec on one card; it was 250MH tops before. The new speedup came from adding more memory to the mining-rig (asus-370p), which was 4gb and is now 8gb; I think 12gb would be ultimate. The memory above 1GB is only used during the 'bloom-filter update process', called "hex2blf8 new-address.hex monster8.blf".
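
For anyone following along, here is a minimal, illustrative sketch of what an incremental bloom update like that does conceptually. It is not the actual hex2blf8 tool or brainflayer's bloom layout; the bit-position scheme and the 16-test count are assumptions for illustration, and only the file names are taken from this post.

Code:
import hashlib, mmap

BLOOM_BITS = 8 * 8 * 2**30   # an 8gb bloom file, as described above
K_TESTS = 16                 # ~16 bit tests per entry, per this thread

def bit_positions(h160_hex, k=K_TESTS, m=BLOOM_BITS):
    # derive k pseudo-random bit positions from the hash160
    # (illustrative scheme, not hex2blf8's actual layout)
    h = bytes.fromhex(h160_hex)
    for i in range(k):
        d = hashlib.sha256(bytes([i]) + h).digest()
        yield int.from_bytes(d[:8], "big") % m

def add_addresses(blf_path, hex_path):
    # set the bits for every new hash160 in hex_path (one hex h160 per line)
    with open(blf_path, "r+b") as f:
        mm = mmap.mmap(f.fileno(), 0)    # the file must already be the full bloom size
        for line in open(hex_path):
            for pos in bit_positions(line.strip()):
                mm[pos >> 3] |= 1 << (pos & 7)
        mm.flush()

add_addresses("monster8.blf", "new-address.hex")   # file names taken from the post above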

...

I don't see how reading can help. There are 12 components; each must be played with and learned, both by reading the source and by studying the IN & OUT mappings, and getting the entire system running requires that all components are working. The problem is that most stuff like 'brain-flayer', which is just one engine with cmd-line switches, can be understood by reading the 'readme', but here there are 12 engine components that must all be working 100% and communicating.

NOTHING is CPU intensive; all computation is done on the GPUs in the mining phase.
Creating 8gb bloom filters and updating them requires +8gb RAM and multi-core cpus.

Again, unless you actually stop reading and start running these tasks, you're not going to 'get it'. I would suggest one system to begin with, with 32GB of RAM and lots of NVME drives on pcie-4x12 slots; you also of course need another machine running as a bitcoin-core full node with txindex, plus the electrum server. That machine is not cpu-bound and 4gb of ram is fine, just don't have anything else running on your bitcoin/electrum server.

...

In summary, in a perfect environment for a professional dedicated to this task, I would have three computers: a thread-ripper with tons of memory to create and update the BLF (bloom filter), and an old 4gb 6-core amd for running the bitcoin & electrum server; be sure to use 1GB LAN to connect all the computers. The miners then need just a standard gpu mining MOBO with a 4-core intel (i3 ok), and +8gb of ram would be ideal. Note that each mining rig needs one NVME 4x16 pcie near the cpu, at least 128gb, as it only contains this ONE BLF and nothing else; getting high speeds only works if the NVME is used for this one task of sharing the bloom filter with the gpu boards mining/hacking btc.

Right now 'my way' is to use old computers and unused mining rigs that I have set aside; the mining rig fetches addresses and sweeps priv-keys from the bitcoin/electrum server. The only thing I have to check is the electrum client to see what the system has found.

Remember I have already worked +5 years on this project and I'm moving on, so my allotment of resources is what works for me. For somebody just starting out, I would do most stuff on the super-computer, because it's fast; these days I use all my super-computers for ECDLP math problems.

F*CK AWS, do you realize it would be $5 to download the blf file every 15 minutes? Are you going to have a full node of bitcoin & electrum server also running on AWS? So now you're paying AWS what, $14,500/month (just for file downloads), then another $1k/month for a super-computer server on AWS, for something you can do on old free computers from the junk store? R U serious, does nobody actually do anything at home anymore?
full member
Activity: 1179
Merit: 131
I've been reading this for a few hours tonight.  Quick question, and this is probably stupid, but once the bloom-filter is created, is the new-block routine high cpu and storage intensive?  Reading over the requirements of creating the 8 GB bloom-filter to begin with, I am wondering if this is something I can generate in AWS and then download and host locally.
full member
Activity: 1179
Merit: 131
You guys should stop whining, and start listening to him, his work is fine, and he is to,
start learning to program, or start providing answers, and stop banning him, and deleting his thread like that,
nough said


Amen to this.  Not sure why there is so much hate for btc-room's posts.  Clearly they are more technical and over a lot of peoples' heads, yet many people love to argue.  None of these concepts are new, this isn't an invention of his imagination.  For people who are interested in learning, stuff similar to this has been discussed for years:  https://youtu.be/foil0hzl4Pg

https://github.com/brichard19/BitCrack
https://github.com/Telariust/pollard-kangaroo

And for anyone who doubts what he says, here is a 67 page post on bitcointalk:  https://bitcointalksearch.org/topic/bitcrack-a-tool-for-brute-forcing-private-keys-4453897   Maybe go take out your aggression on these guys too.




NOT TRUE. BitCrack and Kangaroo search for ONE address (private-key pair) at a time; here, using the 8gb bloom filter, all 300 MILLION BTC addresses are checked on each cycle at 250MH/sec, so 250MH * 300M is the 1,000TH equivalent of the other methods.

Nobody is doing this anywhere; the bitcrack method has 1-in-10e77 odds of hitting a private-key match.

This method is down to 1 in 10E18; apply a mining rig and you can find btc addresses in days, a solo 1060-3 takes weeks.

If you run this method on racks of RTX-3070's you can find btc priv-key pairs all day long.

...

So bitcrack just randomly guesses a private-key and then looks to see if it hit the address you're looking for; your odds are the same as finding a lost electron in the known universe.

Pollard-Kangaroo requires that you estimate the correct key within a space of 2^40; in essence you must already know some 80% of the private key's leading digits, otherwise it is useless, and again it can only search for one private-key-pair address at a time.

Right.  I wasn't saying your method was exactly like that, but there seem to be people in this thread who think the idea is completely implausible.  I just wanted to show that this is a legitimate concept
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Quote
NOT TRUE. BitCrack and Kangaroo search for ONE address (private-key pair) at a time; here, using the 8gb bloom filter, all 300 MILLION BTC addresses are checked on each cycle at 250MH/sec, so 250MH * 300M is the 1,000TH equivalent of the other methods.

Nobody is doing this anywhere; the bitcrack method has 1-in-10e77 odds of hitting a private-key match.

This method is down to 1 in 10E18; apply a mining rig and you can find btc addresses in days, a solo 1060-3 takes weeks.

If you run this method on racks of RTX-3070's you can find btc priv-key pairs all day long.

...

So bitcrack just randomly guesses a private-key and then looks to see if it hit the address you're looking for; your odds are the same as finding a lost electron in the known universe.

Pollard-Kangaroo requires that you estimate the correct key within a space of 2^40; in essence you must already know some 80% of the private key's leading digits, otherwise it is useless, and again it can only search for one private-key-pair address at a time.
I will have to disagree with you. BitCrack and VanSearch can both search for multiple addresses at one time, both compressed and uncompressed. They both take the given address input file and store the applicable Hash160s; then, for every private key visited, they check its hash160 against all inputted addresses.  I can run VanSearch with 20+ million addresses loaded, checking both uncompressed and compressed at the same time, for every address in the input file, but I lose overall MKey/s speed. I liked your bloom-filter-on-SSD idea because, according to you, the GPU takes no hit on speed.

Kangaroo can search for multiple pubkeys; you have to tweak the program.  I don't understand what you mean by "a space of 2^40", because I can find a key in a 2^80 range in 2 to 3 minutes using Kangaroo.  A 2^40 range takes less than a second, but so does using a brute force program.

I hope to get caught up on reading and implementing your bloom filter idea soon.
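
A minimal sketch of the multi-target check described above, assuming the targets have already been reduced to raw hash160s (the h160s.hex file name is made up for the example): decode every target once into a set, and each candidate key's hash160 becomes a single lookup, independent of how many addresses are loaded.

Code:
# Build a set of target hash160s once, then test each candidate against all of them at once.
targets = set()
with open("h160s.hex") as f:            # hypothetical file: one hex hash160 per line
    for line in f:
        targets.add(bytes.fromhex(line.strip()))

def hit(candidate_h160: bytes) -> bool:
    # one set lookup per candidate key, regardless of how many target addresses are loaded
    return candidate_h160 in targets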
member
Activity: 105
Merit: 22




have you hacked this address from 2011?
310 btc have been hacked
almost 10 years abandoned

https://bitinfocharts.com/bitcoin/address/1BkVazubQAtVfbsnJwArjV3qvRNEiZqTWx


you can follow up here
the pirate sent btc to multiple addresses
and from those directions to other directions many times to try to lose track

https://www.blockchain.com/btc/address/1BkVazubQAtVfbsnJwArjV3qvRNEiZqTWx
member
Activity: 182
Merit: 30
You guys should stop whining, and start listening to him, his work is fine, and he is to,
start learning to program, or start providing answers, and stop banning him, and deleting his thread like that,
nough said


Amen to this.  Not sure why there is so much hate for btc-room's posts.  Clearly they are more technical and over a lot of peoples' heads, yet many people love to argue.  None of these concepts are new, this isn't an invention of his imagination.  For people who are interested in learning, stuff similar to this has been discussed for years:  https://youtu.be/foil0hzl4Pg

https://github.com/brichard19/BitCrack
https://github.com/Telariust/pollard-kangaroo

And for anyone who doubts what he says, here is a 67 page post on bitcointalk:  https://bitcointalksearch.org/topic/bitcrack-a-tool-for-brute-forcing-private-keys-4453897   Maybe go take out your aggression on these guys too.




NOT TRUE. BitCrack and Kangaroo search for ONE address (private-key pair) at a time; here, using the 8gb bloom filter, all 300 MILLION BTC addresses are checked on each cycle at 250MH/sec, so 250MH * 300M is the 1,000TH equivalent of the other methods.

Nobody is doing this anywhere; the bitcrack method has 1-in-10e77 odds of hitting a private-key match.

This method is down to 1 in 10E18; apply a mining rig and you can find btc addresses in days, a solo 1060-3 takes weeks.

If you run this method on racks of RTX-3070's you can find btc priv-key pairs all day long.

...

So bitcrack just randomly guesses a private-key and then looks to see if it hit the address you're looking for; your odds are the same as finding a lost electron in the known universe.

Pollard-Kangaroo requires that you estimate the correct key within a space of 2^40; in essence you must already know some 80% of the private key's leading digits, otherwise it is useless, and again it can only search for one private-key-pair address at a time.
full member
Activity: 1179
Merit: 131
You guys should stop whining, and start listening to him, his work is fine, and he is to,
start learning to program, or start providing answers, and stop banning him, and deleting his thread like that,
nough said


Amen to this.  Not sure why there is so much hate for btc-room's posts.  Clearly they are more technical and over a lot of peoples' heads, yet many people love to argue.  None of these concepts are new, this isn't an invention of his imagination.  For people who are interested in learning, stuff similar to this has been discussed for years:  https://youtu.be/foil0hzl4Pg

https://github.com/brichard19/BitCrack
https://github.com/Telariust/pollard-kangaroo

And for anyone who doubts what he says, here is a 67 page post on bitcointalk:  https://bitcointalksearch.org/topic/bitcrack-a-tool-for-brute-forcing-private-keys-4453897   Maybe go take out your aggression on these guys too.

full member
Activity: 431
Merit: 105
You guys should stop whining, and start listening to him, his work is fine, and he is to,
start learning to program, or start providing answers, and stop banning him, and deleting his thread like that,
nough said
member
Activity: 182
Merit: 30

Like i have said, the only thing left to do is derive private-keys straight from public-keys, ....
                                                  ---------------------------------------------------
Two things left. This -------------------------------------------------^
And perpetuum mobile.

Well that would be your circle-jerk invention (perpetuum mobile jerking contraption). Lots of people are working on the ECDLP; 1,000's of papers are published per year by math & crypto PhDs.
sr. member
Activity: 736
Merit: 262
Me, Myself & I

Like i have said, the only thing left to do is derive private-keys straight from public-keys, ....
                                                  ---------------------------------------------------
Two things left. This -------------------------------------------------^
And perpetuum mobile.
member
Activity: 182
Merit: 30
Thanks for the response.  

Curiosity question; do you notice a speed difference if you have say a 1Gb bloom versus the full 8Gb bloom you are running? Or is it the same speed no matter how large/how many h160s the bloom contains?

I will try and start small; create a smaller bloom first (30M h160s), run it, see if I notice a speed increase and make sure I understand/get the process right.

Typically there are 16 tests, whether it's a 512mb bloom or a 128gb bloom, so the speed would be the same.
The larger bloom just means a larger canvas, so when you throw the dart (your guess), it's more likely to hit white space than a dot indicating a mark.

The combinatorial nature of blooms is that the number of tests is a quality factor. The thing is, once you get above 4gb you have a chunking problem, as linux memory only supports 32-bit address chunks. Typically with a bloom, the 256-bit number (the private key) would become four 64-bit indexes into the bloom, each normalized with a remainder function. Each of the 4 has its own permutation, as the 64-bit value can be rotated 64 ways, so you could in fact have 4 * 64, or 256, tests, but 16 is fine; here the four sections just create 4 deterministic markers in the bloom. For an 8gb bloom I just do 2+2, using the lower half of the 256-bit entry for the low/high halves; you could do the same for a 32gb bloom.
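
For anyone wanting to sanity-check the false-positive figures being discussed, the standard bloom-filter approximation is easy to evaluate; the sizes and test counts below are just the ones quoted in this thread, not measurements of the actual .blf files.

Code:
import math

def bloom_false_positive_rate(m_bits, n_items, k_tests):
    # standard approximation: p ~= (1 - e^(-k*n/m))^k
    return (1.0 - math.exp(-k_tests * n_items / m_bits)) ** k_tests

m = 8 * 8 * 2**30        # an 8gb bloom, in bits
n = 300_000_000          # ~300M hash160s, per the thread
for k in (2, 4, 16):
    print(k, bloom_false_positive_rate(m, n, k))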

With only 300M bitcoin addresses, I find that the 8gb is fine.

I find that for most people the real problem here is the data management problem of creating the 300M h160 hex addresses, so they have data to operate on. First they need a bitcoin full node with txindex up & running, they need to install the python rpc routines, and they need to run the component source; but most people can't even find the on/off switch on their computer, let alone run a full node.

It's impossible to provide the data, as github limits a project to 100MB. The total data I use for hacking btc is about 10TB, but given that CHIA mining needs 500TB, 10TB today sounds like baby data.

Then of course, once you have the 300M addresses, you have to sort them and make them 'unique' in order to build the binary search ('xxd') files. The bloom filter can only get you to about 1 in a trillion-trillion; to know for sure you need to do a binary search with minimal computation, and you can't use 'grep' for looking up an address. But sorting a 300M-entry file is a major task on a PC; you need a fast cpu and large RAM.

Once you have the data, and have created the .bin files & .blf files, you can run the miner. The .blf gets you that 1 in 10E24, but for a yes/no you need the .bin (xxd).
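
The yes/no step described above, a binary search over a sorted file of fixed-width records, can be sketched like this; the 20-byte record size matches a hash160, but the file name and layout here are illustrative rather than the actual .bin format.

Code:
import os

REC = 20   # fixed-width hash160 records, sorted ascending

def in_sorted_bin(path: str, h160: bytes) -> bool:
    # classic seek-based binary search; needs no RAM beyond one record at a time
    with open(path, "rb") as f:
        lo, hi = 0, os.path.getsize(path) // REC
        while lo < hi:
            mid = (lo + hi) // 2
            f.seek(mid * REC)
            rec = f.read(REC)
            if rec == h160:
                return True
            if rec < h160:
                lo = mid + 1
            else:
                hi = mid
    return False

# usage (hypothetical file name):
# in_sorted_bin("all-h160.bin", bytes.fromhex("00" * 20))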

I used to just ring a bell every time an address was found that had btc. Since I have enlarged the bloom-filters beyond 16gb and extended the blockchain database scraping to all btc addresses ever used, I find very little noise, so now I can just have the few addresses found 'sweep' the key with electrum-server. But most of the time it's just dust, which is to be expected.

I think this approach will be most useful once the ETH people repurpose away from ETH mining to btc hacking, as a rack of rtx-3070's can do an enormous amount of hash, like 2,000MH/s * 300M, which is 600,000TH/s; times 6 rigs that's 3.6 EH/s, 3.6 million TH. Somebody with a room of gpu rigs could really clear out bitcoin.

I'm still just running a gtx-1060-3gb at 250MH/s * 300M, which is 75,000TH/s, still better than running a btc miner at 80TH.

I suspect that once a few people take this approach there will be a massive loss of trust in bitcoin, and the devs will finally get off their arse and make it stronger; but perhaps not, perhaps like ostriches they'll just stay in denial and watch everything disappear.

...

I do have rtx-3070's but I use them for ETH; I don't want to push the btc thing. IMHO if I push this I don't want to be the one sitting on tons of 'lost btc', it just brings the wrong kind of attention. I'm done, it was always a proof-of-concept endeavor; I proved it, it works, time to move on. The only thing left is fine-tuning the automated addition of new addresses to the bloom-filter (that code is on the github), and automatically sweeping 'found' addresses with electrum server. I did find that it didn't like 100K private-keys getting swept, so I have backed off to just one key at a time through rpc calls; just for fun I wanted to know how it would handle 100K private-keys, and it didn't like it.

Like I said, years ago I would just ring a 'bell' when a hot address (key-pair) was found, but now I can't be bothered, so I just let the stuff run 24/7 and auto-sweep, and really don't pay much attention to it all. Once you're running full rigs on this stuff you need to be more involved, because you're generating 100X or more data and getting lots of positives; as I don't really care about finding btc in the first place, this has all become rather boring.

Like I have said, the only thing left to do is derive private-keys straight from public-keys, but that involves a lot more work and strategy; it's more of a puzzle problem, while the bloom-filter gpu is just rolling the dice a trillion-trillion times a second & looking for wins. The ECDLP is more elegant & about math.
member
Activity: 182
Merit: 30
All talk and no examples of anything, getting tired of all this theory talk.

There are dozens of steps & modules in this process, its like cooking gourmet meal for a 1,000 people

Your the type of person that wants a GUI, and his hand held going to the bathroom.

All my stuff is cmd-line, like any hacking of cryptography its a long and tedious process, I have no intention of automating and black boxxing for morons.

I will provide components and explain in detail how to use each component, but I will only do this with people who are not morons.

Obviously, if you ain't already got a MS in physics or math, and ain't a python/c++ guru, and if you ain't got 10+ years experience in LINUX, no source on earth is going to help you, nobody helped me get this stuff going I did it on my own, and if smart people want to take this to the next step, that's good with me.

If you didn't spend 20+ years as a sw developer doing networking, os, graphics, device-drivers, .. then you aint' going to get any of this ever.

What moron 'talk' the entire thing is to teach people to think the right way about hacking bitcoin. GARWIN had a great quote "The hardest thing about dropping a-bomb was that once it was dropped they couldn't deny it existed", same here once everybody know's how easy it is to hack bitcoin, then the mythology is game-over.



Garwin was the presidents advisor on atomic physics for 50+ years, he's probably dead now.

Theory is way more interesting to me than 'code', you say your tired of 'thinking' you must be a first class un-educated fool. Why are you even posting? Don't you have a computer game to play?

Dude you are a scammer, and your programs have malware in it. Those little projects that you just uploaded that probably don't even work cause you was trying to sell those same ones over a year ago. Now they are free, I wonder why.....still talk that same gibberish theory with those crappy programs you got and I don't remember trying to use your programs because you was trying to sell them. Now all of a sudden uploading them for free. Keep talking crazy, your account will be gone. You the one brought up the word moron, I could go on and on about you.

All his post are FUD with no sources like always, dont know how he keep ranting about btc, just posting jibberish and not backing anyword, as you said i dont know how this guy can keep posting even  when they ban you for less..
Did you even go to the github site? The sources are there; I posted last week about 15 components to the entire system. Is this what you do, post crap without even having gone to the site to study and/or test the code (100% open python & c++)?

I normally don't respond to morons, but I felt that some of the handicapped have special needs

https://github.com/room101-dev/Grand-Ultimate-BTC-Hacker/tree/master

All the code is there, but idiots & morons can do nothing with code, as it takes real systems knowledge to even approach this material.
copper member
Activity: 41
Merit: 0
All talk and no examples of anything, getting tired of all this theory talk.

There are dozens of steps & modules in this process, its like cooking gourmet meal for a 1,000 people

Your the type of person that wants a GUI, and his hand held going to the bathroom.

All my stuff is cmd-line, like any hacking of cryptography its a long and tedious process, I have no intention of automating and black boxxing for morons.

I will provide components and explain in detail how to use each component, but I will only do this with people who are not morons.

Obviously, if you ain't already got a MS in physics or math, and ain't a python/c++ guru, and if you ain't got 10+ years experience in LINUX, no source on earth is going to help you, nobody helped me get this stuff going I did it on my own, and if smart people want to take this to the next step, that's good with me.

If you didn't spend 20+ years as a sw developer doing networking, os, graphics, device-drivers, .. then you aint' going to get any of this ever.

What moron 'talk' the entire thing is to teach people to think the right way about hacking bitcoin. GARWIN had a great quote "The hardest thing about dropping a-bomb was that once it was dropped they couldn't deny it existed", same here once everybody know's how easy it is to hack bitcoin, then the mythology is game-over.



Garwin was the presidents advisor on atomic physics for 50+ years, he's probably dead now.

Theory is way more interesting to me than 'code', you say your tired of 'thinking' you must be a first class un-educated fool. Why are you even posting? Don't you have a computer game to play?

Dude you are a scammer, and your programs have malware in it. Those little projects that you just uploaded that probably don't even work cause you was trying to sell those same ones over a year ago. Now they are free, I wonder why.....still talk that same gibberish theory with those crappy programs you got and I don't remember trying to use your programs because you was trying to sell them. Now all of a sudden uploading them for free. Keep talking crazy, your account will be gone. You the one brought up the word moron, I could go on and on about you.

All his post are FUD with no sources like always, dont know how he keep ranting about btc, just posting jibberish and not backing anyword, as you said i dont know how this guy can keep posting even  when they ban you for less..
member
Activity: 182
Merit: 30
Thanks for the response. 

Curiosity question; do you notice a speed difference if you have say a 1Gb bloom versus the full 8Gb bloom you are running? Or is it the same speed no matter how large/how many h160s the bloom contains?

I will try and start small; create a smaller bloom first (30M h160s), run it, see if I notice a speed increase and make sure I understand/get the process right.

Well, 1GB will let you check 30-50M high-value btc addresses. The original brainflayer was 512MB, 15M addresses.

The speed is the same; the checks are exactly the same in both cases. All that can be said is that the RAM in-memory requirements are different.

The 8gb is to do all 300M known btc addresses, all at once.

Study the source code for the bloom-filters (hex2blf); it's in the .H files, and it's just static serial compares. In both cases the run-time test count is the same, so there is little delay difference, and given it's all in memory, that is all equal as well.
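
As a companion to the sizing question, the classic bloom-filter sizing rule gives the bit count and test count needed for a target false-positive rate; the item counts below (30M and 300M) are just the figures quoted in this exchange, and the target rate is an arbitrary example.

Code:
import math

def bloom_size(n_items, target_fpr):
    # classic sizing: m = -n*ln(p) / (ln 2)^2 bits, k = (m/n)*ln 2 tests
    m = -n_items * math.log(target_fpr) / (math.log(2) ** 2)
    k = (m / n_items) * math.log(2)
    return math.ceil(m / 8 / 2**20), round(k)   # (megabytes, tests)

print(bloom_size(30_000_000, 1e-12))    # ~30M high-value addresses
print(bloom_size(300_000_000, 1e-12))   # all ~300M known addresses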
full member
Activity: 706
Merit: 111
All talk and no examples of anything, getting tired of all this theory talk.

There are dozens of steps & modules in this process, its like cooking gourmet meal for a 1,000 people

Your the type of person that wants a GUI, and his hand held going to the bathroom.

All my stuff is cmd-line, like any hacking of cryptography its a long and tedious process, I have no intention of automating and black boxxing for morons.

I will provide components and explain in detail how to use each component, but I will only do this with people who are not morons.

Obviously, if you ain't already got a MS in physics or math, and ain't a python/c++ guru, and if you ain't got 10+ years experience in LINUX, no source on earth is going to help you, nobody helped me get this stuff going I did it on my own, and if smart people want to take this to the next step, that's good with me.

If you didn't spend 20+ years as a sw developer doing networking, os, graphics, device-drivers, .. then you aint' going to get any of this ever.

What moron 'talk' the entire thing is to teach people to think the right way about hacking bitcoin. GARWIN had a great quote "The hardest thing about dropping a-bomb was that once it was dropped they couldn't deny it existed", same here once everybody know's how easy it is to hack bitcoin, then the mythology is game-over.

Garwin was the presidents advisor on atomic physics for 50+ years, he's probably dead now.

Theory is way more interesting to me than 'code', you say your tired of 'thinking' you must be a first class un-educated fool. Why are you even posting? Don't you have a computer game to play?

Dude you are a scammer, and your programs have malware in it. Those little projects that you just uploaded that probably don't even work cause you was trying to sell those same ones over a year ago. Now they are free, I wonder why.....still talk that same gibberish theory with those crappy programs you got and I don't remember trying to use your programs because you was trying to sell them. Now all of a sudden uploading them for free. Keep talking crazy, your account will be gone. You the one brought up the word moron, I could go on and on about you.
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Thanks for the response. 

Curiosity question; do you notice a speed difference if you have say a 1Gb bloom versus the full 8Gb bloom you are running? Or is it the same speed no matter how large/how many h160s the bloom contains?

I will try and start small; create a smaller bloom first (30M h160s), run it, see if I notice a speed increase and make sure I understand/get the process right.
member
Activity: 182
Merit: 30
Quote
use a baby-step/giant-step algo that runs through public-keys, where I have 100's of 1,000's every 20 minutes randomly search for a new key from that public hash ( of known high-value )

So does your bloom contain addresses or pub keys? Sounds like you have addresses in bloom filter.

If you tweaked vansearch to a modified BSGS, what is the process your program is running?

(Reading your github now)

A bloom filter only has H160 hex addresses, derived straight from the private key; they can be in comp or un-comp format.

The 'address' can be 1z, 3c, ... any format. I use an address template to mix and match the batch runs; I have a database of all user addresses > 1.0 btc for all time, and on every new run one of those is used as a 'template', sort of like vanity-search. But once the thing is running, the seed changes every 100M private-keys so the searches are never the same.

On every 250MH/sec cycle I check 4 h160 formats against the bloom; the bloom has 300M h160's, so the actual matching on each cycle is 300M*250M, which is 1TH, but this is using the slowest GPU.

You always use h160 in the bloom filter, it's the most compact; 'addresses' are just human things, but using them helps to structure the private-key space to map to a realistic address.

Read the white paper at the site on top of this post; this project has two dozen components. Almost all source has been posted on github, but I could have missed something.

More work has been done on mapping 'addresses' to private-key space (think vanity search) than, say, private key to h160, which is just sha-256 & by definition 100% random. Addresses have structure, so I can use ML to train so the machine can search for private-keys in an intelligent way. Hey, it works.

I'll repeat that there are dozens of components; the blind men & the elephant is apt here: if you think only about the leg, you'll never see the elephant.

The only thing interesting about the bloom-filter is that it was extremely difficult to get the 8gb to work in memory in real time, so I could do 9PTH/Sec with a rtx-3070, which generates about one high-value private-key per week. Applying this to a rig would bust tons of bitcoins wide open.

I used to do this years ago, but my fatal flaw was not putting the bloom filter on its own NVME m.2 next to the cpu; now, instead of 20MH/sec, I'm seeing 1500MH/sec, then multiply that by 300M h160 (addresses).

In summary, the bloom filter matches h160's directly mapped from the private key by secp256k1 rules. The usage of human-readable 'btc-addresses' is just a training method so that the engine is doing a baby-step/giant-step in a probabilistic area for that key/addr pair. It doesn't matter if it finds that one, because it checks 300M others on the same cycle, so it's a twofer.

Sorry, I have tried to answer your question ten different ways; I hope you get it.

I know this is all a very different way of thinking about hacking btc; it's why everybody has failed so far, 99.9% just randomly compare one key to one address on each cycle.

I have published this work because it's done. My personal work is ECDLP, directly mapping public keys to private keys using advanced mathematics & super-computers.

I didn't tweak vanity-search; it was completely re-written over the last few years. About 5 years ago I re-wrote vanity-flayer for opencl, and I also rewrote brain-flayer, which I now call BF3, and it supports a 4GB bloom where the original was 512mb.

...

The critical part in all this is setting up the 8gb bloom-filter with valid h160 hex from the blockchain, then deploying the BLF file on its own NVME. Then shit just sky-rocketed: the GPU went from 10% usage to 99.9%. I used to put the bloom on the GPU, and that was super-fast, but debugging from hell; that code is also included. There must be nothing else on the NVME than this 8gb file. It's critical that there are no busy-controller issues; then it's just like an 8gb dynamic-ram disk through shared memory connected to the GPU.

All this is important because it's nothing to get 250MH/sec out of a card; the importance is the 300M * 250MH.
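
Since the distinction between human-readable addresses and the raw h160 comes up repeatedly here, this is a small sketch of turning a legacy base58 address into the 20-byte hash160 that would actually go into a bloom or .bin file. It only handles base58 (1.../3...) addresses, not bech32, and the example address is just the one quoted elsewhere in this thread.

Code:
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def addr_to_h160(addr: str) -> bytes:
    # base58check-decode: 25 bytes = version(1) + hash160(20) + checksum(4)
    n = 0
    for c in addr:
        n = n * 58 + B58.index(c)
    raw = n.to_bytes(25, "big")
    payload, checksum = raw[:-4], raw[-4:]
    if hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] != checksum:
        raise ValueError("bad base58check checksum")
    return payload[1:]   # drop the version byte, keep the hash160

print(addr_to_h160("1BkVazubQAtVfbsnJwArjV3qvRNEiZqTWx").hex())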
member
Activity: 182
Merit: 30
2.) add the 8gb bloom filter, note that linux only support 4gb shared memory, so the bloom must be cascaded, note that brain-flayer only used a 512mb bloom, so creating a 8gb bloom is a major task in software engineering

I had PhoenixMiner running with 10gb GPU memory and haven't heard of this limitation. Why map the bloom filter as shared gpu memory in the first place?

Also how are you storing this bloom filter on SSD? Is that what you mean by shared memory, by using something like mmap() on a file?

Well, it's easy on the GPU to generate secp256k1 keys and map them to hash160 addresses; then, on the same GPU, in theory at 1,000MH/sec, you look up in just a few cycles whether that h160 is in the hot-list of the bloom-filter.

The problem of course is that CUDA doesn't work well for this. I did get it to work with OpenCL, but never got the bloom above 1GB, which is big but not big enough.

When brainflayer came out they had 512mb blooms, but their list of bitcoin addresses that had value was only 15M; now there are 300M, so you need a +8gb bloom just to prevent the false positives.

I hope this helps you understand the problem.

The new technique is rather cool: the gpu generates 10k key-pairs (address-h160/priv-key) into shared memory, the cpu runs the list against the bloom on the on-board NVME without ever having to access a physical disk, and the GPU actually runs at almost 100% all the time.

There are general mathematical equations that tell you, for N objects, how big your bloom-filter must be to reduce your false positives to x. I shoot for 1 in a trillion-trillion.

What I used to do with OpenCL is reduce the list to bitcoin addresses holding more than 10,000 satoshis, but IMHO it's better to just keep it open to all addresses ever used in the block chain.

I have two python routines in the github: one is get-all, which scans the blockchain for all addresses (you can set the satoshi threshold), and get-new, which is a process that gets the new addresses off the mining pool every 10 minutes. All this updates the bloom filter every hour, which is then shipped back to the miners via /linuxshare on samba drives. The only thing on the NVME on the miner is this bloom, which is shared with the mother super-computer that is updating the bloom-filter for all her miners.
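
The get-all / get-new scripts themselves are in the linked repo; as a rough illustration of the same idea, here is a minimal mempool harvest using python-bitcoinrpc against a local node. The credentials and the exact field handling are assumptions (Core 22+ reports a single "address" per output, older versions an "addresses" list), and this is not the author's actual get-new code.

Code:
from bitcoinrpc.authproxy import AuthServiceProxy, JSONRPCException

rpc = AuthServiceProxy("http://rpcuser:rpcpass@127.0.0.1:8332")  # hypothetical credentials

def harvest_mempool_addresses():
    # collect every output address currently visible in the mempool
    seen = set()
    for txid in rpc.getrawmempool():
        try:
            tx = rpc.getrawtransaction(txid, 1)      # verbose decode; works for mempool txs
        except JSONRPCException:
            continue                                 # tx confirmed/evicted between calls
        for vout in tx["vout"]:
            spk = vout["scriptPubKey"]
            if "address" in spk:
                seen.add(spk["address"])
            for a in spk.get("addresses", []):
                seen.add(a)
    return seen

print(len(harvest_mempool_addresses()), "addresses seen in the mempool")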
full member
Activity: 1162
Merit: 237
Shooters Shoot...
Quote
use a baby-step/giant-step algo that runs through public-keys, where I have 100's of 1,000's every 20 minutes randomly search for a new key from that public hash ( of known high-value )

So does your bloom contain addresses or pub keys? Sounds like you have addresses in bloom filter.

If you tweaked vansearch to a modified BSGS, what is the process your program is running?

(Reading your github now)
member
Activity: 182
Merit: 30
All talk and no examples of anything, getting tired of all this theory talk.

https://github.com/room101-dev/Grand-Ultimate-BTC-Hacker/projects

Seriously, what will you do with code? I'll start adding all the code once I map out all the components. I don't want to miss anything, and I want to be up front on hw requirements.

While the 'miner' GPU stuff is a fool's errand, actually building the bloom-filter, scrubbing the blockchain database for all 300M addresses, and maintaining memory-pools to update the blf every 10 minutes is critical.

My feeling is that the process (the idea of it all) is more important than the implementation. Once people see the entire picture, I'm sure some 18yr-old high-school kid can by reduction boil it all down to your bathroom 'time to understand' criteria. Lots of people already have some of the components I'm discussing; nobody has it all except those of us who 'rolled our own'.

Once all the source is out there, the high-school kids can take over. It's easy to 'do something' when somebody else hands it to you on a plate.

You must understand the components before you can understand the 'system'.

So many of these components I haven't touched for years; I need to go back and find all the source, and remember why each was useful. Lots of this stuff, once it's done, is done forever, but if you don't have it, you have nothing.

Like they said, "If it was easy, everybody would be doing it".

Hacking BITCOIN is like the blind men and the elephant: some people have the leg and say "Can't be done, this tree will never move"; very few see the big picture. My goal is to teach the entire ELEPHANT of bitcoin hacking.

Before I start releasing code, tell me the order and ask questions; ask why these components are required, as I will get bored in a few weeks and not return to this place. I only come around here once every 2-3 years.

I want to return to my math work on ECDLP.
member
Activity: 182
Merit: 30
All talk and no examples of anything, getting tired of all this theory talk.

There are dozens of steps & modules in this process, its like cooking gourmet meal for a 1,000 people

Your the type of person that wants a GUI, and his hand held going to the bathroom.

All my stuff is cmd-line, like any hacking of cryptography its a long and tedious process, I have no intention of automating and black boxxing for morons.

I will provide components and explain in detail how to use each component, but I will only do this with people who are not morons.

Obviously, if you ain't already got a MS in physics or math, and ain't a python/c++ guru, and if you ain't got 10+ years experience in LINUX, no source on earth is going to help you, nobody helped me get this stuff going I did it on my own, and if smart people want to take this to the next step, that's good with me.

If you didn't spend 20+ years as a sw developer doing networking, os, graphics, device-drivers, .. then you aint' going to get any of this ever.

What moron 'talk' the entire thing is to teach people to think the right way about hacking bitcoin. GARWIN had a great quote "The hardest thing about dropping a-bomb was that once it was dropped they couldn't deny it existed", same here once everybody know's how easy it is to hack bitcoin, then the mythology is game-over.

Garwin was the presidents advisor on atomic physics for 50+ years, he's probably dead now.

Theory is way more interesting to me than 'code', you say your tired of 'thinking' you must be a first class un-educated fool. Why are you even posting? Don't you have a computer game to play?
member
Activity: 182
Merit: 30
I didn't say SSD, I said  NVME-4x m.2 next to the CPU, are you retarded?

You also said this in the OP:

If you actually every wrote any cuda source code you know the allocation size, its documented, but you cannot for instance get 8gb contiguous on a 12 gb gpu card, its a limitation of the architecture of that card family your using

OK it makes sense that you can't have it all contiguous. I have tweaked some CUDA in the past but never wrote entire programs out of it, so I haven't paid attention to cuda memory limits.

Nice to know this stuff can be offloaded from the GPU  Smiley

I'm going to start setting up the new github today; I'll provide you a link once there is enough there to start. I have been working on this for five years, it's done, and I have moved on. In the last year I have worked on the second thing above, actually calculating the private key from the public key explicitly. This GPU stuff is just random, you know, but at 9,000TH/sec that's 100's of bitmain s19s' worth of work on one graphics card.

Like I said, this stuff is the lowest rung; all this bitcoin-tech is just looping around checking one address at a time, and there are as many addresses as electrons in the universe, so obviously looking for that electron that way is DUMB, which is why the tech bitcoin-talk people let that garbage remain. Every time I post stuff they delete it; the last time I did a github, 3 years ago, I posted ML-RNN for how to calc an address to a private-key and they locked it. They don't want anybody to move forward; it's all 'woke' here on bitcoin-talk, the old guys just want to keep it like it was in 2009 forever.

Like I'm saying, I'm more interested in the ECDLP problem, and backdoors using endomorphisms to directly map public to private.

Given the majority of all addresses are now hashed, this GPU miner method is the only way to do this; it must be random, because the sha256 is random, but you can be clever like I have done, and it does find high-value addresses, it works.

I don't care about money, I do this stuff for fun.

I'm only here to show people the light, the path. Given I have already spent ten years hacking btc, I have gone down many blind paths; if anybody has a question, please ask, I have no reason to hold anything back. I have said many times, the btc devs deserve to see their baby die; they have sat on their arse for years and done nothing to make BTC stronger.

Satoshi knew that BTC using sha256 (NSA) & secp256k1 (NSA) had a ten-year limited lifetime; the devs didn't move forward, so they deserve to lose it all.
member
Activity: 182
Merit: 30
All talk and no examples of anything, getting tired of all this theory talk.

You can lead a moron  to water, but you can't make them drink, funny how this works.

I see I triggered you real good, you know you spam the same thing over and over. You will stop posting for a while and start back. You do it every time.

What do you want to know? I have already posted python before, and it's the same imbeciles who want their hands held. I take a different approach now: I tell you how to do it, and smart people can ask questions if they don't understand what I'm saying.
member
Activity: 182
Merit: 30
How are you going you deploy your gpus in the future?
full member
Activity: 706
Merit: 111
All talk and no examples of anything, getting tired of all this theory talk.

You can lead a moron  to water, but you can't make them drink, funny how this works.

I see I triggered you real good, you know you spam the same thing over and over. You will stop posting for a while and start back. You do it every time.
member
Activity: 182
Merit: 30
I didn't say SSD, I said  NVME-4x m.2 next to the CPU, are you retarded?

You also said this in the OP:

If you actually every wrote any cuda source code you know the allocation size, its documented, but you cannot for instance get 8gb contiguous on a 12 gb gpu card, its a limitation of the architecture of that card family your using

OK it makes sense that you can't have it all contiguous. I have tweaked some CUDA in the past but never wrote entire programs out of it, so I haven't paid attention to cuda memory limits.

Nice to know this stuff can be offloaded from the GPU  Smiley

I was very careful to say in the OP that to get 250MH/sec from a gtx-1060 you need to run an NVME m.2 near the cpu to get the full pcie-4x bandwidth; an SSD is generally sata based and is a dog.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
I didn't say SSD, I said  NVME-4x m.2 next to the CPU, are you retarded?

You also said this in the OP:

If you actually every wrote any cuda source code you know the allocation size, its documented, but you cannot for instance get 8gb contiguous on a 12 gb gpu card, its a limitation of the architecture of that card family your using

OK it makes sense that you can't have it all contiguous. I have tweaked some CUDA in the past but never wrote entire programs out of it, so I haven't paid attention to cuda memory limits.

Nice to know this stuff can be offloaded from the GPU  Smiley
member
Activity: 182
Merit: 30
LOL. Cool story bruh.  Best fiction I've read all week.

 Roll Eyes

Yep, maxwell says the same thing here how big words keeps him from sleeping at night, says to keep it low and keep it dumb
member
Activity: 182
Merit: 30
2.) add the 8gb bloom filter, note that linux only support 4gb shared memory, so the bloom must be cascaded, note that brain-flayer only used a 512mb bloom, so creating a 8gb bloom is a major task in software engineering

I had PhoenixMiner running with 10gb GPU memory and haven't heard of this limitation. Why map the bloom filter as shared gpu memory in the first place?

Also how are you storing this bloom filter on SSD? Is that what you mean by shared memory, by using something like mmap() on a file?

The blf is just a file; it's what it is when loaded in memory that counts.

I didn't say SSD, I said NVME-4x m.2 next to the CPU, are you retarded? To create a blf file, study brainflayer; the xxd example is all in there, it's a linux cmd-line tool.

If you bothered to write CUDA code and do an alloc, you would know you can only alloc a max chunk of about 700mb. Early on I wanted to keep the bloom on the gpu; now I know it doesn't matter, having done it both ways.

Study the original brainflayer to understand bloom-filters, then 20x that, because they only supported 512mb.

Yes, mmap is shared memory on linux; only 4gb chunks are supported, so like I already said, you need to cascade two chunks.

If you actually every wrote any cuda source code you know the allocation size, its documented, but you cannot for instance get 8gb contiguous on a 12 gb gpu card, its a limitation of the architecture of that card family your using
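
To make the mmap point concrete, a read-only memory map over the blf file lets a lookup touch only the pages it needs, with the kernel paging them in from the NVME on demand. The bit-position scheme below is the same illustrative one as the update sketch earlier in the thread, not brainflayer's or hex2blf's real layout, and monster8.blf is just the file name mentioned in an earlier post.

Code:
import hashlib, mmap

K_TESTS = 16

def bloom_contains(mm, h160: bytes, m_bits: int) -> bool:
    # all K bit tests must pass for a (probable) hit
    for i in range(K_TESTS):
        d = hashlib.sha256(bytes([i]) + h160).digest()
        pos = int.from_bytes(d[:8], "big") % m_bits
        if not (mm[pos >> 3] & (1 << (pos & 7))):
            return False
    return True

with open("monster8.blf", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)   # pages fault in from the NVME
    print(bloom_contains(mm, bytes(20), 8 * len(mm)))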
full member
Activity: 706
Merit: 111
All talk and no examples of anything, getting tired of all this theory talk.
legendary
Activity: 3472
Merit: 4801
LOL. Cool story bruh.  Best fiction I've read all week.

 Roll Eyes
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
2.) add the 8gb bloom filter, note that linux only support 4gb shared memory, so the bloom must be cascaded, note that brain-flayer only used a 512mb bloom, so creating a 8gb bloom is a major task in software engineering

I had PhoenixMiner running with 10gb GPU memory and haven't heard of this limitation. Why map the bloom filter as shared gpu memory in the first place?

Also how are you storing this bloom filter on SSD? Is that what you mean by shared memory, by using something like mmap() on a file?
member
Activity: 182
Merit: 30
Why is this post here? The mods seem to think that bitcoin is an alt-coin, go figure.

Two parts here: random key search of bitcoin, and ECDLP high-level math analysis to map public keys to private.

Also going to include the OPENCL code, so people can do both NVIDIA & AMD cards for hacking btc.
Also including brain-flayer 3.0, a complete rewrite that supports up to 4gb bloom-filters ( about 150M bitcoin addresses )

**
https://github.com/room101-dev/Grand-Ultimate-BTC-Hacker/projects [ dozens of components, 100's of 1,000's of lines of code, huge requirements for the server - experts only need apply ]
https://github.com/btc-room101/bitcoin-rnn [ how to map btc address to private key using ML RNN tensorflow python ]
***

Recently I have re-deployed my RTX-3070's from hacking btc to mining ETH, which is very profitable now.

These old GTX1060-3gb cards are worthless, but just experimenting around I made an interesting discovery.

Normally the 1060-3 would do about 50MH/sec, but when I put the hacking tool on its own NVME m.2 on the MOBO, I found the 8gb bloom filter could keep up, allowing the GPU to run at max, almost 100%.

Now I have 300M addresses from BTC that have value; placed in my 8gb bloom filter, that gives a false-positive rate of one in a trillion-trillion-trillion, so not often.

The 250 MH/sec is how often a private key is created (more on this later), and then all possible (comp/uncomp) addresses (h160) are run through the bloom-filter, so the through-put is really 250M x 300M, or almost 1,000 TH/sec of address compares, but really higher because I'm doing 4 different combinations, so it's 4,000 TH/sec on just one card; and I'm doing six on a rig, so it's 24 PH/sec.

This yields about one lost address per week to be found.

For the record, the rtx-3070 was doing 1,000MH/sec, but I was just running the bloom-filter off an SSD, and it was shared. I have never tested a standalone NVME with the RTX; I could imagine they would do 1TH/sec easily.

What's all this mean? Well, in BTC secp256k1 you got 10e77 combos; 1/2 that to birthday and you got 10e36; divide that by 10e15 and you get 10e15; that many seconds tells you how often you should have a hit in real time. Old calculations were on the order of one every 1,000 days.

But on a rig I'm seeing hits way more often; I guess the simple matter is that the napkin math is always different from the real world.

So what are the tools?

1.) hack up vanity-search, which is the best gpu solution for generating keys
2.) add the 8gb bloom filter; note that linux only supports 4gb shared memory, so the bloom must be cascaded, and note that brain-flayer only used a 512mb bloom, so creating an 8gb bloom is a major task in software engineering
3.) hack up vanity-search so that every new key is run through the bloom-filter for every possible address that can be generated: comp, un-comp, h160, eth, ... (see the sketch after this list)
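
A condensed, CPU-only sketch of the per-key loop described in item 3 above: derive the compressed and uncompressed hash160s for each candidate key and test them against a target set. It assumes the third-party python 'ecdsa' package, uses a plain Python set as a stand-in for the 8gb bloom, and the h160s.hex file name is made up; the real thing does this on the GPU against the mmap'd blf.

Code:
import hashlib, secrets
from ecdsa import SigningKey, SECP256k1   # assumes the third-party 'ecdsa' package

def h160(pubkey_bytes: bytes) -> bytes:
    # hash160 = RIPEMD160(SHA256(pubkey)); ripemd160 may need OpenSSL's legacy provider
    return hashlib.new("ripemd160", hashlib.sha256(pubkey_bytes).digest()).digest()

def candidate_h160s(priv: bytes):
    vk = SigningKey.from_string(priv, curve=SECP256k1).get_verifying_key()
    yield h160(vk.to_string("compressed"))     # 33-byte 02/03-prefixed pubkey
    yield h160(vk.to_string("uncompressed"))   # 65-byte 04-prefixed pubkey

# stand-in for the bloom: a set of target hash160s (hypothetical input file)
targets = {bytes.fromhex(l.strip()) for l in open("h160s.hex")}

while True:
    # uniform random key; the post describes a BSGS-style / templated walk instead
    k = int.from_bytes(secrets.token_bytes(32), "big") % (SECP256k1.order - 1) + 1
    priv = k.to_bytes(32, "big")
    for h in candidate_h160s(priv):
        if h in targets:
            print("hit:", priv.hex(), h.hex())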

The Gtx-1060 only uses 970mb of memory, so in effect even the older gtx750 could be used here Smiley

I spent time a few years ago putting the bloom-filter on the gtx-1070, but even though there is 8gb, you can only allocate about 700mb max at a time in CUDA dev, so I found it near impossible to get a working 4gb bloom running on a GPU, and fell back to doing it on the CPU.

Note that in my rig here I'm just using a 2-core i3 ($50 cpu) with 4gb memory, as this is essentially 100% gpu.

I'm just sharing this because, well, it's my hobby, and I have to report.

Lastly, just for the record, like the original brain-flayer dude, I don't spend the coin; I don't even use a single exchange on earth. What I do when I find btc is put it in a database and watch how long it takes to be spent after I find it. I have found to date that almost all coins are gone in < 1 yr, which tells me I'm not the only one playing this game.

The selection of random keys in vanity-search is very critical; of course all that prefix code has to be stripped, as you're not looking for a prefix. I use a baby-step/giant-step algo that runs through public-keys, where every 20 minutes I have 100's of 1,000's randomly search for a new key from that public hash (of known high value). I do this to make sure that my private-key search space is within known valuable addresses; I'm not looking for a prefix like vanity, I'm looking for a random public address of known value. The 20-min idea is that after that time you try another; because of the scale of the pool, there isn't much duplication.

...

Lastly, on hacking btc: I spent years working on hacking the original public-key coins holding +50 btc; there used to be 1,000's, now there are < 900, and it's dropping by a few per month, so they're being hacked. I was very close to getting a public key mapped to a private key using various techniques, but had to move on to other projects. Essentially the trick is: p is the prime and n is the order; if you can get n-1, n, or n+1 == p, then you can use a linear discrete-log tool (cado-nfs) to get the private key. The tough nut to crack is finding a p just close enough to the secp256k1 p to put you in the ballpark for using kangaroo, where you need to be within 2^40.

I have cracked these public high-value keys, but given that I don't actually touch them or sweep them, it's just a mental puzzle and I move on; the cado-nfs work and finding n+1 is a very difficult & tedious process, and after a few months I get burned out and move on to other projects.

Mapping public keys (not hashed; pre-2013 bitcoin was not sha'd) is a real thing, but it takes tremendous knowledge of graduate math in crypto. The ECDLP is one of the most difficult problems in mathematics. There are many steps, it's not like normal hacking; there are no tools, everything is roll-your-own. There is no such thing on GITHUB, as the only thing you'll find there is junk. Like they say, if it made money, they wouldn't be giving it away for free.