I've been reading this for a few hours tonight. Quick question, and this is probably stupid, but once the bloom filter is created, is the new-block routine CPU- and storage-intensive? Reading over the requirements of creating the 8 GB bloom filter to begin with, I am wondering if this is something I could generate in AWS and then download and host locally.
You really need to quit thinking and start tinkering.
There are two sides to this, like all this stuff. You need a super computer (Threadripper-class, 32+ GB RAM, and lots of NVMe drives) to collect your bitcoin addresses, make your bloom filters, and fill them.
Then you need mining rigs, one card or a set, each with at least 8 GB. The bloom filter on a mining rig is kept on an NVMe drive, but it gets updated every 12 hours: I keep a task running that harvests new bitcoin addresses from the mempool and adds them to the bloom filter every 12 hours. You could do that on the mother system that made the initial bloom filter, but I find it better to ...
1.) Have one old system that runs bitcoin-core and an electrum server, which hands off bitcoin addresses from the pool. The electrum server is there to add found priv-keys to the wallets on each mining rig; the 'loop2.sh' mining script has a section where any found bitcoins are swept into the wallet. Note there are also binchk routines that kick out false positives from the bloom filter (see the sketch after this list). (Make sure your old computer has a 1 TB+ SSD for the bitcoin blockchain, otherwise the RPC will be too slow for the mining rigs to harvest addresses & priv-keys.)
2.) A super computer to make the bloom filters and manage the source code (C++/Python). I define a super computer as 32+ GB RAM, a 12+ core ripper, and 2+ NVMe drives in PCIe 4x16 slots (128 GB is OK for the bloom filter, but 1 TB for processing).
3.) Mining rigs: i3-class 4-cores are OK, with 8+ GB RAM. The 'mining' process itself only uses about 900 MB across GPU & CPU, but when you update the bloom filter (which is why I only do it every 12 hours), the CPU needs 8 GB because the code uses shared memory, so 12 GB would be best on each miner. (Every 12 hours, because adding the ~300k new addresses to the bloom filter takes about 30 minutes on a mining rig, and during that time the process speed drops from 400 MH/s to 60 MH/s, so I don't want to do it too often.)
4.) For 'testing' you could just have any old GPU card on the 'super computer' for validation.
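Roughly, the bloom-test + binchk combo looks like this. (A minimal Python sketch only: the double-SHA256 hashing scheme, K=4 hash count, and the names here are illustrative assumptions, not the actual binchk internals.)

```python
import bisect
import hashlib
import mmap

BLF_PATH = "monster8.blf"   # the 8 GB filter file named later in this post
BLF_BITS = 8 * 2**30 * 8    # 8 GB expressed in bits
K = 4                       # assumed number of hash functions

def bit_positions(item: bytes):
    # Derive K bit positions from a double-SHA256 of the item (assumed scheme).
    d = hashlib.sha256(hashlib.sha256(item).digest()).digest()
    for i in range(K):
        yield int.from_bytes(d[i * 8:(i + 1) * 8], "big") % BLF_BITS

def bloom_maybe_contains(blf, item: bytes) -> bool:
    # Any clear bit means definitely absent; all K set means only "maybe".
    return all(blf[p // 8] & (1 << (p % 8)) for p in bit_positions(item))

def binchk(sorted_addrs, item) -> bool:
    # Binary search of the full sorted address list kills the false positives.
    i = bisect.bisect_left(sorted_addrs, item)
    return i < len(sorted_addrs) and sorted_addrs[i] == item

# Usage (needs the real files, so commented out):
# with open(BLF_PATH, "rb") as f:
#     blf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
#     if bloom_maybe_contains(blf, candidate) and binchk(all_addrs, candidate):
#         print("real hit, sweep the priv-key")
```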
The 'new-block' routine, as of right now, my way, is to run on each miner. I tried running it on the bitcoin-core server before, but the bitcoin & electrum servers ground to a halt during the bloom-filter update (which requires 8+ GB RAM, so the system becomes cache-locked). The thing is, I don't want my bitcoin & electrum servers to ever stop, otherwise the entire system fails: the mining machines running new-block expect a good RPC connection, and the mining routine also expects a good electrum RPC to sweep priv-keys.
So now I have 'new-block', which for those who haven't READ, is a batch file that calls Python to collect all the new addresses from the pool every 15 minutes (the new-block sleep): about 4,000-12,000 new addresses per cycle. They're concatenated and 'sort uniq'-ed, of which I see about a 10% reduction, so most are new, non-duplicated addresses.
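The gist of new-block, as a rough Python sketch (assuming python-bitcoinrpc for the RPC; the credentials, file names, and loop structure here are placeholders, not my actual batch file):

```python
import time
from bitcoinrpc.authproxy import AuthServiceProxy  # pip install python-bitcoinrpc

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")  # placeholder creds

def addresses_in_best_block():
    # Verbosity 2 returns full transaction detail for every tx in the block.
    block = rpc.getblock(rpc.getbestblockhash(), 2)
    addrs = set()
    for tx in block["tx"]:
        for vout in tx["vout"]:
            # Core >= 22 exposes 'address'; older versions used 'addresses'.
            addr = vout["scriptPubKey"].get("address")
            if addr:
                addrs.add(addr)
    return addrs

seen = set()
while True:
    new = addresses_in_best_block() - seen   # the 'sort uniq' step, in effect
    seen |= new                              # roughly 10% of each batch is dupes
    with open("new-address.txt", "a") as f:  # converted to hex later for hex2blf8
        f.writelines(a + "\n" for a in sorted(new))
    time.sleep(15 * 60)                      # the 15-minute new-block sleep
```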
Adding 300,000 new addresses to the bloom filter on the 'super computer' (32 GB, 24-core) is a 5-minute task, but on a mining rig it can take an hour. I still do it on the rigs, because if you do it on the bitcoin-core server, all the RPC calls halt and the entire system suspends.
Probably the best way would be to have the super computer update the bloom filter every 15 minutes and then transfer the new bloom by 'ssh copy' (scp) to the NVMe on each miner. But I don't do this now; I used to do it years ago, and my super computer has since been re-tasked to more interesting projects.
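If you did run it that way, the push itself is trivial; something like this (hostnames and paths are placeholders):

```python
import subprocess

MINERS = ["rig1", "rig2", "rig3"]   # placeholder hostnames for the rigs
BLF = "monster8.blf"

for host in MINERS:
    # Copy the fresh filter straight onto each rig's dedicated NVMe.
    subprocess.run(["scp", BLF, f"{host}:/mnt/nvme/{BLF}"], check=True)
```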
So in summary: a normal mining rig with 1+ GPUs running the miner against the bloom filter. On a GTX 1060 3GB I'm now seeing about 400 MH/s on one card; it was 250 MH/s tops before. The new speedup came from adding more memory to the mining rig (asus-370p), from 4 GB to 8 GB; I think 12 GB would be ultimate. The memory above 1 GB is only used during the bloom-filter update process, invoked as "hex2blf8 new-address.hex monster8.blf".
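What a hex2blf8-style update boils down to, as a hedged sketch: mmap the existing .blf and set K bit positions per new address. (The hashing scheme and K here are assumptions and the real tool's internals may differ, but the mmap is exactly why the update needs the extra RAM.)

```python
import hashlib
import mmap

BLF_BITS = 8 * 2**30 * 8   # an 8 GB filter in bits
K = 4                      # assumed hash count

def update_blf(blf_path: str, hex_path: str) -> None:
    with open(blf_path, "r+b") as f:
        blf = mmap.mmap(f.fileno(), 0)   # shared memory: the +8 GB RAM pressure
        with open(hex_path) as hexes:
            for line in hexes:
                item = bytes.fromhex(line.strip())
                d = hashlib.sha256(hashlib.sha256(item).digest()).digest()
                for i in range(K):
                    pos = int.from_bytes(d[i * 8:(i + 1) * 8], "big") % BLF_BITS
                    blf[pos // 8] |= 1 << (pos % 8)   # set the bit, never clear
        blf.flush()

# update_blf("monster8.blf", "new-address.hex")
```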
...
I don't see how reading alone can help. There are 12 components; each must be played with and learned, both by reading the source and by studying the IN & OUT mappings, and getting the entire system running requires that all components are working. The problem is that most stuff like 'brainflayer', which is just one engine with cmd-line switches, can be understood by reading the readme, but here there are 12 engine components that must all be working 100% and communicating.
NOTHING is CPU-intensive; in the mining phase all computation is done on the GPUs.
Creating 8 GB bloom filters and updating them requires 8+ GB RAM and a multi-core CPU. Again, unless you actually stop reading and start running these tasks, you're not going to 'get it'. I would suggest one system to begin with, with 32 GB of RAM and lots of NVMe drives in PCIe 4x16 slots. You also, of course, need another machine running as a bitcoin-core full node with txindex, plus the electrum server; this machine needs little CPU and 4 GB of RAM is fine, just don't have anything else running on your bitcoin/electrum server.
...
In summary, in a perfect environment for a professional dedicated to this task, I would have three computers: a Threadripper with tons of memory to create and update the BLF (bloom filter); an old 4 GB 6-core AMD for running the bitcoin & electrum server (and be sure to use 1 Gb LAN to connect all the computers); and the miners, which just need a standard GPU-mining MOBO with a 4-core Intel (i3 OK) and ideally 8+ GB of RAM. Note that each mining rig needs one NVMe in a PCIe 4x16 slot near the CPU, at least 128 GB, and it holds this ONE BLF, nothing else; you only get high speeds if the NVMe is used for this one task of sharing the bloom filter with the GPU boards mining/hacking BTC.
Right now 'my way' is to use old computers and unused mining rigs that I had set aside; the mining rigs fetch addresses and sweep priv-keys from the bitcoin/electrum server. The only thing I have to check is the electrum client, to see what the system has found.
Remember I have already worked 5+ years on this project and I'm moving on, so my allotment of resources is what works for me. For somebody just starting out, I would do most stuff on the super computer, because it's fast. These days I use all my super computers for ECDLP math problems.
F*CK AWS. Do you realize it would be $5 to download the blf file every 15 minutes? Are you going to have a full node of bitcoin & electrum server also running on AWS? So now you're paying AWS what, $14,500/month just for file downloads, then another $1k/month for a super-computer server on AWS, for something you can do on old free computers from the junk store? R U serious that nobody actually does anything anymore at home?
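To spell out that download math from my own numbers (every 15 minutes is 4 pulls an hour; the $5-per-pull figure is the one claimed above, not an AWS quote):

```python
# Monthly cost of re-downloading the 8 GB blf every 15 minutes,
# using the $5-per-download figure from the paragraph above (an assumption).
per_download_usd = 5
downloads_per_month = 4 * 24 * 30   # 4/hour * 24 hours * 30 days = 2880
print(per_download_usd * downloads_per_month)  # 14400 -> roughly $14,500/month
```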