4 Bk/s, as in billions? That's quite impressive. What does your rig consist of?
It is a rig of 8 GTX 1060 6GB cards, and yes, the B stands for billion. I have two rigs of that type, and right now I am building one with 2080 Super cards.
Since we are now in the billions, we should also write x Bk/s, because 4000 Mk/s reads badly; 4 Bk/s is better. With my new rig I can expect about 10 Bk/s.
However, I see a significant speed loss when searching for a large number of prefixes. With about 10k prefixes I am down to 700 Mk/s, and I just parsed some data and want to search for 10M prefixes; I don't know where the speed will end up with that many. In oclvanitygen the prefix count had no impact on speed, whether you were looking for 10 or 10M prefixes, but VanitySearch is still 5-6 times faster even with that many.
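As a side note on why a large prefix list need not kill per-key throughput: if the lookup is a hash-set membership test rather than a linear scan over all targets, the cost per generated address stays roughly constant. A minimal sketch of that idea in Python (the prefixes and addresses here are made-up examples, not from any real search):

```python
# Sketch: check one generated address against many target prefixes
# in amortized O(1) per distinct prefix length, instead of scanning
# the whole target list. All prefixes/addresses below are invented.

from collections import defaultdict

def build_index(prefixes):
    """Group target prefixes by length, so each candidate address
    needs only one set lookup per distinct prefix length."""
    index = defaultdict(set)
    for p in prefixes:
        index[len(p)].add(p)
    return index

def matches(index, address):
    """Return the matching prefix, or None if no target matches."""
    for length, bucket in index.items():
        if address[:length] in bucket:
            return address[:length]
    return None

index = build_index(["1Love", "1Moon", "1Cafe"])
print(matches(index, "1LoveXyzAbc"))  # -> 1Love
print(matches(index, "1HateXyzAbc"))  # -> None
```

With 10M prefixes the set lookup is still O(1); the real-world slowdown usually comes from GPU memory pressure and the host-side filtering, not from the matching algorithm itself.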
I was thinking of building an FPGA rig, but with this much speed I don't need one. I will probably just put more GPU rigs on this task.
I assume you are targeting a very specific set of addresses (as we all do). Not trying to be a spoilsport here, but
I did some math, and here is what we have at 10 B keys/s:
10 B keys / 1 s
100 B / 10 s
1 T / 100 s
10 T / 1,000 s
100 T / 10,000 s
1 quadrillion / 100,000 s
10 quadrillion / 1,000,000 s
100 quadrillion / 10,000,000 s
1 quintillion / 100,000,000 s, i.e. about 27,778 h or 3.2 years
10 quintillion / 1,000,000,000 s, i.e. >30 years
100 quintillion / 10,000,000,000 s, i.e. >300 years
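The ladder above is just powers of ten, so it is easy to sanity-check the year figures with a few lines (pure arithmetic, nothing protocol-specific; the 10 Bk/s rate is the one quoted above):

```python
# Sanity check of the timing ladder at a fixed rate of 10 billion keys/s.
RATE = 10_000_000_000            # keys per second
SECONDS_PER_YEAR = 365 * 24 * 3600  # ~3.15e7 s

for keys in (10**18, 10**19, 10**20):  # 1, 10, 100 quintillion
    seconds = keys / RATE
    print(f"{keys:.0e} keys -> {seconds:.0e} s "
          f"= {seconds / 3600:,.0f} h "
          f"= {seconds / SECONDS_PER_YEAR:.1f} years")
```

This reproduces the 1-quintillion row: 10^8 s is about 27,778 hours, i.e. roughly 3.2 years.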
1 grain of sand contains roughly 40 quintillion atoms, or about 4.33×10^19.
The whole universe contains about 10^80 atoms, which is close to the number of
possible Bitcoin private keys (2^256 ≈ 1.16×10^77).
So in 30 years you are only able to check about a quarter of a grain of sand,
when you would need to check the whole universe to find those addresses.
So basically, between me using my laptop, which does 100 Mk/s, and you using
equipment roughly 100 times more powerful, we still have almost the same chance,
because the numbers are so huge.
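To put a number on "almost the same chance": taking the 2^256 private-key space as the haystack, here is the fraction either machine covers in 30 years at the rates mentioned in this thread. The point is that both fractions are effectively zero:

```python
# Fraction of the 2^256 private-key space covered in 30 years of
# continuous searching, at the two rates mentioned in the thread.
SECONDS_30_YEARS = 30 * 365 * 24 * 3600
KEY_SPACE = 2**256  # number of possible private keys, ~1.16e77

for name, rate in (("laptop @ 100 Mk/s", 1e8), ("rig @ 10 Bk/s", 1e10)):
    checked = rate * SECONDS_30_YEARS
    print(f"{name}: {checked / KEY_SPACE:.1e} of the key space")
```

Both results land somewhere around 10^-60 of the space, so a 100x hardware advantage moves the needle from "never" to "never".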
Sorry if this is off topic, but I think it gives people a good and clear view of what
we are dealing with here.