
Topic: Large Bitcoin Collider (Collision Finders Pool) - page 58. (Read 193404 times)

sr. member
Activity: 480
Merit: 250
Finding something on the Collider is a better chance than solo mining Grin
legendary
Activity: 1120
Merit: 1037
฿ → ∞
I've added a new stat to http://lbc.cryptoguru.org:5000/stats

Quote
The effective search space until something (except bounties) is found is 136.75 bits. Given current search speed, the probability to find an address with funds on it within the next 24h is 0.0000000000000000004524764939984744324231018884855317923702%.

2 hours later...

Quote
The effective search space until something (except bounties) is found is 136.75 bits. Given current search speed, the probability to find an address with funds on it within the next 24h is 0.0000000000000000004556237169019694490259313869322519435127%.

Now you may think that number is ridiculously small. To me, it is surprisingly high. Before the project started, it was 0, but let's assume it was
0.0000000000000000000000000000000000000000000000000000000001%. Then today, when I started this stats computation, the probability of a collision within the next 24h was 4524764939984744324231018884855317923702 times bigger than when the project started.

You may still think the number is small. Even if nothing happens and the pool remains at its current capacity, this number will keep growing - that's the effect you can observe in the "2 hours later" probability. If the address generation capacity of the pool rises (better clients x more clients), this number will start to exhibit geometric growth.
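Just as arithmetic on the two quoted figures above (nothing more than the stats output itself), the ratio of the two probabilities gives a rough gauge of how much the pool's effective search speed grew in those two hours:

```python
# Both numerators are taken verbatim from the two stats quotes above
# (the probabilities two hours apart, same 136.75-bit search space).
p1 = 4524764939984744324231018884855317923702
p2 = 4556237169019694490259313869322519435127

growth = p2 / p1 - 1   # relative growth of the 24h find probability
# roughly 0.7% growth in two hours
```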


Rico
legendary
Activity: 1120
Merit: 1037
฿ → ∞
Most people have used the client with the implicit/default "auto" mode for search page assignment.

I have just added this to the README FAQ which will be in the next release:
(the next release will allow running on small Amazon EC2 instances and - finally, finally - the Windows client is stable!)


Code:
Q: What if I want to check a specific range on directory.io?

A: Each page on directory.io lists 128 private keys and the
   corresponding uncompressed and compressed addresses. The LBC checks
   in blocks/pages of 20bit size (1048576 PKs) and is therefore like
   checking 8192 pages on directory.io
   Say you would like to use LBC to check all that is on the pages
   569716666483 to 569716830323 on directory.io, you would call
   LBC -p 69545491-69545512 -c 0
   because 69545491 = int(569716666483/8192) and
           69545512 = int(569716830323/8192) + 1
   Takes a little over 2 minutes on a modern notebook and in fact
   you have checked a bigger range 569716662272-569716834304
   (172032 pages) on directory.io

So basically, you would like to check - just for fun, of course - some 100000 pages on directory.io for addresses with funds on them. You cannot do that by clicking around, unless you're a real immortalist type of person. Also, wget-ing doesn't even qualify as a snail's attempt, and the LBC "auto" mode doesn't quite cut it.

Enter the above FAQ:

Because every block checked by LBC is exactly as big as 8192 pages on directory.io, all you have to do is compute the lower and upper bounds of your search interval:

Code:
lbc_from = int(directory.io_from/8192)
lbc_to   = int(directory.io_to/8192) + 1

and then just start LBC with

LBC -p lbc_from-lbc_to -c

Et voilà! - you may check hundreds of thousands of pages against millions of addresses with funds on them within minutes.
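The page-to-block arithmetic above can be sketched in a few lines (a sketch only; the 8192 pages per block follows from the FAQ's 20-bit blocks of 1048576 keys at 128 keys per directory.io page):

```python
# One 20-bit LBC block = 1048576 private keys = 8192 directory.io pages (128 keys each).
PAGES_PER_BLOCK = 1048576 // 128   # = 8192

def lbc_range(page_from, page_to):
    """Map an inclusive directory.io page range to LBC -p block bounds."""
    return page_from // PAGES_PER_BLOCK, page_to // PAGES_PER_BLOCK + 1

lo, hi = lbc_range(569716666483, 569716830323)
# matches the FAQ example above: lo = 69545491, hi = 69545512
```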

A little warning:

Some people have done this already with insane numbers/ranges. That is a shortcut to the client blacklist. Count and think before you
enter numbers there. If you have trouble with counting and thinking, wait for the new release, which does that for you and limits the maximum work to 1 day.


Rico
legendary
Activity: 1120
Merit: 1037
฿ → ∞
Attention! The Windows version crashes if you try to use more than 1 CPU. It also crashes if you use 1 CPU for longer periods  Roll Eyes  Workaround: start it in as many windows as the number of CPUs you have/want to use:
Code:
perl LBC -c 1 -t 1
. It's super-paranoid and super-annoying, but at the moment it's your best bet for participating with Windows.

I'm figuring out better ways around this situation.


Rico
legendary
Activity: 1120
Merit: 1037
฿ → ∞
Yay, downloading the new version now!

3 times faster!!! Woo!

Just curious, what did you do to get 3X the performance?

Basically, I got rid of the whole base58 processing. The generator now simply creates a hash160 from the uncompressed and compressed public keys.
The dropped base58 munging, four fewer SHA256 computations per PK, and the constant length of hash160 (contrary to base58, which allowed some more optimizations) added up pretty well.
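To see where the four SHA256 calls went: a Base58Check address carries a checksum made of two SHA256 passes, and each PK yields two addresses (compressed and uncompressed). A minimal sketch of that encoding (my illustration, not the LBC code):

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(version: int, hash160: bytes) -> str:
    """Base58Check-encode a hash160; note the two SHA256 passes for the checksum."""
    raw = bytes([version]) + hash160
    checksum = hashlib.sha256(hashlib.sha256(raw).digest()).digest()[:4]
    n = int.from_bytes(raw + checksum, "big")
    s = ""
    while n:
        n, r = divmod(n, 58)
        s = B58[r] + s
    for b in raw + checksum:          # leading zero bytes encode as '1'
        if b:
            break
        s = "1" + s
    return s

# Two addresses per PK -> four SHA256 calls saved by matching raw hash160s instead.
addr = base58check(0x00, b"\x00" * 20)
# the all-zero hash160 encodes to the well-known 1111111111111111111114oLvT2
```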

I still believe more can be done, especially with GPUs - but it's still an open quest.

Rico
legendary
Activity: 1140
Merit: 1000
The Real Jude Austin
Hi all,

this is a rather large announcement and I am very excited about this. So consider all of the following text in ALL CAPS, bold, with emphasis and underlined.  Smiley

I have pushed out the new version 0.823 of the client for Linux and Windows, 64 and 32 bit. This version is a significant step towards serious collision attempts as it:

  • is about 3 times faster than the previous version!
  • yet uses less space in memory and on disk
  • has the "persistent-found" feature as suggested by Jude Austin

The pool also offers a new bounty (1pdSSfCx4QynTwXTtVDjEEavZ4dDnYdhP) and with current search speeds it is within comfortable reach of the pool.

The pool will also have searched the equivalent of 3 billion pages on directory.io any time now. Imagine clicking through that!  Cheesy



Rico


Yay, downloading the new version now!

3 times faster!!! Woo!

Just curious, what did you do to get 3X the performance?

legendary
Activity: 1120
Merit: 1037
฿ → ∞
Hi all,

this is a rather large announcement and I am very excited about this. So consider all of the following text in ALL CAPS, bold, with emphasis and underlined.  Smiley

I have pushed out the new version 0.823 of the client for Linux and Windows, 64 and 32 bit. This version is a significant step towards serious collision attempts as it:

  • is about 3 times faster than the previous version!
  • yet uses less space in memory and on disk
  • has the "persistent-found" feature as suggested by Jude Austin

The pool also offers a new bounty (1pdSSfCx4QynTwXTtVDjEEavZ4dDnYdhP) and with current search speeds it is within comfortable reach of the pool.

The pool will also have searched the equivalent of 3 billion pages on directory.io any time now. Imagine clicking through that!  Cheesy

edit: As of tonight (2016-09-14 CET), the pool has searched one trillion addresses.

Rico
legendary
Activity: 2800
Merit: 1012
Get Paid Crypto To Walk or Drive
Sweet to see you have a Windows version out. I will try it out this weekend, but I may be a little busy since tax day is coming up; in that case I will try next week at some point.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
Maybe I am being super duper optimistic but on the Statistics page of your website there should be a column for "Collisions" next to "Blocks done" for the client ID.

The client stats will be extended (client speed etc.), but the "found" information will get a separate page: what the pool has found so far, when and which client had the hit, the type of find (bounty/true collision) etc.

Quote
Also, could you add an option to LBC to write any collisions to a text file? This way if my computer crashes, power goes out, etc etc then I don't lose the "Collision".

Great idea. I will add that - as default behavior.

edit: although I believe some Linuxers at least solve that by doing something like ./LBC -c x -t blah | tee file.txt
but I will add it nonetheless.

Quote
I would add it but I absolutely hate Perl and refuse to even look at it, haha.

 Cheesy I have the same feelings for Python and PHP.

Seriously - if someone has ideas what to do, I will be happy to pick them up.


Rico
legendary
Activity: 1140
Merit: 1000
The Real Jude Austin
Maybe I am being super duper optimistic but on the Statistics page of your website there should be a column for "Collisions" next to "Blocks done" for the client ID.

Also, could you add an option to LBC to write any collisions to a text file? This way if my computer crashes, power goes out, etc etc then I don't lose the "Collision".

I would add it but I absolutely hate Perl and refuse to even look at it, haha.

Thanks,
Jude
legendary
Activity: 1120
Merit: 1037
฿ → ∞
May I ask, do you guys store private keys on a server? The chance of a collision is currently minuscule, but if Bitcoin picks up, a collision with an address someone generates and uses in the future becomes much more probable.

There is no way the server could keep up with the generated private keys. All PK generation and checking occurs on the client, and after the client has checked the keys, they are discarded. All the server does is distribute chunks of work to the clients, receive ACKs for work done, and perform multi-interval arithmetic to ensure nothing is left out and nothing is done (unnecessarily) twice.
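The multi-interval bookkeeping can be pictured like this (a hypothetical sketch, not the actual server code): merge the ACKed block ranges, and any gap between merged ranges is work still outstanding.

```python
def merge(intervals):
    """Coalesce inclusive [lo, hi] block ranges that touch or overlap."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], hi)   # extend the last range
        else:
            merged.append([lo, hi])                  # start a new range
    return [tuple(r) for r in merged]

acked = [(5, 9), (0, 3), (4, 4), (20, 25)]
# merge(acked) -> [(0, 9), (20, 25)]; blocks 10..19 are still unassigned
```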

The main reason for the offline processing is speed and scalability - of course. But I also like to leave the decision about what to do with a found PK in the client's hands, so I don't get involved in case the client decides to ... hmm ... reward himself.

Right now, the pool is just getting warm. There are still few clients and - to be honest - the pool is not yet searching in the optimum keyspace (and yes: I know where that is  Cool )

As for the PK storing: I do such a thing too, see this German post: https://bitcointalksearch.org/topic/m.15941399
But it's only an experiment in efficient PK storage, and the server I run the experiment on has to recompute the PKs again - those from the pool cannot be reused. (Well - actually I plan a feature in the LBC client where it will also store the generated keys on disk instead of discarding them after checking... but time, oh time... So theoretically I could run an LBC client on the PK-storing machine and kill two birds with one stone.)

Rico
sr. member
Activity: 378
Merit: 250
May I ask, do you guys store private keys on a server? The chance of a collision is currently minuscule, but if Bitcoin picks up, a collision with an address someone generates and uses in the future becomes much more probable.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
Windows x86 clients for 64bit and 32bit available

Also some client contributor stats at http://lbc.cryptoguru.org:5000/stats


Rico
legendary
Activity: 1120
Merit: 1037
฿ → ∞
I wonder if a pattern in private keys could be found using machine learning?

Just feed a list of known private keys/addresses and see if it can find a pattern?

What do you think?

To quote you: "I had the same idea"


Nothing seems better suited than hashing to provide a perfect training set for neural networks. Lots of outputs (the hashed value - input for the NN) and their respective inputs (in that case, the output for the NN) .... and then give it a new set to find inputs (NN output).

However, I think that this idea has already been tried and SHA256 (and probably RIPEMD160 too) looks like noise to the NN. So you get ... noise back.

I do have cuDNN here, so I could try it in practice, but I won't get around to it until October.

Rico
legendary
Activity: 1140
Merit: 1000
The Real Jude Austin
I wonder if a pattern in private keys could be found using machine learning?

Just feed a list of known private keys/addresses and see if it can find a pattern?

What do you think?
member
Activity: 105
Merit: 59
Do you really think a shift vs. an increment is that much of a difference?

I'd bet that a more efficient SHA256 and/or RIPEMD160 implementation saves tons of CPU cycles, and that the shift/increment is negligible compared to that.

I was told that doubling is more efficient than incrementing by gmaxwell, and I am planning on testing that with libsecp256k1 for one of my other projects soon (maybe this weekend?).

You can speed up RIPEMD160 a little by using fixed padding for a 256-bit input. I am not aware of a good x86_64 assembly implementation of RIPEMD160, but that could probably speed things up a little more. Profiling the code might be worthwhile.
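The fixed-padding observation works because RIPEMD160 (like SHA256) uses MD-style padding, and a 256-bit input always fills exactly one 64-byte block, so the pad bytes never change and can be precomputed. A sketch of what that constant-shape block looks like (my illustration, not any particular implementation):

```python
def ripemd160_padded_block(digest32: bytes) -> bytes:
    """Pad a fixed 32-byte message for RIPEMD160: always one 64-byte block.

    MD-style padding: message, 0x80 marker, zero fill, then the 64-bit
    bit-length. RIPEMD160 stores the length little-endian (unlike SHA256,
    which is big-endian). For a 32-byte input, the last 32 bytes of the
    block are identical for every message, so they can be precomputed.
    """
    assert len(digest32) == 32
    return digest32 + b"\x80" + b"\x00" * 23 + (256).to_bytes(8, "little")

block = ripemd160_padded_block(b"\x00" * 32)
# always exactly 64 bytes: 32 message + 1 marker + 23 zeros + 8 length bytes
```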
legendary
Activity: 1120
Merit: 1037
฿ → ∞
Hmm. Looks like you're right, though it does a batch conversion of point format. I should try to add that optimization to brainflayer. Doubling the key rather than incrementing it should still be faster, though.

Do you really think a shift vs. an increment is that much of a difference? I'd bet that a more efficient SHA256 and/or RIPEMD160 implementation saves tons of CPU cycles, and that the shift/increment is negligible compared to that.

Also, handling the big integer numbers (potentially up to 2^256) seems to take its toll. At least I observe a significant penalty on 32bit systems.


Rico
member
Activity: 105
Merit: 59
It actually does exactly what we do: it simply chooses a private key and then increments it. IIRC, the docs say vanitygen does this 1 million times, oclvanitygen 100 million times.

Hmm. Looks like you're right, though it does a batch conversion of point format. I should try to add that optimization to brainflayer. Doubling the key rather than incrementing it should still be faster, though.
legendary
Activity: 1120
Merit: 1037
฿ → ∞
Your README makes no indication of source being available, and I didn't want to download the whole archive to look.

It seems we are either talking about different READMEs, or we at least have different text-understanding traits.

Code:
Q: Is this software secure?

A: If you have a genuine version - yes. To make sure, never download
   anything that claims to be LBC from any other source than
   http://lbc.cryptoguru.org:5000/download
   If you want to be extra-sure, check the md5sums at
   http://lbc.cryptoguru.org:5000/downloads/LBC-client/md5sums
   for the MD5 sums of all relevant files. On your command line,
   verify the files by doing
   > md5sum "filename"

Q: No, I mean can I trust *you*?

A: Send me 100BTC and I will send them back to you. After this, answer
   the question for yourself. The LBC is compiled Perl source - it's
   scattered, but ultimately you can look at it in a text
   editor. The generate binary is a derivative of
   https://github.com/saracen/bitcoin-all-key-generator
   with just added command line parsing for block offsets.
   Other than that, observe the LBC thread(s) on bitcointalk.org
   for any complaints. If in doubt, don't use the software.
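The md5sum check from the FAQ, spelled out as a workflow (illustrative only - the filename and checksum list here are stand-ins for the real download and the published md5sums file):

```shell
# Stand-in for the real download from http://lbc.cryptoguru.org:5000/download
printf 'dummy payload\n' > LBC-client.tar.gz

# Stand-in for the published list at
# http://lbc.cryptoguru.org:5000/downloads/LBC-client/md5sums
md5sum LBC-client.tar.gz > md5sums

# Verify every listed file in one go; prints "LBC-client.tar.gz: OK" on success.
md5sum -c md5sums
```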

Ah, brainflayer... I have played with it in the past and I will certainly look at the talk about sequential search.
Right now it seems like a nice exercise in bloom filter application, but at the moment I'm unable to see its use for any of my projects.

Quote
I think I get around 550k/sec on my i7-2600 running on all cores. I always simply include all addresses seen on the blockchain regardless of whether they've got a balance.

Currently I get 8M/min (~133k/sec) on 4 cores of my notebook, checking only against addresses with funds - slower than your 550k, although I wouldn't say by a lot. However, I believe a single modern CPU core should be capable of generating and testing around 500k/sec, so that is my goal. Not to speak of GPU...

Unfortunately my C is rusty at best and my assembler virtually nonexistent, so while I would like to hack (and actually have hacked) something together in C, it was even slower than the Go implementation. I'd love to use the Intel SHA256 implementations (http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/sha-256-implementations-paper.html), but right now I'm not up to it.

Right now I'm busy providing clients for different OSes and architectures, which has the nice side effect that you will be able to plug in your own key generators.

Quote
Vanitygen uses some techniques to generate addresses without computing individual private keys...

It actually does exactly what we do: it simply chooses a private key and then increments it. IIRC, the docs say vanitygen does this 1 million times, oclvanitygen 100 million times.

Quote
...though I think it would be a waste of energy to run that.

 Smiley We'll see.

Rico
member
Activity: 105
Merit: 59
Oh, also, while I'm commenting about this, I'll mention that if you want to do a massive private key search, you may also want to search for transaction nonces as well. I think that should at least triple your chances of finding something, though the odds are still absurdly small.