
Topic: Vanitygen: Vanity bitcoin address generator/miner [v0.22] - page 16. (Read 1153678 times)

copper member
Activity: 630
Merit: 2614
If you don’t do PGP, you don’t do crypto!
Darkstar has, if I'm not wrong, 1darkstr or something similar. People recognise those addresses quickly compared to random ones, which are hard to remember

 Cheesy

1DarkStrRagcDjWtsPGxkav4WG3poLXzDS

I'd get 1DarkStar, but that's too long for me to feasibly make or pay someone to get at a reasonable price.

I just want to note, this is NOT a good means to recognize an address.  There are at least 2100254120907352485526230505830591911428096 (58^24) addresses which match the pattern ^1DarkStr.+DS$.  Somebody else could easily find a different one to spoof DarkStar_’s address.

I know that this is a real problem with Tor .onion vanity addresses; and I suspect it may be with Bitcoin vanity addresses, too.

A vanity address is good for showing off, and/or making a statement such as with my 35segwitgLKnDi2kn7unNdETrZzHD2c5xh address.  But it is highly insecure as a user interface feature.
legendary
Activity: 2772
Merit: 3284
Darkstar has, if I'm not wrong, 1darkstr or something similar. People recognise those addresses quickly compared to random ones, which are hard to remember

 Cheesy

1DarkStrRagcDjWtsPGxkav4WG3poLXzDS

I'd get 1DarkStar, but that's too long for me to feasibly make or pay someone to get at a reasonable price.
copper member
Activity: 630
Merit: 2614
If you don’t do PGP, you don’t do crypto!
What I was imagining was that there could be a simple loop in the program.  Start with the "first" private key (...001), go through the various hashing steps and see if you get a public address with the desired pattern.  If not, then increment the private key by 1 (...002) and do the hashes again. That way, the attempted private keys would effectively get "burned" and not be reused.

It's like buying millions of lottery tickets in the same draw to try to cover as many numbers as possible.  You might as well start with 1-2-3-4-5-6 and then 1-2-3-4-5-7 and so on methodically as choose a bunch of random "pick 6" numbers.  The chance to win is the same for any set of numbers, but there is a slight chance that a "pick 6" could be generated twice, thereby wasting the ticket (i.e. if you win, you would be splitting the jackpot with yourself).  I suppose the "slight" chance is so slight that maybe it doesn't matter.

The entire security of Bitcoin, PGP, TLS/SSL, Tor, disk encryption, and all other crypto using fixed-length keys rests on the premise that the “slight” chance of a collision is impossible as a practical matter.

Think:  The probability of you picking the same key twice is equal to the probability of an attacker randomly picking your key in a brute-force attack.

Theoreticians use terms such as “negligible probability” because such a thing is possible in theory.  But it will never actually happen that you generate the same key twice, unless your random number generator is so badly broken as to be worse than useless.  Conceptually, think of randomly picking one drop of water from the ocean, then another, and getting the same drop; or randomly picking one grain of sand from anywhere on Earth, then another, and getting the same grain of sand.  2^160 is much bigger than that.
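For a rough sense of scale, the standard birthday bound can be computed directly. This is an illustrative sketch, not from the thread; the function name and numbers are mine.

```python
# Birthday bound: the probability of ANY collision among n values drawn
# uniformly at random from a space of size d is at most n*(n-1)/(2*d).

def collision_bound(n: int, d: int) -> float:
    """Upper bound on the collision probability for n random draws."""
    return n * (n - 1) / (2 * d)

# 12 quadrillion generated keys against the 2**160 address space:
p = collision_bound(12 * 10**15, 2**160)
print(f"{p:.2e}")  # on the order of 5e-17: never in practice
```

Even after generating keys non-stop for a week, the chance of any two ever colliding is smaller than one in ten quadrillion.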

Whereas LoyceV speaks truly:

What I was imagining was that there could be a simple loop in the program.  Start with the "first" private key (...001), go through the various hashing steps and see if you get a public address with the desired pattern.  If not, then increment the private key by 1 (...002) and do the hashes again.
A fixed instead of truly random starting point would mean your private key isn't secure. It would mean anyone could reproduce your search and steal your coins.

Note, however, that Vanitygen does try sequential points from a randomly chosen starting point.  (“Sequential” here does not mean linear “1, 2, 3”; rather, it uses elliptic curve point addition.)  It does this for reasons of efficiency.  sipa’s keygrinder, used in the current development branch of segvan, uses similar methods to rapidly generate a great quantity of keys (or optionally, tweaks) from a single random seed.  This can be secure if and only if all seed and key material other than the “winning” key is destroyed and never reused.
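The scheme described above — a cryptographically random starting point, then sequential candidates — can be sketched as follows. This is a simplified illustration only: `fake_address` is a hypothetical stand-in (plain SHA-256), whereas Vanitygen really steps by secp256k1 point addition and derives genuine Bitcoin addresses.

```python
import hashlib
import secrets

SPACE = 2**160  # placeholder search-space size, not the secp256k1 group order

def fake_address(k: int) -> str:
    """Hypothetical stand-in for private key -> address derivation."""
    return hashlib.sha256(k.to_bytes(32, "big")).hexdigest()

def search(prefix: str, max_tries: int = 200_000):
    start = secrets.randbelow(SPACE)   # the random start is what makes this secure
    for i in range(max_tries):
        k = (start + i) % SPACE        # sequential candidates from the random start
        if fake_address(k).startswith(prefix):
            return k                   # all other candidates must be discarded, never reused
    return None

k = search("ab")  # a 2-hex-char prefix matches roughly 1 try in 256
```

The crucial point from the post stands: the scheme is only safe if the seed and every non-winning candidate are destroyed.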
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
What I was imagining was that there could be a simple loop in the program.  Start with the "first" private key (...001), go through the various hashing steps and see if you get a public address with the desired pattern.  If not, then increment the private key by 1 (...002) and do the hashes again.
A fixed instead of truly random starting point would mean your private key isn't secure. It would mean anyone could reproduce your search and steal your coins.
newbie
Activity: 7
Merit: 4
Think of it this way: you have 3 dice, and you're trying to throw all sixes in one throw. The odds of doing this are 1 in 216.
After trying 10, 100 or 1000 times, the odds of getting it on your next try are still exactly the same.

I can see where the confusion can stem from: people seem to forget that individual rolls have no effect on future rolls, as they are independent. To calculate the true probability, you can't simply add the probabilities. A common logical fallacy is the belief that if the chance of getting a 6 when rolling a single die is 1/6, then the probability of getting a 6 in six rolls is 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6 = 1.

You need to keep in mind that the probability is fixed at exactly 1/6 on every single roll. It doesn't matter whether you roll non-stop or once a year.

Yup, I got it now. 

What I was imagining was that there could be a simple loop in the program.  Start with the "first" private key (...001), go through the various hashing steps and see if you get a public address with the desired pattern.  If not, then increment the private key by 1 (...002) and do the hashes again. That way, the attempted private keys would effectively get "burned" and not be reused.

It's like buying millions of lottery tickets in the same draw to try to cover as many numbers as possible.  You might as well start with 1-2-3-4-5-6 and then 1-2-3-4-5-7 and so on methodically as choose a bunch of random "pick 6" numbers.  The chance to win is the same for any set of numbers, but there is a slight chance that a "pick 6" could be generated twice, thereby wasting the ticket (i.e. if you win, you would be splitting the jackpot with yourself).  I suppose the "slight" chance is so slight that maybe it doesn't matter.
copper member
Activity: 630
Merit: 2614
If you don’t do PGP, you don’t do crypto!
Each extra character makes it 58 times more difficult to find.
Note that starting with a Capital can be 58 times faster (depending on which character you use): 1Abcdef or 1ABCDEF are much faster than 1abcdef.

Just two questions about the two points:

  • Why is it 58 exactly? My guess would be: is it something like 26 + 26 + 10 = 62 (the alphabet in caps and lower case making 26 each, and the 10 being the numerals zero to nine) minus four illegal characters?

Yes, 62 minus four illegal characters.  That equals “58 exactly”.

Old-style (pre-Bech32) Bitcoin addresses use base58, not base-62.  Each character is a radix-58 digit, in the range of [0, 57].  Following are the “digits” used by Bitcoin, from an old code snippet of mine.  Observe that “I” (uppercase i), “O” (uppercase o), “0” (numeral zero), and “l” (lowercase L) are excluded.

Code:
	const char base58[59] =
	    "123456789"                  /*  9, [0..8]   */
	    "ABCDEFGHJKLMNPQRSTUVWXYZ"   /* 24, [9..32]  */
	    "abcdefghijkmnopqrstuvwxyz"; /* 25, [33..57] */

  • Why are capital letters easier to find as compared to regular numbers and what's like the "math" behind it?

Capital letters are not generally easier to find.  However, at the beginning, they represent a lower number.  Since the large integer being represented is in a range which is not a power of 58, higher digits at the beginning may be rare, or even impossible.

For an analogy:

Imagine that you are searching for a pattern of base-10 digits in a 30-bit base-2 (binary) number.  The number you seek has a range of [0, 1073741823].  Digits [2-9] are impossible in the first position; and digit 1 is only in the first position for 73741824/1073741823 ≈ 6.9% of randomly selected 30-bit numbers.

Here, you are searching for a 192-bit base-2 (binary) number, where the upper 160 bits are uniformly distributed and the lower 32 bits are also uniformly distributed (but dependent on the upper 160 bits).  You are representing that number as a base58 number.  Probability of hitting various base58 digits in the first position is left as an exercise to the reader.
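The exercise can be approximated by simulation. This is a sketch under simplifying assumptions of my own: the 192-bit numbers are taken as fully uniform, ignoring the dependence of the low 32 checksum bits.

```python
import random

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def to_base58(n: int, width: int) -> str:
    """Fixed-width base58, padded on the left with the zero digit '1'."""
    digits = ""
    while n:
        n, r = divmod(n, 58)
        digits = B58[r] + digits
    return digits.rjust(width, "1")

# Tally the first digit of uniformly random 192-bit numbers written as
# 33 base58 digits (58**33 > 2**192 > 58**32, so 33 digits suffice).
random.seed(0)
counts = {}
for _ in range(20_000):
    first = to_base58(random.getrandbits(192), 33)[0]
    counts[first] = counts.get(first, 0) + 1
# First digits above 'P' (digit value 23) never occur, because
# 2**192 // 58**32 == 23; a lowercase first digit is impossible here.
```

The tally makes the non-uniformity visible: the low digit values share the probability mass almost evenly, the boundary digit is rarer, and everything above it has probability zero.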





Hmm, I'm guessing this is more practical, real-life data rather than actual theoretical analysis?
Yes. The theoretical answer must be somewhere within the hashing algorithm, but that's beyond my understanding.

The theoretical answer is actually not in the hashing algorithm at all, but rather, in how a pseudorandom number uniformly distributed across a binary search space is represented in radix-58 (base58).





If you just restart it, nothing is lost. It just makes a clean random start at a different point from where you started before.

What does "nothing is lost" mean?  It went through 12 quadrillion tries before crashing.  Is every try completely random (it doesn't "save" a list of previous attempts or go in some methodical order)? 

LoyceV provided a good explanation by analogy to dice throws.  I have only to add:  This is a probabilistic search.  You could hit your lucky address on the very first try (like winning a lottery).  Considering your previous 12 quadrillion “losses” is actually an instance of classic Gambler’s Fallacy.

The probability of repeating one of those 12q tries is the same as trying an untried one? 

In both cases, the probability is negligible = practically impossible.  12 quadrillion (1.2 × 10^16) is a drop in the ocean of a 2^160 search space (>10^48, more than a thousand quadrillion quadrillion quadrillion).

(N.b. that the search space is of size 2^160, although its input is 33 octets for compressed keys and 65 octets for uncompressed keys, and the output is a 192-bit number due to the 32-bit checksum.)
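The scale comparison can be made explicit with a line of arithmetic (illustrative; the numbers are the ones from the post):

```python
# 12 quadrillion tries measured against the 2**160 search space:
tries = 12 * 10**15
space = 2**160
fraction = tries / space
print(f"{fraction:.1e}")  # roughly 8e-33: a vanishing sliver of the space
```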
copper member
Activity: 70
Merit: 65
IOS - The secure, scalable blockchain
Think of it this way: you have 3 dice, and you're trying to throw all sixes in one throw. The odds of doing this are 1 in 216.
After trying 10, 100 or 1000 times, the odds of getting it on your next try are still exactly the same.

I can see where the confusion can stem from: people seem to forget that individual rolls have no effect on future rolls, as they are independent. To calculate the true probability, you can't simply add the probabilities. A common logical fallacy is the belief that if the chance of getting a 6 when rolling a single die is 1/6, then the probability of getting a 6 in six rolls is 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6 = 1.

You need to keep in mind that the probability is fixed at exactly 1/6 on every single roll. It doesn't matter whether you roll non-stop or once a year.
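The correct calculation for the six-rolls example is quickly checked (a sketch of my own, not from the thread): the chance of at least one six is one minus the chance of six consecutive misses.

```python
def p_at_least_one(p_single: float, tries: int) -> float:
    """P(at least one success in `tries` independent attempts)."""
    return 1 - (1 - p_single) ** tries

p = p_at_least_one(1 / 6, 6)
print(round(p, 4))  # about 0.6651, not the naive 6 * 1/6 = 1
```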
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Is every try completely random (it doesn't "save" a list of previous attempts or go in some methodical order)? 
Yes.

Quote
The probability of repeating one of those 12q tries is the same as trying an untried one?
Yes.

Quote
If so, then it wouldn't make any difference if I ran the program for 200 hours straight or for 10 hours on each of 20 days or for 2 mins on each of 6000 days.  Right?
Correct.

Quote
Sorry, I'm sure that question has been asked 12q times.
More or less, yes Tongue
Think of it this way: you have 3 dice, and you're trying to throw all sixes in one throw. The odds of doing this are 1 in 216.
After trying 10, 100 or 1000 times, the odds of getting it on your next try are still exactly the same.
Vanitygen works the same: say the odds of finding it are 50% in 20 minutes. It doesn't matter if it's been running for 1 minute or 1 hour, the odds are still exactly the same: 50% for the next 20 minutes.
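The "no memory" claim can be verified numerically. This is an illustrative sketch with made-up numbers: the chance of success in the next k tries is the same with or without m earlier failures.

```python
p, k, m = 1e-6, 500_000, 10_000_000   # per-try odds, next tries, past failures

def p_fail(n: int) -> float:
    """Probability that n independent tries all miss."""
    return (1 - p) ** n

unconditional = 1 - p_fail(k)
# P(success within next k tries | first m tries all failed):
conditional = (p_fail(m) - p_fail(m + k)) / p_fail(m)
print(abs(unconditional - conditional))  # essentially zero
```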
newbie
Activity: 7
Merit: 4
My searches usually error out after 7 days or so of running at 16Mkeys/sec (~12 quadrillion tries).
If you just restart it, nothing is lost. It just makes a clean random start at a different point from where you started before.

What does "nothing is lost" mean?  It went through 12 quadrillion tries before crashing.  Is every try completely random (it doesn't "save" a list of previous attempts or go in some methodical order)? 

The probability of repeating one of those 12q tries is the same as trying an untried one? 

If so, then it wouldn't make any difference if I ran the program for 200 hours straight or for 10 hours on each of 20 days or for 2 mins on each of 6000 days.  Right?

Sorry, I'm sure that question has been asked 12q times.


legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
Hmm, I'm guessing this is more practical, real-life data rather than actual theoretical analysis?
Yes. The theoretical answer must be somewhere within the hashing algorithm, but that's beyond my understanding.
copper member
Activity: 70
Merit: 65
IOS - The secure, scalable blockchain
    Is Vanitygen used by bitcoin payment processors (like Coinpayments, Bitpay, or custom ones) in the backend of websites?
    I mean, is it used for generating addresses?

    For the first question, no, it's not used by them. Most payment processors don't use any vanity addresses. The answer to your second question would be yes, it is used to generate addresses, but a special type: vanity addresses. Well, not exactly "special" per se, but they are addresses which contain your (where "you" stands for the person generating the address) choice of word, text, or characters. Say I want my address to start with 1tyrant so it "looks" better.

    Basically it's used to make aesthetically better "looking" addresses. There's no other difference apart from that. You can use it for branding, for example, or so people recognise it. For example, Atriz uses a 1atriz address, and Darkstar has, if I'm not wrong, 1darkstr or something similar. People recognise those addresses quickly compared to random ones, which are hard to remember

    Unfortunately, I don't know exactly. For whatever reason, some prefixes are more likely than others.

    Not all Capitals are faster than their lower case equivalent. For instance, 1Zebra isn't faster than 1zebra.
    It becomes more interesting when searching for "1":
    11ebra takes the same amount of time, 111bra too, but 1111ra takes 58 times longer (but just as long as 11111a and 111111).

    Hmm, I'm guessing this is more practical, real-life data rather than actual theoretical analysis?
    legendary
    Activity: 3290
    Merit: 16489
    Thick-Skinned Gang Leader and Golden Feather 2021
    • Why is it 58 exactly? My guess would be: is it something like 26 + 26 + 10 = 62 (the alphabet in caps and lower case making 26 each, and the 10 being the numerals zero to nine) minus four illegal characters?
    Correct: 123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz makes 58 characters.

    Quote
    Why are capital letters easier to find as compared to regular numbers and what's like the "math" behind it?
    Unfortunately, I don't know exactly. For whatever reason, some prefixes are more likely than others.

    Not all Capitals are faster than their lower case equivalent. For instance, 1Zebra isn't faster than 1zebra.
    It becomes more interesting when searching for "1":
    11ebra takes the same amount of time, 111bra too, but 1111ra takes 58 times longer (but just as long as 11111a and 111111).
    copper member
    Activity: 70
    Merit: 65
    IOS - The secure, scalable blockchain
    Each extra character makes it 58 times more difficult to find.
    Note that starting with a Capital can be 58 times faster (depending on which character you use): 1Abcdef or 1ABCDEF are much faster than 1abcdef.

    Just two questions about the two points:

    • Why is it 58 exactly? My guess would be: is it something like 26 + 26 + 10 = 62 (the alphabet in caps and lower case making 26 each, and the 10 being the numerals zero to nine) minus four illegal characters?
    • Why are capital letters easier to find as compared to regular numbers and what's like the "math" behind it?
    legendary
    Activity: 3290
    Merit: 16489
    Thick-Skinned Gang Leader and Golden Feather 2021
    It took about 8 hours to find 1+6 '1abcdef'. How much longer would it take to find 1+7 or 1+8?
    Each extra character makes it 58 times more difficult to find.
    Note that starting with a Capital can be 58 times faster (depending on which character you use): 1Abcdef or 1ABCDEF are much faster than 1abcdef.
    For general use: try to search for a long list of prefixes at once; that increases your odds of finding one.
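A rough extrapolation from the ×58 rule, using the ~8 hours for a 1+6 pattern quoted above (assumed as a representative figure; actual times vary with hardware and luck):

```python
# each extra base58 character multiplies the expected search time by 58
hours_1p6 = 8
hours_1p7 = hours_1p6 * 58        # 464 hours, about 19 days
hours_1p8 = hours_1p6 * 58 ** 2   # 26912 hours, about 3 years
print(hours_1p7 / 24, hours_1p8 / (24 * 365))
```

These are expected (50%-ish) times, not guarantees; the search is memoryless, so there is no point at which you become "due" for a match.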
    newbie
    Activity: 280
    Merit: 0
    I got everything running. I must have installed SDK v2.4 incorrectly, maybe skipped a step. Re-installation fixed the issue.
    ./oclvanitygen produced 25 Mkeys/s on the GPU. That's like 200x more key searches than on my CPU.   Shocked
    It took about 8 hours to find 1+6 '1abcdef'. How much longer would it take to find 1+7 or 1+8?
    legendary
    Activity: 3290
    Merit: 16489
    Thick-Skinned Gang Leader and Golden Feather 2021
    My searches usually error out after 7 days or so of running at 16Mkeys/sec (~12 quadrillion tries).
    If you just restart it, nothing is lost. It just makes a clean random start at a different point from where you started before.

    Quote
    Are there settings that reduce the amount of GPU that is being used
    You can reduce the number of threads it uses:
    Code:
    ./oclvanitygen -t 10 1testt
    Difficulty: 15318045009
    [868.99 Kkey/s][total 34037760][Prob 0.2%][50% in 3.4h]
    I highly doubt this will help though: it slows down my system just as much as when I run oclvanitygen at full speed. Unless your "out of resources" means out of memory, in that case less threads could help.

    Alternative: set it to save results to a file, and set it to restart after it crashes.
    newbie
    Activity: 7
    Merit: 4
    My searches usually error out after 7 days or so of running at 16Mkeys/sec (~12 quadrillion tries).   Are there settings that reduce the amount of GPU that is being used -- maybe it is busting the buffer ("out of resources")?  What does the "grid" attribute do?  I've tried without it, at 1024x1024, and 2048x2048.  Doesn't appear to make a difference.  Any ideas? Thanks.

    clWaitForEvents(NDRange,1): CL_OUT_OF_RESOURCES
    clEnqueueMapBuffer(4): CL_INVALID_COMMAND_QUEUE
    ERROR: Could not map row buffer for slot 1
    ERROR: allocation failure?


    ---------------------
    c:\xxx>oclvanitygen.exe -D 0:0,grid=2048x2048 -v -k -f 2patterns.txt -o 2matches.txt
    Loading Pattern #3: 1ZZZZZZZ
    Prefix difficulty:       51529903411245 1XXXXXXX
    Prefix difficulty:       51529903411245 1YYYYYYY
    Prefix difficulty:       51529903411245 1ZZZZZZZ
    Next match difficulty: 17176634470415 (3 prefixes)
    Device: GeForce GTX 680M
    Vendor: NVIDIA Corporation (10de)
    Driver: 369.09
    Profile: FULL_PROFILE
    Version: OpenCL 1.2 CUDA
    Max compute units: 7
    Max workgroup size: 1024
    Global memory: -2147483648
    Max allocation: 536870912
    OpenCL compiler flags: -DPRAGMA_UNROLL -cl-nv-verbose
    Loading kernel binary 04c59513592276694f1b58b9124bba9c.oclbin
    Grid size: 2048x2048
    Modular inverse: 4096 threads, 1024 ops each
    Using OpenCL prefix matcher
    GPU idle: 2.75%
    [16.00 Mkey/s][total 11653961744384][Prob 49.3%][50% in 4.4h]                  clWaitForEvents(NDRange,1): CL_OUT_OF_RESOURCES
    clEnqueueMapBuffer(4): CL_INVALID_COMMAND_QUEUE
    ERROR: Could not map row buffer for slot 1
    ERROR: allocation failure?
    c:\xxx>
    ---------------------
    full member
    Activity: 198
    Merit: 130
    Some random software engineer
    Does vanitygen search for each prefix character one by one, or for the entire prefix? I'm trying to see how vanitygen does its searches.

    vanitygen has its source code available to understand how it works: https://github.com/samr7/vanitygen
    In short, it generates a random private key, from which it derives a batch of parameters to get a list of public keys, which it hashes using SHA-256 and RIPEMD-160 according to the Bitcoin protocol; if an address matches the pattern, it stops. If not, it continues, regenerating a new private key once in a while.
    It cannot search character by character because of the nature of the hashing algorithm.
    full member
    Activity: 1204
    Merit: 220
    (ノಠ益ಠ)ノ
    Is there a solution for High Sierra without CUDA drivers? I can't compile the current https://github.com/exploitagency/vanitygen-plus
    legendary
    Activity: 3290
    Merit: 16489
    Thick-Skinned Gang Leader and Golden Feather 2021
    Can you try it with less options to start with?
    Code:
    ./vanitygen -f addr.txt -k # (or the .exe alternative, of course)
    I've used several variations on this, and they work fine. Flags -r and -C don't work for me, so I omit them.