
Topic: How much entropy is lost by searching for a '01' prefix SHA512 output

legendary
Activity: 1896
Merit: 1353
I understand, but I am talking about the theoretical implications not the practical ones.

no you don't. you do not understand what entropy is.
I have tried to explain, now I give up.
jr. member
Activity: 32
Merit: 1
Yeah but 2^124 is pretty much impossible these days.

The only way to go through all those hashes is with GPUs.

2.1267647932558653966460912964486e+37 hashes in total.

Most GPUs do 5-10 teraflops these days, but with the extra steps it would be slower, so say each GPU hashes at 1 TH/s.

Even with 1,000,000 GPUs hashing at 1TH/s.

It would take

21,267,647,932,558,653,966.460912964486 seconds

Which is 674,392,691,925.37588681065807218688 years....

With that amount of hash power you are better off mining ZEC or ETH, and you would at least get a guaranteed profit of $1,000,000 - $1,500,000 daily.

ok, I hope this is my last post here.

So the easiest path to crack an Electrum private key is to just run through the 2^124 permutations, that is the shortest route.

That is precisely the point you are not getting.
How do you think an attacker can "run through" these 2^124 permutations?
Please try to focus on this question, and forget the rest.

First, let us agree that these are not "permutations".
In mathematics, a permutation is a bijective function from a set to itself.
However, the attacker does not have a simple function that takes integers up to 2^124 and maps them to the set of seeds accepted by is_new_seed().
So let us not talk about "permutations", but about "valid seeds".

So, how would an attacker "run through" these 2^124 valid seeds?

The only way he can do that is to test all seeds, and to filter out the ones that are not valid.
That means the attacker has to enumerate a set of 2^132 seeds.


I understand, but I am talking about the theoretical implications not the practical ones.

The fact of the matter is that the entropy is at most 124 bits, and probably lower due to collisions.

According to the cryptographers I have asked, about 1 bit is lost at every layer, especially if the "glass is full", as that is when the collisions start to appear.

So the RIPEMD is the weak link here, and possibly the ECDSA if the address has been spent from.

There is 1 bit lost assuming that the attacker has a 50% probability of guessing a bit, and possibly more if he is lucky.

So let's assume 120 bits of security, and that is only if no new attack vectors appear that make cracking these algorithms faster.




Now 120 bits is not quantum secure. And if the user's RNG is weak, probably less. It may still be infeasible to crack, but the danger is there.

The private key becomes exposed to danger, and it will be a question of when, not if, the attacker finds it.


By the way, the attacker doesn't have to target you specifically; there are already people going over all private key combinations as we speak. And this will only get worse in the future.

https://www.youtube.com/watch?v=foil0hzl4Pg
legendary
Activity: 1896
Merit: 1353
ok, I hope this is my last post here.

So the easiest path to crack an Electrum private key is to just run through the 2^124 permutations, that is the shortest route.

That is precisely the point you are not getting.
How do you think an attacker can "run through" these 2^124 permutations?
Please try to focus on this question, and forget the rest.

First, let us agree that these are not "permutations".
In mathematics, a permutation is a bijective function from a set to itself.
However, the attacker does not have a simple function that takes integers up to 2^124 and maps them to the set of seeds accepted by is_new_seed().
So let us not talk about "permutations", but about "valid seeds".

So, how would an attacker "run through" these 2^124 valid seeds?

The only way he can do that is to test all seeds, and to filter out the ones that are not valid.
That means the attacker has to enumerate a set of 2^132 seeds.
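
A minimal sketch of this enumerate-and-filter idea (my own illustration, not Electrum's code; a toy 20-bit space stands in for the 2^132 one, and the test has the same shape as is_new_seed()):

Code:
import hmac
import hashlib

def looks_valid(candidate, prefix='01'):
    # same shape as Electrum's is_new_seed(): HMAC-SHA512 keyed with "Seed version"
    digest = hmac.new(b"Seed version", candidate, hashlib.sha512).hexdigest()
    return digest.startswith(prefix)

tested = 0
valid = 0
for i in range(2 ** 20):                        # toy search space, stands in for 2^132
    tested += 1
    if looks_valid(("%d" % i).encode('utf8')):
        valid += 1

print("candidates tested: %d" % tested)         # 2^20
print("valid candidates : %d" % valid)          # roughly 2^20 / 256; finding them required testing every candidate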
legendary
Activity: 3808
Merit: 1723
Yeah but 2^124 is pretty much impossible these days.

The only way to go through all those hashes is with GPUs.

2.1267647932558653966460912964486e+37 hashes in total.

Most GPUs do 5-10 teraflops these days, but with the extra steps it would be slower, so say each GPU hashes at 1 TH/s.

Even with 1,000,000 GPUs hashing at 1TH/s.

It would take

21,267,647,932,558,653,966.460912964486 seconds

Which is 674,392,691,925.37588681065807218688 years....
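
A quick check of that arithmetic (a sketch assuming the hypothetical fleet above: 1,000,000 GPUs at 1 TH/s each, i.e. 10^18 hashes per second in total):

Code:
hashes = 2 ** 124                     # ~2.13e37 candidate seeds
rate = 10 ** 6 * 10 ** 12             # 1,000,000 GPUs * 1 TH/s = 1e18 hashes/second
seconds = hashes / float(rate)
years = seconds / (365 * 24 * 3600)
print("seconds: %e" % seconds)        # ~2.13e19 seconds
print("years  : %e" % years)          # ~6.74e11 years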

With that amount of hash power you are better off mining ZEC or ETH, and you would at least get a guaranteed profit of $1,000,000 - $1,500,000 daily.
jr. member
Activity: 32
Merit: 1
Here is a visualization of Electrum's key/address system. Think of the hash functions as glasses, and think of the entropy as wine.

If you pour 124 bits of wine into the system, you only get 124 bits of security; even though some glasses have more capacity, if you only pour in 124, you only get 124 out. The maximum security the glasses can handle is 160 bits, if you don't reveal the public key.

So it's obvious that the hacker will try the 2^124 permutations, which is still infeasible; however, once quantum computers come out, every "drop of wine" will count.


I hope now it's easier to understand.  Smiley


Edit: sorry I confused 4 with 8, now it's fixed
jr. member
Activity: 32
Merit: 1

The problem is that you do not understand what entropy is. I think there is no point continuing this discussion.

Btw, raising the num_bits parameter to 132 would have no effect at all; math.ceil() already ensures that n is a multiple of 11 bits.



Let me explain it another way if I may, because it seems that you are the one who is confused here.


SHA512 hash function =/= 512 bits of security by default

It only means a maximum of 512 bits of security, if the input entropy is 512 bits.



So if you feed 1 bit of entropy into a 512bit hash function, it will only have 1 bit of security.


There are 3 ways the attacker can proceed to crack it:
1) By brute forcing the hash function 2^512 permutations
2) By finding a shortcut/vulnerability in the hash function
3) By looking at the input function and brute forcing that.


-Now for this example we will ignore point 2).
-The attacker will also not try to brute force 2^512 permutations.
-So what he will do is just run through the input permutations.


WITH A DICE

So if your entropy comes from a die, and the attacker knows it's a die, then you have 6 values with about 2.58 bits of entropy.
If you hash a random number from 1-6, the hash won't have 512 bits of security, only 2.58 bits of security.
The attacker will obviously not go through 2^512 possibilities, only through 2^2.58, which is 6 values.
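
A minimal sketch of this dice example (illustrative only): hashing a die roll through SHA512 still leaves only 6 possible digests, so an attacker needs at most 6 guesses.

Code:
import hashlib
import math

die_rolls = [str(n).encode('utf8') for n in range(1, 7)]
digests = set(hashlib.sha512(x).hexdigest() for x in die_rolls)

print("possible digests: %d" % len(digests))                    # 6, not 2^512
print("bits of security: %.2f" % math.log(len(digests), 2))     # ~2.58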



WITH ELECTRUM

In Electrum you have an is_new_seed function that takes away 8 bits of entropy. It doesn't matter that you wrap it in a SHA512 function, because the BTC protocol and the Electrum source code are public: just as with the die above, that function will only have as much security as its input entropy. Just as the attacker in the analogy knows that it's a die, here the attacker knows that it's the is_new_seed function.

So according to my calculations we have 124 bits of entropy; therefore, at all layers, the maximum security is only 124 bits, even if it's packaged into a RIPEMD-160 and the public key is not known.

So the easiest path to crack an Electrum private key is to just run through the 2^124 permutations, that is the shortest route.

While people who use a minimum of 160 bits of input entropy can enjoy 2^160 permutations of security with an unspent BTC address.



Do you understand my analogy now? I am not trying to be cocky or insulting; I just believe that this is a security issue in Electrum that needs to be resolved. Smiley

jr. member
Activity: 32
Merit: 1
I perfectly understand your argument.

Yes, the number of valid seeds is shrunk.
But that does not matter, because an attacker still needs to enumerate all seeds, in order to know if they are valid.
So, we are not reducing the size of the haystack.

It is as if you were claiming that the number of possible combinations is one because in the end there is only one seed that matches the private keys.
With that kind of reasoning, the entropy of anything is zero.

The problem is that you do not understand what entropy is. I think there is no point continuing this discussion.

Btw, raising the num_bits parameter to 132 would have no effect at all; math.ceil() already ensures that n is a multiple of 11 bits.



It's a hard mental exercise, and this does matter; maybe the missing bits are not as many, but it's still not good to misrepresent it.

For me security is important, hence my name, so let me explain it in a simple way.


THREAT MODELING


There are 6 layers in the Electrum Wallet:

  • 1. Attacking the Bitcoin Address
  • 2. Attacking the Public Key
  • 3. Attacking the bip32_private_derivation output
  • 4. Attacking the bip32_root output
  • 5. Attacking the is_new_seed output
  • 6. Attacking the seed


6. If the attacker has the seed, then it's already over.

5. The is_new_seed output has 512-bit security, so even if it's made public, which there is no reason to do, it's theoretically safe.
4. The bip32_root output has 512-bit security, so it's the same as above.
3. bip32_private_derivation is just an encoding mechanism, so there is no entropy change here, and it's as vulnerable as the seed.
2. The public key has a maximum security of 128 bits if made public.
1. The bitcoin address has a maximum security of 160 bits if made public / or is already public.


Now as you have seen, points 6 and 4 have 0 security, but they are not made public anyway; they are just handled internally, or on an offline machine if done so.

So the only things that go public are the public key and the bitcoin address. In the case of a bitcoin address, if it's unspent it remains 160 bits; if it's spent, then 128 bits.




So far so good. But my point was that the input entropy is too low, due to flawed generation.

Think of it like a car engine: if you feed only 124 bits of entropy into it, the output will be at most 124 bits (and possibly lower due to collisions).


So it doesn't matter that we have a 512-bit shield at points 5) and 4); those values are private anyway, because they are handled in memory.


What I am saying is that the is_new_seed function is lowering our entropy by 8 bits regardless of what we feed into the make_seed function.

I don't know how else to explain it to you, just test out my code and see for yourself:
https://bitcointalksearch.org/topic/m.17802165




Btw, raising the num_bits parameter to 132 would have no effect at all; math.ceil() already ensures that n is a multiple of 11 bits.


You still don't understand what I am saying to you. Please read my post carefully.

legendary
Activity: 3808
Merit: 1723
So say someone has 1000 GPUs and each GPU hashes at 1 GH/s, that is 1 TH/s of hashing power in total.

If the entropy is 2^64 = 18,446,744,073,709,551,616 hashes.

So 18,446,744.073709551616 seconds to find all seeds ? About 213 Days ?

Or should it be 16^64 ?
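
For reference, a quick check of the 2^64 arithmetic above (assuming the 1 TH/s aggregate rate):

Code:
hashes = 2 ** 64                             # 18,446,744,073,709,551,616
rate = 10 ** 12                              # 1000 GPUs * 1 GH/s = 1 TH/s
seconds = hashes / float(rate)
print("seconds: %.0f" % seconds)             # ~18,446,744 seconds
print("days   : %.0f" % (seconds / 86400))   # ~213 days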


legendary
Activity: 1896
Merit: 1353
I perfectly understand your argument.

Yes, the number of valid seeds is shrunk.
But that does not matter, because an attacker still needs to enumerate all seeds, in order to know if they are valid.
So, we are not reducing the size of the haystack.

It is as if you were claiming that the number of possible combinations is one because in the end there is only one seed that matches the private keys.
With that kind of reasoning, the entropy of anything is zero.

The problem is that you do not understand what entropy is. I think there is no point continuing this discussion.

Btw, raising the num_bits parameter to 132 would have no effect at all; math.ceil() already ensures that n is a multiple of 11 bits.
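
A quick check of that claim (a sketch assuming a 2048-word list, i.e. 11 bits per word):

Code:
import math

bpw = math.log(2048, 2)                          # 11.0 bits per word
for num_bits in (128, 132):
    n = int(math.ceil(num_bits / bpw)) * bpw
    print("num_bits=%d -> n=%d" % (num_bits, n))
# num_bits=128 -> n=132
# num_bits=132 -> n=132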

jr. member
Activity: 32
Merit: 1
Again, you are wrong.

Just imagine for a second that the prefix passed to is_new_seed() is no longer 8 bits long, but 132 bits long.
Imagine, for the sake of the argument, that I have a seed that passes this test; its hash starts with the 132-bit prefix required by is_new_seed().
That seed, by the way, was generated by 12 words randomly chosen from a 2048-word dictionary.


Your objection refers to the hash function, but I was not referring to that.

The hash function's output is 512 bits in the seed-checking phase, but that is meaningless, because that hash string remains confidential, and even if it's exposed it doesn't matter, because the 512-bit function masks the input entropy.

The entropy loss is due to fixing the 2 hex characters, and it has nothing to do with the hash function. The hash function could be SHA1 for that matter.

But if we restrict the first 2 characters to the fixed value '01', then that loses us 8 bits of entropy, which I have proven experimentally above.

And it is an issue, since a brute force attacker has 319014718988380000000000000000000000000 fewer permutations to go through.

So it does matter how you encode the seed version, because by shrinking the haystack to compatible strings whose hash starts with '01', we are losing entropy: the haystack is smaller, and the attacker can find the needle faster.
jr. member
Activity: 32
Merit: 1
I suggest raising the num_bits=128 to 132

Code:
    def make_seed(self, seed_type='standard', num_bits=132, custom_entropy=1):


https://github.com/spesmilo/electrum/blob/master/lib/mnemonic.py



So the bpw and num_bits lines further increase it to 136,

Code:
        bpw = math.log(len(self.wordlist), 2)
        num_bits = int(math.ceil(num_bits/bpw)) * bpw



And by losing 8 bits of entropy, we will have 128 bits as the de facto entropy, if that is the goal. Because currently the seed generated by default has only 124 bits of entropy instead of 128.



jr. member
Activity: 32
Merit: 1
Again, you are wrong.


I am sorry to say this, but I have to say it bluntly: it's you who is wrong.

No disrespect, I like your work, and support it 100%, it's just that I think you misunderstood my argument.

I can prove, and have proven, this experimentally; there are 2 ways to prove it:

1) TEST HEX CHAR BITS

Code:

#!/usr/bin/env python
import hashlib
import binascii
import math

filter_bit = 0
total_bit = 0

def byte_to_binary(n):
    return ''.join(str((n & (1 << i)) and 1) for i in reversed(range(8)))

def hex_to_binary(h):
    return ''.join(byte_to_binary(ord(b)) for b in binascii.unhexlify(h))


#### total permutations = 2 ^ nest size (8 nested loops -> 2^8 = 256 inputs)

for a in range(2):                    # 1st bit
  for b in range(2):                  # 2nd bit
    for c in range(2):                # 3rd bit
      for d in range(2):              # 4th bit
        for e in range(2):            # 5th bit
          for f in range(2):          # 6th bit
            for g in range(2):        # 7th bit
              for h in range(2):      # 8th bit
                tupstr = str(a)+str(b)+str(c)+str(d)+str(e)+str(f)+str(g)+str(h)
                hashed = hashlib.sha512(tupstr).hexdigest()
                hashed = hex_to_binary(hashed)
                total_bit += 1
                # keep only hashes whose first two binary digits are '0' and '1'
                if hashed[0] == "0" and hashed[1] == "1":
                    filter_bit += 1

#####################################

print "Filtered: " + str(filter_bit)
print "Filtered Entropy: " + str(math.log(filter_bit, 2))
print "Total: " + str(total_bit)
print "Total Entropy: " + str(math.log(total_bit, 2))


It can easily be shown that 1 hex char = 4 bits, and the experiment always results in a 2-bit loss when we fix the first 2 binary digits of the hash.

Therefore fixing 2 hex characters is a 2*4 = 8 bit entropy loss (because 1 hex character is 4 bits long).

This is exactly how Electrum fixes the first 2 hex characters, which is equivalent to an 8 bit entropy loss in Electrum.
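
A compact variant of the same experiment, this time filtering on the first two hex characters of the digest, which is what the '01' prefix actually fixes; out of the 2^16 inputs, roughly 1 in 256 passes, i.e. the accepted set is about 8 bits smaller:

Code:
import hashlib
import math

total = 0
kept = 0
for i in range(2 ** 16):
    digest = hashlib.sha512(("%016d" % i).encode('utf8')).hexdigest()
    total += 1
    if digest.startswith('01'):       # first two hex characters fixed to '01'
        kept += 1

print("total inputs: %d (%.1f bits)" % (total, math.log(total, 2)))
print("kept inputs : %d (~%.1f bits)" % (kept, math.log(kept, 2)))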

2) SEED TEST - MONTE CARLO SIMULATION

Code:
# ADD these to mnemonic.py (the methods go inside the Mnemonic class; container is module-level)

container = []

    def test_make_seed(self, num_bits=128, prefix=version.SEED_PREFIX):
        # increase num_bits in order to obtain a uniform distribution for the last word
        # bpw = math.log(len(self.wordlist), 2)   # 11.0
        # n = int(math.ceil(num_bits/bpw)) * bpw  # 132.0
        # print_error("make_seed", prefix, "adding %d bits"%n)
        n = 11
        my_entropy = ecdsa.util.randrange(pow(2, n))
        nonce = 0
        while True:
            nonce += 1
            i = my_entropy + nonce
            seed = self.mnemonic_encode(i)
            assert i == self.mnemonic_decode(seed)
            if is_new_seed(seed, prefix):
                break
        print_error('%d words'%len(seed.split()))
        return seed

    def searchf(self):
        # collect every distinct seed returned by test_make_seed() over 500 trials
        seed = self.test_make_seed()
        container.append(seed)
        for nonce in range(0, 500):
            seed = self.test_make_seed()
            n = len(container)
            test = "T"
            for pp in range(0, n):
                if container[pp] == seed:
                    test = "F"
                    break
            if test == "T":
                container.append(seed)


Mnemonic().searchf()
print container

We fix the entropy to 11 bits (so the seed is only 1 word long, for simplicity).

And the loop collects all seed variations for those 11 bits, and hopefully in 500 iterations it goes through all of them, like a Monte Carlo simulation (it does).

And we only get 8 seeds, which means that a 2^11 input only yields a 2^3 output, because 11 - 3 = 8.

We are missing 8 bits of entropy



So either way we are losing 8 bits of entropy, which DOES matter, because the possible combinations are shrunk.

In the 2nd testing method, we ought to get 2^11 combinations but we only get 2^3.

That is an obvious security loss, and the 8 bits of entropy loss are real.


legendary
Activity: 1896
Merit: 1353
Again, you are wrong.

Just imagine for a second that the prefix passed to is_new_seed() is no longer 8 bits long, but 132 bits long.
Imagine, for the sake of the argument, that I have a seed that passes this test; its hash starts with the 132-bit prefix required by is_new_seed().
That seed, by the way, was generated by 12 words randomly chosen from a 2048-word dictionary.

So, is the entropy of the seed now zero?
If I follow your argument, it should be, because is_new_seed() has subtracted 132 bits of entropy.

From my point of view, the entropy is indeed zero, because I know the seed. Just like the entropy of anything I know with 100% certainty.

From your point of view, however, nothing has changed: you still need to enumerate a set of 2^132 candidate seeds in order to find the seed.

I hope this enlightens you.

For the record, I have written a paragraph on the only real issue here, which is how key stretching is affected.
http://docs.electrum.org/en/latest/seedphrase.html#security-implications
jr. member
Activity: 32
Merit: 1
...
Is that a correct assessment?

No that is not correct.

First, we are not talking about 2 but 8 bits. I do not know why you made that statement about 2 bits.


I think you misunderstood me, please re-read my post. Let me explain it better

I have tested that, in hex mode, input loss = output loss (+/- some variance at certain bits, but it converges to this).


Which means that for N fixed output characters, you have N fixed input characters.

The code fixes 2 hex characters "01", and we know that 1 hex character is 4 bits. So that is 2*4 = 8 bits of entropy lost.






Second, we are not fixing the bits passed to bip32_root. The seed passed to bip32_root is not hashed with "Seed version", but with a different string. That assumption seems to be present in the last part of your reasoning.


Of course, I haven't said that it's like this; that was just a side tangent. I actually said that the bip32_root has nothing to do with this.




Third, and this is the most important point, there is no 'loss' of entropy.


Yes there is, because if you fix the output, you fix the input.

Suppose SHA-512 is bijective (it isn't entirely, but that is another issue).

Then we have 2^512 combinations a hacker has to go through to get the key.

If we fix 8 bits of entropy, that means only 504 bits of entropy for the SHA512, and only 124 bits of entropy for the input, since the input must conform to that criterion.

Because only 2^124 seeds map into the 2^504 possible outputs of the SHA512, and the remaining 8 bits have 0 entropy.



Now here is the conclusion:

So you are saying that computationally this doesn't make a difference, but I disagree, because entropy is just information, and it gets carried forward.

Because you have a -8 bit deficiency, and only 124 bits get passed on to bip32_root to generate the xpriv and the child priv keys from there on.

It's not that the 2 SHA outputs collide; don't misinterpret my words, that is not what it is. It's that the is_new_seed function is limiting the possible inputs by a factor of 2^8, which results in an 8 bit entropy loss by definition.




So a default wallet creation goes like this (steps 5-7 are sketched in code below):

1)  132 bits of initial entropy pool
2)  limited to 124 bits to create seed words at is_new_seed 
3)  124 bits to master private key at  bip32_root
4)  then 124 into bip32_private_derivation or whatever that creates child priv keys
5)  then 124 bit into ECDSA
6)  then 124 bit into RIPEMD-160
7)  Then Base58 encoding and you have a 124 bit address.
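
For steps 5-7, here is a minimal illustrative sketch of how a private key becomes a "1..." address. It assumes the third-party `ecdsa` package, a dummy hard-coded private key, and an OpenSSL build that still provides ripemd160; it is not Electrum's own code.

Code:
import hashlib
import binascii
import ecdsa

B58 = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58check(payload):
    # Base58Check: append the first 4 bytes of double-SHA256 as a checksum, then encode in base 58
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int(binascii.hexlify(data), 16)
    out = ''
    while n > 0:
        n, r = divmod(n, 58)
        out = B58[r] + out
    pad = len(data) - len(data.lstrip(b'\x00'))   # each leading zero byte becomes a '1'
    return '1' * pad + out

# step 5: ECDSA -- derive the public key from a (dummy, hard-coded) private key
privkey = b'\x01' * 32
sk = ecdsa.SigningKey.from_string(privkey, curve=ecdsa.SECP256k1)
pubkey = b'\x04' + sk.get_verifying_key().to_string()            # uncompressed public key

# step 6: RIPEMD-160 of SHA-256 of the public key ("hash160")
h160 = hashlib.new('ripemd160', hashlib.sha256(pubkey).digest()).digest()

# step 7: version byte 0x00 + hash160, Base58Check-encoded -> a "1..." address
print(b58check(b'\x00' + h160))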

Furthermore, every hashing leaks about 0.5 bits of entropy if the input is smaller than the capacity of the hash, because the hashing functions are not bijective. Who knows, maybe RIPEMD could leak up to 1 bit, because the input almost depletes its capacity. Not to mention that the BTC address also has an encoding scheme (addresses start with 1 or 3), so add that to it as well.

So you could easily end up with a 120 bit Bitcoin address (in both spent and unspent state) despite the original entropy pool being 132 bits.

Whereas if we compensate for the lost 8 bits (in Electrum), we could create a seed that is 168 bits, so that an unspent BTC address could be a maximum of 156 bits strong. And a spent BTC address 124 bits.

Unfortunately RIPEMD-160 is a very weak algorithm; that is where most of the entropy is lost. It should be replaced with something bigger, of 256 bits.


legendary
Activity: 1896
Merit: 1353
...
Is that a correct assessment?

No that is not correct.

First, we are not talking about 2 but 8 bits. I do not know why you made that statement about 2 bits.

Second, we are not fixing the bits passed to bip32_root. The seed passed to bip32_root is not hashed with "Seed version", but with a different string. That assumption seems to be present in the last part of your reasoning.

Third, and this is the most important point, there is no 'loss' of entropy.

Entropy is a measure of uncertainty in a system. It is relative, not absolute. It makes sense to talk about entropy only if you clearly define what your prior knowledge of the system is. In our case, we need to look at how many bits of uncertainty there are from the point of view of an attacker. In general, in order to crack an n-bit seed, an attacker needs to perform 2^n iterations of public key generation.

If we impose a constraint on the seed, namely that its hash starts with a given prefix of length m bits, this does not reduce the number of iterations an attacker has to perform. The attacker still has to enumerate 2^n seeds and test them. Therefore, it is incorrect to claim that we are reducing entropy. The only thing that changes is that the test function will return faster for invalid seeds (because it does not have key stretching). So, what we are losing is the benefit of key stretching on m bits. But we are not losing m bits of entropy.

To understand that these bits are not lost, consider an extreme case where the seed has 132 bits and the prefix has 64 bits. Would you say that we have lost 64 bits of entropy? No, because it has become incredibly difficult to generate a seed. An attacker still has to go through these 2^64 iterations before they can test each of the remaining 2^64 public keys.

Note that it is possible to express the benefit of key stretching in "bits", although that's a bit like adding oranges and apples. Nevertheless, if you consider that key stretching increases the number of "bits" of your seed, you have to understand that it only adds a constant. The strength of a seed increases exponentially with its length, and only linearly with the number of iterations of key stretching. What we are losing is a fraction of this constant.
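
A back-of-the-envelope sketch of that last point (purely illustrative; S is a hypothetical per-candidate key-stretching cost in plain-hash units, not Electrum's actual iteration count):

Code:
import math

S = 100000                              # hypothetical stretching cost per candidate

with_prefix = 2 ** 132 + 2 ** 124 * S   # cheap prefix checks, plus stretching only for the survivors
no_prefix   = 2 ** 132 * S              # if every candidate had to be fully stretched

print("attacker work, prefix filter : 2^%.1f" % math.log(with_prefix, 2))   # ~2^140.6
print("attacker work, no such filter: 2^%.1f" % math.log(no_prefix, 2))     # ~2^148.6
# The difference is a slice of the (constant) stretching bonus, not 8 bits of seed entropy.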

jr. member
Activity: 32
Merit: 1
The seed has 132 bits and the length of the prefix is 8 bits. Therefore, 8 bits are "lost" by imposing the 0x01 prefix.

However, there is no way to enumerate seeds that hash with the desired prefix, other than brute force. Therefore, from a security point of view, these bits are not "lost"; an attacker still needs to use brute force in order to find valid seeds, just like they need to use brute force in order to attack the remaining bits.

There is still a weakening of the seed that results from the imposed prefix, because no key stretching is required in order to generate the prefix.
But it is wrong to express it in terms of "bits lost"; all you can say is that these 8 bits are easier to enumerate than the remaining 124 bits.


I am just an amateur, but I have analyzed the code.

There are 2 use cases of the SHA512, where the HMAC is keyed with:
  • "Seed version" in is_new_seed in the Mnemonic.py
  • And with "Bitcoin seed" in the Bitcoin.py in bip32_root function to generate the xpriv key

Now I assume that keying it with different messages doesn't leak information; after all, this is the point of the hash function.

So we are left with the Mnemonic.py is_new_seed function, where the 2 hex characters of the prefix are fixed:

Code:
SEED_PREFIX      = '01'

(via version.py)


Now I have experimentally tested the relationship between the input and output of the SHA512 function, and asked some cryptographers about it as well.

And it looks to me that by fixing 2 output bits, you lose 2 input bits (on average; it converges towards 2 bits, +/- some variance at certain inputs). Or in general terms, the input loss = the output loss, on average.

And since the test is happening in hex

Code:
s = hmac_sha_512("Seed version", x.encode('utf8')).encode('hex')
return s.startswith(str(prefix))

1 hex char is 4 bits, therefore you have 2 characters * 4 bits = 8 bits.


  • So if the seed is <520 bits long, then it's easier to crack the seed than the SHA512
  • If the seed is 520 bits, then it's the same as SHA512
  • If the seed is >520 bits, then it's easier to crack the SHA512



Is that a correct assessment?
legendary
Activity: 1896
Merit: 1353
The seed has 132 bits and the length of the prefix is 8 bits. Therefore, 8 bits are "lost" by imposing the 0x01 prefix.

However, there is no way to enumerate seeds that hash with the desired prefix, other than brute force. Therefore, from a security point of view, these bits are not "lost"; an attacker still needs to use brute force in order to find valid seeds, just like they need to use brute force in order to attack the remaining bits.

There is still a weakening of the seed that results from the imposed prefix, because no key stretching is required in order to generate the prefix.
But it is wrong to express it in terms of "bits lost"; all you can say is that these 8 bits are easier to enumerate than the remaining 124 bits.
legendary
Activity: 3682
Merit: 1580
I deleted my post above for a reason. I was not sure what I was talking about Smiley I read somewhere on the forum that the seed is effectively 124 bits after all the entropy loss due to the seed version/checksum. But then I read the code and I saw that it generates a 132-bit random number. Then it keeps incrementing that number until it gets one that hashes to the correct seed version.

To answer your question: is the entropy loss from that hashing worth 8 bits? Sorry, IDK Smiley I'm not a cryptographer.
jr. member
Activity: 32
Merit: 1
Electrum uses a verification system for seeds; I am interested only in the standard wallet for now.

It has:

Code:
def is_new_seed(x, prefix=version.SEED_PREFIX):
    import mnemonic
    x = mnemonic.normalize_text(x)
    s = hmac_sha_512("Seed version", x.encode('utf8')).encode('hex')
    return s.startswith(prefix)

Which basically, after generating a random number with N bits of entropy, checks the seed version: the seed is hashed with HMAC-SHA512 using the key "Seed version", and the resulting hex digest must start with "01".

For example:
Code:
015a2cb3e1eb920445455c380f4fdd026f17bd18e3ab06ecd7fda65e5340cebf955c228eaa88ff997a70a6172cd3960c3e237b462e6d2a61f259b5955a5cf510


It generates the following seed:
Code:
shaft dizzy alarm core deposit mandate off mixed cover size refuse protect




Now this procedure probably means an entropy loss, since we are shrinking down the haystack in order to find an output that starts with 01, which means that the first characters of that string are fixed, not random.
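
For reference, the check can be reproduced standalone roughly like this (a sketch: the full normalize_text() step is skipped, which should make no difference for a plain lowercase ASCII seed; if the example above is right, the printed digest should match it and start with '01'):

Code:
import hmac
import hashlib

seed = "shaft dizzy alarm core deposit mandate off mixed cover size refuse protect"
digest = hmac.new(b"Seed version", seed.encode('utf8'), hashlib.sha512).hexdigest()
print(digest)
print("standard seed: %s" % digest.startswith('01'))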


How much entropy does the seed lose by performing this versioning system on it?