
Topic: Double hashing: less entropy? (Read 4050 times)

donator
Activity: 2058
Merit: 1054
June 12, 2012, 09:02:14 AM
#25
Right, this is because of the changing function, with some support from the continuity approximation.

Let's go to the extreme: Let's say after several rounds we end up with 2 elements, so k = 2/N. Then after another round we should end up with (2 - 2/N) elements, which is a bit less, and with additional rounds we converge to 0.

Of course, we cannot actually have a noninteger number of elements. But that's not a problem when we change the function each time. Most functions will map 2 elements to 2 elements; but after enough rounds we eventually stumble upon a function that maps both elements to one.

With a consistent function, this can't happen; either it sends them to 2 elements (which must be the same ones) or it doesn't. If it does then it remains at 2 elements forever without decreasing further.

And this doesn't have to happen when we're at 2 elements; after enough rounds, we've weeded out all elements that are not in the function's cycles, and what we're left with we're stuck with forever. And the set of cycle elements can be very large, though I have no idea just how large.

You've given me an idea on how to analyse the fixed-function case. You're better than me at this stuff, so correct me if I'm wrong.

Represent every element as a vertex in a graph. Then, add directed edges between vertices according to how SHA-256 maps elements onto other elements. The final set of cycle elements is all the stuff that is part of a cycle. The question is, how big is this set? I attempted a rough estimate, and was surprised at what I got.

The graph can be constructed by building chains/cycles. To begin, pick an element, run it through the random oracle, then add a directed edge between input and output. Then pick the output element, run it through the same random oracle ... etc. This builds up the first chain, and eventually this chain will loop back on itself, creating a cycle. How big is this cycle? Let N be the number of elements. After adding 1 edge, the probability that the chain will loop back on itself is 1 / N. After adding k edges, the probability that the chain will loop back on itself is k / N. Thus the probability that the first chain has k edges with no cycles is:
Code:
(1 - 1/N)(1 - 2/N)...(1 - k/N)
This can be approximated as (1 - k / (2N)) ^ k (as long as k is much smaller than N), and then approximated again as exp(-k ^ 2 / (2N)). Equating this to 0.5 reveals that the first chain, on average, has a length of roughly sqrt(N).

The rest of the graph can be constructed by picking an unconnected vertex and running it through the random oracle, building another chain. However, this time, the chain either loops back on itself (creating another cycle) or merges with a previous chain. Very roughly, the second chain has a 1/2 chance of creating another cycle and a 1/2 chance of merging with the first chain (because its average length should be similar to the first chain's average length). Likewise, the ith chain has a 1/i chance of creating another cycle and an (i - 1)/i chance of merging with any one of the previous (i - 1) chains. The average number of cycles is then 1 + 1/2 + 1/3 + 1/4 + ...; the harmonic series. Even for 2 ^ 128 terms, this is only about 100.

My estimate for the size of the final set is: average cycle length * average number of cycles, and is very roughly, 100 * sqrt(N). For SHA-256, this is about 2 ^ 135. That's much lower than I expected! But to get to this, you probably have to go through an insane (like 2 ^ 128) number of rounds.
Looks OK; there are a few small issues, but they don't matter much.
legendary
Activity: 2940
Merit: 1333
June 11, 2012, 03:10:52 PM
#23
Satoshi encouraged people to mine with GPUs; he did foresee this.

Eh. I remember hearing the opposite. I probably remember wrong.

No, you're right.  He foresaw it, but discouraged it:

We should have a gentleman's agreement to postpone the GPU arms race as long as we can for the good of the network.  It's much easier to get new users up to speed if they don't have to worry about GPU drivers and compatibility.  It's nice how anyone with just a CPU can compete fairly equally right now.
member
Activity: 78
Merit: 11
Chris Chua
June 11, 2012, 02:15:04 PM
#22
Right, this is because of the changing function, with some support from the continuity approximation.

Let's go to the extreme: Let's say after several rounds we end up with 2 elements, so k = 2/N. Then after another round we should end up with (2 - 2/N) elements, which is a bit less, and with additional rounds we converge to 0.

Of course, we cannot actually have a noninteger number of elements. But that's not a problem when we change the function each time. Most functions will map 2 elements to 2 elements; but after enough rounds we eventually stumble upon a function that maps both elements to one.

With a consistent function, this can't happen; either it sends them to 2 elements (which must be the same ones) or it doesn't. If it does then it remains at 2 elements forever without decreasing further.

And this doesn't have to happen when we're at 2 elements; after enough rounds, we've weeded out all elements that are not in the function's cycles, and what we're left with we're stuck with forever. And the set of cycle elements can be very large, though I have no idea just how large.

You've given me an idea on how to analyse the fixed-function case. You're better than me at this stuff, so correct me if I'm wrong.

Represent every element as a vertex in a graph. Then, add directed edges between vertices according to how SHA-256 maps elements onto other elements. The final set of cycle elements is all the stuff that is part of a cycle. The question is, how big is this set? I attempted a rough estimate, and was surprised at what I got.

The graph can be constructed by building chains/cycles. To begin, pick an element, run it through the random oracle, then add a directed edge between input and output. Then pick the output element, run it through the same random oracle ... etc. This builds up the first chain, and eventually this chain will loop back on itself, creating a cycle. How big is this cycle? Let N be the number of elements. After adding 1 edge, the probability that the chain will loop back on itself is 1 / N. After adding k edges, the probability that the chain will loop back on itself is k / N. Thus the probability that the first chain has k edges with no cycles is:
Code:
(1 - 1/N)(1 - 2/N)...(1 - k/N)
This can be approximated as (1 - k / (2N)) ^ k (as long as k is much smaller than N), and then approximated again as exp(-k ^ 2 / (2N)). Equating this to 0.5 reveals that the first chain, on average, has a length of roughly sqrt(N).

The rest of the graph can be constructed by picking an unconnected vertex and running it through the random oracle, building another chain. However, this time, the chain either loops back on itself (creating another cycle) or merges with a previous chain. Very roughly, the second chain has a 1/2 chance of creating another cycle and a 1/2 chance of merging with the first chain (because its average length should be similar to the first chain's average length). Likewise, the ith chain has a 1/i chance of creating another cycle and an (i - 1)/i chance of merging with any one of the previous (i - 1) chains. The average number of cycles is then 1 + 1/2 + 1/3 + 1/4 + ...; the harmonic series. Even for 2 ^ 128 terms, this is only about 100.

My estimate for the size of the final set is: average cycle length * average number of cycles, and is very roughly, 100 * sqrt(N). For SHA-256, this is about 2 ^ 135. That's much lower than I expected! But to get to this, you probably have to go through an insane (like 2 ^ 128) number of rounds.
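
If anyone wants to sanity-check this kind of estimate, here is a rough Monte Carlo sketch (Python, added purely for illustration: a small random function stands in for SHA-256, and n and trials are arbitrary toy parameters):
Code:
import random

def cyclic_count(n, rng):
    """Count the elements of a random function f: [n] -> [n] that lie on a cycle."""
    f = [rng.randrange(n) for _ in range(n)]
    indeg = [0] * n
    for y in f:
        indeg[y] += 1
    # Repeatedly peel off elements with no remaining preimage; they cannot be cyclic.
    stack = [x for x in range(n) if indeg[x] == 0]
    removed = 0
    while stack:
        x = stack.pop()
        removed += 1
        indeg[f[x]] -= 1
        if indeg[f[x]] == 0:
            stack.append(f[x])
    return n - removed  # whatever survives is exactly the union of the cycles

rng = random.Random(0)
n, trials = 1 << 14, 20
avg = sum(cyclic_count(n, rng) for _ in range(trials)) / trials
# Random-mapping theory predicts about sqrt(pi*n/2), i.e. roughly 1.25*sqrt(n), cyclic elements.
print(f"n = {n}: average cyclic elements ~ {avg:.0f}, sqrt(n) ~ {n ** 0.5:.0f}")
(The standard random-mapping result puts the expected number of cyclic elements at about sqrt(pi*N/2), so the order-of-magnitude conclusion above survives even if the constant is off.)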
donator
Activity: 2058
Merit: 1054
June 11, 2012, 11:55:38 AM
#21
No, because the assumptions I made become less true the more rounds are done (maybe they're not even accurate enough after one round). The set of all possible images of SHA256^N becomes smaller for larger N until it converges to a fixed set (which is probably very large). Then SHA-256 is a permutation (one-to-one mapping) on this set. (This is true for every function from a space to itself).
But this is probably because I assumed that each round had its own independent random oracle. The results may be different for a fixed function like SHA-256.
Right, this is because of the changing function, with some support from the continuity approximation.

Let's go to the extreme: Let's say after several rounds we end up with 2 elements, so k = 2/N. Then after another round we should end up with (2 - 2/N) elements, which is a bit less, and with additional rounds we converge to 0.

Of course, we cannot actually have a noninteger number of elements. But that's not a problem when we change the function each time. Most functions will map 2 elements to 2 elements; but after enough rounds we eventually stumble upon a function that maps both elements to one.

With a consistent function, this can't happen; either it sends them to 2 elements (which must be the same ones) or it doesn't. If it does then it remains at 2 elements forever without decreasing further.

And this doesn't have to happen when we're at 2 elements; after enough rounds, we've weeded out all elements that are not in the function's cycles, and what we're left with we're stuck with forever. And the set of cycle elements can be very large, though I have no idea just how large.
member
Activity: 78
Merit: 11
Chris Chua
June 11, 2012, 11:35:08 AM
#20
No, because the assumptions I made become less true the more rounds are done (maybe they're not even accurate enough after one round). The set of all possible images of SHA256^N becomes smaller for larger N until it converges to a fixed set (which is probably very large). Then SHA-256 is a permutation (one-to-one mapping) on this set. (This is true for every function from a space to itself).

I thought it would be interesting to see what the entropy reduction is for multiple rounds. I assumed each round has its own independent random oracle which maps k * N elements to N potential output elements, where 0 <= k <= 1 and N is 2 ^ 256. For each round, I found that on average, exp(-k) * N output elements have no preimage. Therefore, each round maps k * N elements to (1 - exp(-k)) * N elements.

Iterating this, the entropy reduction (ignoring the non-uniform output distribution for now) is:
Round | Cumulative entropy reduction (bits)
    1 | 0.6617
    2 | 1.0938
    4 | 1.6800
    8 | 2.4032
   16 | 3.2306
   32 | 4.1285
   64 | 5.0704
  128 | 6.0381
  256 | 7.0204

I don't observe any convergence, and indeed the equation k = 1 - exp(-k) has one solution: at k = 0. But this is probably because I assumed that each round had its own independent random oracle. The results may be different for a fixed function like SHA-256.
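
As an illustration, a minimal sketch that reproduces the table above by iterating k -> 1 - exp(-k) and printing the cumulative entropy reduction -log2(k), still ignoring the non-uniform output distribution:
Code:
import math

k = 1.0  # fraction of the 2^256 space still reachable; start from the full space
for r in range(1, 257):
    k = 1.0 - math.exp(-k)  # fraction of outputs with at least one preimage
    if r in (1, 2, 4, 8, 16, 32, 64, 128, 256):
        print(f"round {r:3d}: cumulative entropy reduction ~ {-math.log2(k):.4f} bits")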
donator
Activity: 2058
Merit: 1054
June 11, 2012, 10:52:32 AM
#19
I did a calculation which says that every application of SHA-256 reduces entropy by about 0.5734 bits. I have no idea if that's correct.

Assuming SHA-256 is a random function (maps every input to a uniformly random independent output), you will end up having (on average) (1-1/e)*2^256 different outputs. This indeed means a loss of entropy of about half a bit. Further iterations map a smaller space to an output space of 2^256, and the loss of entropy of each further application drops very quickly. It's certainly not the case that you lose any significant amount by doing 1000 iterations or so.
I took it one step further and considered the distribution among the outputs, which is not uniform; the result for the amount of entropy lost is (1/e)*sum(log_2 n / (n-1)!). But this is probably little more than a nice exercise, as SHA-256 is likely too far from random for this calculation to be meaningful.

And I mistakenly input ln rather than log_2 to the software, so the value I want really should be 0.827.
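
For reference, both figures are easy to reproduce numerically; a minimal sketch (Python, with the series truncated at n = 20, which is more than enough for convergence):
Code:
import math

# Ignoring the non-uniform output distribution: -log2(1 - 1/e)
uniform_loss = -math.log2(1.0 - math.exp(-1.0))

# Accounting for the Poisson-distributed preimage counts: (1/e) * sum_{n>=1} log2(n)/(n-1)!
poisson_loss = math.exp(-1.0) * sum(math.log2(n) / math.factorial(n - 1) for n in range(1, 21))

print(f"uniform approximation: {uniform_loss:.4f} bits")   # ~ 0.6617
print(f"with preimage counts:  {poisson_loss:.4f} bits")   # ~ 0.8272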
sr. member
Activity: 476
Merit: 250
Tangible Cryptography LLC
June 11, 2012, 08:58:22 AM
#18
Do people who design hash functions target these kinds of properties, or are they sort of an after-the-fact implication of stronger sought-after properties of the function?

It is a property they are targeting. The ideal hashing function is described as a random oracle:
http://en.wikipedia.org/wiki/Random_oracle

No hashing function to date is a perfect random oracle but that is the ideal they are striving for.
legendary
Activity: 1176
Merit: 1011
June 11, 2012, 08:05:15 AM
#17
No, because the assumptions I made become less true the more rounds are done (maybe they're not even accurate enough after one round). The set of all possible images of SHA256^N becomes smaller for larger N until it converges to a fixed set (which is probably very large). Then SHA-256 is a permutation (one-to-one mapping) on this set. (This is true for every function from a space to itself).
Ah right, yeah. It would be interesting to know (or at least have a reasonable estimate of) how large this convergence set is.

Intuitively I'd say it's quite sufficient to hold as a secure hash for the foreseeable future, although I don't have any reasonable arguments for this.
legendary
Activity: 1050
Merit: 1000
You are WRONG!
June 11, 2012, 07:58:14 AM
#16
I should really stop posting in this thread, I'm embarrassing myself.
donator
Activity: 2058
Merit: 1054
June 11, 2012, 07:57:56 AM
#15
I did a calculation which says that every application of SHA-256 reduces entropy by about 0.5734 bits. I have no idea if that's correct.
Mmmh.

Does that mean that sha256^N(some 256-bit input), for N sufficiently large, would always converge to the same value, independent of the actual input?
No, because the assumptions I made become less true the more rounds are done (maybe they're not even accurate enough after one round). The set of all possible images of SHA256^N becomes smaller for larger N until it converges to a fixed set (which is probably very large). Then SHA-256 is a permutation (one-to-one mapping) on this set. (This is true for every function from a space to itself).

First, do we actually *know* that sha-256 is *not* a one-to-one mapping on the 256-bit space? If it turns out to be, then you've got nothing. I don't know the answer, I'm not a professional cryptographer, but looking at the code for SHA-256, there doesn't seem to be an obvious dropping of bits within the transform step itself, but then I am too lazy to analyze it in depth.
SHA-256, as a cryptographic hash function, aspires to be indistinguishable from random. If it was in fact random, the number of preimages for every 256-bit element would follow the Poisson distribution - about 36% would have no preimage, 36% would have one, 18% two, 6% three and so on. So I'd say it's almost certain that it's not a 1-1 mapping.
Very interesting observation, and you're probably correct.
Even more interesting would be to find a way to measure whether that is indeed the case (even if the verification is in a Monte Carlo sense).
That's probably as hard as determining whether the function is broken.

Finally, what someone said: the likely intent of the team who designed bitcoin was to slow mining down, not to
add a layer of security there.
What could that mean? The difficulty controls the mining rate. If a hash function half as hard were chosen, the difficulty would double and you'd have the same generation rate.

You're right, but then I'm not entirely convinced by the 'this buys us time the day sha-256 gets cracked' argument either.
If that was their concern, why not combine two wildly differing 256-bit hash algorithms and XOR the results?
That could probably work too, but it doesn't guarantee improved security either. Maybe the two hash functions are connected in some unexpected way and the XOR is actually weaker than either.
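
For concreteness, XOR-combining two 256-bit hashes would look something like the sketch below (illustration only; sha3_256 is just a stand-in for some second, "wildly differing" hash, and as said above the construction carries no guarantee of being at least as strong as either function):
Code:
import hashlib

def xor_combined(data: bytes) -> bytes:
    """XOR of two different 256-bit digests of the same input (illustration only)."""
    a = hashlib.sha256(data).digest()
    b = hashlib.sha3_256(data).digest()  # stand-in for a second, unrelated 256-bit hash
    return bytes(x ^ y for x, y in zip(a, b))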

SHA-256 itself is multiple iterations of a basic function. I'm guessing it is assumed that the basic function itself is ok and that the more times you apply it, the harder it is to attack.
legendary
Activity: 1176
Merit: 1011
June 11, 2012, 07:53:28 AM
#14
No! The number is wrong; if that were true, the entropy after 512 rounds of sha256 would be below 0, which is not good or insignificant. It's a huge error.

An acceptable reduction would be around 10^-80 bits/round.

Sorry dude, you did your math wrong.
Eh no, it doesn't work that way. Decreasing entropy from 256 bits to (256 - 0.5734) = 255.4266 bits means a reduction by a factor of 255.4266/256 = 0.997760156250.

0.997760156250 ^ 512 ≈ 0.317243314, so after 512 rounds about 31.7%, or 81.2 bits, of entropy remains.
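
Reproducing that arithmetic as a sketch (Python; note that compounding the per-round loss multiplicatively is just this post's quick model, and the iterated-oracle calculation further up the thread arrives at a much smaller cumulative loss):
Code:
factor = (256.0 - 0.5734) / 256.0      # 0.997760156250...
remaining = 256.0 * factor ** 512      # ~ 81.2 bits under this (multiplicative) model
print(f"{factor:.12f} ^ 512 = {factor ** 512:.9f} -> {remaining:.1f} bits")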

FYI: the standard Bitcoin-Qt client applies many tens of thousands of hashing rounds (or sometimes hundreds of thousands, depending on your CPU speed) to create the master key for wallet encryption. Well, rest assured, not all wallets end up with the same master key :)
legendary
Activity: 1072
Merit: 1189
June 11, 2012, 07:48:03 AM
#13
I did a calculation which says that every application of SHA-256 reduces entropy by about 0.5734 bits. I have no idea if that's correct.

Assuming SHA-256 is a random function (maps every input to a uniformly random independent output), you will end up having (on average) (1-1/e)*2^256 different outputs. This indeed means a loss of entropy of about half a bit. Further iterations map a smaller space to an output space of 2^256, and the loss of entropy of each further application drops very quickly. It's certainly not the case that you lose any significant amount by doing 1000 iterations or so.
sr. member
Activity: 476
Merit: 250
Tangible Cryptography LLC
June 11, 2012, 07:33:41 AM
#12
There are lots of "quirks" with the protocol but the core of it remains solid and sometimes surprisingly inclusive.

Rather than iterative uses of a single hashing function, using two hashing functions would have provided more resistance in the event that SHA-256 is significantly degraded.

i.e. RIPEMD160(SHA-256(input)+input)

Ironically, a modified version of this structure is used for creating addresses from public keys, but not for block hashing.
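
As a rough sketch, that construction would look like this in Python (illustration only; whether hashlib exposes "ripemd160" depends on the underlying OpenSSL build):
Code:
import hashlib

def combined_hash(data: bytes) -> bytes:
    """RIPEMD160(SHA-256(data) + data), as suggested above (illustration only)."""
    inner = hashlib.sha256(data).digest()
    return hashlib.new("ripemd160", inner + data).digest()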
legendary
Activity: 1050
Merit: 1000
You are WRONG!
June 11, 2012, 07:32:36 AM
#11
I did a calculation which says that every application of SHA-256 reduces entropy by about 0.5734 bits. I have no idea if that's correct.
Ah, interesting. Well, if that's true, the problem seems insignificant, although it could have been prevented entirely by an approach like the one I posted above (just an example; obviously there are numerous alternatives), while still covering the vulnerability to potential cracks in sha256.

No! The number is wrong; if that were true, the entropy after 512 rounds of sha256 would be below 0, which is not good or insignificant. It's a huge error.

An acceptable reduction would be around 10^-80 bits/round.

Sorry dude, you did your math wrong.
legendary
Activity: 1176
Merit: 1011
June 11, 2012, 07:24:36 AM
#10
I did a calculation which says that every application of SHA-256 reduces entropy by about 0.5734 bits. I have no idea if that's correct.
Ah, interesting. Well, if that's true, the problem seems insignificant, although it could have been prevented entirely by an approach like the one I posted above (just an example; obviously there are numerous alternatives), while still covering the vulnerability to potential cracks in sha256.
donator
Activity: 2058
Merit: 1054
June 11, 2012, 07:15:53 AM
#9
I did a calculation which says that every application of SHA-256 reduces entropy by about 0.5734 bits. I have no idea if that's correct.

The reason for this sacrifice is almost certainly to prevent cracks in SHA-256 from being immediately translated to an attack on Bitcoin hashing.

First, do we actually *know* that sha-256 is *not* a one-to-one mapping on the 256-bit space? If it turns out to be, then you've got nothing. I don't know the answer, I'm not a professional cryptographer, but looking at the code for SHA-256, there doesn't seem to be an obvious dropping of bits within the transform step itself, but then I am too lazy to analyze it in depth.
SHA-256, as a cryptographic hash function, aspires to be indistinguishable from random. If it was in fact random, the number of preimages for every 256-bit element would follow the Poisson distribution - about 36% would have no preimage, 36% would have one, 18% two, 6% three and so on. So I'd say it's almost certain that it's not a 1-1 mapping.
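
On a scaled-down toy version this is easy to check empirically: truncate SHA-256 to b bits and feed it every b-bit input, then tally the preimage counts (a rough sketch; the 16-bit domain is of course just a stand-in for the real 256-bit space):
Code:
import hashlib
import math
from collections import Counter

b = 16                                   # toy domain: 16-bit inputs, 16-bit truncated outputs
n = 1 << b
hits = Counter()
for x in range(n):
    digest = hashlib.sha256(x.to_bytes(2, "big")).digest()
    hits[int.from_bytes(digest[:2], "big")] += 1     # truncate the output to 16 bits

preimages = Counter(hits.values())
preimages[0] = n - len(hits)                         # outputs that are never hit
for k in range(4):
    predicted = math.exp(-1.0) / math.factorial(k)   # Poisson(1) prediction
    print(f"{k} preimages: observed {preimages[k] / n:.3f}, predicted {predicted:.3f}")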

Finally, what someone said: the likely intent of the team who designed bitcoin was to slow mining down, not to
add a layer of security there.
What could that mean? The difficulty controls the mining rate. If a hash function half as hard were chosen, the difficulty would double and you'd have the same generation rate.

Arguably, they failed because they didn't foresee the lengths to which people would go to mine coins (first GPUs, then FPGAs, then dedicated ASICs).
Of course they foresaw all of this, if not the timing of their advent.

Had they realized, they would have added a scrypt-like round to the hash step.
The hash function should be easy to verify: each application should be fast, but block generation requires many applications. Choosing a slow hash function would be counterproductive.

Satoshi encouraged people to mine with GPUs; he did foresee this.
Eh. I remember hearing the opposite. I probably remember wrong.
I know of one comment Satoshi made about GPUs, and it wasn't an encouragement:
We should have a gentleman's agreement to postpone the GPU arms race as long as we can for the good of the network.  It's much easier to get new users up to speed if they don't have to worry about GPU drivers and compatibility.  It's nice how anyone with just a CPU can compete fairly equally right now.
legendary
Activity: 1176
Merit: 1011
June 11, 2012, 07:06:05 AM
#8
:( Don't write recursive code. It's bad for the stack and for people's minds.
Well it was pseudocode, just to get the general idea.

In more detail (this does something different from the example above):
Code:
function DeepHash( input, depth )
{
  h = '';
  while (depth-- > 0) h = Hash(h + input); // Hash(x) is a regular hashing function, e.g. sha256
  return h;
}

DeepHash(input,1) is the same as Hash(input)
DeepHash(input,2) gives Hash(Hash(input)+input)
DeepHash(input,3) gives Hash(Hash(Hash(input)+input)+input)
etc.
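
The same thing as a runnable sketch (Python; sha256 and plain byte concatenation stand in for Hash and the '+' of the pseudocode):
Code:
import hashlib

def deep_hash(data: bytes, depth: int) -> bytes:
    """h_0 = '' ; h_i = Hash(h_{i-1} + input), applied 'depth' times (sketch only)."""
    h = b""
    for _ in range(depth):
        h = hashlib.sha256(h + data).digest()
    return h

# deep_hash(x, 1) == sha256(x); deep_hash(x, 2) == sha256(sha256(x) + x); and so on.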

Quote
Also, your code is kind of broken... what does Hash do with only one input?
It was a polymorphic function, a two-parameter alternative to the already existing Hash function with one input (i.e. regular sha256).
legendary
Activity: 1050
Merit: 1000
You are WRONG!
June 11, 2012, 06:38:36 AM
#7
Code:
Hash(input,depth) :=
  Hash(input) if depth==1
  Hash(Hash(input)+input,depth-1) otherwise
(where 'Hash' can be any regular hashing function, such as sha256)
:( Don't write recursive code. It's bad for the stack and for people's minds.
Also, your code is kind of broken... what does Hash do with only one input?

legendary
Activity: 1176
Merit: 1011
June 11, 2012, 06:35:29 AM
#6
The way I understand it is that adding more rounds makes it harder to find a shortcut/weakness.
The fixed length for the second sha256 stage also removes one type of "attack" often used against such algorithms.
OK. Nonetheless, I don't see any drawbacks to using the alternative double hashing scheme I mentioned above (and it does mitigate certain risks). Or more generally:

Code:
Hash(input,depth) :=
  Hash(input) if depth==1
  Hash(Hash(input)+input,depth-1) otherwise
(where 'Hash' can be any regular hashing function, such as sha256)