There are seemingly only two valid reasons to hash the public key:
1) you think that the public key scheme is vulnerable in the long term;
2) you want to separate long-term and short-term security.
I already told you that if the public key is exposed for a longer (indefinite!) time, then you need to increase its security level. But to what level, given that quantum computing may be coming?
And 256-bit was about the upper limit of what was available and well accepted in 2008.
Well, this is the kind of cryptographic "common sense" that doesn't make sense. As I said before, in a cryptographic design one has to assume that the cryptographic primitives are at about the security level that is currently known, for the simple reason that one cannot predict how far future cryptanalysis will degrade that level. For all we know, the degradation can be total.
Let us take a very simplistic example to illustrate what I mean (it is simplistic, for illustrative purposes; don't think I'm that much of a cretin). Suppose that we take as our group the additive group modulo a prime p, and that we don't know that multiplication turns it into a field. We could pose the "discrete log" problem in this group, where "adding together n times" the generator g, for a random n between 1 and p-1, is the "hard problem to solve", exactly as we do in an elliptic-curve group. Suppose that p is a 2048-bit number. Now THAT's a big group, isn't it? Alas, the extended Euclidean algorithm (modular inversion) solves my "discrete logarithm" problem about as fast as I can compute the signature, as the sketch below illustrates!
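Here is a minimal sketch of that toy attack (my own illustration, nothing from Bitcoin; the prime and the generator are arbitrary choices):

```python
# Toy "discrete log" in the ADDITIVE group mod p: public key P = n*g mod p.
import secrets

p = 2**127 - 1          # a known (Mersenne) prime; the attack is just as fast at 2048 bits
g = 5                   # any nonzero element generates the additive group mod p

n = secrets.randbelow(p - 1) + 1   # the "secret key", 1 <= n <= p-1
P = (n * g) % p                    # the "public key": g "added together n times"

# The "attack": n = P * g^(-1) mod p. Python 3.8+ computes the modular
# inverse via the extended Euclidean algorithm with pow(g, -1, p).
recovered = (P * pow(g, -1, p)) % p
assert recovered == n   # key recovery costs about as much as one "signature"
```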
A 2048-bit, 4096-bit, or 10^9-bit key, it doesn't matter: the difficulty of cracking grows only polynomially with the difficulty of using it! (Here, even roughly linearly!)
So the day someone finds the equivalent of "Euclidean division" for an elliptic curve, ECC is COMPLETELY BROKEN. The time it takes a user to compute his signature is about the time it takes an attacker to compute the secret key from the public key. At that point ECC has degenerated into a simple MAC, and it doesn't even last 3 seconds once the public key is broadcast.
--> if we assume that ECC will be broken one day, bitcoin's crypto scheme is IN ANY CASE not usable. This is why the cryptographic "common sense" of "protecting primitive crypto building blocks because we cannot assume they are secure" is just as much a no-go as the other common sense of security by obscurity. It sounds logical, but it is a fallacy. If you think ECC will be broken, don't use it. And if you do use it, accept its security level as of today. Because you cannot foresee HOW BADLY it will be broken, and if it is totally broken, you are using, well, broken crypto.
Now, what is the reason we could allow LOWER security for the exposed public key than for the long-term address in an output? The a priori reason (and I also fell into that trap; as I told you before, my reason for these discussions is only to improve my own understanding, and here it helped) is that the public key only needs to secure the transaction between its broadcast and its inclusion in the chain. But as you point out, that can take longer than 10 minutes if blocks are full; it can be a matter of hours. Also, in micropayment channels, you have to expose your public key to the counterparty for as long as the channel is open.
Now, if we are at a security requirement of days or weeks, then there's essentially not much difference between days or weeks, and centuries. The factor between them is of the order of 10,000 to 40,000. That's at most about 16 bits. A scheme that is secure for days or weeks only needs about 16 bits of extra security to be secure for centuries ====> there is no reason to nitpick over 16 bits when we are talking about 128 bits or so.
There is no reason to introduce a separate "short-term security" level if it is only 16 bits below the long-term security level.
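As a quick sanity check on that arithmetic, here is a throwaway calculation (my own, not from the thread):

```python
# How many extra brute-force bits does it take to stretch a security
# horizon from "days or weeks" to "centuries"? Just log2 of the time ratio.
import math

day     = 24 * 3600
week    = 7 * day
century = 100 * 365.25 * day

print(math.log2(century / week))  # ~12.3 bits, week -> century
print(math.log2(century / day))   # ~15.2 bits, day -> century: "16 bits" rounded up
```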
In other words, if you are afraid that 160 bits isn't good enough in ECC for the long term, well, then the 128 bits we have now isn't good enough in the short term either. If you think a "quantum computer" can crack a 320-bit ECC key in 50 years, then that same quantum computer will be able to crack a 256-bit ECC key in less than a day.
So you may very well protect an address for 50 years with an unbreakable 160-bit hash that your quantum computer breaks its teeth on; but the day you use that address in a micropayment channel, the key is cracked by the evening.
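To put numbers on that, assume generically that recovering an n-bit EC key costs on the order of 2^(n/2) group operations (Pollard-rho scaling; a Shor-style quantum attack would scale even more favourably for the attacker, which only strengthens the point):

```python
# If some machine breaks a 320-bit EC key in 50 years at 2^(n/2) cost,
# how long does a 256-bit key last on the same machine?
fifty_years = 50 * 365.25 * 24 * 3600   # seconds
speedup = 2 ** ((320 - 256) / 2)        # 2^32 fewer group operations
print(fifty_years / speedup)            # ~0.37 seconds -- far less than a day
```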
You are not accurately accounting for the savings in the portion of the UTXO set that must be stored in DRAM (for performance) versus what can be put on SSDs. Without that in DRAM, the propagation time for blocks would be horrendous and the orphan rate would skyrocket (because nodes can't propagate block solutions until they re-validate all transactions, due to the anonymity of who produced the PoW).
Of course not. You don't have to keep the whole UTXO set in DRAM; you can do much smarter database lookup tables. If the idea is that a node has to keep all UTXO in RAM, then bitcoin will be dead soon.
Satoshi just nailed you to the cross.
Nope, Gavin Andresen is talking bullshit and confusing cryptographic hashes with lookup-table hashes.
http://qntra.net/2015/05/gavin-backs-off-blocksize-scapegoats-memory-utxo-set/

If you need to keep a LOOKUP HASH of the UTXO set, then that doesn't need cryptographic security. There's no point in keeping 160-bit hashes if you can only fit a few GB of them in memory! A 160-bit lookup hash means you expect on the order of 2^160 UTXOs to be indexed. Now try to fit 2^160 entries in a few GB of RAM.
You only need about a 48-bit hash of each UTXO to keep a lookup table in RAM, and that doesn't need to be cryptographically secure. It is completely crazy to keep 160-bit hashes as LOOKUP HASHES in a database hash table! And there are smarter ways to design lookup tables in databases than keeping one long hash table in RAM; ask Google.
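To illustrate the kind of design I mean, here is a minimal sketch (my own; the disk.read() helper is hypothetical, standing in for an SSD-backed record store): RAM holds only a short, non-cryptographic 48-bit fingerprint per UTXO, and the full outpoint is verified against the record fetched from disk, so fingerprint collisions merely cost an extra read.

```python
import hashlib

def fingerprint48(outpoint: bytes) -> int:
    # Any fast 48-bit hash will do; cryptographic strength is NOT needed.
    # blake2b is used here only because it can emit 6 bytes directly.
    return int.from_bytes(hashlib.blake2b(outpoint, digest_size=6).digest(), "big")

class UtxoIndex:
    def __init__(self, disk):
        self.disk = disk   # hypothetical store: disk.read(offset) -> (outpoint, utxo_record)
        self.table = {}    # 48-bit fingerprint -> list of disk offsets

    def add(self, outpoint: bytes, offset: int) -> None:
        self.table.setdefault(fingerprint48(outpoint), []).append(offset)

    def lookup(self, outpoint: bytes):
        # Rare fingerprint collisions are resolved by comparing the full
        # outpoint stored on disk; they only cost one extra SSD read.
        for offset in self.table.get(fingerprint48(outpoint), []):
            stored, utxo = self.disk.read(offset)
            if stored == outpoint:
                return utxo
        return None
```

At roughly a dozen bytes of index per entry (a 6-byte fingerprint plus an offset), tens of millions of UTXOs need only on the order of a GB of RAM, instead of 20 bytes of cryptographic hash per entry plus the full hash-table overhead.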
I'm not even putting this on Satoshi's back. I claim he made enough errors for him not to be a math genius, but he is a smart guy nevertheless. I can criticise him only with the benefit of hindsight; I'm absolutely not claiming to be at his level. But I claim that he's not a math genius of the type of a guy like Nash. That is the kind of argument I'm trying to build.
But SUCH stupid errors I don't even think Satoshi is capable of. It is Gavin Andresen who is talking bullshit in order to politically limit the block size. If it were true that RAM limits the amount of UTXO in a hard way, then bitcoin was dead from the start. But it isn't.
This is a very interesting read BTW:
http://satoshi.nakamotoinstitute.org/emails/cryptography/2/

>Satoshi Nakamoto wrote:
>> I've been working on a new electronic cash system that's fully
>> peer-to-peer, with no trusted third party.
>>
>> The paper is available at:
>>
>> http://www.bitcoin.org/bitcoin.pdf
>
>We very, very much need such a system, but the way I understand your
>proposal, it does not seem to scale to the required size.
>
>For transferable proof of work tokens to have value, they must have
>monetary value. To have monetary value, they must be transferred within
>a very large network - for example a file trading network akin to
>bittorrent.
>
>To detect and reject a double spending event in a timely manner, one
>must have most past transactions of the coins in the transaction, which,
> naively implemented, requires each peer to have most past
>transactions, or most past transactions that occurred recently. If
>hundreds of millions of people are doing transactions, that is a lot of
>bandwidth - each must know all, or a substantial part thereof.
>
Long before the network gets anywhere near as large as that, it would be safe
for users to use Simplified Payment Verification (section 8) to check for
double spending, which only requires having the chain of block headers, or
about 12KB per day.
Only people trying to create new coins would need to run network nodes. At
first, most users would run network nodes, but as the network grows beyond a
certain point, it would be left more and more to specialists with server farms
of specialized hardware. A server farm would only need to have one node on the
network and the rest of the LAN connects with that one node.

The bandwidth might not be as prohibitive as you think. A typical transaction
would be about 400 bytes (ECC is nicely compact). Each transaction has to be
broadcast twice, so let's say 1KB per transaction.
Visa processed 37 billion transactions in FY2008, or an average of 100 million
transactions per day. That many transactions would take 100GB of bandwidth, or
the size of 12 DVD or 2 HD quality movies, or about $18 worth of bandwidth at
current prices.

If the network were to get that big, it would take several years, and by then,
sending 2 HD movies over the Internet would probably not seem like a big deal.
Satoshi Nakamoto
---------------------------------------------------------------------
The first highlighted passage (only specialists with server farms running nodes) is the network configuration we talked about earlier: a backbone of miner nodes, with everyone else connecting directly to it; no P2P network among users any more. (This has nothing to do with the current subject, but I thought it was interesting to note that Satoshi already conceived of miner centralization from the start.)
The second highlighted passage indeed considers bitcoin scaling on chain to VISA-like transaction rates, with the chain growing at 100 GB per day. He's absolutely not considering a P2P network here, but a "central backbone and clients" system.
The point, however, is that he most certainly does not think of any RAM limit on cryptographic hashes, and hence on the maximum amount of UTXO that can exist.