I have read that, and I don't agree. I'm far more inclined to side with Gavin on the issue of block size. Bitcoin will either scale up or become irrelevant. These are the only two viable outcomes I see.
However, this is now off-topic, so I'd rather keep discussion of your "keep bitcoin free" campaign outside of this thread.
Have some imagination: if the attack can cause technical problems with Bitcoin for whatever reason, there will be people and organizations with strong incentives to carry it out. For instance, note how you could create a block that's fraudulent, but in such a way that proving the fraud requires a proof most implementations can't process because of the ridiculously large UTXO path length involved - an excellent way to screw up inflation fraud proofs and get some fake coins created.
You are arguing a strawman: you assume that nodes will validate blocks probabilistically, and then show how this results in possible fraud. The problem lies in the security model you are proposing, namely reliance on fraud proofs instead of full validation.
Out of curiosity, have you ever worked on any safety-critical and/or real-time systems? Or even just software where failure carried a high financial cost? (prior to your work on Bitcoin, that is...)
Yes.
Creating those outputs requires 0 BTC, not 14,675 BTC - zero-satoshi outputs are legal. With one-satoshi outputs it requires 27 BTC. That you used the dust value suggests you don't understand the attack.
So now you're assuming that an attacker will be able to successfully mine 10,000 blocks in order to set up a one-off 20-minute DoS? I'm pretty sure it'd be cheaper to just buy the 14,675 BTC. If someone who wants to destroy or defraud the Bitcoin network is able to mine that many blocks in preparation, it's either a *really* long con or Bitcoin is hosed for so many other reasons.
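For concreteness, here's the back-of-the-envelope arithmetic those two figures imply - my own quick check, with the per-output value inferred from the ratio rather than quoted anywhere:

[code]
# Rough sanity check of the quoted costs (my arithmetic, not from the proposal).
SATOSHI = 10**8
dust_cost = 14_675 * SATOSHI   # quoted cost at the dust value, in satoshis
one_sat_cost = 27 * SATOSHI    # quoted cost at one satoshi per output

outputs = one_sat_cost                   # at 1 satoshi each, cost == output count
dust_per_output = dust_cost / outputs    # implied per-output dust value

print(f"~{outputs / 1e9:.1f} billion outputs at ~{dust_per_output:.0f} satoshis each")
# -> roughly 2.7 billion outputs at ~544 satoshis apiece
[/code]

Either way, the attacker has to get on the order of billions of outputs into the UTXO set before the attack is even possible.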
I'm not arguing that it is impossible. I'm arguing that it is not even remotely close to economically plausible, and so far beyond other attack vectors that it's not worth worrying about.
But even if that weren't the case, there are fundamentally simple ways to deal with the issue: introduce a hard limit on the number of index node updates per block, just as the number of signature operations per block is currently limited.
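To make that concrete, the kind of rule I have in mind would look roughly like this - a sketch only, where MAX_BLOCK_INDEX_UPDATES, count_index_updates(), and the block/transaction objects are hypothetical names, and the constant is purely illustrative rather than a proposed value:

[code]
# Sketch of a per-block cap on index node updates, in the same spirit as the
# existing per-block signature-operation limit.
MAX_BLOCK_INDEX_UPDATES = 100_000  # illustrative only

def count_index_updates(tx):
    """Approximate the index nodes a transaction touches: here we just assume
    one update per input spent plus one per output created; a real
    implementation would walk the commitment structure itself."""
    return len(tx.inputs) + len(tx.outputs)

def check_block_index_updates(block):
    """Reject any block whose total index updates exceed the hard cap."""
    total = sum(count_index_updates(tx) for tx in block.transactions)
    if total > MAX_BLOCK_INDEX_UPDATES:
        raise ValueError("block exceeds the index update limit")
    return total
[/code]

A miner then simply can't construct a valid block that forces pathological numbers of index updates on validators, no matter what outputs already exist.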
As I say, such extreme differences make it very likely that you'll run into implementations - or just different versions - that can't handle such blocks while others can, leading to dangerous forks. Look at how the March 2013 fork happened because of a previously unknown limit on how many database operations BDB could handle in one go.
That's a wonderfully general argument you can use against anything you don't like. Implementation bugs can exist anywhere. There's no reason to suppose that we're more or less likely to run into them here than somewhere else.
The performance cost of SHA256^2 compared to SHA256 is a lot less than a factor of 2. Heck, I'd bet you 1 BTC that even in your existing Python implementation changing from SHA256 to SHA256^2 makes less than a 10% performance difference, and the difference in an optimized C++ version is likely unmeasurable. I really hope you understand why...
Enlighten me, please.
EDIT: After thinking about this for some time, the only conclusion I can come to is that you are assuming multiple 64-byte compression blocks on the first hashing pass. This is only true of about half of the nodes (most of which take two compression blocks); the other half are small enough to fit in a single block. While not a performance factor of 2, it's definitely >= 1.5.
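To spell out the reasoning: the second SHA256 pass always hashes a fixed 32-byte digest, which with padding is exactly one extra 64-byte compression block, so a node whose first pass fits in one compression block roughly doubles in cost, while a two-compression-block node goes from two to three, i.e. 1.5x. A quick way to check it - my own benchmark sketch with illustrative node sizes, noting that Python's per-call overhead also shows up in the measured ratio:

[code]
# Compare SHA256 against SHA256^2 for a node that fits in a single 64-byte
# compression block after padding (<= 55 bytes) and one that needs two blocks.
import hashlib
import timeit

def single(data):
    return hashlib.sha256(data).digest()

def double(data):
    # The second pass hashes a fixed 32-byte digest: one extra compression block.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

for size in (32, 100):  # illustrative: one-block vs two-block first pass
    data = b"\x00" * size
    t1 = timeit.timeit(lambda: single(data), number=200_000)
    t2 = timeit.timeit(lambda: double(data), number=200_000)
    print(f"{size}-byte node: SHA256^2 / SHA256 = {t2 / t1:.2f}")
[/code]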
A re-org is a failure of consensus, though one that healed itself.
That's not a very useful definition of consensus failure. Reorgs happen even without malicious intent, and the network is eventually able to reach agreement about older blocks. A real error is when one node rejects a block that other validating nodes accept - no amount of time or work built on the other chain will convince that node to switch. If this proposal is susceptible to that kind of validation error, I'd be interested to know.
Currently re-orgs don't happen too often because the difference in performance between the worst, average, and best performing Bitcoin nodes is small, but if that stops being true, serious problems develop. As an example, when I was testing the Bloom filter I/O attack vulnerability fixed in 0.8.4, I found I was able to get the nodes I attacked to fall more than six blocks behind the rest of the network; an attacker could use such an attack against miners to construct a long fork and perform double-spend attacks.
If you know a way to do the same with this proposal, which isn't easily solved by processing blocks in parallel, and which doesn't require an additional network DoS vulnerability, please let me know. Until then, there are simply too many "what if"s attached.
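For what I mean by processing blocks in parallel, something along these lines - a sketch only, where Block and validate_block() are hypothetical stand-ins rather than code from this proposal:

[code]
# Validate several candidate blocks concurrently so that one expensive-to-validate
# block can't stall the node far behind the rest of the network.
from collections import namedtuple
from concurrent.futures import ThreadPoolExecutor, as_completed

Block = namedtuple("Block", ["hash", "txs"])  # placeholder block type

def validate_block(block):
    """Stand-in for full validation of one block; assumed thread-safe here."""
    return all(tx is not None for tx in block.txs)  # trivial placeholder check

def validate_in_parallel(blocks, workers=4):
    # For CPU-bound validation, real parallelism needs processes or native code
    # that releases the GIL; this just shows the structure.
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(validate_block, b): b for b in blocks}
        for fut in as_completed(futures):
            results[futures[fut].hash] = fut.result()
    return results
[/code]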