Section 4.3 of the Xavier paper makes me wonder if they've read any of my many posts about setting an exponential difficulty function for reorgs.
Bah
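For the uninitiated, the gist of that idea is roughly this (a sketch; the penalty base and exact rule are my illustration, not from any deployed client):

```python
# Sketch of an exponential reorg penalty (illustration only; the penalty
# base and the exact rule here are hypothetical, not any deployed code).

def required_work(depth, base_difficulty, penalty_base=2.0):
    """Work a competing fork must carry to replace the last `depth` blocks.

    Under plain Bitcoin rules the requirement is linear in depth.
    With an exponential penalty, each additional block of rewritten
    history multiplies the requirement, so deep reorgs quickly become
    infeasible even for a majority-hash-power attacker.
    """
    return base_difficulty * sum(penalty_base ** d for d in range(1, depth + 1))

# A 1-block reorg costs 2x a normal block; a 10-block reorg costs 2046x.
for depth in (1, 3, 10):
    print(depth, required_work(depth, base_difficulty=1.0))
```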
So far they promote the ludicrous idea of a "deflationary spiral" - meaning, the coins are so valuable they become worthless - and also propose the "novel" idea of checkpoints:

"Countering “Revisionism” by Checkpointing the Past"
"We outline a distributed strategy to tackle the history-revision attack threat in a simple and elegant way"
I mean, they even use the same fucking name ... would Googling it before publishing it as a new way to combat forks have been so hard?
I was referring to this paper, btw. Just finished reading it.
tl;dr - nothing new here, although it's a good intro to Bitcoin for newbies. I don't like how they claim to be innovative ... if they had treated it as a review article and done a bit more research, I would be supportive.
As I often do, I spoke too soon. They do mention the existing checkpoints (they call them Fiat Checkpoints) a few paragraphs in, but they qualify them:
"Alas, there is no reason to trust a download of the software any more than one of the transaction history itself."
A claim which is false. No GPU/FPGA/ASIC farm in the world can change the published checkpoints, so they do provide more security. Yes, you have to trust the developers for the checkpoints to mean anything, but the point is that everyone in the Bitcoin community already trusts the devs far more than they trust "the hash power of the Bitcoin network". So they do provide an extra layer of protection.
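To make it concrete, here's roughly how the existing mechanism works - a sketch, with placeholder heights and hashes rather than the real compiled-in values:

```python
# Sketch of compiled-in checkpoints constraining chain selection.
# Heights and hashes below are placeholders, NOT the real Bitcoin checkpoints.

CHECKPOINTS = {
    11111:  "00000000000000000000000000000000000000000000000000000000aaaaaaaa",
    105000: "00000000000000000000000000000000000000000000000000000000bbbbbbbb",
}

def passes_checkpoints(chain):
    """chain: list of block-hash hex strings, where index == block height.

    No matter how much hash power an attacker throws at building a longer
    fork, a chain that disagrees with a compiled-in checkpoint is rejected
    outright by every node running this code.
    """
    for height, expected_hash in CHECKPOINTS.items():
        if height < len(chain) and chain[height] != expected_hash:
            return False
    return True
```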
There might be some merit in developing a more intricate checkpoint system in the far future, but it's not on the list of top priorities for the Bitcoin project.
Reading a bit onwards, the authors also "discover" the problem of malware stealing Bitcoins, and propose another "novel" approach - multi-sig. Also on their list of innovations are deterministic wallets and thin clients. They call the latter a "Filtering Service", but it can't just filter blocks and still have the client verify the blocks relevant to it, because the blocks depend on each other, so it's essentially just yet another thin-client approach. I'll admit I haven't taken the time to understand their proposed filtering-service protocol, but I don't understand how something between a thin and a full client can function, so I won't bother (please correct me if I'm wrong and there is some new bit of info in this paper after all).
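For reference, this is the standard thin-client trick as I understand it - a sketch of Merkle-branch verification, not the paper's filtering protocol. The client keeps only block headers, and a proof shows its transaction is committed to by a header; what it gives up is checking that the transaction's inputs were valid:

```python
import hashlib

def dhash(b):
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(txid, branch, merkle_root):
    """Check that `txid` is committed to by `merkle_root`.

    `branch` is a list of (sibling_hash, sibling_is_right) pairs from the
    leaf up to the root. A thin client that stores only block headers can
    run this check to confirm inclusion - it never needs, and never
    validates, the full blocks.
    """
    h = txid
    for sibling, sibling_is_right in branch:
        h = dhash(h + sibling) if sibling_is_right else dhash(sibling + h)
    return h == merkle_root
```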
Also, instead of Mixers they propose a "Fair Exchange Protocol" as a way to implement a zero-trust Mixer. This has already been proposed a few times ... I believe Meni wrote about it.
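For the curious, here's a toy of the general idea - a hashlocked swap where neither side can walk away with both sets of coins. Whether this matches the paper's actual protocol is an assumption on my part:

```python
import hashlib, os

# Toy model of a hashlocked atomic exchange - one known way to trade coins
# without trusting the counterparty. Real protocols also need timeouts so
# funds aren't stuck forever if one side stalls; omitted here for brevity.

class HashlockedOutput:
    """Funds claimable only by whoever presents the matching preimage."""
    def __init__(self, amount, lock):
        self.amount, self.lock, self.claimed = amount, lock, False

    def claim(self, preimage):
        if hashlib.sha256(preimage).hexdigest() != self.lock:
            raise ValueError("wrong preimage")
        self.claimed = True
        return self.amount

# Alice picks the secret; both parties lock coins under the same hash.
secret = os.urandom(32)
lock = hashlib.sha256(secret).hexdigest()
alice_side = HashlockedOutput(10, lock)   # Alice's coins, claimable by Bob
bob_side = HashlockedOutput(10, lock)     # Bob's coins, claimable by Alice

# The moment Alice claims Bob's coins she reveals `secret` in public,
# which is exactly what Bob needs to claim Alice's coins: all-or-nothing.
bob_side.claim(secret)
alice_side.claim(secret)
```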