Hi everyone!
I've been a lurker on these forums for a while, but now I actually have a genuine question so I figured I'd start getting those newbie posts out of the way. I'm almost certain this has been asked before so it's perfect fodder for the newbie forum, but I couldn't find it anywhere. So here goes.
The Stanford paper pointed out two interesting things.
First, as long as Moore's law holds, the cost of attacking the entire blockchain is proportional to the cost of attacking the last hour. Or day. Or whatever. Granted, that's a large proportion, but that seems to be exactly the fear that led to Bitcoin's current checkpointing solution. But centralized checkpoints suck.
Second, you can achieve a sort of decentralized checkpointing by making old nodes skeptical of blocks that would invalidate blocks they themselves witnessed a long time ago. So revisionist histories would require more and more proof of work the more (and older) blocks they invalidated.
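To make sure I understand the scheme, here's a rough sketch of how I imagine it working. Everything here is my own illustration, not code or parameters from the paper — in particular the exponential penalty is just one way the "more and older blocks invalidated → more work required" rule could look:

```python
# Hypothetical sketch of depth-weighted chain acceptance, assuming a node
# tracks the total work of its current chain and can measure how many
# blocks it has already witnessed that a competing fork would invalidate.
# All names and the exponential penalty are illustrative.

def reorg_penalty(fork_depth: int, base: float = 2.0) -> float:
    """Extra work multiplier demanded of a fork that invalidates
    `fork_depth` witnessed blocks: the deeper the revision, the
    more skeptical the node becomes."""
    return base ** fork_depth

def should_switch(current_work: float, fork_work: float, fork_depth: int) -> bool:
    """Accept the fork only if its total work beats ours by the penalty."""
    return fork_work > current_work * reorg_penalty(fork_depth)

# Under these made-up numbers, a fork rewriting 1 witnessed block needs
# roughly 2x our chain's work, while one rewriting 10 needs ~1024x,
# which acts like a soft, locally enforced checkpoint.
```

The appeal, as I read it, is that no single party issues the checkpoint: each node's own history makes deep rewrites progressively harder to sell to it.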
This seems like an awesome idea, but since the paper was published in 2012 and Bitcoin hasn't done this yet, I'm assuming there's something wrong with it. What am I missing?
There is an assumption that the reduction of the block reward will reduce the hashing power of legitimate miners, making it much easier to attack the block chain. This is far from certain, but it is a possibility. In fact, the first halving did cause a severe drop in hashing power, but it was only temporary, which suggests the assumption may be wrong.
There is no reason why the checkpointing can't be decentralized. However, there seems to be a risk that a higher threshold for revising a block chain could result in a permanent fork.