Anyway, the gist of my post is that I didn't quite see the use, since a normal bootstrap already skips over blocks that you already have. But now that I more fully understand your reasoning behind this, and how much faster this actually will process the blocks that you already have... this is amazing! This could really be useful when you stop a bootstrap midstream and need to start it back up, or if you have been out of sync for a month or two.
I guess I see things differently. I use satellite Internet and have a hard cap on monthly data transfers. I don't want to be downloading a relatively large bootstrap.dat if I know that I only need the last 2% of it. The fact that the CLAM client will take a few minutes to skip through the 98% I already have isn't that big a deal. The wasted file transfer is. (Chopping the big bootstrap.dat into little parts happened on remote servers with proper Internet connections and didn't involve my having to upload or download anything other than a few command lines over the satellite connection).
It would be nice to have the client do this automagically: go out and download just the part of the bootstrap that is needed to get up to date....
The problem with that is that you're trusting me to provide the true longest chain. It adds another kind of centralisation to CLAM. Of course, the client validates every block it reads from the bootstrap file, and only adds it if it is valid, but I still don't like the idea of having the client know about my particular copy of it.
Having said that, what do people think about updating the checkpoints in the CLAM client? I bet that hasn't been done for quite a while now:
// What makes a good checkpoint block?
// + Is surrounded by blocks with reasonable timestamps
// (no blocks before with a timestamp after, none after with
// timestamp before)
// + Contains no strange transactions
//
static MapCheckpoints mapCheckpoints =
boost::assign::map_list_of
( 0, hashGenesisBlock )
( 6666, uint256("0x000002129d8a2b43509d2abb0aa24932b7af2f760e869d5952dee97d4b8ea8bf") )
( 10000, uint256("0x00000de398b1ec72c393c5c54574a1e1784eb178d683e1ad0856c12fac34f603") )
( 29000, uint256("0x068769a2ab0e35fc3ac31690158401b9538a7cce2a97096b22d47e50355b2e1f") )
( 175000, uint256("0xec64deeb7f1295216f20ce5dbe68b0bd28189a5a644a111e722c05451d51e66c") )
( 250000, uint256("0xb560c121438f630401c102767587b70cb0cc7d1e0c09114dd0b91455262aa64c") )
;
So we have checkpoints up to block 250k, staked on Sat Dec 13 15:14:24 2014, but nothing since. I think that means that theoretically MtGox (say) could dig up a whole load of CLAMs tomorrow, and use them to completely rewrite the chain from last December, wiping out all the transactions and blocks that have happened since.
The "reasonable timestamps" isn't an issue any more I don't think, since we no longer accept timestamps out of order.
Block 530000 was staked 5 days ago, has no weird times around it and contains only a simple staking transaction:
529995 Sat Jun 27 17:35:12 UTC 2015
529996 Sat Jun 27 17:36:00 UTC 2015
529997 Sat Jun 27 17:36:32 UTC 2015
529998 Sat Jun 27 17:36:48 UTC 2015
529999 Sat Jun 27 17:37:04 UTC 2015
530000 Sat Jun 27 17:37:20 UTC 2015
530001 Sat Jun 27 17:37:36 UTC 2015
530002 Sat Jun 27 17:39:12 UTC 2015
530003 Sat Jun 27 17:39:28 UTC 2015
530004 Sat Jun 27 17:40:00 UTC 2015
530005 Sat Jun 27 17:40:16 UTC 2015
I guess I'll add a checkpoint for it.
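For anyone following along, the change is just one more line at the end of the map. Sketch only: the 0x… placeholder would be replaced with block 530000's actual hash (the `getblockhash` RPC prints it):

```
static MapCheckpoints mapCheckpoints =
    boost::assign::map_list_of
    ( 0, hashGenesisBlock )
    ...
    ( 250000, uint256("0xb560c121438f630401c102767587b70cb0cc7d1e0c09114dd0b91455262aa64c") )
    ( 530000, uint256("0x…") ) // new checkpoint, staked Sat Jun 27 17:37:20 UTC 2015
    ;
```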
Edit: note how those 11 timestamps just above are all exact multiples of 16 seconds apart from each other. That's the 16 second window I keep going on about. As far as CLAM staking is concerned, there is no point in time between 17:40:00 and 17:40:16. Time passes in 16 second lumps.