I think the problem here is not the size of the blockchain itself. The problem is how the blockchain is handled.
Well, there is room for improvement, but the client doesn't use blocks the way you think it does, and that leads to a lot of incorrect conclusions.
While using the client I found several problems. The first problem is that the client wastes bandwidth by downloading blocks that it already has. I suspect the P2P protocol has no provision for a node to say "I already have block X, please send me blocks A, B and Z instead". I can see this problem in the debug.log file, which is littered with "ERROR: ProcessBlock() : already have block X" messages, where X runs consecutively from a certain number for several such messages, then jumps and runs consecutively again.
There are messages for requesting specific blocks and ranges of blocks, and the client uses them. The issue may be a misbehaving client on the other end. Say you request blocks X to Z from client A and get no response, so you request blocks X to Z from client B. Client B responds and you process those blocks. At some point client A starts sending you blocks X to Z, which you now already have. If you find a specific bug where YOUR client is requesting blocks it already has, be sure to report it, but make sure it is actually a bug.
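A minimal sketch of that race, with made-up names and a log line mirroring the one you quoted (illustrative Python, not the actual client code):

# Hypothetical sketch of the duplicate-block race described above;
# peer names and functions are illustrative, not actual client code.
have = set()

def process_block(blk):
    if blk in have:
        print("ERROR: ProcessBlock() : already have block", blk)
    else:
        have.add(blk)

# Peer A is asked for blocks X..Z but stalls; the client times out
# and re-requests the same range from peer B, which answers promptly.
for blk in ["X", "Y", "Z"]:        # response from peer B
    process_block(blk)

# Much later, peer A finally delivers its stale response.
for blk in ["X", "Y", "Z"]:        # late response from peer A
    process_block(blk)             # logs "already have block" three times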
The second problem is the "Reading the list of blocks" and "Validating blocks" actions, which take a lot of time. My question is why the client needs to "read the list of blocks" and "validate the blocks" every time it starts up. The "read the list of blocks" part does not take that much time, but "validate the blocks" is a 10-minute operation. Once the blocks are validated, why do they need to be revalidated at every program startup?
It doesn't validate all of them. The check is done to ensure there has been no database corruption (for example from a power failure during the prior shutdown), and it only covers a limited number of the most recent blocks. From the config file you can adjust how many blocks to check and how detailed a check to perform. You can even set this to zero blocks if you like.
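For example, something like this in bitcoin.conf (checkblocks and checklevel are real options of the reference client, but defaults and accepted values vary by version, so treat the numbers as placeholders):

# bitcoin.conf -- startup integrity checks (numbers are placeholders)
checkblocks=50    # how many of the most recent blocks to verify at startup
checklevel=1      # how thorough each check is (higher = slower)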
The third problem is that the client is "jumping over the data like a goat over a cemetery" while doing these two actions. That is MUCH SLOWER than reading the data in sequence. Why does it need to jump over the data so much? Maybe implement some caching?
There is a cache. It is called the UTXO. Blocks are only used to create and update the UTXO, and all validation of new transactions and blocks is done against the UTXO. Once a block is written to disk, your client doesn't use it again, other than for responding to block requests from other peers (or updating the UTXO in a reorg).
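To make that concrete, here is a toy model of a UTXO-style cache (illustrative Python; the structures are made up and much simpler than the client's):

# Unspent outputs keyed by (txid, output_index); validating a new
# transaction only needs lookups here, never a re-scan of raw blocks.
utxo = {}

def apply_block(block):
    for tx in block:
        for spent in tx["inputs"]:            # (txid, index) pairs being spent
            if spent not in utxo:
                raise ValueError("input not in UTXO set: %r" % (spent,))
            del utxo[spent]
        for i, value in enumerate(tx["outputs"]):
            utxo[(tx["txid"], i)] = value     # new spendable outputs

# A coinbase-like tx with no inputs, then a tx spending its output.
apply_block([{"txid": "aa", "inputs": [], "outputs": [50]}])
apply_block([{"txid": "bb", "inputs": [("aa", 0)], "outputs": [20, 30]}])
print(utxo)    # {('bb', 0): 20, ('bb', 1): 30}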
The fourth problem is: why does the program split the blockchain into 125 MB chunks? That is inefficient on Windows, where opening and closing a file is a pretty expensive operation. In my blockchain directory the first 10 GB are stored in 5 files (well, in fact 10, because I need to count the revXXXXX files) because they were downloaded by a 0.6.3 BETA client, but the remaining 9 GB is spread over 75 files. Is there a way to reconfigure these storage parameters? And once I change them, is there a way to tell the client to repackage the blockchain so it is stored according to my wishes? I prefer "few large files" over "many small files" on Windows, because "many small files" is inefficient.
Older blocks are not needed except to provide blocks to peers who are bootstrapping, so the per-file open cost is rarely paid. Stating a blanket preference for large files over small files is a dubious request.
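For context, finding a block inside those chunk files is an indexed lookup, so the number of files matters very little in practice. A hypothetical sketch of flat-file storage with a (file, offset) index (the layout and names are invented, not the client's actual format):

import os

MAX_FILE_SIZE = 128 * 1024 * 1024   # cap per blkNNNNN.dat-style chunk
index = {}                          # block hash -> (file_no, offset, length)

def append_block(dirname, file_no, raw, block_hash):
    path = os.path.join(dirname, "blk%05d.dat" % file_no)
    with open(path, "ab") as f:     # append mode: position is end of file
        offset = f.tell()
        f.write(raw)
    index[block_hash] = (file_no, offset, len(raw))
    # Roll over to the next chunk once this one is full.
    return file_no + 1 if offset + len(raw) >= MAX_FILE_SIZE else file_no

def read_block(dirname, block_hash):
    file_no, offset, length = index[block_hash]
    with open(os.path.join(dirname, "blk%05d.dat" % file_no), "rb") as f:
        f.seek(offset)              # one open + seek + read per request
        return f.read(length)

next_file = append_block(".", 0, b"\x01" * 80, "deadbeef")
print(read_block(".", "deadbeef"))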
A similar problem exists with the "chainstate" data, which is only 0.5 GB but is scattered across 229 files. Well, that might not be your fault, as I understand these files actually belong to some sort of general-purpose database which was recently replaced and is actually much faster now, but I believe this data could be handled more efficiently if it were in a single file (maybe by developing a special-purpose database?).
Reinventing the wheel? The chainstate is stored in LevelDB, which is widely accepted as an incredibly lightweight and very fast key-value database. It is doubtful you would design an alternative custom database with similar functionality that outperforms LevelDB. And even if you could, would the development time be worth reinventing the wheel rather than improving the actual client?
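To give a sense of what LevelDB already provides, here is a toy example using the plyvel Python binding (the key/value layout is invented for illustration, not the client's actual chainstate schema):

import plyvel

# Invented schema: b"utxo:<txid>:<n>" -> serialized output value.
db = plyvel.DB("/tmp/chainstate-demo", create_if_missing=True)

key = b"utxo:" + bytes.fromhex("aa" * 32) + b":0"
db.put(key, (50).to_bytes(8, "big"))     # store an unspent output
print(db.get(key))                       # fetch it back
db.delete(key)                           # spend it
db.close()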
Also, regarding the size of the blockchain, there are two things that should be done. The first thing is that the coinbase transaction can be as big as the miner wants (and some coinbase transactions weigh a few tens of KB, storing various stuff; search Google for "Hidden Surprises in Bitcoin Blockchain", and see especially this blog), so putting a limit on it, for example 128 or even 64 bytes, would be good (but the limit should not be too small, because otherwise we could run into a bunch of blocks with no solution).
The coinbase txns of all blocks represent <0.003% of the blockchain. The size is already limited by general limits on the size of ScriptSigs for all transactions. Seems a dubious use case.
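From memory, the kind of size check that already applies to the coinbase scriptSig looks roughly like this (a simplified paraphrase, not the actual source):

# The reference client rejects coinbase scriptSigs outside a small
# fixed window (roughly 2..100 bytes); simplified paraphrase only.
def check_coinbase_scriptsig(script_sig: bytes) -> bool:
    return 2 <= len(script_sig) <= 100

assert check_coinbase_scriptsig(b"\x03\x01\x02\x03")    # normal-sized: accepted
assert not check_coinbase_scriptsig(b"\x00" * 4000)     # oversized: rejected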
And the second thing would be, when storing the blockchain, to extract the addresses and especially the public keys out of the block data, store them in some sort of index file, and replace them in the block data with indices. That could reduce the size of the stored blockchain pretty significantly.
Such a compact index already exists: it is the UTXO (the chainstate folder you dislike so much). Blocks are used to build the UTXO in a trustless manner; they aren't used to process or validate new blocks and transactions. The raw blocks are just there to bootstrap new nodes so they too can build the UTXO in a trustless manner.
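For completeness, the substitution proposed above is essentially dictionary encoding; a toy sketch of the idea (purely illustrative, not anything the client does):

# Each unique pubkey/address is stored once in a table and replaced
# in the per-block data by its index. Sketch of the suggestion only.
def encode(blocks_keys):
    table, positions, encoded = [], {}, []
    for keys in blocks_keys:
        row = []
        for k in keys:
            if k not in positions:
                positions[k] = len(table)
                table.append(k)
            row.append(positions[k])
        encoded.append(row)
    return table, encoded

table, encoded = encode([["pkA", "pkB"], ["pkA", "pkC", "pkA"]])
print(table)     # ['pkA', 'pkB', 'pkC']  -- each key stored once
print(encoded)   # [[0, 1], [0, 2, 0]]    -- blocks hold small indices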