The Blockstream satellite node software contains a compressor that shrinks transactions by about 20%, operating on one transaction at a time. (More could have been achieved by working on multiple transactions at a time, but at a great increase in software complexity and some loss of functionality.)
It exploits all the redundancies described by pooya87 above, and quite a few additional ones: e.g. it knows how to reconstruct the hash in the redeemscript for P2SH-embedded segwit, it knows how to convert 65-byte pubkeys to and from 33-byte compressed pubkeys, etc. I think, though my memory is kinda fuzzy, that it also exploits reused scriptpubkeys/redeemscripts within a transaction.
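To illustrate one of those redundancies: an uncompressed 65-byte pubkey can be stored in 33 bytes because the y coordinate is recoverable from x up to sign, so only y's parity needs to survive. A minimal sketch (this is not the actual satellite-codec code, just the standard secp256k1 point-compression trick):

```python
# Convert a 65-byte uncompressed secp256k1 pubkey to the 33-byte
# compressed form and back. Only y's parity is kept; y itself is
# recomputed from the curve equation y^2 = x^3 + 7 (mod P).

P = 2**256 - 2**32 - 977  # the secp256k1 field prime

def compress(pub65: bytes) -> bytes:
    assert len(pub65) == 65 and pub65[0] == 0x04
    x, y = pub65[1:33], int.from_bytes(pub65[33:], "big")
    # prefix 0x02 for even y, 0x03 for odd y
    return bytes([0x02 | (y & 1)]) + x

def decompress(pub33: bytes) -> bytes:
    assert len(pub33) == 33 and pub33[0] in (0x02, 0x03)
    x = int.from_bytes(pub33[1:], "big")
    # modular square root of x^3 + 7; this shortcut works because P % 4 == 3
    y = pow((pow(x, 3, P) + 7) % P, (P + 1) // 4, P)
    if (y & 1) != (pub33[0] & 1):
        y = P - y  # pick the root with the recorded parity
    return b"\x04" + pub33[1:] + y.to_bytes(32, "big")
```

A compressor that does this transparently has to re-expand on decompression, which is part of the CPU cost mentioned below.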
Though your network needs to be fairly slow, or your CPU extremely fast, before it's worth the CPU cost (for satellite it's an obvious win).
"Blocks are random data and can not be compressed with any algorithm other than one that focuses on bitcoin applications."
Technically that's not entirely true: due to things like pubkey and address reuse and common syntax elements, state-of-the-art generic compressors do get some compression when working on one or several whole blocks at a time -- but nowhere near as much as a data-aware compressor can achieve. The two can be combined, of course. But having to work one or more blocks at a time gets in the way of random access to the data, and potentially creates DoS vulnerabilities (e.g. peers forcing you to decompress a whole group of blocks only to access a single block in it).
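A toy demonstration of the "technically not entirely true" point, using fabricated data and zlib standing in for a state-of-the-art compressor: a generic compressor only sees the cross-transaction redundancy (here, reused scriptpubkeys) when it's given many transactions at once, which is why block-at-a-time generic compression beats per-transaction generic compression.

```python
# Fabricated "transactions": 32 random bytes (txid-ish) plus one of a
# handful of reused 25-byte "scriptpubkeys". Compressing the whole
# "block" lets zlib back-reference the repeated scripts; compressing
# one transaction at a time cannot.
import random
import zlib

rng = random.Random(1)
scripts = [rng.randbytes(25) for _ in range(5)]            # reused scriptpubkeys
txs = [rng.randbytes(32) + rng.choice(scripts) for _ in range(100)]

per_tx = sum(len(zlib.compress(tx)) for tx in txs)         # one tx at a time
whole = len(zlib.compress(b"".join(txs)))                  # whole "block" at once
```

On data like this, `whole` comes out well below `per_tx` (and below the raw size), while per-transaction zlib actually expands the data -- which matches the point that generic compressors need block-at-a-time access to get their wins.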
This stuff is more efficient if you negotiate with peers to send the compressed form over the wire. If all your peers were doing that, you could just keep the compressed form in memory and on disk, and never decompress except for validation.
AFAIK no one is working on taking this work upstream, which is sad-- a 20% reduction in disk space would be nice, and post-erlay it would also amount to nearly a 20% reduction in network bandwidth if used for transaction/block relay.
If it were done just for storage, probably the best approach would be to store every block compressed and keep a small cache of uncompressed blocks in memory for serving requests for recent blocks. Any kind of rewriting of block data on disk is complicated by the need to be able to recover from crashes.
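The storage idea above can be sketched as follows -- a hypothetical `BlockStore`, with a dict standing in for the on-disk store and zlib standing in for a bitcoin-aware codec: blocks live compressed, and a small LRU cache of uncompressed recent blocks serves hot requests without repeated decompression.

```python
# Sketch only: dict-backed "disk", zlib in place of a real
# transaction-aware codec, OrderedDict as an LRU cache.
import zlib
from collections import OrderedDict

class BlockStore:
    def __init__(self, cache_size=4):
        self.disk = {}               # height -> compressed block bytes
        self.cache = OrderedDict()   # height -> raw block bytes (LRU order)
        self.cache_size = cache_size

    def put(self, height, raw):
        self.disk[height] = zlib.compress(raw)
        self._remember(height, raw)

    def get(self, height):
        if height in self.cache:     # recent block: serve without decompressing
            self.cache.move_to_end(height)
            return self.cache[height]
        raw = zlib.decompress(self.disk[height])
        self._remember(height, raw)
        return raw

    def _remember(self, height, raw):
        self.cache[height] = raw
        self.cache.move_to_end(height)
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict least recently used
```

Crash recovery is the part this sketch glosses over: appending new compressed blocks is easy, but rewriting existing block files in place is where the real complexity lives.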