This sounds interesting! At a basic level, you could define it like this:
When preparing transactions for inclusion in a block, a miner would use up to N kb of the block's space to "refresh" old, unspent outputs. Starting at the genesis block and progressing no further than the current height minus 52,560 (skipping blocks less than one year old: 1 block every 10 minutes => 144 blocks/day => 52,560 blocks/year), the miner searches for outputs that are still unspent. When one is found, the miner finds all other unspent outputs to that same address anywhere in the blockchain, up to the current height minus 1,008 (blocks less than a week old are left alone, in case there's a blockchain fork at the moment), that have the same script as the original unspent output; non-standard-script transactions wouldn't be combined with any others. The miner then creates a new "refresh" transaction that uses all those unspent outputs as inputs and assigns them to a single new output.
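The sweep described above can be sketched roughly like this. This is just an illustration of the selection logic, not real miner code: outputs are plain dicts, and all the field names are hypothetical stand-ins for real block/UTXO data structures.

```python
REFRESH_MIN_AGE = 52_560   # ~1 year of blocks (144/day * 365)
FORK_SAFETY_AGE = 1_008    # ~1 week of blocks left alone (fork safety)

def select_refresh_groups(utxo_set, current_height, max_bytes):
    """Group old unspent outputs by script, oldest first, within budget.

    Each returned group (all outputs sharing one script) would become
    one refresh transaction with a single new output.
    """
    groups, used, seen = [], 0, set()
    for out in sorted(utxo_set, key=lambda o: o["height"]):
        if out["height"] > current_height - REFRESH_MIN_AGE:
            break               # everything from here on is < 1 year old
        if out["txid"] in seen:
            continue            # already swept into an earlier group
        # Pull in every unspent sibling with the same script that is
        # at least a week old.
        siblings = [o for o in utxo_set
                    if o["script"] == out["script"]
                    and o["height"] <= current_height - FORK_SAFETY_AGE]
        group_size = sum(o["size"] for o in siblings)
        if used + group_size > max_bytes:
            break               # stay within the N kb refresh budget
        groups.append(siblings)
        used += group_size
        seen.update(o["txid"] for o in siblings)
    return groups
```

Note the younger-than-a-week siblings are simply skipped this round; they'd get picked up by a later sweep once they age past the fork-safety margin.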
The "refresh" transaction would have to be a new type of transaction, since the miner doesn't know the private key of the address in question and can't fulfill the requirements to spend those outputs (even to spend them back to the same address). So a normal transaction could use the transaction ID of the "refresh" transaction as an input, but in order for clients to validate a spend of a "refresh" output, they go look up the block that has the refresh transaction, and from that, look up the inputs of that transaction individually to make sure the Script of the new transaction fulfills the requirements of the multiple inputs. In that way, the "refresh" transaction is simply an alias to several other outputs.
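The "alias" lookup could be sketched as a recursive unwrap: follow refresh transactions back until you reach real outputs, then check the spending Script against each one. Everything here (get_transaction, the dict fields, script_is_satisfied) is a hypothetical stand-in for real blockchain lookups and Script evaluation.

```python
def resolve_refresh(txid, get_transaction):
    """Follow refresh transactions back to the real outputs they alias."""
    tx = get_transaction(txid)
    if tx["type"] != "refresh":
        return [txid]            # an ordinary output: nothing to unwrap
    resolved = []
    for input_txid in tx["inputs"]:
        resolved.extend(resolve_refresh(input_txid, get_transaction))
    return resolved

def validate_spend(spend_tx, get_transaction, script_is_satisfied):
    """A spend is valid only if its Script satisfies every aliased output."""
    for input_txid in spend_tx["inputs"]:
        for original in resolve_refresh(input_txid, get_transaction):
            if not script_is_satisfied(spend_tx, get_transaction(original)):
                return False
    return True
```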
The N kb would have to be adjusted so that it covers all the unspent outputs of the prior year within ~25,000 blocks (roughly half a year's worth of blocks; that gives enough breathing room to ensure everything that needs to be refreshed is refreshed well within the 52,560-block window).
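For reference, the block-count arithmetic behind those numbers (a half year is actually 26,280 blocks, so rounding the target down to ~25,000 adds a bit of extra margin):

```python
# One block every 10 minutes, per the assumptions above.
BLOCKS_PER_DAY = 24 * 60 // 10
BLOCKS_PER_YEAR = BLOCKS_PER_DAY * 365
HALF_YEAR = BLOCKS_PER_YEAR // 2
print(BLOCKS_PER_DAY, BLOCKS_PER_YEAR, HALF_YEAR)  # 144 52560 26280
```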
Using this as a guide, the goal would be that no output is more than a year old. Non-standard-Script transactions, or old addresses with only one unspent output, would still get refreshed each year to keep them up-to-date. Then a client only needs to grab the most recent year's worth of blocks (faster and less hard-drive-space-intensive than caching the whole blockchain in order to be up-to-date). Only if a new transaction shows up that uses a refresh transaction as input does the client need to go back to the P2P pool and request the missing block.
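That light-client behavior (keep only the last year of blocks, fetch an older block from peers only when a refresh output from it is actually spent) might look something like this. All the names here (fetch_block_from_peers, the dict shape) are hypothetical.

```python
RECENT_WINDOW = 52_560   # ~1 year of blocks the client keeps cached

class LightClient:
    def __init__(self, current_height, fetch_block_from_peers):
        self.blocks = {}     # height -> block; only recent ones cached
        self.current_height = current_height
        self.fetch_block_from_peers = fetch_block_from_peers

    def block_for(self, height):
        """Return a block, asking the P2P pool only if we pruned it."""
        if height < self.current_height - RECENT_WINDOW:
            # Older than our cache window: only needed when a refresh
            # output from that block shows up as a transaction input.
            return self.blocks.get(height) or self.fetch_block_from_peers(height)
        return self.blocks[height]
```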
Using this model, someone has to have the full blockchain on hand, but most people's clients would only need to cache the most recent year's worth of blocks to be up-to-date.
If we wanted to make it so that no one has to keep the full blockchain around, that would take more thought: what would happen if some corruption in the "live" blockchain required going back to the "archived" origin of the blockchain that no one had around any more? It would be best if clients had logic to hang on to any blocks containing transactions for addresses in the client's wallet, so they could be the peer node that shares those blocks whenever another node needs them.
So, if Alice has address A, and in 2010 she had 10 transactions that each gave her 1 BTC, she now has 10 BTC. If Alice were then hit by a bus (sorry, Alice) and fell into a coma, address A would have 10 BTC sitting in it for a while. In 2011, miners would sweep the 10 transactions to address A into one output (let's call it RO_2011, for "refresh output 2011"). In 2012, some block would take RO_2011 as input and create RO_2012. In subsequent years this would be repeated (RO_2012 becomes RO_2013, then RO_2014, etc.).
If Alice never wakes up, the blockchain now has one transaction's worth of "plaque" that gets carried along year after year. Miners still have to carry it along, but no client cares about it: since it's not getting spent, they don't have to look up the former transactions to validate it. But let's say Alice does wake up in 2015 and pays for her hospital stay with her 10 BTC (who thinks the exchange rate in 2015 will be enough to cover 5 years of coma care with 10 BTC? Anyone?). Her bitcoin client collects the most recent year's worth of blocks, sees that RO_2015 gives address A 10 BTC, and so shows Alice that she has 10 BTC on hand. Alice creates a new transaction spending RO_2015 to the hospital's bitcoin address H, and sends it to the mempool. Now, whoever wants to mine that transaction needs to query the network for the block that contained RO_2014 (which RO_2015 references), then RO_2013, RO_2012, and so on back to the original 10 transactions that gave Alice her BTC. Validating all that would be a slight strain on the network, but once it completes, the new transaction spends RO_2015 and turns it into a new (normal) output to address H. RO_2015 and everything behind it will never be needed again and will not be refreshed any more. The new output to address H will be refreshed in 2016 if the hospital doesn't spend it, though.
Note, this falls apart if, while Alice was in a coma, no client kept the blocks with RO_2011 through RO_2014. If Alice left her computer on and her Bitcoin client running, it would keep those blocks, since it knew address A was important. But if no client in the network kept those blocks, she's SOL in this proposed model. This could possibly be avoided if the Script from the original transactions were included in the refresh transactions (standard transactions to the same bitcoin address all have the same Script, right? So only one script is needed to represent the 10 transactions originally combined), which would allow Alice to prove she can solve that Script and claim the outputs. That would add a little more bloat to the refresh transactions; is that worth it?
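To make the trade-off concrete, a refresh transaction carrying the shared Script might look like this. Since only same-script outputs get combined, one copy of the Script covers every swept output; all field names here are hypothetical.

```python
def make_refresh_tx(swept_outputs, year):
    """Build a sketch refresh transaction embedding the shared Script."""
    scripts = {o["script"] for o in swept_outputs}
    assert len(scripts) == 1, "only same-script outputs are combined"
    return {
        "type": "refresh",
        "label": f"RO_{year}",
        "inputs": [o["txid"] for o in swept_outputs],
        "value": sum(o["value"] for o in swept_outputs),
        # One copy of the shared Script lets the owner prove she can
        # spend the output even if the original blocks are lost.
        "script": scripts.pop(),
    }
```

The bloat per refresh transaction is then one Script, regardless of how many outputs were swept, which seems like a small price for not stranding coins whose history has been pruned.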