Also, I'm afraid it's very easy to say "just test for longer", but the reason we started generating larger blocks is that we ran out of room. We ran out of block space, and transactions started stacking up and not confirming (i.e., Bitcoin broke for a lot of its users). Rolling back to the smaller blocks simply takes us out of the fire and puts us back into the frying pan.
We will have to roll forward to 0.8 ASAP. There isn't any choice.
I would like to understand more precisely what you mean by this. Can you point me to a particularly enlightening piece of documentation or discussion about this issue?
From your brief description, it seems to me that this is one of the most serious show-stopper deficiencies in Bitcoin's design.
This is because, no matter whether the block chain remains small and universally accessible or grows huge and accessible only to those with state-of-the-art data-processing systems, we can always expect to run up against block size limits during certain periods. If hitting those limits causes a 'blood clot' in the system, it seems to me that a high-priority line of development should be figuring out how to dismiss the stagnant transactions so that they don't cause ongoing issues.
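For what it's worth, here is a minimal sketch of the kind of 'dismissal' I have in mind: a time-based eviction pass over a node's pool of unconfirmed transactions. The names here (PendingTx, MemPool, EvictStaleTransactions, the age cutoff) are all hypothetical and are not taken from the actual Bitcoin client; the point is only to illustrate that stale transactions could be dropped locally and rebroadcast later by their senders.

```cpp
#include <chrono>
#include <map>
#include <string>

// Hypothetical types, for illustration only -- not from the Bitcoin source.
struct PendingTx {
    std::string rawTx;                                // serialized transaction
    std::chrono::steady_clock::time_point firstSeen;  // when this node first saw it
};

using MemPool = std::map<std::string, PendingTx>;     // keyed by txid

// Drop anything that has sat unconfirmed for longer than maxAge, so a
// block-space crunch cannot leave the pool clogged indefinitely; the
// sender can always rebroadcast once space frees up.
void EvictStaleTransactions(MemPool& pool, std::chrono::hours maxAge)
{
    const auto now = std::chrono::steady_clock::now();
    for (auto it = pool.begin(); it != pool.end(); ) {
        if (now - it->second.firstSeen > maxAge)
            it = pool.erase(it);
        else
            ++it;
    }
}
```

Run on a timer or at each new block, something like EvictStaleTransactions(mempool, std::chrono::hours(24)) would keep the backlog bounded without any protocol change, since dropping a transaction locally does not invalidate it.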
I cannot see switching to a more efficient database as anything but an uphill foot-race which will ultimately be lost, even if Bitcoin evolves to the point where it is run only by cluster-type 'supernodes' with fat-pipe connectivity and the blockchain in RAM. Even if we do go in that direction, sorting out the 'backed-up transactions' issue while the system is still small seems like a good idea.