Regarding the bit errors: the issue is error detection, not error correction. It has been a well-known problem since around 2000, when database systems started to be deployed on desktops and mobile devices, no longer only on server-class systems. This was also the time when the majority of desktop systems no longer had even the parity error detection of the original IBM PC and its clones. Silent, undetected corruption is such a widespread problem that most modern commercial database systems include, in software, page parity error detection and torn I/O detection (a closely related problem with non-server-class I/O subsystems).
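To make those page-level checks concrete, here is a minimal sketch of the idea, assuming a hypothetical 4 KiB page layout with a CRC32 in the header and a write-sequence number stamped at both ends of the page. No real engine uses exactly this format; the point is only the principle of detecting silent bit errors and torn writes in software.

```python
# Minimal sketch of software page-error and torn-write detection.
# The page layout (4 KiB pages, CRC32 in the header, a write-sequence
# stamp at both ends) is hypothetical, not any particular engine's.
import struct
import zlib

PAGE_SIZE = 4096
HEADER = struct.Struct("<II")   # crc32, write_seq (start of page)
TRAILER = struct.Struct("<I")   # write_seq repeated at end of page

def write_page(payload: bytes, write_seq: int) -> bytes:
    """Build an on-disk page: header + payload + trailer, padded to PAGE_SIZE."""
    body_len = PAGE_SIZE - HEADER.size - TRAILER.size
    if len(payload) > body_len:
        raise ValueError("payload does not fit in one page")
    body = payload.ljust(body_len, b"\x00")
    crc = zlib.crc32(body)
    return HEADER.pack(crc, write_seq) + body + TRAILER.pack(write_seq)

def check_page(page: bytes) -> bytes:
    """Verify a page read back from disk; raise on silent corruption or torn write."""
    crc, seq_head = HEADER.unpack_from(page, 0)
    (seq_tail,) = TRAILER.unpack_from(page, PAGE_SIZE - TRAILER.size)
    body = page[HEADER.size:PAGE_SIZE - TRAILER.size]
    if seq_head != seq_tail:
        raise IOError("torn write: page halves come from different writes")
    if zlib.crc32(body) != crc:
        raise IOError("bit error: page checksum mismatch")
    return body.rstrip(b"\x00")

page = write_page(b"hello", write_seq=1)
assert check_page(page) == b"hello"
```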
Nowadays the situation is much worse: even brands like Apple, which were formerly beyond reproach, now mass-ship machines that reproducibly suffer bit errors under load. Some modern game engines (yes, game engines, not database engines) include on-the-fly hardware error detection for the CPU/RAM/GPU.
So the question still remains: which DB engine to choose? The answer is the same as the old adage about cars: cheap, fast, reliable; pick two. I had a similar conversation in another thread and managed to condense it into a short soundbite that doesn't require a computer science education and that any MBA type could understand:
2112: Dude, prototype first, then make a choice.
etotheipi: Die in a fire! AMD, NVidia, Intel or GTFO?
2112: No really, there are abstraction layers that will let you make that selection last, once you know exactly what your needs are and can measure them.
etotheipi: OK, I hear ya. Qt looks like a decent layer that will isolate me from the vagaries of the graphics display market. It looks like a pain in the neck, but I need to learn some way of not painting myself into a corner.
2112: Hurray!
I cannot advise everyone to take an introductory database course. But if you have just a couple of hours, read this article from Wikipedia:
http://en.wikipedia.org/wiki/View_(database)
If you take one thing from it, let it be this: thanks to views, the logical schema can be different from the physical storage schema. With this knowledge you will be ahead of anyone who hawks any single database choice.
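For anyone who skips the article, here is a toy SQLite example of the point (the table and column names are mine, purely for illustration): the application talks only to the view, so the physical tables behind it can be split, denormalized or re-indexed without touching application code.

```python
# Toy SQLite example (hypothetical table/column names): the application
# queries the view `tx_summary`; the physical tables behind it can change
# without the application noticing.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- physical schema: how the data actually sits on disk
    CREATE TABLE tx_header (txid TEXT PRIMARY KEY, block_height INTEGER);
    CREATE TABLE tx_output (txid TEXT, n INTEGER, value_satoshi INTEGER);

    -- logical schema: what the application is allowed to see
    CREATE VIEW tx_summary AS
        SELECT h.txid, h.block_height, SUM(o.value_satoshi) AS total_out
        FROM tx_header h JOIN tx_output o ON o.txid = h.txid
        GROUP BY h.txid, h.block_height;
""")

db.execute("INSERT INTO tx_header VALUES ('aa11', 100)")
db.executemany("INSERT INTO tx_output VALUES (?, ?, ?)",
               [("aa11", 0, 5000), ("aa11", 1, 2500)])

# the application only ever talks to the view
print(db.execute("SELECT * FROM tx_summary").fetchall())
# [('aa11', 100, 7500)]
```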
The reality of Bitcoin could be summarized as follows:
1) nobody has any reliable data describing and modeling the access patterns for Bitcoin storage systems.
2) Bitcoin developers routinely work in a way that isn't representative of normal business operation: they constantly reload the blockchain from scratch. Any problem? Delete ~/.bitcoin/* or %AppData%/Roaming/Bitcoin and redownload everything.
3) people who try to run 24/7 Bitcoin services are at a serious disadvantage: they cannot do normal live database backups; the filesystem snapshots they can make are not ACID and not internally consistent; and database consistency cannot be checked while online. More and more they find themselves in a situation where troubleshooting resembles the old MS-DOS days: press Ctrl-Alt-Del; if that doesn't work, unplug the computer and plug it back in.
4) even minimal storage-schema tuning will show that storing the blockchain in the raw on-the-wire format is far from efficient. There are three essentially disjoint data subsets in the raw blockchain:
4a) the chain of block headers (really a tree: trunk plus orphaned branches);
4b) the Merkle trees, each used only once;
4c) a heap of transactions that could be extensively garbage-collected.
5) creating a separate database-loader tool for whatever blockchain representation you use is the most crucial task for Bitcoin business operators (a minimal sketch follows below).
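To make point 5 (and the 4a/4b/4c split) concrete, here is a rough sketch of what such a loader could look like, using SQLite and hypothetical table names. Decoding the raw wire format is deliberately left out; the only point is that headers, Merkle data and the transaction heap land in separate stores, with one ACID transaction per block.

```python
# Rough sketch of a loader into a split schema (items 4a-4c above).
# All table names are hypothetical; the wire-format decoding is assumed
# to have happened elsewhere. The point is only the separation of the
# three data subsets and the one-transaction-per-block loading.
import sqlite3

db = sqlite3.connect("blockchain.db")
db.executescript("""
    -- 4a) header chain/tree: tiny, hot, always needed
    CREATE TABLE IF NOT EXISTS block_header (
        block_hash TEXT PRIMARY KEY,
        prev_hash  TEXT,
        height     INTEGER,
        raw_header BLOB           -- the 80-byte header as received
    );
    -- 4b) Merkle trees: write-once, rarely read after validation
    CREATE TABLE IF NOT EXISTS merkle_node (
        block_hash TEXT, position INTEGER, node_hash TEXT
    );
    -- 4c) transaction heap: bulky, candidate for garbage collection
    CREATE TABLE IF NOT EXISTS tx_blob (
        txid TEXT PRIMARY KEY, block_hash TEXT, raw_tx BLOB,
        spent INTEGER DEFAULT 0
    );
""")

def load_block(block_hash, prev_hash, height, raw_header,
               merkle_nodes, transactions):
    """Split one already-decoded block into the three stores.

    merkle_nodes: list of node-hash strings
    transactions: list of (txid, raw_tx_bytes) pairs
    """
    with db:  # one ACID transaction per block
        db.execute("INSERT INTO block_header VALUES (?, ?, ?, ?)",
                   (block_hash, prev_hash, height, raw_header))
        db.executemany("INSERT INTO merkle_node VALUES (?, ?, ?)",
                       [(block_hash, i, h) for i, h in enumerate(merkle_nodes)])
        db.executemany(
            "INSERT INTO tx_blob (txid, block_hash, raw_tx) VALUES (?, ?, ?)",
            [(txid, block_hash, raw) for txid, raw in transactions])
```

Once the data sits in a shape like this, the logical view the application sees can stay stable (see the earlier remark about views) while the physical layout of the three subsets is tuned, pruned or garbage-collected independently.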