Ok, some picking apart, since you asked nicely:
1. Trying to keep a large network in sync to within a few seconds at most:
a. Is rather difficult, especially in non-controlled decentralized environments (How do you get people to agree on a clock time without falling prey to a Sybil attack at that level? Bitcoin cheats by simply copying the time it gets from the system clock, and hoping that it's less than 2 hours away from the rest of the nodes).
b. Only decreases the likelihood of an accidental fork. Shenanigans can still occur. If I know the exact cut-off time, I can arrange to send a last-second block to about half the network. What happens? They wait a little longer? Then I can DoS by continuing to send them these (easy-ish to mine) blocks?
I have been thinking about this particular problem and I believe the key isn't agreeing on a universal time, but on elapsed time. I know exactly how much time has elapsed in the past 10 minutes with very little drift. Every node sending me messages also knows exactly how much time has passed. Furthermore, almost every node has access to NTP and so has a reasonable estimate of what to expect. Every node would only build off of blocks that had relatively accurate times (in their opinion), and every node could verify that every other node waits at least 10 minutes before sending it candidates for the next block. So there is no need to agree on universal time, only to vet that other nodes broadcast at the proper intervals. Nodes violating this social behavior would be disconnected, and the network would develop a rhythm. So let's define a simple heuristic that a node could use with near-zero knowledge: it will only forward a new block it receives 10 minutes after receiving the last block it accepted. The node knows who informed it of that last block and thus could instantly detect other nodes 'cheating' by sending too soon.
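A minimal sketch of that heuristic in Python, going only on what's described above; the class name, the 5-second tolerance, and the ban set are my own illustrative choices, not part of any spec:

```python
import time

BLOCK_INTERVAL = 10 * 60   # seconds a peer must wait before forwarding a new block
TOLERANCE = 5              # small allowance for propagation delay (assumption)

class PeerTimingPolicy:
    """Vets peers by locally measured elapsed time, not by any agreed-upon
    universal clock. Each node only needs its own monotonic timer."""

    def __init__(self):
        self.last_block_accepted_at = None   # our own monotonic timestamp
        self.banned_peers = set()

    def on_block(self, peer_id, block):
        now = time.monotonic()

        if peer_id in self.banned_peers:
            return False  # ignore peers already caught cheating

        if self.last_block_accepted_at is not None:
            elapsed = now - self.last_block_accepted_at
            if elapsed < BLOCK_INTERVAL - TOLERANCE:
                # Peer broadcast too soon after the last accepted block:
                # it violates the expected rhythm, so disconnect it.
                self.banned_peers.add(peer_id)
                return False

        # Block arrived after a proper interval: accept it, reset the timer,
        # and (outside this sketch) forward it to our own peers.
        self.last_block_accepted_at = now
        return True
```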
This approach should be entirely resistant to Sybil attacks because all the information I rely upon is given to me directly. I am still at risk of someone entirely controlling all of my connections and sending me bogus blocks, but this is only really an issue for first-time connections that are bootstrapping. Once you have a large local database you can reconnect to enough nodes to be confident you are not getting a bogus block.
2. Counting on clients using the mempool to decide which blocks are best, and avoiding forks while doing that, means agreement on what transactions are in the mempool.
A node could be particularly picky, but this does not work like Ripple in one very critical way. There is still a proof of work (however minimal) that is used to elect someone to choose which transactions are included in the block. There is also profit to be made from having your block accepted by others, profit you would lose by including an invalid transaction. As a result, proof of work is used to solve the mempool-selection problem, and 'consensus' / the desire to win the next block provides a profit motive to accept any reasonable block. An unreasonable block would be any block that failed to include many valid transactions. These blocks would not be broadcast across the network even with their proof of work.
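To make the "reasonable block" idea concrete, here is one possible check a relaying node could run; the 90% threshold and the function names are assumptions of mine, not part of the design described above:

```python
# Illustrative only: the coverage threshold is an assumption, not a spec.
MIN_MEMPOOL_COVERAGE = 0.90

def is_reasonable_block(block_txids, my_mempool_txids):
    """A block is 'reasonable' if it includes most of the valid transactions
    this node already holds in its mempool. Unreasonable blocks are simply
    not relayed, so their proof of work earns the producer nothing."""
    if not my_mempool_txids:
        return True  # nothing to compare against
    included = len(my_mempool_txids & block_txids)
    coverage = included / len(my_mempool_txids)
    return coverage >= MIN_MEMPOOL_COVERAGE
```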
3. Chain volume to resolve forks is trivial to fake for any semi-competent attacker. Send money in circles and bam! You're the main chain.
Chain volume is easy to fake, but volume-by-coinage is not. Furthermore, all of these fake transactions would have to be broadcast to the entire network, and the vast majority of the fees would be paid as dividends to everyone else and thus not go to the attacker. This attack relies upon isolating an individual from all of their connections to the valid chain. I suspect that if someone were able to isolate a user on the Bitcoin chain, that user would also be subject to the same attack... *except* that the attacker wouldn't be able to maintain the difficulty level.
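A toy Python sketch of why circular volume doesn't help against a coinage-weighted measure; the data model and the block-based age unit are my simplifications for illustration, not the actual BitShares rule:

```python
from collections import namedtuple

# Toy data model for illustration only; real transactions carry far more.
TxInput = namedtuple("TxInput", ["amount", "origin_height"])  # coins spent, block they last moved
Tx      = namedtuple("Tx", ["inputs"])
Block   = namedtuple("Block", ["height", "transactions"])

def coin_days_destroyed(tx, height):
    """Coins spent times how long they sat unspent (measured in blocks here).
    Rapidly recycled coins have near-zero age, so sending money in circles
    adds almost nothing to this measure."""
    return sum(inp.amount * max(height - inp.origin_height, 0) for inp in tx.inputs)

def chain_weight(chain):
    """Score competing forks by coinage-weighted volume rather than raw volume."""
    return sum(coin_days_destroyed(tx, blk.height)
               for blk in chain for tx in blk.transactions)
```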
With BitShares, all users are required to move their funds once per year to prove ownership of the private key. These expected transactions could not be faked, and in the time it took to synchronize with the network one would quickly notice a pattern of these 'about-to-expire' accounts failing to move.
4. Comparison of notes off-chain and whatnot essentially pushes the trust and verification problem to another layer. It's a nice-to-have but fundamentally doesn't solve anything.
Bitcoin has everyone comparing notes off chain on what the expected difficulty should be. If we didn't check the expected difficulty & genesis block, then anyone with an ASIC could generate 4 years of history in no time and then trick anyone who connected to their bogus nodes.
The one benefit of a proof-of-work system is that it makes it harder to pull off a man-in-the-middle attack. So the question becomes: how difficult is it for a large entity to pull off a man-in-the-middle attack? Second, if every single BitShares business with a public face were publishing their view of the consensus block via an SSL connection, then you are still decentralized and not trusting any one entity. What are the chances that Mt. Gox, BitStamp, BitInstant, and Google are all going to lie about their view of the network? Every merchant has a vested interest in both their name and the value of the ecosystem to prevent man-in-the-middle attacks, and therefore the opinion of the major players is the consensus opinion, because these are the guys everyone is trading with. We are not trusting them for the balances, and we should be able to independently come to the same conclusion they came to. We only check their 'opinion' to detect a man-in-the-middle attack.
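As an illustration of that off-chain cross-check, here is a rough Python sketch; the endpoints and the JSON shape are invented for the example (none of these businesses publish such a feed), and the point is only that the published opinions are compared, never trusted for balances:

```python
import json
import urllib.request

# Hypothetical endpoints -- stand-ins for "public-facing businesses publishing
# their view of the consensus block over SSL". These feeds do not actually exist.
OBSERVERS = [
    "https://example-exchange-a.com/head_block",
    "https://example-exchange-b.com/head_block",
    "https://example-merchant.com/head_block",
]

def detect_mitm(my_head_block_hash):
    """Fetch each observer's claimed head-block hash over TLS and flag any
    mismatch with our own view. We never trust them for balances; their
    published opinion only tells us whether we've been isolated."""
    disagreements = []
    for url in OBSERVERS:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                claimed = json.load(resp)["head_block_hash"]
        except Exception:
            continue  # unreachable observer; skip rather than fail
        if claimed != my_head_block_hash:
            disagreements.append(url)
    return disagreements  # non-empty list suggests a man-in-the-middle
```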
Your feedback took some time and is worthy of a tip; send me an address.