It's just an informed guess on my part, derived from writing the Gox stream to an RDBMS. When you say "producing 600 unique..." it sounds like you are telling me about reads. Those can be cached, etc., and so respond much faster... I can also run complex queries rapidly across the Gox data I've written to my RDBMS.
Not reads. Writes (updates & inserts). I use the term "unique attribute" so as to ignore the composite keys.
But transactional writes cannot be cached -- by the nature of a transaction they need to be written to persistent backing storage before they are acked. And of course it's very important to commit a trade as a transaction, so there is no chance that person A spends his $ or BTC but person B does not receive it. So essentially the DB performance is chained to the latency of the disk -- the time to seek and write a few bytes. But because the time to seek and write a few bytes is not much greater than the time to seek and write 10k bytes, if the application chunks multiple operations into a single txn you get a massive speedup. So even with writes it's not the RDBMS, but HOW it is used.
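To make the "chunking" point concrete, here's a toy sketch in Python/SQLite (nothing to do with Gox's actual schema or engine; the table and figures are made up). The same rows are written twice, once with a commit per insert and once in a single transaction, and the difference is basically the cost of all the extra per-row commits:

```python
# Rough sketch: amortising the per-commit cost by grouping many trade
# inserts into one transaction. Table/column names are hypothetical.
import sqlite3, time

conn = sqlite3.connect("trades.db")
conn.execute("CREATE TABLE IF NOT EXISTS trades (id INTEGER PRIMARY KEY, price REAL, amount REAL)")

trades = [(i * 0.01, 0.1) for i in range(1_000)]

# One commit per insert: every write waits on persistent storage before it is acked.
start = time.time()
for price, amount in trades:
    conn.execute("INSERT INTO trades (price, amount) VALUES (?, ?)", (price, amount))
    conn.commit()
print("per-row commits:  ", time.time() - start)

# One commit for the whole batch: the same rows, a single transaction.
start = time.time()
with conn:  # context manager commits once on exit
    conn.executemany("INSERT INTO trades (price, amount) VALUES (?, ?)", trades)
print("one batched commit:", time.time() - start)
```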
Agreed. But any half-arsed SATA (or probably even IDE) drive should not blink at the TPS produced by Gox.
But it all depends on the db logical-to-physical mapping...
Anyway, this is just a theory... but just watch Gox unwind a big market buy, ticking through hundreds of 0.1 BTC asks at human-readable speed, and it becomes pretty obvious that each trade is handled individually. This causes all kinds of problems with efficient price discovery, like "fake" walls. Basically, it looks to me like a bot can use a bunch of 0.00X BTC asks to slow down a big buy and use the time to take down the wall before the buy ticks up to it.
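Purely to illustrate the mechanism I mean (the prices, sizes and per-fill latency are all invented), here's a toy order-book walk: if every fill is its own committed trade, a couple hundred dust asks sitting in front of a wall buy its owner several seconds to pull it.

```python
# Toy simulation of a market buy that fills one resting ask per committed
# trade. All prices, sizes and the per-fill latency are invented.
import heapq

PER_FILL_COMMIT = 0.05  # simulated seconds of matching + commit per fill

def market_buy(asks, amount_btc):
    """asks: heap of (price, size) tuples; fills the lowest asks first."""
    elapsed, remaining = 0.0, amount_btc
    while remaining > 0 and asks:
        price, size = heapq.heappop(asks)
        fill = min(size, remaining)
        remaining -= fill
        elapsed += PER_FILL_COMMIT          # each fill is its own trade/txn
        if size > fill:                     # partially consumed ask goes back
            heapq.heappush(asks, (price, size - fill))
    return elapsed

# 200 dust asks of 0.005 BTC sitting below a 50 BTC "wall" at 101.0
book = [(100.0 + i * 0.001, 0.005) for i in range(200)] + [(101.0, 50.0)]
heapq.heapify(book)

# The 200 dust fills alone cost 200 * 0.05 = 10 simulated seconds before the
# buy ever touches the wall -- plenty of time for the wall to be cancelled.
print("simulated seconds to complete a 10 BTC buy:", market_buy(book, 10.0))
```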
I reckon it is the front-end matching algo implementation that is screwing performance. I'd hazard a guess they are suffering lock/threading issues.
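No inside knowledge here, but if it is a lock problem, the symptom would look like this sketch: one coarse lock around the whole matching path serialises every order, so extra threads buy you nothing.

```python
# Hypothetical illustration only: a single coarse lock around the matching
# path serialises every order regardless of how many worker threads exist.
import threading, time

engine_lock = threading.Lock()        # one lock for the whole engine

def match_order(order_id):
    with engine_lock:                 # every order, every market, queues here
        time.sleep(0.01)              # stand-in for matching + commit work

threads = [threading.Thread(target=match_order, args=(i,)) for i in range(100)]
start = time.time()
for t in threads: t.start()
for t in threads: t.join()

# ~1 s wall-clock for 100 orders * 10 ms each, no matter the thread count,
# because the global lock lets only one order through at a time.
print("elapsed:", round(time.time() - start, 2), "seconds")
```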