As I pointed out on the mailing list observing the chain for matched orders tells you nothing about whether or not those orders were ever published; that is, whether or not they're entirely fake and don't represent actual market depth.
As I said in the text you quoted, a filtered average over previous blocks solves this. Unless you are assuming the attacker has majority hashrate, in which case you're screwed anyway.
An attacker who wanted to give a false impression of market depth/activity and/or prices could simply publish completed transactions to himself on the blockchain and have nothing to do with the mining process. These transactions would never be propagated as unfilled orders on the network, only as completed transactions. Forcing unfilled orders to propagate through the network, and be published in the blockchain, ensures that these orders are real, as they could be matched and filled by anybody.
Exactly! I'm glad you realize this.
Thanks for the tip!
I'm still working on figuring out your earlier riddle. The question "Does data need to be stored forever to prove it was published once?" seems like it might be a ninja question. First off, I'm going to assume that you're not trying to play a trick with the wording. The sentence can be diagrammed in different ways; what does the "once" apply to? I'm going to assume that you mean you want to be able to "prove" it forever, and not just once. "Forever" means "at any arbitrary time in the future"; "once" denotes the original date of publishing.
My answer is "Of course it does!" Because the hash was created to prove something, and you have to have that "something" there in front of you in order for the hash to be a valid proof (so it has to be stored somewhere permanently, although of course this doesn't have to be the blockchain). The hash by itself doesn't prove jack. It's just a series of random letters and numbers that might or might not be connected with anything. A hash-proof is a two-way street; the hash validates the "something", and the "something" validates the hash.
EDIT: So any time you want to verify, you have to have both the original "something" and the hash, simultaneously. The "something" can be published first and the hash published later, or the hash can be published first and the "something" published later. But both must be accessible simultaneously in order to verify the hash.
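To make the two-way-street point concrete, here's a minimal commit/verify sketch in Python (the order string and function names are just illustrations, not anything from an actual protocol):

```python
import hashlib

def commit(something: bytes) -> str:
    """Publish this hash now; by itself it proves nothing."""
    return hashlib.sha256(something).hexdigest()

def verify(something: bytes, published_hash: str) -> bool:
    """Verification needs BOTH pieces simultaneously: the original
    "something" and the hash.  The hash alone is just a string of
    random-looking letters and numbers."""
    return hashlib.sha256(something).hexdigest() == published_hash

h = commit(b"order: buy 10 @ 42")        # published at some time T
# ...arbitrarily later, anyone holding both pieces can still check:
assert verify(b"order: buy 10 @ 42", h)  # the proof works forever
assert not verify(b"forged order", h)    # a different "something" fails
```

It doesn't matter which piece is published first; what matters is that the "something" is stored somewhere permanently so both pieces can be put side by side at verification time.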
For P2SH^2, the published hash is simply the hash of another hash. Plain P2SH assumes the "something" is an original value with meaning (although I remain somewhat unclear as to what that actually is in the case of normal P2SH -- I know next to nothing about Bitcoin scripts), while P2SH^2 requires the "something" to be a hash of an original "something", although again it could simply be a random set of letters and numbers, and the original "something" never existed. I have already shown how P2SH^2 could be attacked for arbitrary data storage, in a way which is not detectable or blacklist-able, albeit at potentially high transaction costs as well as requiring a tiny amount of hashpower. With more hashpower and/or reusable addresses (i.e. no fear of blacklisting), the attack becomes cheaper in terms of transaction costs.
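The double-hash structure is easy to sketch. This is just an illustration of the hashing, not real Bitcoin script handling, and the "script" bytes are a placeholder:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Plain P2SH publishes one hash of the redeem script.
script = b"<some redeem script>"      # placeholder, not a real script
p2sh_commit = sha256(script)

# P2SH^2 publishes a hash OF that hash; the network sees 32 bytes
# of inner hash plus its outer hash.
p2sh2_commit = sha256(p2sh_commit)

# But the "inner hash" is just 32 random-looking bytes.  Nothing lets
# an observer tell whether a preimage (an actual script) ever existed:
smuggled = bytes(range(32))           # arbitrary 32-byte payload
fake_commit = sha256(smuggled)        # verifies exactly like the real one

assert len(p2sh_commit) == len(smuggled) == 32
assert sha256(smuggled) == fake_commit
```

The point: a genuine inner hash and 32 bytes of arbitrary payload are structurally indistinguishable, which is what makes the data-storage attack undetectable.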
As I said in the text you quoted, a filtered average over previous blocks solves this. Unless you are assuming the attacker has majority hashrate, in which case you're screwed anyway.
An attacker who wanted to give a false impression of market depth/activity and/or prices could simply publish completed transactions to himself on the blockchain and have nothing to do with the mining process. These transactions would never be propagated as unfilled orders on the network, only as completed transactions. Forcing unfilled orders to propagate through the network, and be published in the blockchain, ensures that these orders are real, as they could be matched and filled by anybody.
Pay attention to the bolded words.
How do you propose to filter? Such an attacker could essentially "own" the market (90%+ of transactions) and nobody would be able to tell. He could conduct the attack with numerous nodes, numerous single-use addresses, numerous IPs, numerous physical locations. And it would be pretty cheap to do so.