Someone else should chime in with the math to work it out explicitly - I know way too little about probability math - but can you see how the probability aspect of it makes hash power much less important than it is in the two-step publish/commit scheme?
Well, the assumption behind the 2-step process is that you can't predict who will mine a block a long way in the future. You could equally have your proof contain two transactions in two blocks and insist in the protocol that those blocks always be exactly 100 blocks apart and it'd work the same way, I'd think?
I sat down and (half) wrote a god-damn paper this weekend analyzing this stuff properly. I'll finish it up and make it public if anyone is actually interested, but the process absolutely convinced me that the two-step tx-in-a-tx solution is the only thing close to feasible that will keep a stable sacrifice market value (without just sending coins to unspendable txouts, of course).
First of all, you have to assume that services will pop up to create these sacrifices for you. Thus you need to assume the sacrifice is being created by the entity with the largest single concentration of hashing power, and that's likely to be in the range of 10% to 30%.
Secondly, because of that, for any mining-fee-based sequence the first transaction is always nearly free. You can always mine it yourself by waiting a bit, and the only cost is the roughly 1% to 2% chance that the block gets orphaned and another miner collects the tx fee instead.
This means that asking for "two blocks n apart" is at best 50% efficient, because you have to assume the first block was self-mined. Also, mining is a random process, so asking for exactly 1 block apart or exactly 100 makes absolutely no difference.
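Here's that efficiency argument as a toy calculation (my own sketch, not from the thread; the even fee split between the two txs and the 1% orphan rate are assumptions):

```python
# Cost to a large miner of a "two txs, n blocks apart" sacrifice,
# assuming the face value is split evenly between the two tx fees.

def self_mined_cost(face_value, orphan_rate=0.01):
    """Cost when you mine the first tx yourself: the first fee comes
    straight back to you, so its only cost is the chance the block is
    orphaned and another miner collects it. The second fee is paid at
    full value (worst case for the self-miner)."""
    per_tx_fee = face_value / 2
    first_tx_cost = per_tx_fee * orphan_rate  # nearly free: paid to yourself
    second_tx_cost = per_tx_fee               # paid to whoever mines it
    return first_tx_cost + second_tx_cost

print(self_mined_cost(1.0))  # ~0.505, barely more than half the face value
```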
It seems like the difficulty of obtaining 5-of-6 blocks in a row or similar is essentially equivalent to requiring consecutive blocks: you have to dominate the network in either case, or be infeasibly lucky, though you can produce a proof faster if you only need to fill a majority of the blocks in a span.
Sure, but this is a supply and demand problem. Let ω be the probability that the block a transaction is in is orphaned and the transaction is mined by someone else. If we attempt to mine a transaction containing a fee of value V ourselves, the cost to us is then V*ω. Assuming that miners always build on the best known block and don't deliberately attempt to orphan blocks to collect the transactions within, ω is probably approximately equal to the orphan rate itself, which appears to be around 1% or so. Also note that ω is smaller for a large miner than for a small one.
Now if I control q of the total hashing power, my expected number of blocks before I get n consecutive is (q^(-n) - 1)/(1 - q). For q = 10% I need about 1 million attempts for n = 6; however, the number of attempts drops extremely quickly as q increases or n decreases. Basically that means if there exist q and n such that (q^(-n) - 1)/(1 - q) * ω < v/V, where v is the actual value of the sacrifice and V is the face value, you are better off attempting to mine the sacrifice yourself rather than buying it fairly. For q = 25% and n = 3 this is true for v = V. (Remember that you can always finish the other blocks in a sacrifice in the conventional way.)

Allowing for n-of-m blocks just makes the problem worse, because it effectively increases the apparent hashing power available to the miner. Yet at the same time griefers can do a lot of damage by deliberately excluding your sacrifice transactions if n-of-m is not allowed, and additionally orphans quickly push up the cost of sacrifices. (About 10% extra for n = 6 and 1% orphans.)
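A quick sketch of that break-even calculation (my own code, not from the thread; ω = 1% is the orphan-rate estimate above, and the function names are mine):

```python
# q: your share of total hashing power
# n: required run of consecutive blocks
# omega: per-block probability your block is orphaned (~1% per above)

def expected_attempts(q, n):
    """Expected number of blocks mined before getting n of yours in a row."""
    return (q ** -n - 1) / (1 - q)

def self_mining_cost_ratio(q, n, omega=0.01):
    """Expected cost of self-mining the sacrifice as a fraction of its
    face value V: each attempt only risks omega * V to an orphan."""
    return expected_attempts(q, n) * omega

for q, n in [(0.10, 6), (0.25, 3)]:
    e = expected_attempts(q, n)
    c = self_mining_cost_ratio(q, n)
    print(f"q={q:.0%}, n={n}: ~{e:,.0f} attempts, cost ~{c:,.2f} of face value")

# q=10%, n=6: ~1,111,110 attempts, cost far above face value -> buy it fairly
# q=25%, n=3: ~84 attempts, cost ~0.84 of face value -> cheaper to self-mine
```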
Is this really an issue? Well, I think the question really is: why would you use a sacrifice method where the value is so sensitive to so many variables? In particular, why use a method where the actual cost of the sacrifice is inversely and exponentially dependent on the size of the largest miner?
My later tx-in-a-tx proposal is far easier to reason about, because the cost is simply bounded by the largest miner's hashing power and has no dependence on difficult-to-measure values like orphan rates. It's just a much better idea, which might explain why it took me 6 more months to think of it...
If the node that's verifying can't see it in the mempool and it's not in a block, then presumably the first transaction could be double-spent. Nodes can sync to each other's mempools these days using the mempool command, and I guess in the future, if we find a solution to zombie transactions living forever, it'd be good to make nodes sync their mempools at startup. The current "solution" (I hesitate to use that word) is hardly convincing.
But the tx-in-a-tx sacrifice isn't valid until the second tx, the one that actually sacrifices the coins, is mined, so the first tx being in the mempool is a non-issue. Anyway, sacrifices need to be provable to SPV nodes, who don't care about the mempool.
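For reference, "provable to SPV nodes" just means the usual tx-plus-merkle-branch check against a block header. A minimal sketch (my illustration, not from the thread; the helper names are hypothetical and it assumes the hashes are in Bitcoin's internal byte order):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root_from_branch(txid: bytes, branch: list, index: int) -> bytes:
    """Fold a txid up its merkle branch. `index` is the tx's position in
    the block; its low bit at each level says which side we hash on."""
    h = txid
    for sibling in branch:
        if index & 1:
            h = dsha256(sibling + h)  # we're the right child
        else:
            h = dsha256(h + sibling)  # we're the left child
        index >>= 1
    return h

# An SPV node accepts the sacrifice once this equals the merkle root in
# a block header on its best chain:
#   merkle_root_from_branch(txid, branch, index) == header_merkle_root
```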