An idea that has come up in the past is that various reduced-security models in Bitcoin could be made practically secure if we had compact fraud proofs.
Today, for example, SPV nodes validate nothing. So if there is a longer _invalid_ chain, SPV nodes will still follow it. This is an existential risk for Bitcoin in the future if SPV nodes become very popular and almost no one runs full nodes: At some point running a full node may even be foolish, because you'll reject a chain that the economic majority (running SPV nodes) accepts.
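To make that failure mode concrete, here is a minimal sketch of the header-only chain selection an SPV node performs. It is illustrative only; the names (Header, total_work, spv_best_chain) are invented for this sketch, not real client code. Note that nothing in it opens a block or checks a rule: only accumulated proof of work matters.

    # Minimal sketch of SPV chain selection: the client keeps only headers
    # and follows whichever chain proves the most total work. It never sees
    # block contents, so a longer chain of invalid blocks still wins.
    class Header:
        def __init__(self, prev_hash, merkle_root, work):
            self.prev_hash = prev_hash      # hash of the previous header
            self.merkle_root = merkle_root  # commitment to txs, never opened
            self.work = work                # work proven by this header's PoW

    def total_work(chain):
        return sum(h.work for h in chain)

    def spv_best_chain(candidate_chains):
        # No script checks, no signature checks, no inflation checks:
        # the most-work header chain is simply accepted.
        return max(candidate_chains, key=total_work)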
However, if it were instead possible to construct small messages that an SPV node could inspect and then be completely confident that a block broke the rules, then security would be restored: An invalid chain would only be accepted so long as all SPV nodes could be partitioned from all honest nodes. There really only needs to be one attentive honest node in the whole world to achieve security, so long as the truth it publishes cannot be censored.
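One plausible shape for such a message, sketched below with invented names (FraudProof, verify_fraud_proof, and the two callables are assumptions for illustration, not a proposed format): the proof carries a transaction, a merkle branch tying it to the accused block, and just enough extra data to re-run the single rule that block broke.

    from dataclasses import dataclass

    # Hypothetical layout of a compact fraud proof: enough committed data
    # to re-check one broken rule in isolation, without the full block.
    @dataclass
    class FraudProof:
        block_hash: bytes    # which block is being accused
        tx: bytes            # the offending transaction, serialized
        merkle_branch: list  # links tx to the header's merkle root
        rule_data: bytes     # e.g. the spent inputs, to re-run one check

    def verify_fraud_proof(header, proof, merkle_verify, check_rule):
        # 1. The accused transaction really is committed to by the block.
        if not merkle_verify(header.merkle_root, proof.tx, proof.merkle_branch):
            return False  # proof doesn't connect to the block; ignore it
        # 2. Re-run only the one rule the proof claims was violated.
        return not check_rule(proof.tx, proof.rule_data)

An SPV node that sees such a proof verify against any block simply rejects the chain containing it.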
Modifying Bitcoin to make compact fraud proofs possible is fairly straightforward. The big problem is that, from a practical engineering perspective, this idea is very risky: Processing a fraud proof is at least as complicated as validating a block, and we know that people get block validation wrong already. If these proofs exist, then there would be no reason for a miner to ever attempt fraud, and as a result the proofs would never be used. If they are never used, the implementations will be wrong... if not in the reference implementation then in the alternative implementations. Some implementations might be vulnerable to incorrect fraud proofs that break consensus even absent fraud; others might fail to reject some kinds of fraud. Basically, any inconsistency in this normally dead code is a point an attacker could use to break the network, or it could just cause failure by chance.
Obviously software testing and whatnot addresses this kind of concern in theory. But in practice we already see people writing alternative implementations that end up forked despite the pre-existing test code. Good testing is only a complete answer if you assume idealized spherical developers. In any case, completely eliminating the possibility of error in these systems via testing is infeasible, because the input space is far too large to exhaust.
So in spite of the potential benefits, I think many people have not been super excited about it as a practical idea.
I finally stumbled onto a retrospectively obvious way to make this kind of design safer: You require that all block solutions commit to two distinct candidate blocks, exactly one of which is required to be fraudulent. The fraudulent block is eliminated through a fraud proof. Nodes which do not process fraud proofs correctly will be unable to determine which of the two is the right one.
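A sketch of how a node might resolve such a block solution, again with invented names (resolve_candidates and proof_checker are assumptions for illustration): since one candidate must be proven fraudulent before either can be accepted, the proof-checking code has to run, and run correctly, for every single block.

    # Sketch of the two-candidate rule: every block solution commits to two
    # candidate blocks, one deliberately fraudulent. A fraud proof relayed
    # with the solution eliminates the bad one; a node whose proof checking
    # is broken cannot tell which candidate to follow, so this normally
    # dead code path is exercised on every block.
    def resolve_candidates(candidate_a, candidate_b, proof, proof_checker):
        a_fraud = proof_checker(candidate_a, proof)
        b_fraud = proof_checker(candidate_b, proof)
        if a_fraud == b_fraud:
            # The proof must convict exactly one candidate, or the
            # solution as a whole is malformed and gets rejected.
            raise ValueError("proof must convict exactly one candidate")
        return candidate_b if a_fraud else candidate_a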
This wouldn't eliminate all risk, and it carries considerable overhead if used completely seriously, but it would at least address the concern that the fraud proof code would be completely nonfunctional due to disuse.