There's a clear inequity under current rules: one byte of data in an OP_RETURN output is worth 4 WU, while one byte in witness data, as exploited by inscriptions, is only 1 WU.
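To make that discount concrete, here is a minimal sketch of the BIP 141 weight rule (the numbers come from the rule itself, the helper function is just for illustration): a non-witness byte counts four weight units, a witness byte counts one.

def tx_weight(base_size: int, witness_size: int) -> int:
    # BIP 141: non-witness (base) bytes count 4 WU each, witness bytes 1 WU each.
    return 4 * base_size + witness_size

# One extra byte of OP_RETURN payload lives in an output, i.e. in the base
# serialization, so it adds 4 WU; one extra byte of witness data (as used
# by inscriptions) adds only 1 WU.
print(tx_weight(base_size=1, witness_size=0))  # 4
print(tx_weight(base_size=0, witness_size=1))  # 1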
Guess what: on the mailing list, there was a topic about exactly this issue, and it was continued on Delving Bitcoin:
https://delvingbitcoin.org/t/bug-spammers-get-bitcoin-blockspace-at-discounted-price-lets-fix-it/327
But I think it won't be fixed, because of that line of thinking:
The byte size of transactions in the P2P protocol is an artifact of the encoding scheme used. It does matter, because it directly correlates with bandwidth and disk usage for non-pruned nodes, but if we really cared about the impact these had we could easily adopt more efficient encodings for transactions on the network or on disk that encodes some parts of transactions more compactly. If we would do that, the consensus rules (ignoring witness discount) would still count transaction sizes by their old encoding, which would then not correspond to anything physical at all. Would you then still say 1 byte = 1 byte?
In general, I agree with that statement, but I also think that in the current model, not all bytes are counted properly. For example: there is an incentive to send coins into P2WPKH, but to spend by key from P2TR. However, if you count the total on-chain footprint of P2WPKH and compare it with a P2TR key-path spend, then P2WPKH is cheaper to send to, but it takes more on-chain bytes overall; the cost is just moved to the recipient, so it is cheaper for the sender.
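A rough back-of-the-envelope sketch of that claim (my own approximate numbers, assuming a 72-byte DER signature for P2WPKH and a 64-byte Schnorr signature for the key-path spend; real sizes vary by a few bytes):

def weight(base_bytes: int, witness_bytes: int) -> int:
    # BIP 141 weight: non-witness bytes count 4x, witness bytes 1x.
    return 4 * base_bytes + witness_bytes

# Output sizes: 8-byte value + 1-byte script length + scriptPubKey.
p2wpkh_output = 8 + 1 + 22   # OP_0 <20-byte key hash>
p2tr_output   = 8 + 1 + 34   # OP_1 <32-byte x-only key>

# Input base: 36-byte outpoint + 1-byte empty scriptSig + 4-byte sequence.
input_base = 36 + 1 + 4

# Witness data needed to spend later.
p2wpkh_witness = 1 + 1 + 72 + 1 + 33   # item count, DER sig, pubkey
p2tr_witness   = 1 + 1 + 64            # item count, Schnorr sig

print("cost to the sender (output weight):")
print("  P2WPKH:", 4 * p2wpkh_output, "WU")   # 124 WU
print("  P2TR:  ", 4 * p2tr_output, "WU")     # 172 WU

print("total on-chain footprint (raw bytes, output + later spend):")
print("  P2WPKH:", p2wpkh_output + input_base + p2wpkh_witness, "bytes")  # 180
print("  P2TR:  ", p2tr_output + input_base + p2tr_witness, "bytes")      # 150

So the sender saves weight by paying to P2WPKH, while the chain as a whole ends up storing more bytes than it would for a P2TR key-path spend.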
Also, if we optimized things and represented them on-chain differently, for example by saving raw public keys and using a single byte to indicate "this should use the old DER encoding" or "this pubkey is wrapped in P2SH", then the whole chain could probably be much smaller than it currently is. And as compression requires no fork, it can always be applied; the only problem is standardizing the data, so that you can rely on other nodes getting exactly the same results and compressing the data with the same algorithms. It is all about making a "ZIP format for blockchain data": it is easier to send a ZIP file if you can unzip it in the same way on both computers. The same is true for historical blockchain data (and of course, some custom algorithm would be more effective than just zipping it, because it could take into account that it is compressing, for example, secp256k1 points, so it could efficiently use x-only pubkeys without having to build a weird auto-generated "dictionary" for that).
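Just to illustrate the idea (a made-up encoding, not any existing serialization format): one tag byte naming the script template plus the raw compressed key is enough for every node to reconstruct the original bytes deterministically.

# Purely illustrative, hypothetical encoding (not an existing format):
# instead of storing the full scriptPubKey / redeem script, store one tag
# byte for the template plus the 33-byte compressed key, and let every
# node re-derive the original serialization from that.

TAG_P2PK_UNCOMPRESSED = 0x01  # "re-serialize this key in the old uncompressed form"
TAG_P2SH_P2WPKH       = 0x02  # "this pubkey is wrapped in P2SH-P2WPKH"
TAG_P2TR_KEYPATH      = 0x03  # "x-only taproot output key"

def compress(tag: int, pubkey33: bytes) -> bytes:
    """Compact form: 1 tag byte + 33-byte compressed key = 34 bytes."""
    assert len(pubkey33) == 33
    return bytes([tag]) + pubkey33

def decompress(blob: bytes) -> tuple[int, bytes]:
    """Both sides must expand the data identically, otherwise the
    'unzipped' chain would differ between nodes."""
    return blob[0], blob[1:34]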
I've certainly seen some developer discussion relating to the standardisation of data storage, but I don't see any particular push to restrict it completely.
Well, if you want to simplify the current model, then standardisation is the first step in the right direction. And then, if you have, for example, some widely deployed model with a huge table of all public keys that have appeared on-chain (see the toy sketch at the end of this post), then you can generalize it, switch to a different model (utreexo), or view the scripting language from a different perspective:
https://delvingbitcoin.org/t/btc-lisp-as-an-alternative-to-script/682
So, I guess it could be restricted if the network is abused too much, but people are currently focused on things that need to be done no matter whether you want to restrict it or not. Simply because "the status quo" is the default: if you work on a change that does not require touching consensus, then it can be easily merged, but if you work on some serious soft-fork instead, you may end up with working code that simply won't be merged. And of course, writing code that is not consensus-critical is still needed, and is often required as a dependency of your soft-fork (you need standardized data compression to compress and decompress the chain reliably, and to "undo your pruning" if needed).
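As for the "huge table of all public keys" idea mentioned above, here is a toy sketch of its simplest form (hypothetical, not any deployed format): each key seen on-chain gets an index, and later references can be stored as a few bytes instead of the full key.

# Toy sketch of a global key table (hypothetical, not a deployed format):
# every public key seen on-chain is assigned an index, and repeated
# occurrences are stored as a 4-byte index instead of the 33-byte key.

key_table: dict[bytes, int] = {}   # pubkey -> index
keys_by_index: list[bytes] = []    # index -> pubkey

def store_key(pubkey: bytes) -> bytes:
    """Return the compact reference for this key, adding it if new."""
    if pubkey not in key_table:
        key_table[pubkey] = len(keys_by_index)
        keys_by_index.append(pubkey)
    return key_table[pubkey].to_bytes(4, "little")

def load_key(reference: bytes) -> bytes:
    return keys_by_index[int.from_bytes(reference, "little")]

# A reused key costs 4 bytes per reference instead of 33, but this only
# works if every node builds exactly the same table in the same order,
# which is again a standardization problem rather than a consensus one.
ref = store_key(bytes(33))
assert load_key(ref) == bytes(33)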