except grudgingly with OP_RETURN since it puts a strict limit of 80 bytes on it
This limit is not that strict. If you make a non-standard transaction, you can exceed those 80 bytes. More than that: there are node configuration options which allow you to change that specific behaviour, so you don't even have to recompile the code to change it on your node.
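For example, in Bitcoin Core the OP_RETURN limit is relay policy, not consensus, and it is controlled by startup options (option names are Bitcoin Core's; the exact default size has varied between versions, roughly 80 bytes of payload plus opcode overhead):

```ini
# bitcoin.conf -- relay policy only, not consensus
datacarrier=1         # relay transactions carrying OP_RETURN data outputs at all
datacarriersize=220   # accept larger OP_RETURN scripts than the ~83-byte default
```

A node with these settings will relay bigger data outputs without any recompilation; miners and other nodes are free to set different values.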
i want a blockchain where every single transaction is treated the same exact way and no one knows what its purpose is
But you do realize that by following this assumption, you would have no Script in your system, and everything would work similarly to P2PK, which would be the only address type?
Or rather: the basic system would look like that, and other use cases would require soft-forks, designed in a similar way to Segwit, but instead of being wrapped in "stack pushes that evaluate to true", they would be committed into signatures and public keys, like in Taproot.
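For reference, this is roughly how Taproot commits a script tree into an ordinary-looking public key (per BIP341; here P is the internal public key, m is the Merkle root of the script tree, and G is the secp256k1 generator):

```
t = hash_TapTweak(P || m)   # tagged hash committing to the internal key and the scripts
Q = P + t*G                 # the output key, indistinguishable from any other public key
```

To an outside observer, spending via the key path with Q looks exactly like an ordinary single-key spend, which is the property the quoted "every transaction treated the same way" idea is after.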
therefore, they should all be in the UTXO set.
If they are spendable, then it is not a big deal. It is worse if they are non-spendable, but you cannot prove it. And in general, with full Script support, you cannot write a program which will evaluate an arbitrary script and prove its spendability in finite time; this is the halting problem:
https://en.wikipedia.org/wiki/Halting_problem
i'm not sure why people complain so much about the size of the UTXO set
Because it is the main thing which decides how much storage is required to run a pruned node. There, you keep only the last N blocks, but also the full UTXO set. Which also means that if that set is smaller, then it is easier to manage, and more people can afford to run a full node with pruning enabled.
Another story is the Initial Blockchain Download: if you want to simplify it, you can use models like "assume UTXO", which are based on the UTXO set. And the smaller it is, the faster that kind of synchronization can be performed.
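For scale: a pruned Bitcoin Core node can be configured down to a few hundred megabytes of block data, but it always keeps the complete UTXO set (the chainstate) on top of that, which is why the set's size matters so much to those users (option name is Bitcoin Core's; 550 MB is its documented minimum):

```ini
# bitcoin.conf
prune=550   # keep only ~550 MB of recent blocks (the minimum allowed value);
            # the full UTXO set is still stored in addition to this
```

So no matter how aggressively blocks are pruned, UTXO flooding grows the part of the storage that cannot be pruned away.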
the blockchain is way bigger than the UTXO set so if they can store that then they can surely store the UTXO set too
Many people switched from full archival nodes to pruned nodes when the chain became bigger. If the UTXO set gets flooded, then those clients will switch to some even weaker model. And that is something you don't want, because after more and more simplifications, you can reach a moment when nobody can access the full history anymore. And if there is more and more spam, then we will reach that point sooner than we should.
if they don't have enough RAM then maybe bitcoin can be rewritten to take advantage of not needing to store the entire UTXO set in RAM all at once
Of course, many optimizations are possible. But first you will see some crashing nodes and frustrated users, and only then some fixes and changes in the codebase. Writing software is hard, and there are many things which are open and not covered by any consensus rules (for example: the total size of the UTXO set is currently unlimited). However, in our universe everything is finite, if only by the resources of those who run their nodes. And if you abuse those resources, then you may end up in a situation where there are no volunteers willing to provide their services for free, and you will end up with a network which consists only of weak clients, unable to access the full history anymore.
but that's not a reason to complain about the UTXO set size, if there's a technological problem or algorithmic issue about how the UTXO set is stored, processed and used by software then it's a software problem
But the problem is that we have many software problems. And sometimes there are more issues than people willing to fix them. And then, guess what you get: the status quo. The "nothing will be changed" assumption, the crashing clients, and nobody willing to clean up that mess. Just like with Ordinals: there were not enough people willing to fix the problem, it became worse over time, and we reached the "status quo" in that matter.
So, to sum up: you don't want to reach the "status quo" when it comes to the UTXO flood. Because then you may enter a time when you need coding skills to use the network properly, just as is the case with some altcoins, when their creators lack the skills and competence needed to maintain them.
i wouldn't want a transaction of mine to not be in the UTXO set
Even if it would be cheaper, and the coin flow would match exactly what you broadcast to the network? And even if it were possible to prove that your transaction was present in the chain, and to see its exact location?
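Such a proof would just be an SPV-style Merkle branch from the transaction to the block header, and it stays small: for a block with n transactions you need about log2(n) hashes of 32 bytes each. A worked example:

```
n          = 4096 transactions in a block
branch     = log2(4096) = 12 hashes
proof size = 12 * 32 bytes = 384 bytes
```

So anyone holding the block headers can verify inclusion of your transaction for a few hundred bytes, without it occupying the UTXO set forever.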
but it really doesn't have a purpose other than to try and persuade people to not use the UTXO set for some of their transactions
Not really. If you follow the whole history of OP_RETURN, before it became what it is now, then you will understand that it is directly related to the "return" keyword in C++. And that keyword alone, when used in real C++ code, has a lot of use cases. A function with many "return something;" statements is perfectly valid, and this is what Satoshi wanted to achieve. The main problem was "OP_TRUE OP_RETURN", but there are other ways to fix it than making a given output invalid.
Also note that there is a reason why Taproot has OP_SUCCESS opcodes, and not "OP_TRUE OP_RETURN" instead, even though the true meaning of the "return" keyword is exactly that. Not to mention what can happen if you combine OP_CODESEPARATOR with a properly implemented OP_RETURN. And in Taproot you cannot have "OP_IF OP_SUCCESS OP_ENDIF", which could sometimes be useful; instead, such a script is always spendable, because the interpreter scans for OP_SUCCESS occurrences and makes the script immediately valid, even if that opcode could only be reached inside some condition.
Because I want my transaction always being stored by everyone.
On the other hand: do you want to always store everyone else's transactions? Because if not, then we are back to the free rider problem:
https://en.wikipedia.org/wiki/Free-rider_problem
A single transaction per block. that would be a Denial of Service attack.
Or, in other words: a forced soft-fork:
https://petertodd.org/2016/forced-soft-forks#radical-changes
But if his transaction sizes were limited in size such that they could only take up 80 bytes each, it would be alot harder for him to do.
Why? Each miner can simply decide to include only the coinbase transaction and nothing else. In that case, other users cannot get their transactions confirmed if the attacker has 51%, no matter how big the block is or what it contains, as long as it is fully controlled by the attacker, just like in signet.