Let's be real, 1GB blocks can't be compressed down to 1 MB and that's the problem.
You cannot do that in the general case, for all kinds of transactions. But for specific cases? It is perfectly doable in practice. Of course, compressing 1 GB of Ordinals would be hard, if you want to preserve that data and put everything on-chain. But compressing 1 GB of regular transactions, based on public keys alone? That is definitely possible. Why?
1. Public keys can be combined. Schnorr signatures show that you can have R=R1+R2. In general, if you can express something as an R-value, you can combine it by summing. So instead of 100 different signatures, you can have a single signature that is equivalent to a 100-of-100 multisig on the given data (a sketch of this follows the list below).
2. Transactions in the middle can be batched, skipped, or otherwise summed up, without ever landing in a block. With the Lightning Network, you have one transaction to open a channel and one transaction to close it, and in between there can be 1 GB, or even 1 TB, of transaction traffic that will never be visible on-chain once the channel is closed.
3. Transactions floating around in mempools can be batched by using full-RBF. If you have Alice->Bob->Charlie->...->Zack transactions in mempools, and together they take 1 GB, then guess what: if you put a single Alice->Zack transaction in the final block, it could take 1 MB instead, as long as the final transaction is picked properly after batching, and users are protected from double-spends. The whole state of the mempool could even be committed to in every block, long before it is batched and confirmed on-chain. That would protect transactions from double-spending, as long as nodes handled it properly and miners respected those rules.
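To make point 1 concrete, here is a minimal sketch of that linearity in Python, over a toy multiplicative group rather than real secp256k1, and without the rogue-key protections that protocols like MuSig add; the only point is that nonces and keys combine additively, so one signature can stand in for many:

```python
# Toy Schnorr-style aggregation over a multiplicative group mod p,
# only to illustrate the linearity described above (R = R1 + R2).
# This is NOT secp256k1 and NOT MuSig: naive key aggregation like this
# is vulnerable to rogue-key attacks; real protocols add key tweaking.
import hashlib, random

p = 2**255 - 19          # a large prime, used as an illustrative group modulus
q = p - 1                # exponents live mod (p - 1)
g = 2                    # base element; fine for a toy example

def H(*parts) -> int:
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)                    # (secret, public)

msg = "batched transaction data"
signers = [keygen() for _ in range(100)]      # 100 co-signers

# Each signer contributes a nonce; the R-values combine by multiplication
# (the group version of "R = R1 + R2" on an elliptic curve).
nonces = [random.randrange(1, q) for _ in signers]
R = 1
for r in nonces:
    R = (R * pow(g, r, p)) % p

P = 1                                         # aggregated public key
for _, pub in signers:
    P = (P * pub) % p

e = H(R, P, msg)                              # shared challenge
s = sum(r + e * x for r, (x, _) in zip(nonces, signers)) % q

# One signature (R, s) now verifies against the single aggregated key P,
# standing in for a 100-of-100 multisig on the same data.
assert pow(g, s, p) == (R * pow(P, e, p)) % p
print("aggregated signature verifies")
```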
Also, I think people should see "the effective UTXO set diff" instead of just the size of the transactions sitting in mempools. People should not see "oh no, our mempool contains 1 GB of transactions waiting for confirmation"; they should see "our mempool contains N inputs and M outputs to be confirmed". Everything in between can be batched and stripped, and in many cases all you need is the list of confirmed inputs being spent and new outputs being created. If you have Alice->Bob->Charlie->...->Zack transactions, you don't have to include the Bob->Charlie transaction in a new block if the Alice->Zack transaction is included. All that matters is whether the last user gets the proper amount of coins, and whether any double-spending happens in the middle. Five years later, nobody cares whether the Bob->Charlie transaction was there or not.
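As a rough sketch of that "effective diff" view (using a hypothetical Tx record with plain string outpoints, not real transaction parsing), collapsing a mempool chain down to the net set of inputs spent and outputs created could look like this:

```python
# Sketch: compute the net "UTXO set diff" of a mempool, so a chain of
# payments Alice->Bob->...->Zack collapses to what actually changes
# on-chain. Tx is a hypothetical stand-in for real transaction data.
from dataclasses import dataclass

@dataclass
class Tx:
    inputs: set[str]    # outpoints being spent
    outputs: set[str]   # new outpoints being created

def effective_diff(mempool: list[Tx]) -> tuple[set[str], set[str]]:
    spent, created = set(), set()
    for tx in mempool:
        for i in tx.inputs:
            if i in created:
                created.remove(i)   # spends an output created in the mempool
            else:
                spent.add(i)        # spends an already-confirmed output
        created |= tx.outputs
    return spent, created

# Alice pays Bob, Bob pays Charlie, Charlie pays Zack:
mempool = [
    Tx({"alice:0"}, {"bob:0"}),
    Tx({"bob:0"}, {"charlie:0"}),
    Tx({"charlie:0"}, {"zack:0"}),
]
spent, created = effective_diff(mempool)
print(spent, created)   # {'alice:0'} {'zack:0'} -- only Alice->Zack matters
```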
Also, since the number of Bitcoin users and the number of Bitcoin transactions keep increasing, and computer hardware keeps getting better and cheaper, there is absolutely no argument to say that an increase in block size is either bad or unaffordable.
Initial Blockchain Download takes more and more time. Today, fully synchronizing the chain can take a week. It used to take a few days, and in the old times you needed only a few hours. And you have to download everything, even if you use pruning: downloading everything once is still required, and it takes a lot of time. That is only getting worse.
Which means, it doesn't matter that you have to download 500 GB. Validation time is what matters most. If you can download 1 GB in 10 seconds, then 500 GB takes about 5,000 seconds. Downloading is not the problem; validation is. One week of validation for 500 GB means roughly 1 MB of data validated every second, and that is too slow. It doesn't matter that after a few hours you have the full chain on disk if you then spend many days validating it. And if you increase the block size to some huge value, then guess what: if your validation speed is around 1 MB per second, you can validate about 600 MB per 10 minutes. Which means, if blocks are 1 GB every 10 minutes, you will never finish validating, even with everything fully downloaded on your local disk.
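A quick back-of-the-envelope calculation with the numbers above (the 1 GB per 10 seconds of download speed and the one-week validation time are the assumptions taken from this paragraph) shows where the bottleneck sits:

```python
# Back-of-the-envelope: downloading vs. validating a 500 GB chain.
chain_gb = 500
download_gb_per_s = 1 / 10                 # assumed: 1 GB per 10 seconds
print(f"download: {chain_gb / download_gb_per_s:,.0f} s")        # 5,000 s

week_s = 7 * 24 * 3600
validation_mb_per_s = chain_gb * 1000 / week_s   # assumed: one week to validate
print(f"validation speed: ~{validation_mb_per_s:.2f} MB/s")      # ~0.83 MB/s

# At roughly 1 MB/s you validate only about 500-600 MB per 10-minute block
# interval, so 1 GB blocks arriving every 10 minutes can never be caught up.
print(f"validated per block interval: ~{validation_mb_per_s * 600:.0f} MB")
```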
If anyone thinks that high fees are okay and people will pay for it, then that person is very wrong. High Bitcoin fees will only make altcoins a better choice and make them more popular.
Two things:
1. On-chain fees will eventually be high, because when the block reward is zero, fees will be the only reward.
2. It can still be cheap for users, if their low-fee transactions are batched into high-fee transactions before being stored on-chain. If a 500-satoshi fee is too expensive, imagine 500 users each paying one satoshi and having their transactions batched into a single on-chain transaction (see the sketch below).
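The arithmetic behind point 2 is trivial, but worth spelling out (the 500-user batch is a hypothetical example):

```python
# Fee amortization: 500 users batched into a single on-chain transaction
# that pays a 500-satoshi fee means each user effectively pays 1 satoshi.
total_fee_sats = 500
users_in_batch = 500
fee_per_user = total_fee_sats / users_in_batch
print(f"{fee_per_user:.0f} satoshi per user")   # 1 satoshi per user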
A network of 10,000 independently running nodes is fully decentralized if each node has an equal Proof-of-Work vote (i.e., 1 CPU = 1 vote).
This is never the case, because users have different hardware. Also, their wealth is never equal; there are rich and poor users. Trying to reach a situation where every user has the same CPU power is like trying to reach a situation where every user owns the same amount of satoshis. It will simply never happen.
And even if you do have "1 CPU = 1 vote", Alice could still own 5 CPUs while Bob owns only 3. Their voting power will never be equal.
So, there appears to be an extra parameter: the subset of that group which produces the votes.
If that were the case, then BCH would have been "the true Bitcoin" for a short moment, when it had the higher hashrate. But having enough hashrate is not everything, because you also have to produce blocks in the proper format, accepted by other full nodes.
Surprisingly, Merged Mining could solve that issue in case of such an attack. If we always traced the strongest chain of SHA-256 headers and distributed coinbase rewards accordingly, then a network with only 10% of the global SHA-256 hashrate would pay its miners only 10% of the coinbase amount, with the rest burned or timelocked into the future. The attackers would not receive more coins on the real network supported by users, but the attack would be noticed, and the coinbase amount would react to it properly.
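A sketch of that reward rule, with a hypothetical coinbase_split helper and purely illustrative subsidy and hashrate numbers; the miner collects only the fraction of the coinbase matching the network's share of global SHA-256 hashrate, and the rest is withheld:

```python
# Sketch of the merged-mining reward rule: a network seeing only a fraction
# of the global SHA-256 hashrate pays out only that fraction of the coinbase;
# the remainder is burned or timelocked into the future.
def coinbase_split(full_subsidy_sats: int, local_hashrate: float,
                   global_hashrate: float) -> tuple[int, int]:
    share = local_hashrate / global_hashrate
    paid_now = int(full_subsidy_sats * share)
    withheld = full_subsidy_sats - paid_now    # burned or timelocked
    return paid_now, withheld

# Illustrative numbers: 3.125 BTC subsidy, 10% of global SHA-256 hashrate.
paid, withheld = coinbase_split(312_500_000, local_hashrate=70e18,
                                global_hashrate=700e18)
print(paid, withheld)    # 31250000 paid now, 281250000 withheld
```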
If we compare a network wherein a person owns 90% of the total coins in circulation from the beginning versus a network wherein a person releases the cryptocurrency without exploiting the financial advantage, we can sense there is an orders-of-magnitude difference in centralization.
See? Hashrate is not everything. Even with 51%, under Proof of Work (unlike Proof of Stake) you still cannot get those coins unless you trigger a huge chain reorganization. And in that case:
1. It will be noticed by every full node. Some pruned nodes will even have to re-download the chain, which would bring a lot of attention and alert a lot of people.
2. It will require a huge amount of Proof of Work that could instead be spent producing new coins on top of the chain.
I would not be so quick to call ordinals the issue.
They are, because you cannot compress them that easily. Public-key-based transactions can be joined. Data-based transactions require revealing the data itself, and that is much harder to compress, because you cannot "just add the keys and produce a Schnorr signature out of that".
and LN is not the best answer
Of course, sidechains could potentially be better, if they were decentralized and had a peg as strong as LN's. But some sidechain proposals were rejected, which means new ones should be made, maybe even no-fork based, if soft-fork ones are not accepted.
Basically, attacking ordinals is trying to prevent murder by restricting guns. People will use other weapons to do it.
Of course. Those "other weapons" could mean "UTXO flood", and for that reason, you don't have any code censoring Ordinals in the official Bitcoin Core release.