Scaling is directly related to compression. If you can use the same resources to achieve more goals, then that thing is "scalable". So, if the block size is 1 MB, and your "scaling" is just "let's increase it to 4 MB", then it is not scaling anymore. It is just pure, linear growth. You increase the numbers four times, so you can now handle 4x more traffic. But it is not scaling. Not at all.
Scaling is about resources. If you can handle 16x more traffic with only 4x bigger blocks, then it is somewhat scalable. But we can go even further: if you can handle 100x, or even 1000x more traffic with only 4x bigger blocks, then the scalability is even better.
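To make that ratio concrete, here is a tiny sketch (all multipliers are made up, just to show the point):

```python
# Toy illustration: "scaling" means traffic grows faster than the resources behind it.
# The numbers are hypothetical.

def scaling_factor(traffic_multiplier: float, resource_multiplier: float) -> float:
    """How much more traffic is handled per unit of extra resources."""
    return traffic_multiplier / resource_multiplier

print(scaling_factor(4, 4))     # 1.0   -> pure linear growth, not scaling
print(scaling_factor(16, 4))    # 4.0   -> somewhat scalable
print(scaling_factor(1000, 4))  # 250.0 -> much better scalability
```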
I think you support a small block size, because I've read your posts on the bitcoin-dev mailing list, and you're all small blockers there, IIRC.
Yes, for example here:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019988.html

As you can see, I wrote about commitments a long time ago, and today I still think they are better than legacy OP_RETURNs. But instead of creating a separate output, they can be moved into the R-value of the signature, which is even better, because then they can be used with every address type.
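For those who haven't seen the trick: one known way to do it is "sign-to-contract", where the signing nonce is tweaked by a hash of the committed data, so the commitment ends up inside the R-value and no extra output is needed. The sketch below is only a toy illustration of that idea (pure-Python secp256k1, simplified deterministic nonce, no production hardening), not the exact construction from that mailing list post:

```python
import hashlib

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(A, B):
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % P == 0:
        return None  # point at infinity
    if A == B:
        lam = (3 * A[0] * A[0]) * pow(2 * A[1], -1, P) % P
    else:
        lam = (B[1] - A[1]) * pow(B[0] - A[0], -1, P) % P
    x = (lam * lam - A[0] - B[0]) % P
    return (x, (lam * (A[0] - x) - A[1]) % P)

def ec_mul(k, A):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

def h_int(*parts):
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def sign_to_contract(priv, msg_hash, data):
    # Simplified deterministic nonce (NOT RFC 6979, demo only).
    k = h_int(priv.to_bytes(32, "big"), msg_hash) % N
    Rk = ec_mul(k, G)
    # Tweak the nonce with a commitment to `data`; the commitment lands in the R-value.
    t = h_int(Rk[0].to_bytes(32, "big"), data) % N
    k2 = (k + t) % N
    R = ec_mul(k2, G)
    r = R[0] % N
    s = pow(k2, -1, N) * (int.from_bytes(msg_hash, "big") + r * priv) % N
    # (r, s) is an ordinary ECDSA signature; revealing (Rk, data) later lets anyone
    # recompute t, add t*G to Rk, and check it matches the r in the signature.
    return (r, s), Rk

priv = 0x1111111111111111111111111111111111111111111111111111111111111111
sig, opening = sign_to_contract(priv, hashlib.sha256(b"tx to sign").digest(), b"committed data")
print(sig)
```

Because nothing extra is stored on-chain, this works the same regardless of the address type that ends up holding the coins.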
I highly respect the endless hours you've spent discussing things on that mailing list. I think you can enlighten us.
Well, everyone can post on the mailing list. The main difference is that you have to wait some time for publication: your post is not visible immediately, but is first read by a human, and then manually accepted. But besides that, it is similar to forums, and it doesn't matter that much, because many times I write posts and keep them on disk, where they sit unpublished, waiting for future input. If after some days or weeks they are still good enough to be published, then they are released.
By the way, in the current queue, I have some notes about RIPEMD-160 (and how it is related to SHA-1 and other hash functions), but I have to implement it from scratch to write about it properly. And I guess Garlo Nicon is thinking about DLEQ, for example in the context of secp160k1. But all of that is work in progress.
And it's worth mentioning that the usage of CPU instructions (such as SSE2) also somewhat allows more scaling.
In general, yes, but it depends on how things are internally wired. A lot of optimizations are based on parallelism, and if you have to do some sequential hashing, then it hurts all full nodes equally. Which means that if some transaction is complex, then it is a bottleneck not only for mining pools, but also for non-mining nodes, used to propagate transactions in the network.
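A rough illustration of the difference (hashlib only, the chunk sizes and data are arbitrary):

```python
import hashlib

chunks = [bytes([i]) * 1_000_000 for i in range(8)]  # stand-in for transaction data

# Parallel-friendly: each chunk is hashed independently, so more cores (or SIMD-style
# batched implementations) can genuinely speed this up.
independent_digests = [hashlib.sha256(c).digest() for c in chunks]

# Sequential: every step needs the previous digest as input, so extra parallel hardware
# does not help; every full node, mining or not, pays the same cost.
state = b"\x00" * 32
for c in chunks:
    state = hashlib.sha256(state + c).digest()
```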
A good video about x86 internals, and why we are doomed to stick with some problems for years, even if we switch to another architecture:
https://www.youtube.com/watch?v=xCBrtopAG80 (by the way, I expect that sooner or later, Script will also be split into smaller parts, like micro-ops, and it will be possible to deal with them more directly than today, so maybe the cost would no longer be measured in "satoshis per byte").
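Purely as a speculative sketch of what "cost per micro-op instead of per byte" could mean, here is a toy tally (every name, decomposition, and weight below is invented):

```python
# Hypothetical: price a script by the micro-ops it expands to, not by its serialized size.
MICRO_OP_COST = {
    "stack_push": 1,
    "stack_pop": 1,
    "hash_round": 4,
    "sig_verify": 50,
}

# An imaginary decomposition of a simple pay-to-pubkey-hash style script.
script_as_micro_ops = (
    ["stack_push"]                      # push the public key
    + ["hash_round"] * 2                # hash it
    + ["stack_pop", "stack_push"]       # replace it with the digest
    + ["stack_pop"] * 2                 # compare against the expected hash
    + ["sig_verify"]                    # check the signature
)

total_cost = sum(MICRO_OP_COST[op] for op in script_as_micro_ops)
print(total_cost)  # fee would be quoted in these cost units, not satoshis per byte
```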