Do you have a citation for that?
https://lightning.network/lightning-network-paper.pdf I have not seen such a thing, although I admit that I have not followed LN development as closely as I should; I have mostly been checking in on it from time to time, waiting for a few particular parts of it to mature.
We've all been waiting for that since Bitcoin's blocks became full in 2016
By the way, what use cases would you like to see? If we get to the point of high-value transactions and upper-layer settlements on the blockchain, small-value transactions off-chain, and RGB/Spectrum doing smart contracts, tokens, NFTs, DEXes, etc., then what more is needed? I am sincerely curious. On this point, off the top of my head, all that I can think of are some smart contract edge cases where the contract code must be available for execution without the parties being online; that’s not a case for bigger blocks, but rather for one of the upstart competitors to ETH2.
Well, some of those. Others of those I think are just fads and will die out naturally. Satoshi himself also mentioned several options that were very interesting. I will grant that there is a genuine concern that unbounded block sizes could lead to bloated blocks. I don't think that's as likely a problem as you probably do, but it is one that should be addressed (note, however, that most people looking for bigger blocks would have been happy with *incremental* increases; probably more on that later). I am also not really that interested in complex ETH-style contracts and agree that ETH is fine pursuing that endgame (though doubtless some do want that in Bitcoin).

Note that high transaction fees have some quite serious security implications: they push people toward custodial wallets, add friction to mixers, and make routine wallet-management tasks expensive. They also tend to make dust too expensive to aggregate/spend (meaning UTXO bloat, something you have expressed concern about). And, of course, there are layer-2 solutions, which typically require a couple of on-chain transactions somewhere along the line.
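To make the dust point concrete, here is a minimal sketch. The ~68 vbytes a P2WPKH input adds to a transaction is a standard figure, but the feerates and the bare-bones threshold logic are illustrative assumptions, not any wallet's actual dust policy:

```python
# When does a UTXO cost more to spend than it is worth?
INPUT_VBYTES = 68  # approx. vbytes a P2WPKH input adds to a transaction

def cost_to_spend_sats(feerate_sat_per_vb: float) -> float:
    """Fee attributable to spending one additional input at a given feerate."""
    return feerate_sat_per_vb * INPUT_VBYTES

for feerate in (1, 10, 50, 200):  # sat/vB: quiet mempool vs. fee spikes
    threshold = cost_to_spend_sats(feerate)
    print(f"{feerate:>4} sat/vB: UTXOs below ~{threshold:,.0f} sats "
          f"cost more to spend than they are worth")
```

At 200 sat/vB that threshold is ~13,600 sats; every output stuck below it just sits in the UTXO set, which is the bloat mechanism mentioned above.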
As I noted in one of my earlier posts, Bitcoin transaction demand is high enough that I don’t even think a blocksize increase would meaningfully increase capacity. And demand only grows! How do you propose that, say, doubling an already very low TPS would make any difference, when we need orders of magnitude higher TPS to cover all of the small-value tx use cases under mass adoption? Bigger blocks wouldn’t get us to 10k TPS and up, unless you think that running a node should require a supercomputer hooked straight into an Internet backbone link. (Or unless you want to make far more radical changes to Bitcoin’s architecture than just increasing the blocksize.)
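For scale, here is the arithmetic behind that claim, using the usual round figures of ~250 vbytes per average transaction and a 600-second block interval (both are conventional estimates, not exact values):

```python
# Back-of-envelope: block size required to sustain a target TPS.
AVG_TX_VBYTES = 250      # rough average transaction size
BLOCK_INTERVAL_S = 600   # target block interval

def block_size_for_tps(tps: float) -> float:
    """Bytes per block needed for a given transactions-per-second rate."""
    return tps * AVG_TX_VBYTES * BLOCK_INTERVAL_S

print(block_size_for_tps(7))       # ~1.05e6 bytes: roughly today's capacity
print(block_size_for_tps(10_000))  # ~1.5e9 bytes: ~1.5 GB per block
```

That works out to roughly 216 GB of new chain data per day at 144 blocks/day, hence the supercomputer-on-a-backbone remark.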
I mean, sure it would (or at least a good part of the way there - I just saw you wrote 10k), just as every capacity increase in computing's past has improved things. We're no longer on acoustic couplers, and 10BASE-T is now just a few components in the bottom of my junk drawer. You mention Raspberry Pis, but Bitcoin was already two years old when the Raspberry Pi launched, and we're now up to the RPi 4, with commensurate increases in processor speed, storage, and communication speed. An order of magnitude (10MB) would not be out of line at this point, IMO. And RPis are a pretty bad baseline anyway. I've been throwing out MB/CPU combos that out-perform RPis. RPis have their place, but any application as a Bitcoin node is for novelty purposes only (and yes, I've done it. It was not a pleasant experience even in 2012).
(Hah, $281.33. Cute.)
I guess the question is, perhaps, what is the "right size" for the block size. Well, not for everyone. Some believe that changing the protocol in any way is a fork and thus not Bitcoin. There is, perhaps, some truth in that, though it should be noted that Bitcoin *has* already forked the protocol several times in the past, and that this ideal is one that CSW holds for his own shitcoin (so fine company there). But for those who are more open to discussion, it is one to be had. So let's start with "Why 1MB?". Satoshi apparently was concerned about spam, so he added a limit small enough to prevent horrible network disruption but large enough (approximately 100x the typical block size at the time) that it would not affect normal operation. He even suggested a way that it could be removed. Now, it's fashionable to disparage Satoshi's opinions when convenient, but then why should we give any special weight to his arbitrary choice of 1000000 bytes? As far as I know, only Luke-Jr has seriously suggested that the blocksize should be smaller.
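For reference, the mechanism Satoshi floated was a height-gated bump to the constant: nodes ship the larger limit well in advance, and it only takes effect past a flag height, giving everyone time to upgrade. A minimal sketch of that idea (Python here for readability; block 115000 was the example height from his post, while the 10MB figure is just the order-of-magnitude bump discussed above):

```python
# Sketch of the height-gated limit change Satoshi floated. Sizes are
# illustrative assumptions; this is the shape of the idea, not real code.
LEGACY_LIMIT = 1_000_000    # the original 1 MB cap
LARGER_LIMIT = 10_000_000   # hypothetical 10 MB cap, per the discussion above
FLAG_HEIGHT = 115_000       # the example activation height from Satoshi's post

def max_block_size(block_height: int) -> int:
    """Consensus block-size limit as a function of chain height."""
    return LARGER_LIMIT if block_height > FLAG_HEIGHT else LEGACY_LIMIT
```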
I feel we have a lot of common ground outside of this issue, so I think I can safely talk about the free market with you here. Fees buy space on the blockchain. Normally, in a free market, if demand for something goes up, rising prices encourage more supply of the thing in demand. The problem with the block size limit as it stands is that there is no way to increase that supply: block space is effectively provided by a mandatory cartel where supply is fixed. Can you imagine if Henry Ford had never been able to expand beyond his original factory? If, when GM came on board (I may have my chronology wrong there), he had been forced to split his production capacity with them? Can you imagine how expensive cars would be?

The non-monopoly price for a transaction would be the nominal cost to mine a fraction of a block and store the transaction (a piddling number of bytes), with a multiplier because miners often don't win any given block, plus a small amount as profit. Realistically, we're talking sub-10c. More than that indicates a market failure, and when we're up into tens or twenties of dollars, it's an indication that something is seriously amiss (if we're talking free markets). Now, it can be argued that this aspect of Bitcoin is not supposed to be free-market (after all, the 21 million cap isn't really either), but then let's not be talking about free market forces.
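To put a rough shape on that sub-10c claim, here is a deliberately crude back-of-envelope. Every input below is an assumption picked to show orders of magnitude, not a measurement:

```python
# Crude marginal-cost model for one transaction. All inputs are assumptions.
TX_BYTES = 250                 # typical transaction size
NODES_STORING = 10_000         # full nodes that each store a copy (assumed)
DISK_COST_PER_GB = 0.02        # USD per GB of disk (assumed)
ORPHAN_COST_PER_BYTE = 1e-4    # USD of added orphan risk per byte (assumed)
PROFIT_MARGIN = 1.5            # 50% markup for miner profit (assumed)

storage = TX_BYTES / 1e9 * DISK_COST_PER_GB * NODES_STORING
orphan_risk = TX_BYTES * ORPHAN_COST_PER_BYTE
print(f"~${(storage + orphan_risk) * PROFIT_MARGIN:.4f} per transaction")
```

Under these numbers the marginal cost comes out to a few cents, dominated by orphan risk rather than storage; the conclusion survives even if any of the inputs is off by a factor of a few.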