OP, weren't you vehemently against raising the limit a few years ago? I think I remember a lot of intellectual technical discussion around here involving you, gmaxwell and others regarding this matter. Around v0.0.9?
I don't recall that being the case but I could be wrong (on other issues my opinion has changed over time). I do recall around that time (I assume you mean v0.9.0 not v0.0.9) the backlog of unconfirmed transactions was growing. The reason is that while the soft limit had been raised, the default value hard-coded in bitcoind was 250KB and most miners never changed it. The only miners targeting a different size were targeting smaller blocks, not larger ones. Even as the backlog grew they felt no urgency in changing it. Some uninformed people advocated raising the 1MB cap as a way to "fix" the problem. I tried (and mostly failed) to explain that miners could already make blocks at least 4x larger and were opting not to. Changing the max only allows larger blocks if miners decide to make them. The developers ended up forcing the issue by first raising the default block size so that it didn't remain fixed at 250KB and then removing the default size altogether. Today bitcoind requires you to set an explicit block size. If you don't set one you can't mine.
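For illustration: in the bitcoind of that era the target was the blockmaxsize option (in bytes), set in bitcoin.conf or on the command line. Something like the following would have told a miner to build blocks up to 750KB instead of the old 250KB default (option name per the clients of that period; check your version's help output):

    # target block size in bytes (old hard-coded default was 250,000)
    blockmaxsize=750000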
I am personally very much against hard forks of this sort. However, I am all for a new crypto with new parameters that addresses where a previous crypto has lagged behind. From my point of view a hard fork, pushed with a lot of publicity so that everyone adheres and "updates" to keep up, is simply a way for a few to control the many. Whether the reason is good, bad, or whatever, it really breaks the original principles of decentralization and fairness.
I have to disagree. Bitcoin is a protocol and protocols change over time. Constantly reinventing the wheel means a lot of wasted effort. You end up with dozens of half-used systems instead of one powerful one. Look at TCP/IP or even HTML. Yeah, they have a lot of flaws, and they have a lot of early assumptions baked in which produce inefficiency. Evolution is also hard. Look at the mess of browser standards or how long the migration to IPv6 has taken. Despite the problems, the world is better for having widely deployed protocols rather than constantly starting over.
In the crypto space, however, 'starting over' means a chance at catching lightning in a bottle and becoming insanely rich. That has led to a lot of attempts but not much has come from them so far. Alternates are an option but they shouldn't be the first option. The first option should be evolution, but if a proposal fails and a developer strongly believes that in the long run the network can't be successful without it, then it should be explored in an alternate system. There are some things about Bitcoin which may be impossible to change, and for those an alternate might be the only choice. The block size isn't one of them.
Jumping specifically to the block size: the original client had no* block size limit. The fork occurred when the size was capped at 1MB. The 1MB figure has in some circles become a holy text, but there isn't anything to indicate it had any particular significance at the time. It was a crude way to limit the damage caused by spam or a malicious attacker. It is important to understand that the cap doesn't even stop spam (either malicious or just wasteful). The cost of mining, the dust limit, and the min fee to relay low-priority txns are what reduced spam by making it less economical.
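For a rough sense of how those economics work, here is a sketch in Python. The thresholds are the commonly cited reference-client defaults of that era and it ignores the priority exemption, so treat it as illustrative rather than authoritative:

    # Illustrative anti-spam checks; thresholds are era defaults, not gospel.
    DUST_THRESHOLD_SATS = 546          # outputs below this are non-standard
    MIN_RELAY_FEE_SATS_PER_KB = 1000   # min fee to relay low-priority txns

    def looks_relayable(output_values_sats, tx_size_bytes, fee_sats):
        # Dust outputs cost more to spend than they are worth, so nodes
        # refuse to relay transactions that create them.
        if any(v < DUST_THRESHOLD_SATS for v in output_values_sats):
            return False
        # The fee must meet the minimum relay rate for the txn's size.
        if fee_sats < MIN_RELAY_FEE_SATS_PER_KB * tx_size_bytes / 1000:
            return False
        return True

Spam still pays these costs on every transaction, which is what makes flooding the network uneconomical; the 1MB cap never entered into it.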
The cap still allowed the potential for abuse but the scope of that abuse was limited. If 10% of the early blocks had been maxed out it would still have added only about 5GB per year to the blockchain size. That would have been bad, but it would have been survivable, and it would have required the consent of 10% of the hashrate. Without the cap a single malicious entity could have increased the cost of joining the network by far more. Imagine if, back before you even knew about Bitcoin, when the only client was the full node, joining the network had required downloading 100GB or 500GB. Would Bitcoin even be here today if that had happened?
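The arithmetic behind that 5GB figure is easy to check:

    # one block per ~10 minutes -> 6 * 24 * 365 = 52,560 blocks per year
    blocks_per_year = 6 * 24 * 365
    extra_gb = blocks_per_year * 0.10 * 1.0 / 1000   # 10% of blocks at the 1MB cap
    print(extra_gb)   # ~5.3 GB of additional chain per year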
Satoshi stated in the white paper that consensus rules can be changed if needed. He also directly stated that the block limit was temporary and could be increased in the future, phased in at a higher block height. Now I am not saying everything Satoshi said is right, but I have to disagree that the 'original principles of decentralization and fairness' preclude changes to the protocol. Some people today may believe that any change violates the original principles, but that was never stated as an original principle.
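If I recall correctly, the example Satoshi posted was along these lines (paraphrasing his forum post, not actual shipped code):

    if (blocknumber > 115000)
        maxblocksize = largerlimit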
The protocol has always been changeable by an 'economic majority', and likewise that majority can't prevent a minority from continuing to operate the unmodified protocol. It is impossible to design an open protocol which can't be changed; however, as a practical matter, a fork (any fork) will only be successful if a supermajority of users, miners, developers, companies, service providers, etc. support the change.
There are four universal truths about open source peer to peer networks:
a) It is not possible to prevent a change to the protocol.
b) A change in the protocol will lead to two mutually incompatible networks.
c) These parallel networks will continue to exist until one network is abandoned completely.
d) There is no mechanism to force users to switch networks, so migration is only possible through voluntary action.
There is a concept called the tyranny of the minority. It isn't possible for a protocol to block any change lacking the explicit approval of every single user, but even if it were, the result would not be an anti-fragile system. A bank could purchase a single satoshi, hang on to it, and use it to veto any improvement to the protocol, ensuring the protocol eventually fails. The earliest fork fixed a bug which allowed the creation of billions of coins, and there is no evidence even it had universal support: the person who exploited the bug to create those coins saw the new version erase coins that the prior network had declared valid. Still, a fork is best when either a negligible number of people support it or a negligible number oppose it. The worst-case scenario would be a roughly 50-50 split with both forks continuing to co-exist.
The original client did have a 33.5MB constraint on message length. It could be viewed as an implicit limit on block size, as the current protocol transmits complete blocks as a single message. There is nothing to indicate that Satoshi either intended this to be permanent or that it was a foundational part of the protocol. It is a simplistic constraint that prevents an attack where a malicious or buggy client sends nodes an incredibly long message which must be received in full before it can be processed and found invalid. Imagine your client having to download an 80TB message before it could determine that the message was invalid, and then doing that a dozen times before banning the node. Sanity checks are always a good idea for removing edge cases.
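In pseudo-Python the check is trivial, which is the point. The names here are illustrative, not the actual bitcoind code:

    MAX_SIZE = 0x02000000   # 33,554,432 bytes, the ~33.5MB message cap

    def begin_receiving(header):
        # The announced payload length is validated up front, so an absurd
        # message is rejected before any of the payload is downloaded,
        # not after gigabytes of garbage have been received and parsed.
        if header.payload_length > MAX_SIZE:
            raise ValueError("oversized message; disconnect the peer")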