Moderation note: I am open to economic and technical arguments to tweak any aspect of this compromise proposal, but outright dismissal over concerns of game theory will be deleted on sight unless said concerns are accompanied by a suggestion to improve the proposal and overcome those concerns. Constructive feedback, not torpedoes, please.
A static limit is, in my considered opinion, shortsighted. It fails to address the concerns many still have about on-chain transactions becoming economically unviable.
Many argue that it's simply a matter of conspiracy "holding SegWit back". While that argument is not without reason, I'm not entirely convinced by it, and I feel that locking in guarantees about the blocksize would hasten the activation of SegWit. [//EDIT December 2017: Almost prescient given BIP91's initial success (despite failing on the second hurdle), heh. ]

I maintain that an algorithmic process based on real-time network traffic is far better in every way than the established "clumsy hack" mentality: picking an arbitrary whole number out of thin air, kicking the can down the road, waiting for "permission" from the developers and descending into the same stupid war each time the issue of throughput arises in the future. Plus, a one-time hardfork is far better than multiple hardforks every time we near another new and arbitrary static limit. And there are no violent swings in fee pressure this way. Everything is smooth, consistent and, for the most part, predictable, which is what we should all want Bitcoin to be.
The proposal gauges fee pressure in conjunction with traffic to determine whether an increase, a decrease or no change at all is required. Strong consideration has been given to limiting increases (and allowing decreases) so as not to reach a level where nodes would struggle with bandwidth usage. The condition on fees also helps prevent gaming the system. This latest iteration of the proposal, largely based on BIP106, includes an adjustment to the witness space to maintain the 1:3 ratio between base and witness. SegWit is a prerequisite for this to be activated:
IF more than 50% of the 2016 blocks in the last difficulty period are larger than 90% of MaxBlockSize
AND (TotalTxFeeInLastDifficulty > average(TotalTxFee over the last 8 difficulty periods))
THEN BaseMaxBlockSize = BaseMaxBlockSize + 0.01 MB
     WitnessMaxBlockSize = WitnessMaxBlockSize + 0.03 MB
ELSE IF more than 90% of the 2016 blocks in the last difficulty period are smaller than 50% of MaxBlockSize
THEN BaseMaxBlockSize = BaseMaxBlockSize - 0.01 MB
     WitnessMaxBlockSize = WitnessMaxBlockSize - 0.03 MB
ELSE
     Keep the same BaseMaxBlockSize and WitnessMaxBlockSize
(credit to Upal Chakraborty for the original concept in BIP106)
//EDIT: Cheers to d5000 for their proposed average fee adjustment
So, in plain English, a tiny 0.01 MB adjustment to the base blockweight can occur each difficulty period, with a proportionate 0.03 MB adjustment to the witness space to maintain the 1:3 ratio, but only if (a code sketch follows this list):
- SegWit is implemented
- Either blocks are sufficiently full to justify an increase to the blockweight, or sufficiently empty to justify reducing it
- More fees were generated in the latest difficulty period than the average over the last 8 periods; this condition applies only to increases, while the blockweight can be reduced regardless of fees, which deters gaming the system
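To make the rule concrete, here is a minimal sketch in Python of how a node might evaluate the adjustment at each retarget. It is illustrative only: the names (adjust_limits, period_sizes, prior_period_fees and so on) are my own, not Bitcoin Core identifiers, it assumes MaxBlockSize means the combined base-plus-witness limit, and it reads "the last 8 difficulty periods" as the 8 periods preceding the current one.

BLOCKS_PER_PERIOD = 2016
STEP_BASE = 0.01      # MB change to the base size per retarget
STEP_WITNESS = 0.03   # MB change to the witness space (preserves the 1:3 ratio)

def adjust_limits(base_max, witness_max, period_sizes, latest_fees, prior_period_fees):
    """Return the (base, witness) limits for the next difficulty period.

    period_sizes:      sizes in MB of the 2016 blocks just mined
    latest_fees:       total transaction fees collected in that period
    prior_period_fees: total fees for each of the previous 8 periods
    """
    max_block = base_max + witness_max  # assumed meaning of MaxBlockSize
    full = sum(1 for s in period_sizes if s > 0.9 * max_block)
    empty = sum(1 for s in period_sizes if s < 0.5 * max_block)
    avg_fees = sum(prior_period_fees) / len(prior_period_fees)

    # Increase only when most blocks are nearly full AND fee revenue beats
    # the 8-period average, so spamming the chain to force growth costs money.
    if full > 0.5 * len(period_sizes) and latest_fees > avg_fees:
        return base_max + STEP_BASE, witness_max + STEP_WITNESS

    # Decrease whenever blocks are overwhelmingly empty, regardless of fees.
    if empty > 0.9 * len(period_sizes):
        return base_max - STEP_BASE, witness_max - STEP_WITNESS

    return base_max, witness_max

For example, with a 1 MB base and 3 MB witness limit, a period where 1100 of the 2016 blocks exceed 3.6 MB while fees beat the trailing average would yield 1.01 MB / 3.03 MB for the next period.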
Mathematically, assuming an average block time of ~10 minutes, each difficulty period of 2016 blocks lasts roughly two weeks, so there are a maximum of ~104 difficulty adjustments over a 4-year period. Even if there were a 0.01 MB increase at every retarget (the chances of which are negligible), the base blockweight would still only be ~2.04 MB after 4 years.
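As a quick sanity check on those figures, here is the back-of-the-envelope arithmetic (assuming the current 1 MB base as the starting point):

periods_per_year = 365.25 * 24 * 60 / (2016 * 10)  # ~26 retargets per year
max_growth = 4 * periods_per_year * 0.01           # ~1.04 MB over 4 years
print(1.0 + max_growth)                            # ~2.04 MB base ceiling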
Is this a compromise most of us could get behind?