Having spent a long time reading all of "How a floating blocksize limit inevitably leads towards centralization" and related threads, I want to summarise the proposals for introducing adaptive maximum block size to the protocol.
For focus's sake, please only join in with this thread if you are willing to operate under these assumptions for the scope of the thread:
- in the long term, incentive to secure bitcoin's transaction log must come from transaction fees
- Bitcoin should have potential scalability for everyone on the planet to make regular use of it, but equilibria are required to ensure scaling happens at a manageable rate
I appreciate that not everyone shares those assumptions, but please keep this thread to discussion that accepts them for the sake of argument!
The idea of an adaptive algorithm has to be to use a negative feedback loop to achieve a desired equilibrium.
The canonical example of this in bitcoin is the way the difficulty of solving a block is related to the frequency of recently-found
blocks, producing an equilibrium around the interval of one block every ten minutes.
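To make the feedback loop concrete, here is a minimal sketch of how retargeting works (the constant and function names are mine; the real implementation works from block timestamps and also clamps the adjustment to a factor of four in either direction):

    # Minimal sketch of difficulty retargeting as a negative feedback loop.
    TARGET_SPACING = 600        # desired seconds per block (ten minutes)
    RETARGET_INTERVAL = 2016    # blocks between difficulty adjustments

    def retarget(old_difficulty, actual_timespan_seconds):
        expected = TARGET_SPACING * RETARGET_INTERVAL
        # Blocks found too fast -> difficulty rises; too slow -> it falls,
        # pulling the average interval back toward ten minutes.
        return old_difficulty * expected / actual_timespan_seconds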
In the rest of this post I evaluate several proposals based on the equilibrium they are attempting to establish, making my own thoughts on the matter clearer towards the end.
First up we have:
How about tying the maximum block size to mining difficulty?
[...]
This provoked a fair bit of discussion. The idea seems to be that miners will quit if it's no longer profitable for them to maintain
the full transaction log and mine. It is unclear what equilibrium objectives this approach has, and I find it difficult to say intuitively what equilibria, if any, this adaptation would achieve.
Next up we have:
[...]
Second half-baked thought:
One reasonable concern is that if there is no "block size pressure" transaction fees will not be high enough to pay for sufficient mining.
Here's an idea: Reject blocks larger than 1 megabyte that do not include a total reward (subsidy+fees) of at least 50 BTC per megabyte.
[...]
It should be clear that this does not scale very far. The cost is linear, and is predetermined per unit of data. By the time you've reached blocks of 8 MB, transactors are spending 100% of the monetary base per annum in order to have the transaction log maintained. The equilibrium is that block size is simply limited by the high cost of each transaction, but the equilibrium is not precisely statable in terms of desirable properties of the system as a whole.
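A back-of-envelope check of that claim (assuming ~six blocks an hour and the full ~21M BTC monetary base):

    fee_per_mb = 50                      # BTC per MB demanded by the rule
    blocks_per_year = 6 * 24 * 365       # ~52,560
    annual_fees = fee_per_mb * 8 * blocks_per_year
    # ~21,024,000 BTC: at 8 MB blocks the rule demands roughly the entire
    # monetary base in fees every year.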
We also have:
[...]
How about
*To increase max block size by n%, more than 50% of fee-paying transactions (must meet a minimum fee threshold to be counted) during the last difficulty window were not included in the next X blocks. Likewise we reduce the max block size by n% (down to a minimum of 1MB) whenever 100% of all paying transactions are included in the next X blocks.
[...]
The issue with this is how you set the minimum fee threshold. You could set it to a value that makes sense now, but if it turned out that bitcoin could scale really, really high, the minimum fee threshold itself would turn out to be too high, and it is not itself adaptive. This approach is going along the right lines, but it doesn't seem to stem from a quantitative, fundamental objective to do with the bitcoin network.
Also:
[...]
Here's yet another alternative scheme:
1) Block size adjustments happen at the same time that network difficulty adjusts
2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.
3) The block size is increased if more than 50% of the blocks in the previous interval have a size greater than or equal to 90% of the max block size. Both of the percentage thresholds are baked in.
[...]
This doesn't take the opportunity to link the size to transaction fees in any way, and seems vulnerable to spamming: a miner can pad its own blocks with free transactions at no cost, forcing the maximum size up.
Then there's:
Since this locks in a minimum fee per transaction MB, what about scaling it with the square of the fees.
For example, every 2016 blocks, it is updated to
sqrt(MAX_BLOCK_SIZE in MB) = median(fees + minting) / 50
[...]
Same objection as to the 50 BTC per MB proposal. It just doesn't scale very far before all the value is being eaten by fees.
This time we get to around 64 MiB per block before 100% of the monetary base is spent per annum on securing the transaction log. Again, it is unclear what the objectives are for any equilibrium created.
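The arithmetic, for the sceptical (same assumptions as the check above):

    from math import sqrt
    # sqrt(size in MB) = reward / 50, so reward = 50 * sqrt(size in MB)
    reward_per_block = 50 * sqrt(64)           # 400 BTC per block at 64 MiB
    annual = reward_per_block * 6 * 24 * 365   # ~21,024,000 BTC again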
Nearly finally, we have people suggesting that max block size doubles whenever block reward halves. No dynamic equilibrium is created by doing this, and it's pure hope that the resulting numbers might produce the right incentives for network participants.
It seems to me that the starting point should be "what percentage of the total monetary base should transactors pay each year, in order to secure the transaction log". This is a quantity about the system that has economic meaning and technical meaning within the protocol. It basically keeps its meaning as the system grows. Why transactors? Because in the long term holders pay nothing (the initial-distribution schedule winds down), and thus transactors pay everything. That seems immutable. Why per year? This makes it easy for humans to reason about. Annual amounts can be converted to per-block amounts once we're done setting the value.
Thus, this:
1) Block size adjustments happen at the same time that network difficulty adjusts (every 2016 blocks)
2) On a block size adjustment, the size is either increased or decreased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.
3) The block size is increased if more than 50% of the blocks in the previous interval have a sum of transaction fees greater than 50 BTC minus the block subsidy, and decreased otherwise (never below the 1 MiB floor). The 50 BTC constant and the threshold percentage are baked in.
is a pretty decent starting point. It allows unlimited growth of the maximum block size, but as soon as transaction fees, which are what secure the transaction log, dwindle below the threshold, the maximum block size shrinks again. Equilibrium around a desirable property of the system as a whole! Easily expressed as a precise quantitative statement ("long term, transactors should pay n % of the total monetary base per annum to those securing the transaction log, so long as the
max block size is above its floor value").
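As a sketch of what that rule might look like (a hypothetical outline under the constants above, not a tested implementation; the names are mine, and the shrink branch follows the commentary rather than the quoted point 2):

    # Sketch of the proposed adjustment, run once per difficulty interval.
    MIN_BLOCK_SIZE = 1_000_000           # the ~1 MB floor, in bytes
    TARGET_REWARD = 50                   # BTC of subsidy + fees per block
    STEP = 0.10                          # fixed adjustment percentage

    def adjust_max_block_size(max_size, interval_fees, subsidy):
        """interval_fees: total transaction fees (in BTC) of each block
        in the previous difficulty interval."""
        fee_threshold = TARGET_REWARD - subsidy
        well_paid = sum(1 for f in interval_fees if f > fee_threshold)
        if well_paid * 2 > len(interval_fees):
            max_size = int(max_size * (1 + STEP))  # fees support growth
        else:
            max_size = int(max_size * (1 - STEP))  # fees dwindling: shrink
        return max(max_size, MIN_BLOCK_SIZE)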
The exact proposal quoted above sets the amount at 6.25%-12.5% p.a., whereas I intuitively think it should be more like 3%. I would probably also just state it in terms of the mean transaction fees over the last N blocks, as that is more directly linked to the objective than whether half of the blocks are above a threshold. 3% p.a. would work out at a mean of 12 BTC per block, or about 0.0029 BTC per transaction (assuming transactions of roughly 250 bytes filling a 1 MiB block), to lift the block size off its 1 MiB floor. Seems about right.
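Checking those numbers (monetary base ~21M BTC, ~52,560 blocks/year, and assuming ~4,100 transactions of roughly 250 bytes fit in 1 MiB):

    annual_budget = 0.03 * 21_000_000            # 630,000 BTC per year
    per_block = annual_budget / (6 * 24 * 365)   # ~12 BTC per block
    per_tx = per_block / 4_100                   # ~0.0029 BTC per transaction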
My full reply and subsequent discussion with misterbigg are at
https://bitcointalksearch.org/topic/m.1546674 .
Hope that's a useful summary of all the adaptive algorithms proposed so far, even if you don't agree with the assumptions or
my conclusions.