tl;dr: I propose replacing the hard cap on the data size of a block (1MB, 20MB, or whatever it is) with an elastic one, where resistance to larger blocks accumulates progressively. Miners will be required to pay a superlinear penalty for large blocks, paid into a rollover fee pool. This will greatly increase Bitcoin's robustness to a situation where the block cap is approached, and allow a healthy fee market.

Background

These days there is heated debate about whether the block size limit of 1 MB should be relaxed. Many arguments are being discussed, but the most common one is the classic tradeoff:
Bigger blocks -> Harder to run a node -> Fewer nodes -> More centralization
Smaller blocks -> Fewer payments can be made directly as transactions on the blockchain -> More reliance on payment processing solutions involving 3rd parties -> More centralization
Most agree that there is some optimal value for this tradeoff, and the debate is about what it is, how to determine it, how to make decisions regarding it, etc. And most assume that if the blocks are too small, we'll get a "heads up" in the form of increasing tx fees.
However, Mike argues, and I tend to agree, that this is not how it will work. Rather, if the block limit is too small for the transaction demand, the Bitcoin network will crash. The gist of his argument, as I see it, is this: market forces should find a fee equilibrium where transaction supply matches demand, whatever they are. However, market forces don't react fast enough to fluctuations, creating backlogs that the software technically can't handle.
Mike also argues that since we don't know when we'll reach the limit - and once we do, we will have catastrophic failure without warning - we must hurry to raise the limit to remain safe. That part I take issue with: if Bitcoin can't degrade gracefully in the face of rising transaction volume, we will have problems no matter what the current block size limit is. We should instead focus on fixing that problem.
In this post I will introduce a protocol modification that might eliminate this problem. I will describe the suggestion and analyze how it can help. With it in place, we no longer run the risk of a crash landing, only rising fees - giving us an indication that something should be changed.
And then, we can go back to arguing what the block size should be, given the tradeoff above.
Rollover fee pool

This suggestion requires, as a prerequisite, the use of a rollover fee pool. I first introduced the concept here - that post is worth a read, but it used the concept for a different purpose than we have here.
The original idea was that fees for a transaction would not be paid to the single miner of the block that includes it - rather, fees would be paid into a pool, from which the funds would gradually be paid out to future miners. For example, in each block the miner could be entitled to 1% of the current funds in the pool (in addition to any other block rewards).
In the current suggestion, we will use such a pool that is paid out over time - but it will not be the users who put money into it. Transaction fees will be paid, as usual, to the miner who found the block.
Edit: Saying again for clarity - In the current proposal, fees from transactions will be paid to the miner of the current block instantly and in full. The miners can't gain anything by accepting tx fees out of band. The one paying into the rollover pool is the miner himself, as explained below.
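To make the bookkeeping concrete, here is a minimal Python sketch. The 1% per-block payout is just the illustrative figure from above, and the names (RolloverPool, claim_share, add_penalty) are my own invention - actual consensus code would track this as part of block validation, not as an object like this.

class RolloverPool:
    """Rollover fee pool: each block's miner claims a fixed fraction of the pool,
    and (in this proposal) pays his large-block penalty back into it."""

    PAYOUT_FRACTION = 0.01  # illustrative: 1% of the pool paid out per block

    def __init__(self) -> None:
        self.balance = 0.0

    def claim_share(self) -> float:
        # The current block's miner collects his share of the pool.
        share = self.balance * self.PAYOUT_FRACTION
        self.balance -= share
        return share

    def add_penalty(self, amount: float) -> None:
        # The penalty for an oversized block flows into the pool,
        # to be distributed among future miners.
        self.balance += amount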
Elastic block cap

The heart of the suggestion is - instead of forbidding large blocks, penalize them. The miner of a large block must pay a penalty that depends on the block's size. The penalty will be deducted from the funds he collects in the generation transaction, and paid into the rollover pool, to be distributed among future miners. If the penalty exceeds the miner's income of tx fees + minted coins + his share of the current rollover pool, the block is invalid.
This requires choosing a function f that returns the penalty for a given block size. There is great flexibility here, and little can go wrong if we choose a "wrong" function. The main requirements are that it is convex, and has significant curvature around the size we think blocks should be. My suggestion: choose a target block size T. Then for any given block size x, set f(x) = Max(x-T,0)^2 / (T*(2T-x)) (graph).
This will mean that there are no penalties for blocks up to size T. As the block size increases, there is a penalty for each additional transaction - negligible at first, but eventually sharply rising. Blocks bigger than 2T are forbidden.
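To illustrate, here is a rough Python sketch of the penalty function and the resulting validity rule. Block sizes are in bytes; f as written is a pure number, so the constant that would denominate the penalty in bitcoins is left out here, and all names are mine.

def penalty(block_size: float, target: float) -> float:
    """f(x) = Max(x-T,0)^2 / (T*(2T-x)); blocks of size 2T or more are never valid."""
    if block_size >= 2 * target:
        return float("inf")
    over = max(block_size - target, 0.0)
    return over * over / (target * (2 * target - block_size))

def block_is_valid(block_size: float, target: float,
                   tx_fees: float, minted: float, pool_share: float) -> bool:
    """The penalty is deducted from the generation transaction, so it may not exceed
    the miner's income: tx fees + minted coins + his share of the rollover pool."""
    return penalty(block_size, target) <= tx_fees + minted + pool_share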
Analysis

I assume we do want scarcity in the blockchain - this prevents useless transactions that bloat the blockchain and make it harder to run nodes, and incentivizes users to fund the Bitcoin infrastructure. A block size limit creates scarcity - but only if there ever really are situations where we reach the limit. But as things stand, reaching the limit creates technical malfunctions.
Mike calls the problem "crash landing", but to me it's more like hitting a brick wall. Transaction demand runs towards the limit with nothing slowing it down, until it painfully slams into it. One minute there is spare room in the blocks, and no reason to charge tx fees - and the next, there's no room, and we run into problems.
If more transactions enter the network than can be put in blocks, the queue will grow ever larger and can take hours to clear, meaning that transactions will take hours to confirm. Miners can use fees to signal to the market to ease up on new transactions, but the market will be too slow to react. First, because the software isn't optimized to receive this signal; and second, because transaction demand is inelastic in short time scales. If, over sustained periods of time, transaction fees are high, I will stop looking for places to pay with Bitcoin, I will sign up to a payment facilitation service, etc. But short-term, if I have a transaction I'm set on sending right now (e.g. a restaurant tab), I'll be willing to pay very high fees for it if I must. So fees are not effective in controlling the deluge of transactions.
Enter an elastic cap. When tx demand exceeds the target block size, a backlog doesn't accumulate - miners simply include the extra transactions in the block. They will start to require a bit of extra fees to compensate for the penalty, but can still clear the queue at the rate it is filling. If the incoming tx rate continues to increase, the marginal penalty miners have to pay per tx will increase, so fees will become higher - but since the process is more gradual, clients will be in a better position to understand what fee they need to pay for quick confirmation. The process will resemble climbing a hill rather than running into a brick wall. As we push the limits, the restoring force will grow stronger until we reach an equilibrium.
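To get a feel for that marginal penalty, here is a small illustration with numbers I picked arbitrarily (a target of T = 1,000,000 bytes and a 250-byte transaction): the marginal penalty - roughly the extra fee a miner must collect to break even on one more transaction - is negligible just above T and climbs steeply as the block approaches 2T.

T = 1_000_000   # illustrative target block size, in bytes
TX_SIZE = 250   # a typical transaction size, in bytes

def penalty(x: float) -> float:
    """f(x) = Max(x-T,0)^2 / (T*(2T-x))."""
    if x >= 2 * T:
        return float("inf")
    over = max(x - T, 0.0)
    return over * over / (T * (2 * T - x))

# Marginal penalty for one additional transaction at various fill levels.
for fill in (1.0, 1.1, 1.5, 1.9, 1.99):
    x = fill * T
    extra = penalty(x + TX_SIZE) - penalty(x)
    print(f"block at {fill:.2f}*T: one more tx adds ~ {extra:.2e} in penalty")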
On longer time scales, the market will have an understanding of the typical fees, and make decisions accordingly. It can also analyze times of the day/week when fees are lower, and use those for any non-time-sensitive transactions.
With a hard cap, the queue of transactions can only clear at a specific rate. Below this rate there is no fee tension, and above it there is instability. With an elastic cap, a longer queue causes transaction fees to be higher, encouraging miners to include more transactions per block, increasing the speed at which the queue clears. This is a stable equilibrium that prevents the queue from growing unboundedly, while always maintaining some fee tension.
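As a sanity check of this claim, here is a toy simulation under assumptions that are entirely mine and not part of the proposal: 1.1 MB of new transactions arrive per block, users bid a fee per byte that rises with the backlog, and a greedy miner keeps adding bytes as long as the offered fee covers the marginal penalty. With a hard cap of 1 MB the backlog grows without bound; with the elastic cap the miner absorbs the excess and the queue stays short.

T = 1_000_000            # target block size (bytes)
HARD_CAP = T             # hard cap used for comparison
DEMAND = 1_100_000       # bytes of new transactions arriving per block (10% over T)
CHUNK = 10_000           # granularity at which the miner fills the block

def penalty(x: float) -> float:
    """f(x) = Max(x-T,0)^2 / (T*(2T-x))."""
    if x >= 2 * T:
        return float("inf")
    over = max(x - T, 0.0)
    return over * over / (T * (2 * T - x))

def fee_per_byte(backlog: float) -> float:
    # Crude demand model: users bid higher fees as the backlog grows.
    return 1e-9 * backlog

def elastic_block_size(backlog: float) -> float:
    # Greedy miner: keep adding bytes while the offered fee covers the marginal penalty.
    size, rate = 0.0, fee_per_byte(backlog)
    while size + CHUNK <= min(backlog, 2 * T):
        if rate * CHUNK < penalty(size + CHUNK) - penalty(size):
            break
        size += CHUNK
    return size

backlog_hard = backlog_elastic = 0.0
for block in range(1, 21):
    backlog_hard = max(backlog_hard + DEMAND - HARD_CAP, 0.0)
    backlog_elastic += DEMAND
    backlog_elastic -= elastic_block_size(backlog_elastic)
    print(f"block {block:2d}: hard-cap backlog {backlog_hard:>9,.0f}  "
          f"elastic-cap backlog {backlog_elastic:>9,.0f}")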
Incidentally, the current system is a special case of this suggestion, with f(x) = If(x <= 1MB, 0, infinity) - no penalty up to the limit, and an infinite penalty beyond it.
The way forward

I believe there is very little downside, if any, to implementing this method. It can prove useful even if the crash landing scenario isn't as grave as Mike suggests. And the primitives will be handy for other possible protocol improvements. I believe it is an essentially simple and fully fleshed-out idea. As such, I hope it can be accepted without much controversy.
It is however a major change which may require some significant coding. When discussing this idea with Gavin, he explained he'd be in a better position to evaluate it if given working, testable code. Writing code isn't my thing, though. If anyone is interested in implementing this, I'll be happy to provide support.
Related work

A similar system exists in CryptoNote, see e.g. section 6.2.3 of https://cryptonote.org/whitepaper.pdf.
Greg Maxwell proposed a similar system with variable mining effort instead of a reward penalty - see e.g. http://sourceforge.net/p/bitcoin/mailman/message/34100485/ for discussion.
This proposal is not directly related to floating block limit methods, where the limit parameters are determined automatically based on recent history of transaction demand.