
Topic: Elastic block cap with rollover penalties - page 7. (Read 24075 times)

legendary
Activity: 1484
Merit: 1005
Isn't it precisely what is implemented in Monero? (except you don't have a rollover pool, the penalty is simply deducted from the block reward for good).
No idea what happens in Monero, but if so, more power to them.

Yes. There is a quadratic penalty imposed for blocks above the median size (with a maximum size of 2 * median(size of last 400 blocks)), with a dynamic, elastic block sizing algorithm. Unlike Meni's suggestion, the reduction in block subsidy is not given to a pool but rather deferred to future miners because the subsidy algorithm is based around the number of coins.

See section 6.2.3 of the CryptoNote whitepaper: https://cryptonote.org/whitepaper.pdf
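
For anyone who wants to see the shape of that rule, here is a rough Python sketch of my reading of section 6.2.3 (the 400-block window is taken from the post above; the exact constants and subsidy interaction in Monero may differ):

Code:
from statistics import median

def cryptonote_block_reward(base_reward, block_size, recent_sizes):
    # median size of recent blocks (400-block window taken from the post above)
    m = median(recent_sizes[-400:])
    if block_size <= m:
        return base_reward                    # no penalty at or below the median
    if block_size > 2 * m:
        raise ValueError("block rejected: larger than 2 * median")
    # quadratic penalty; the deducted coins are not minted now, so the
    # emission schedule defers them to future miners
    penalty = base_reward * (block_size / m - 1) ** 2
    return base_reward - penalty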

It was one of the components most criticized by the Bitcoin Core developers, but so far operation on mainnet and testnet has not revealed any failures.

There is a major shortcoming I can see in the rollover fee pool suggestion: miners are incentivized to accept fees out of band so they can obtain all the highest fees instantly, thus defeating the entire purpose of that feature.
hero member
Activity: 907
Merit: 1003
It seems there are as many solutions to the block size problem as there are people in Bitcoin.

In my opinion the OP's suggestion seems a bit complicated, although it may be workable. But it is not an immediate solution. As he mentions it hasn't even been coded yet, and he is not a coder.

We are nearly out of time, folks. This is because it takes time for the new version of the software to get adopted by enough people. By Gavin's estimates 6-12 months. And this isn't counting the time to develop and test the new software.

I think the best solution (for sake of simplicity and time constraints) is to upgrade to 20mb blocks now. It's fast to code. It's simple to implement. And it buys us more time. It's not a complete solution in itself, because as we all know the 20mb blocks will eventually get maxed out.

So part 1 of the solution is to increase blocks to 20mb now. And part 2 is to afterward develop, test and implement other things such as Lightning Network, StrawPay (Stroem), side chains and whatever else gets designed. That way we may never need to touch the block-size again.

By doing it this way we have some time to develop these solutions into existence. If we had a fully running Lightning Network/Side Chains/Etc. currently, then this might be a different discussion. But right now they are just notes on paper. And notes on paper aren't going to do much good in 6-12 months when our 1mb blocks get filled.

The bottom line is that 1mb is not enough for anything to innovate on top of. 20mb is really no better than 1mb, except that it buys us some much-needed time and perhaps allows these other options to run where 1mb would be too limiting. So let's fix the block size now so that these other solutions have some space to operate.

Joseph Poon and Thaddeus Dryja (the Lightning Network creators) themselves have stated that the Lightning Network acts as a sort of amplifier for the number of transactions on the existing block space. (For example, you might get a 20x increase in the number of transactions the existing block space can effectively support, but it still depends on the base block size as a starting point.)

newbie
Activity: 4
Merit: 0
Just a comment on the following:

Quote
With a hard cap, the queue of transactions can only clear at a specific rate. Below this rate there is no fee tension, and above it there is instability.

I don't think you can say that - it would be like saying queues are never a problem as long as utilization is < 1 (which of course is required for stability). But long queues do in fact develop even when utilization is < 1, due to variability in service/arrival times (for Bitcoin, the dominant source of variability is the inter-block time).
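
A quick toy simulation of that point (numbers purely illustrative) - even at roughly 90% utilization, exponential inter-block times alone produce sizeable backlogs from time to time:

Code:
import random

random.seed(0)
CAPACITY_PER_BLOCK = 2000     # txs a block can hold (illustrative)
ARRIVAL_RATE = 3.0            # txs per second (illustrative, ~90% utilization)
backlog, worst = 0, 0
for _ in range(10_000):
    gap = random.expovariate(1 / 600)      # seconds until the next block (10 min mean)
    backlog += int(ARRIVAL_RATE * gap)     # txs arriving while we wait
    backlog = max(0, backlog - CAPACITY_PER_BLOCK)
    worst = max(worst, backlog)
print("worst backlog observed:", worst, "txs")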

As long as there are queues, fee tension will be present, as mempool transactions are largely prioritised by feerate. Empirically, we are observing periods of fee tension (i.e. busy periods, when pools' max block size is reached) quite often these days.

Otherwise I like this perspective on the block size problem (even though I can't really comment on the proposed solution), in particular the observation that in the short term transaction demand is inelastic. (Upon further thought, however, proponents of increasing block size would say that the inelastic demand problem doesn't really apply if the block size cap is sufficiently higher than the average transaction rate.)
legendary
Activity: 1554
Merit: 1021
If T=3MB, it's like a 6MB limit with pressure to keep blocks smaller than 3MB, unless there are enough fee-paying transactions to make including them worthwhile?

I think T should scale over time as bandwidth grows. 42 transactions per second is still a low limit for a global payment network.
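
A rough back-of-envelope for that figure, assuming ~250-byte transactions and the 2T = 6MB ceiling (the exact number depends on the average tx size):

Code:
BYTES_PER_TX = 250            # rough average transaction size (assumption)
BLOCK_INTERVAL = 600          # seconds per block
effective_cap = 6_000_000     # 2T with T = 3MB
print(effective_cap / BYTES_PER_TX / BLOCK_INTERVAL)   # -> 40.0 tx/s, roughly the quoted 42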
legendary
Activity: 1960
Merit: 1130
Truth will out!
Finally a new opinion that has nothing to do with the 1 or 20MB blocks debate...



At this point in time I suggest T = 3MB.

Just one question, Meni:
Why do you think that a target block size T (equal to 3MB) would be a good choice?

Thanks!
legendary
Activity: 1106
Merit: 1000
Oh, the Monero solution to the block size limit.

The block size limit itself is not the important part of the discussion.

What is important is:

+ Too many people don't understand what the block size and the block size limit are
+ Political discussions: Bitcoin for everybody, or for elites only (lots of people try to ignore where Bitcoin's network effect comes from)
+ Group interests
+ Boys crying wolf: Viacoin, Blockstream devs
donator
Activity: 2058
Merit: 1054
Isn't it precisely what is implemented in Monero? (except you don't have a rollover pool, the penalty is simply deducted from the block reward for good).
No idea what happens in Monero, but if so, more power to them.

In theory you could have the penalty coins vanish here as well, but that has several disadvantages.

So, I support the idea. Just that 2T is a bit high, will that work? I mean, will it be possible to download  a blockchain with 2T blocks in it? A 2T blockchain consisting of smaller blocks is not an issue, but what happens if the download gets stuck DURING the 2T?

Is the Bitcoin client ready to download blocks partially and continue after a crash/shutdown/process kill?  Huh
In case it's unclear, T doesn't mean terabyte. It's just a parameter that can be configured based on our needs. At this point in time I suggest T = 3MB.
legendary
Activity: 1372
Merit: 1014
I think (in contrast to global politics) we should follow the most intelligent people, and Meni is certainly smarter than most people here.

So, I support the idea. Just that 2T is a bit high, will that work? I mean, will it be possible to download  a blockchain with 2T blocks in it? A 2T blockchain consisting of smaller blocks is not an issue, but what happens if the download gets stuck DURING the 2T?

Is the Bitcoin client ready to download blocks partially and continue after a crash/shutdown/process kill?  Huh
legendary
Activity: 1512
Merit: 1012
Still wild and free
Isn't it precisely what is implemented in Monero? (except you don't have a rollover pool, the penalty is simply deducted from the block reward for good).
donator
Activity: 2058
Merit: 1054
This is a very good idea, but as you said it will require a lot of work; it seems like a good portion of BTC's code would need to be rewritten.
I don't think the changes are that extensive. Basically you just need to change the rules for generation transaction validity.

I'd say we're better off just letting BTC croak, and eventually, once the damage to its public image has faded, transitioning to a better alt will be in the best interest of the whole cryptocurrency community.
Umm... No?


The key here is how T is set. If T is fixed then 2T becomes the hard limit and the problem remains. If T is set based on some average of previously mined blocks, then this may address the problem.
We still need some way to determine the optimal block size, but we have much more leeway. The wrong choice will not cause catastrophic failure - rather, gradually increasing fees, which will indicate that a buff is needed. The flexibility will make it easier to reach community consensus about changing hardcoded parameters.

Reusing what I wrote to Gavin in a private exchange - I don't believe in having a block limit calculated automatically based on past blocks, because it doesn't really impose a limit at all. Suppose I wanted to spam the network. Now there is a limit of 1MB/block, so I create 1MB/block of junk. If I keep this up the rule will update the size to 2MB/block, and then I spam with 2MB/block. Then 4MB, ad infinitum. The effect of increasing demand for legitimate transactions is similar. There's no real limit and no real market for fees.
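
A toy illustration of that escalation (the doubling rule here is an arbitrary stand-in for any cap that simply tracks recent block sizes, not a real proposal):

Code:
def ratchet(limit_mb, rounds):
    for _ in range(rounds):
        spam = limit_mb              # the attacker fills every block to the current cap
        limit_mb = 2 * spam          # the automatic rule then follows the observed sizes
    return limit_mb

print(ratchet(1.0, 10))   # 1024.0 - ten rounds of full blocks push a 1MB cap past 1GB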

Perhaps we can find a solution that uses an automatic rule for short-term fluctuations, and hardcoded parameters for long-term trends. If a good automatic cap rule can be found, it will be compatible with this method.


Would this address bandwidth issues that China could suffer from if block size was increased?
Only very indirectly - it can help Bitcoin function with a low cap, hence reducing the need to increase the cap.
legendary
Activity: 2422
Merit: 1451
Leading Crypto Sports Betting & Casino Platform
Would this address bandwidth issues that China could suffer from if block size was increased?
legendary
Activity: 2282
Merit: 1050
Monero Core Team
The key here is how T is set. If T is fixed then 2T becomes the hard limit and the problem remains. If T is set based on some average of previously mined blocks, then this may address the problem.
newbie
Activity: 14
Merit: 0
This is a very good idea, but as you said it will require a lot of work; it seems like a good portion of BTC's code would need to be rewritten. I'd say we're better off just letting BTC croak, and eventually, once the damage to its public image has faded, transitioning to a better alt will be in the best interest of the whole cryptocurrency community.
donator
Activity: 2058
Merit: 1054
tl; dr: I propose replacing the hard cap on the data size of a block (1MB, 20MB, or whatever it is) with an elastic one, where resistance to larger blocks accumulates progressively. Miners will be required to pay a superlinear penalty for large blocks, to be paid into a rollover fee pool. This will greatly increase Bitcoin's robustness to a situation where the block cap is approached, and allow a healthy fee market.

Background
These days there is heated debate about whether the block size limit of 1 MB should be relaxed. Many arguments are being made, but the most common one is the classic tradeoff:

Bigger blocks -> Harder to run a node -> Fewer nodes -> More centralization
Smaller blocks -> Fewer payments can be made directly as transactions on the blockchain -> More reliance on payment processing solutions involving 3rd parties -> More centralization

Most agree that there is some optimal value for this tradeoff, and the debate is about what it is, how to determine it, how to make decisions about it, etc. And most assume that if the blocks are too small, we'll get a "heads up" in the form of increasing tx fees.

However, Mike argues, and I tend to agree, that this is not how it will work. Rather, if the block limit is too small for the transaction demand, the Bitcoin network will crash. The gist of his argument, as I see it, is this: market forces should find a fee equilibrium where transaction supply matches demand, whatever they are - but market forces don't react fast enough to fluctuations, creating buildups that the software technically cannot handle.

Mike argues also that since we don't know when we'll reach the limit - and once we do, we will have catastrophic failure without warning - we must hurry to raise the limit to remain safe. That part I have an issue with - if Bitcoin can't gracefully degrade in the face of rising transaction volume, we will have problems no matter what the current block size limit is. We should instead focus on fixing that problem.

In this post I will introduce a protocol modification that might eliminate this problem. I will describe the suggestion and analyze how it can help. With it in place, we no longer run the risk of a crash landing, only rising fees - giving us an indication that something should be changed.

And then, we can go back to arguing what the block size should be, given the tradeoff above.

Rollover fee pool
This suggestion requires, as a prerequisite, the use of a rollover fee pool. I first introduced the concept here - this post is worth a read, but it used the concept for a different purpose than we have here.

The original idea was that fees for a transaction would not be paid to the one miner of the block which includes it - rather, fees would be paid into a pool, from which the funds would gradually be paid out to future miners. For example, in each block the miner could be entitled to 1% of the current funds in the pool (in addition to any other block rewards).
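
A minimal sketch of such a pool, assuming the illustrative 1% payout rate from the example above (the class and method names are mine, just to show the bookkeeping):

Code:
class RolloverPool:
    PAYOUT_RATE = 0.01                 # each block's miner gets 1% of the pool (example above)

    def __init__(self):
        self.balance = 0.0

    def deposit(self, amount):
        # in the original idea this is tx fees; in the present proposal it is
        # the penalty paid by the miner of an oversized block (see below)
        self.balance += amount

    def payout(self):
        # the share the miner of the next block is entitled to collect
        share = self.balance * self.PAYOUT_RATE
        self.balance -= share
        return share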

In the current suggestion, we will use such a pool that is paid out over time - but it will not be the users who put money into the pool. Transaction fees will be paid to the miner who found the block as normal.

Edit: Saying again for clarity - In the current proposal, fees from transactions will be paid to the miner of the current block instantly and in full. The miners can't gain anything by accepting tx fees out of band. The one paying into the rollover pool is the miner himself, as explained below.

Elastic block cap
The heart of the suggestion is this: instead of forbidding large blocks, penalize them. The miner of a large block must pay a penalty that depends on the block's size. The penalty will be deducted from the funds he collects in the generation transaction, and paid into the rollover pool, to be distributed among future miners. If the penalty exceeds the miner's income (tx fees + minted coins + his share of the current rollover pool), the block is invalid.

This requires choosing a function f that returns the penalty for a given block size. There is great flexibility and there's little that can go wrong if we choose a "wrong" function. The main requirements are that it is convex, and has significant curvature around the size we think blocks should be. My suggestion: Choose a target block size T. Then for any given block size x, set f(x) = Max(x-T,0)^2 / (T*(2T-x)). (graph)

This will mean that there are no penalties for blocks up to size T. As the block size increases, there is a penalty for each additional transaction - negligible at first, but eventually sharply rising. Blocks bigger than 2T are forbidden.
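
A sketch of the suggested f and the validity rule in Python (how the raw value of f(x) is scaled into an amount of coins is not pinned down here, so treat the output as unitless):

Code:
T = 3_000_000       # illustrative target size in bytes (Meni suggests T = 3MB elsewhere in the thread)

def penalty(x, target=T):
    # f(x) = max(x - T, 0)^2 / (T * (2T - x)); blows up as x approaches 2T
    if x <= target:
        return 0.0                                   # no penalty up to the target size
    if x >= 2 * target:
        raise ValueError("blocks of size 2T or more are forbidden")
    return (x - target) ** 2 / (target * (2 * target - x))

def block_valid(x, tx_fees, minted, pool_share):
    # the block is invalid if the penalty exceeds the miner's total income
    try:
        return penalty(x) <= tx_fees + minted + pool_share
    except ValueError:
        return False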

Analysis
I assume we do want scarcity in the blockchain - this prevents useless transactions that bloat the blockchain and make it harder to run nodes, and incentivizes users to fund the Bitcoin infrastructure. A block size limit creates scarcity - but only if there ever really are situations where we reach the limit. But as things stand, reaching the limit creates technical malfunctions.

Mike calls the problem "crash landing", but to me it's more like hitting a brick wall. Transaction demand runs towards the limit with nothing slowing it down, until it painfully slams into it. One minute there is spare room in the blocks, and no reason to charge tx fees - and the next, there's no room, and we run into problems.

If more transactions enter the network than can be put in blocks, the queue will grow ever larger and can take hours to clear, meaning that transactions will take hours to confirm. Miners can use fees to signal to the market to ease up on new transactions, but the market will be too slow to react. First, because the software isn't optimized to receive this signal; and second, because transaction demand is inelastic in short time scales. If, over sustained periods of time, transaction fees are high, I will stop looking for places to pay with Bitcoin, I will sign up to a payment facilitation service, etc. But short-term, if I have a transaction I'm set on sending right now (e.g. a restaurant tab), I'll be willing to pay very high fees for it if I must. So fees are not effective in controlling the deluge of transactions.

Enter an elastic cap. When tx demand exceeds the target block size, a backlog doesn't accumulate - miners simply include the extra transactions in the block. They will start to require a bit of extra fees to compensate for the penalty, but can still clear the queue at the rate it is filling. If the incoming tx rate continues to increase, the marginal penalty miners have to pay per tx will increase, so fees will become higher - but since the process is more gradual, clients will be in a better position to understand what fee they need to pay for quick confirmation. The process will resemble climbing a hill rather than running into a brick wall. As we push the limits, the restoring force will grow stronger until we reach an equilibrium.
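
To see the "hill" shape, the marginal penalty for one more transaction can be read off the penalty() sketch above (250 bytes is just an illustrative tx size):

Code:
def marginal_penalty(x, tx_bytes=250):
    # extra penalty for squeezing one more tx into a block that is already x bytes
    return penalty(x + tx_bytes) - penalty(x)

# negligible just above T, rising sharply as the block approaches 2T
for size in (3_100_000, 4_000_000, 5_000_000, 5_800_000):
    print(size, round(marginal_penalty(size), 6))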

On longer time scales, the market will have an understanding of the typical fees, and make decisions accordingly. It can also analyze times of the day/week when fees are lower, and use those for any non-time-sensitive transactions.

With a hard cap, the queue of transactions can only clear at a specific rate. Below this rate there is no fee tension, and above it there is instability. With an elastic cap, a longer queue causes transaction fees to be higher, encouraging miners to include more transactions in a block, increasing the speed at which the queue clears. This is a stable equilibrium that prevents the queue from growing without bound, while always maintaining some fee tension.

Incidentally, the current system is a special case of this suggestion, with f(x) = If(x <= 1MB, 0, infinity) - no penalty up to the 1MB limit, and anything larger forbidden outright.
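
In code form, that degenerate f is just:

Code:
import math

def hard_cap_penalty(x, cap=1_000_000):
    # the existing rule as a penalty function: free up to the cap, block invalid above it
    return 0.0 if x <= cap else math.inf
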
The way forward
I believe there is very little downside, if any, to implementing this method. It can prove useful even if the crash landing scenario isn't as grave as Mike suggests. And the primitives will be handy for other possible protocol improvements. I believe it is an essentially simple and fully fleshed-out idea. As such, I hope it can be accepted without much controversy.

It is however a major change which may require some significant coding. When discussing this idea with Gavin, he explained he'd be in a better position to evaluate it if given working, testable code. Writing code isn't my thing, though. If anyone is interested in implementing this, I'll be happy to provide support.

Related work
A similar system exists in CryptoNote, see e.g. section 6.2.3 of https://cryptonote.org/whitepaper.pdf.

Greg Maxwell proposed a similar system with variable mining effort instead of reward penalty - see e.g. http://sourceforge.net/p/bitcoin/mailman/message/34100485/ for discussion.

This proposal is not directly related to floating block limit methods, where the limit parameters are determined automatically based on recent history of transaction demand.