
Topic: Elastic block cap with rollover penalties - page 6. (Read 24077 times)

sr. member
Activity: 350
Merit: 250
Why would a mod delete this comment from this thread? It was the second comment and it was completely related to the topic.

A reply of yours, quoted below, was deleted by a Bitcoin Forum moderator. Posts are most frequently deleted because they are off-topic, though they can also be deleted for other reasons. In the future, please avoid posting things that need to be deleted.

Quote
Brilliant!


lol man, I was reminded of you when I saw this thread on reddit: https://www.reddit.com/r/Bitcoin/comments/389pq6/elastic_block_cap_with_rollover_penalties_my/ :P
legendary
Activity: 3010
Merit: 8114
Why would a mod delete this comment from this thread? It was the second comment and it was completely related to the topic.

A reply of yours, quoted below, was deleted by a Bitcoin Forum moderator. Posts are most frequently deleted because they are off-topic, though they can also be deleted for other reasons. In the future, please avoid posting things that need to be deleted.

Quote
Brilliant!
sr. member
Activity: 350
Merit: 250
My email correspondence with Gavin so far:


lol, he basically called you an ideas man. Your efforts are futile: Bitcoin is El Dorado and has no flaws, no dev will ever adopt something made by other coins, since that would imply Satoshi was slightly mistaken in his first attempt at a blockchain :D
legendary
Activity: 1512
Merit: 1012
Still wild and free
Quote
max block size = 2 * average size of last 144 blocks.

That's not a real limitation: since 144 blocks is roughly one day, the cap could double daily and easily grow past 100 MB within a single week.

I think it should be more like:
max block size = 1.2 * average block size of last 1008 blocks

Using the median instead of the average would make the scheme less prone to the influence of just one or a few blocks.
Again, we are describing precisely the scheme used in Monero, proposed in the CryptoNote whitepaper (see Sec. 6.2.2): https://cryptonote.org/whitepaper.pdf
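For illustration, a minimal sketch of this kind of retargeting rule. The 1.2 multiplier and 1008-block window are taken from the post above; the 1 MB floor is an added assumption (CryptoNote-style schemes use a minimum so the cap can never shrink to nothing), not a consensus value.

Code:
from statistics import median

WINDOW = 1008            # ~1 week of blocks
MULTIPLIER = 1.2
FLOOR_SIZE = 1_000_000   # assumed floor: the cap never shrinks below 1 MB

def adaptive_cap(recent_block_sizes):
    """Cap the next block at MULTIPLIER times the median size (in bytes)
    of the last WINDOW blocks, never dropping below FLOOR_SIZE."""
    window = recent_block_sizes[-WINDOW:]
    return max(FLOOR_SIZE, int(MULTIPLIER * median(window)))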
legendary
Activity: 1554
Merit: 1021
Quote
max block size = 2 * average size of last 144 blocks.

That's not a real limitation: since 144 blocks is roughly one day, the cap could double daily and easily grow past 100 MB within a single week.

I think it should be more like:
max block size = 1.2 * average block size of last 1008 blocks
donator
Activity: 2058
Merit: 1054
My email correspondence with Gavin so far:

Quote from: Meni Rosenfeld
Hi Mike,

As I was reading your posts about the block size limit, I came to the realization that the problem isn't really that the block size limit is too low. It's that the protocol lacks a mechanism for graceful degradation.

People expect that as the size limit is approached, fees will elastically adapt. I agree with your arguments that it doesn't currently work this way, but I think that it should; and if it doesn't now, we should solve that problem. Once we do, the worst that can happen with a too-low block limit is that fees will be too high. Of course, that could require some significant protocol changes.

I've been thinking along the following lines: A miner can create a block of arbitrary size, however, he must pay a penalty for large blocks. This penalty will be deducted from his coinbase transaction, and added to a rollover fee pool, to be collected by future miners (as in https://bitcointalksearch.org/topic/rollover-transaction-fees-80387). The penalty will be a hardcoded function of the block size.

The function should be superlinear; it can optionally be 0 for block sizes up to a given threshold; and it could have a bend around some agreed upon value (1MB, 20MB, whatever) to encourage the size to be around this value. An optimal miner will include a transaction only if the marginal penalty is lower than the fee. As the block size increases, the marginal penalty per kB will be higher, requiring a higher fee.
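(For concreteness, a minimal sketch of a superlinear penalty and the resulting inclusion rule. The threshold T, the quadratic shape, and the constants are illustrative stand-ins, not the exact function proposed.)

Code:
T = 1_000_000   # soft threshold in bytes (illustrative)

def penalty(block_size):
    """Superlinear penalty in BTC: zero up to T, growing quadratically above it."""
    excess = max(0, block_size - T)
    return 0.0001 * (excess / 100_000) ** 2

def worth_including(tx_fee, tx_size, current_block_size):
    """A profit-maximizing miner adds a transaction only if its fee exceeds
    the marginal penalty caused by the extra bytes."""
    marginal = penalty(current_block_size + tx_size) - penalty(current_block_size)
    return tx_fee > marginal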

This is superior to a hard cap in several ways. First, it's always possible for all txs to clear, as long as users are willing to pony up; with a hard cap, even if all users agree to pay more, you still can't include all of their transactions, creating a backlog. Second, the overall behavior of the fees is smoother over time. Instead of the marginal cost per transaction being roughly 0 in low-traffic times and approaching infinity in high-traffic times, it varies continuously with the current traffic. This makes it easier to gather statistics, and to choose the fee to pay accordingly. And you still have a market that adapts to actual economic incentives.

Of course there's more I can say about the analysis of this suggestion, but that's the basic idea. I might post about this somewhere more public, not sure exactly where though...

Meni

Quote from: Gavin Andresen
Mike's on vacation, don't expect a response (and Mike, you're on vacation, you shouldn't be thinking about this stuff....)

My knee-jerk reaction is:  cool, write up a patch for Bitcoin Core (with tests, please) so we can see how extensive the changes would be.  It is easy to have an idea, but there are so many ideas I need a filter to winnow down the number of possibilities or it is impossible to carefully consider them all.  "Go write some code we can look at" is a very good filter.

Other people's knee-jerk reactions will be:  this won't work when the subsidy goes away, so it is a non-starter.  See Greg Maxwell's proposal for "require more mining (higher nBits) to produce bigger blocks" for a scheme that might work when the subsidy goes away.

On a higher level:  I agree that graceful degradation is much better than a hard crash-- that is why I implemented 'smart fees' for Bitcoin Core.

Quote from: Meni Rosenfeld
Hi Gavin,

1. That's a fair request; unfortunately, writing code is not where my comparative advantage lies. I might be able to persuade others to write the code, though.

There's never a shortage of ideas, of course - but not all ideas are created equal: some are good, some are bad, and some are so obviously bad you don't even need to test them.

2. As I've argued in the past, and in an interview posted today (http://bit-post.com/bitcoiners/interview-with-meni-rosenfeld-the-block-size-limit-and-mining-fee-structure-6105), funding miners when the subsidy goes away is a completely different problem which needs completely different solutions, which have nothing to do with block size.

Anyway, I'm not sure what exactly you mean by "it won't work" - in case you meant that without subsidy there will be nowhere to take the penalty from, of course the penalty can be taken out of tx fees, and the block is illegal if the total penalty is higher than the total fee. So miners will still only accept txs with sufficiently high fees.

Quote from: Meni Rosenfeld

Quote from: Gavin Andresen
Interesting.  How do we decide what "T" should be ?

My knee-jerk reaction: I bet a much simpler rule would work, like:

   max block size = 2 * average size of last 144 blocks.

That would keep the network at about 50% utilization, which is enough to keep transaction fees from falling to zero, just due to people having a time preference for having their transactions confirmed in the next 1/2/3 blocks (see http://hashingit.com/analysis/34-bitcoin-traffic-bulletin ).

I think this simple equation is very misleading:
  Bigger blocks -> Harder to run a node -> Fewer nodes -> More centralization

People are mostly choosing to run SPV nodes or web-based wallets because:

  Fully validating -> Less convenience -> Fewer nodes -> More centralization

Node count on the network started dropping as soon as good SPV wallets were available; I doubt the block size will have any significant effect.


Also: Greg's proposal:
  http://sourceforge.net/p/bitcoin/mailman/message/34100485/

Quote from: Meni Rosenfeld
Hi Gavin,

1. a. I don't believe in having a block limit calculated automatically based on past blocks. Because it really doesn't put a limit at all. Suppose I wanted to spam the network. Now there is a limit of 1MB/block so I create 1MB/block of junk. If I keep this up the rule will update the size to 2MB/block, and then I spam with 2MB/block. Then 4MB, ad infinitum. The effect of increasing demand for legitimate transactions is similar. There's no real limit and no real market for fees.

b. I'll clarify again: my goal here is not to solve the problem of what the optimal block limit is - that's a separate problem. I want to prevent a scenario where a wrong block limit creates catastrophic failure. With a soft cap, any parameter choice creates a range of legitimate block sizes.

You could set now T = 3MB, and if in the future we see that tx fees are too high and there are enough blocks, increase it.

2. I have described one causal path. Of course SPV is a stronger causal path but it's also completely irrelevant, because SPV clients are already here and we don't want them to go away. They are a given. Block size, however, is something we can influence; and the primary drawback of bigger blocks is, as I described, the smaller number of nodes.

You can argue that the effect is insignificant - but it is still the case that:

    Many people currently do believe the effect is significant, and
    This argument will be easier to discuss once we don't have to worry about a crash landing.

3. Thanks, I'll try to examine Greg's proposal in more detail.

Meni

Quote from: Gavin Andresen
On Tue, Jun 2, 2015 at 5:37 PM, Meni Rosenfeld wrote:

    1. a. I don't believe in having a block limit calculated automatically based on past blocks. Because it really doesn't put a limit at all. Suppose I wanted to spam the network.


Who are "you" ?

Are you a miner or an end-user?

If you are a miner, then you can produce maximum-sized blocks and influence the average size based on your share of hash rate. But miners who want to keep blocks small have equal influence.

If you are an end-user, how do you afford transaction fees to spam the network?

----------------------

If you are arguing that transaction fees may not give miners enough reward to secure the network in the future, I wrote about that here:
   http://gavinandresen.ninja/block-size-and-miner-fees-again
and here:
   https://blog.bitcoinfoundation.org/blocksize-economics/

And re: "there is no real limit and no real market for fees" :  see
  http://gavinandresen.ninja/the-myth-of-not-full-blocks

There IS a market for fees, even now, because there is demand for "I want my transaction to confirm in the next block or three."

Quote from: Meni Rosenfeld
1. I'm an end user.

If there are hard coded rules for tx fees and spam prevention, then that is what is ultimately keeping the block size in check, not the block limit.

If there are none, and the only source of fees is competition over the limited block size, then there will be no real competition (for the reason I mentioned - the limit keeps increasing), and I will not have to pay any fees.

In both cases, the floating block limit doesn't do much.

2. I argue, as I always do, that funding miners for the hashing should not have anything to do with the data size of transactions and blocks.

In the current argument I'm not talking about the amortized cost of hashing. I'm talking about paying for the marginal cost of handling transactions (which does depend on size), and about the assumption that the fees will make their way to the nodes bearing these costs. Under that assumption, I want to make sure people are actually paying fees for the resources consumed - and for that, I want to keep supply in check.

3. There is indeed a fee market, when the variability in the rates of clearing and adding txs exceeds the difference between the block limit and the global average tx rate. However, at low-traffic times, rational markets will not require significant fees. As a spammer I can use that time to create spam and trick the recalibration mechanism. As a legitimate user, I could use this time to send non-urgent txs. This reduces variability and works to stretch the limit.

Perhaps automatic calibration can work with a good enough mechanism, but I think it's more dangerous than occasionally updating a hardcoded value.
donator
Activity: 2058
Merit: 1054
But short-term, if I have a transaction I'm set on sending right now (e.g. a restaurant tab), I'll be willing to pay very high fees for it if I must. So fees are not effective in controlling the deluge of transactions.

This part seems a bit off. At any given time, some people will have an urgent need for settlement, but many/most won't. So we get smooth scaling for quite a while from a purely economic perspective. Now once we reach a point in adoption where there are so many urgent transactions that they fill the blocks on their own, that kicks up the frustration to unacceptable levels, but even then some will be willing to outbid others and again it's a gradual increase in pain, not a deluge.

Insofar as the prices miners charge do rise properly and users have an easy way of getting their transactions in at some price, fees will limit transactions even in the short term. All you're really describing here is reaching a point that is pretty far along that smooth pain curve, after all the less important transactions have been priced out of the market.

Overall this is a great idea, though!
It's difficult to know exactly how the quantitative factors will play out. The inelasticity is not total, but I believe it is significant and contributes to the phenomenon. Even if things do not turn out as catastrophic as Mike describes, I believe they can get rather ugly, so any change that alleviates this is welcome.


are you suggesting we drop btc and pick up vtc?
Not familiar with it.


An elastic supply is very important, but I think it can be accomplished more simply, without a pool.

Allow blocks to be expanded beyond their "nominal" size with high fee transactions.  The higher the fee, the further it can appear in the block.  Formally, define a function fee = T(x), where x is the location in the block.  If a transaction's fee is >= T(x), it can be placed in the block at location x.  T(x) = 0 for all x < 8MB (say) and increases super-linearly from there.
This could work, but:

1. I'm not convinced it's actually simpler. If I understand it correctly, it requires, among other things, sorting the transactions by fee. Verification also requires examining each individual transaction in a rather elaborate way.
2. I think it's much harder to analyze how it will play out economically; and my initial thought is that it will be less stable. In my suggestion, the fee will be more or less consistent over txs, for any given load level. Here, some txs will be accepted with 0 fee and some will require very high fees; it will be difficult for each transaction to decide where it wants to go, and they can oscillate wildly between states.


EDIT: the biggest problem with this class of proposal is sizing the fee.  Especially given bitcoin's volatility.  However, if the fee function chosen starts at 1 satoshi, a high bitcoin price will tighten the elasticity of supply (in practice) but not entirely remove it.  At the same time, we STILL need to grow the "nominal" block size: i.e. 8MB + 20% per year, or risk pricing out personal transactions as adoption increases.  However, this class of proposal allows the network to react in a classic supply/demand fashion.  This reduces the pain when supply is exceeded, meaning that a "last-minute" hard fork as proposed by many of Gavin's opponents would be a lot less damaging to the network (block size increases could trail adoption rather than precede it).
This is the reason I chose a hyperbolic function rather than a polynomial one. Being hyperbolic means a wide range of marginal costs is covered with a relatively small span of block sizes. So whatever the reasonable fee should be, the system will find a block size that matches it.
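For illustration, one possible hyperbolic penalty with an asymptote at 2T (not necessarily the exact function from the opening post), showing how a narrow span of block sizes near 2T covers a very wide range of marginal costs:

Code:
T = 1_000_000   # soft threshold in bytes; 2*T acts as a hard asymptote

def hyperbolic_penalty(block_size):
    """Zero up to T, finite between T and 2T, diverging as the size approaches 2T,
    so a narrow band of sizes near 2T spans a huge range of marginal costs."""
    if block_size <= T:
        return 0.0
    if block_size >= 2 * T:
        return float("inf")
    x = (block_size - T) / T     # 0..1 over the elastic range
    return x * x / (1.0 - x)     # superlinear, blows up as x -> 1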
legendary
Activity: 2128
Merit: 1005
ASIC Wannabe

The key here is how T is set. If T is fixed, then 2T becomes the hard limit and the problem remains. If T is set based on some average of previously mined blocks, then this may address the problem.
We still need some way to determine the optimal block size, but we have much more leeway. The wrong choice will not cause catastrophic failure, but rather gradually increasing fees, which will indicate that a buff (a larger T) is needed. The flexibility will make it easier to reach community consensus about changing hardcoded parameters.

Reusing what I wrote to Gavin in a private exchange - I don't believe in having a block limit calculated automatically based on past blocks. Because it really doesn't put a limit at all. Suppose I wanted to spam the network. Now there is a limit of 1MB/block so I create 1MB/block of junk. If I keep this up the rule will update the size to 2MB/block, and then I spam with 2MB/block. Then 4MB, ad infinitum. The effect of increasing demand for legitimate transactions is similar. There's no real limit and no real market for fees.

Perhaps we can find a solution that uses an automatic rule for short-term fluctuations, and hardcoded parameters for long-term trends. If a good automatic cap rule can be found, it will be compatible with this method.


+1 to that.  I think a max that's determined as either:
T = 2.50*(average(last 8000 blocks))         # T is set from the average block size over the last ~2 months. Plenty of room for slow and steady growth, and too great a timespan to attack the blockchain with spam. Keep in mind that transactions at night will probably be 1/5th the volume of those during business hours.
or
T = (2.00*(average(last 8000 blocks))) + (0.50*(average(last 144 blocks)))     # This would allow short-term fluctuations that take a day or two to develop. Could be susceptible to a spam attack that lasts longer than 3 days.

Personally, I think a block limit that's set based on the average volume of the last 1-3 months would be fine. It would be flexible if the number of transactions increases very quickly, and could grow to 3-8x the current maximum within a year if there's substantial volume. Combined with your proposal above it could be extremely flexible. However...

I'm EXTREMELY cautious of altering how fees are created and distributed, as any changes made will directly impact miners and could lead to bribery and corruption of the bitcoin code to better pay the centralised mining companies. Any code changes that are implemented should not involve factors or values that will need to be adjusted down the road, or it will simply lead to a 'democracy' of core-qt 'improvement'.
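For reference, the two candidate rules above written out as a runnable sketch (window lengths and weights as given in the post; this is illustrative only, not a worked-out consensus rule):

Code:
def cap_long_window(sizes):
    """T = 2.5 x the average size of the last 8000 blocks (~2 months)."""
    window = sizes[-8000:]
    return 2.5 * sum(window) / len(window)

def cap_blended(sizes):
    """T = 2.0 x the 8000-block average plus 0.5 x the 144-block (~1 day)
    average, letting short-term fluctuations lift the cap a little."""
    long_w, short_w = sizes[-8000:], sizes[-144:]
    return 2.0 * sum(long_w) / len(long_w) + 0.5 * sum(short_w) / len(short_w)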
legendary
Activity: 1106
Merit: 1026
If T = 3MB, it's like a 6MB limit with pressure to keep blocks smaller than 3MB, unless there are enough fee-paying transactions to make including them worthwhile?

I think T should scale over time as bandwidth is growing. 42 transactions per second is still a low limit for a global payment network.

As far as I can see, and given that:

Obviously increasing the block size requires a hard fork, but the [penalty] fee pool part could be accomplished purely with a soft fork.

Then it would be possible to raise the block size limit to 6, 20, 40, ... MB, but introduce a soft cap and a penalty mechanism for "large" blocks. The penalty function (and thus the soft cap) may be freely adjusted over time, as long as the resulting block size doesn't exceed the hard limit.

The process will resemble climbing a hill rather than running into a brick wall.

Very well put, I like it.
legendary
Activity: 1246
Merit: 1010
An elastic supply is very important, but I think it can be accomplished more simply, without a pool.

Allow blocks to be expanded beyond their "nominal" size with high fee transactions.  The higher the fee, the further it can appear in the block.  Formally, define a function fee = T(x), where x is the location in the block.  If a transaction's fee is >= T(x), it can be placed in the block at location x.  T(x) = 0 for all x < 8MB (say) and increases super-linearly from there.


Note that this proposal does NOT look at fees in aggregate -- that is, it is not a rule of the form max block size <= S(sum(fees)), where S is some super-linear function.  That approach does not work, because a miner could create a dummy transaction that pays himself a very large fee, thereby increasing the block size to allow space for a lot of low-fee transactions.

Meni may have added the idea of a pool to solve the above problem.  But I believe that it is more easily solved by not looking at fees in aggregate.


EDIT: the biggest problem with this class of proposal is sizing the fee.  Especially given bitcoin's volatility.  However, if the fee function chosen starts at 1 satoshi, a high bitcoin price will tighten the elasticity of supply (in practice) but not entirely remove it.  At the same time, we STILL need to grow the "nominal" block size: i.e. 8MB + 20% per year, or risk pricing out personal transactions as adoption increases.  However, this class of proposal allows the network to react in a classic supply/demand fashion.  This reduces the pain when supply is exceeded, meaning that a "last-minute" hard fork as proposed by many of Gavin's opponents would be a lot less damaging to the network (block size increases could trail adoption rather than precede it).
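A minimal sketch of how a block could be checked against such a positional rule. The 8 MB free region and the 1-satoshi starting point follow the description above; the specific super-linear curve is an illustrative assumption:

Code:
FREE_REGION = 8_000_000   # bytes with no positional fee requirement

def positional_min_fee(offset):
    """Minimum fee (satoshis) required at byte offset `offset`; starts at
    1 satoshi just past the free region and grows super-linearly."""
    if offset < FREE_REGION:
        return 0
    return ((offset - FREE_REGION) // 100_000 + 1) ** 2

def block_is_valid(txs):
    """txs: list of (size_bytes, fee_satoshis) in block order."""
    offset = 0
    for size, fee in txs:
        if fee < positional_min_fee(offset):
            return False
        offset += size
    return True

Building an optimal block under this rule means placing the highest-fee transactions in the deepest (most expensive) positions, i.e. sorting by fee, which is part of the extra complexity Meni points to in his reply.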

legendary
Activity: 1064
Merit: 1000
My proposal (on bitcoin-development and previously on the forum) is effectively (and explicitly credited to) the monero/bytecoin behavior, but rather than transferring fees/subsidy it changes the cost of being successful at the work function.

This is the most attractive concept I have seen yet for dynamic scaling: it places a penalty by increasing the required difficulty for miners building >1MB blocks. Do you have any idea of how that penalty could be calculated? I assume it would scale according to the percentage size increase above MAX_BLOCK_SIZE. I believe this would work because miners would not be incentivised to build bigger blocks unless there was a need to, since prematurely doing so would put them at a disadvantage. This would also help in building fee pressure, which will become more and more important as the subsidy decreases.

legendary
Activity: 2114
Merit: 1090
=== NODE IS OK! ==
are you suggesting we drop btc and pick up vtc?
legendary
Activity: 1232
Merit: 1094
It also requires modifying, or at least amending consensus rules, something the majority of the Core team has been trying to keep to a minimum. I believe there is wisdom in that position.

Obviously increasing the block size requires a hard fork, but the fee pool part could be accomplished purely with a soft fork. 

The coinbase transaction must pay BTC to OP_TRUE as its first output.  Even if there is no size penalty, the output needs to exist but pay zero.

The second transaction must be the fee pool transaction.

The fee pool transaction must have two inputs: the coinbase OP_TRUE output from 100 blocks previously, and the OP_TRUE output from the fee pool transaction in the previous block.

It must have a single output, paying 99% (or some other fraction) of the sum of its inputs to OP_TRUE.


By ignoring fees paid in the block, it protects against miners using alternative channels for fees.
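A sketch of the rollover arithmetic described above (the names are placeholders, not actual Bitcoin Core structures). The 1% not paid forward is left as the fee of the pool transaction itself, i.e. it goes to the miner of the current block:

Code:
PAYOUT_FRACTION = 0.99    # share of the pool rolled forward to the next block

def expected_pool_output(coinbase_optrue_100_back, prev_pool_output):
    """Value the fee pool transaction must pay back to OP_TRUE: 99% of the
    matured (100-block-old) coinbase OP_TRUE output plus the previous
    block's pool output; the remainder is collected as this tx's fee."""
    return PAYOUT_FRACTION * (coinbase_optrue_100_back + prev_pool_output)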
donator
Activity: 2058
Merit: 1054
Unlike Meni's suggestion, the reduction in block subsidy is not given to a pool but rather deferred to future miners because the subsidy algorithm is based around the number of coins.
Well, the pool does have the ultimate effect of deferring rewards to future miners.

See section 6.2.3 of the CryptoNote whitepaper: https://cryptonote.org/whitepaper.pdf
Interesting. I argue that regardless of other issues, the particular function they suggest is not optimal, and the cap it creates is too soft.

There is a major shortcoming I can see in the rollover fee pool suggestion: miners are incentivized to accept fees out of band so they can obtain all the highest fees instantly, thus defeating the entire purpose of that feature.
This is only (potentially) a problem in my 2012 rollover fee proposal. Here, tx fees don't go into the pool - fees are paid to the miner instantly. It is only the miner's block size penalty that goes into the pool, and he must pay it if he wants to include all these transactions.

Of course, if you'd like to post this criticism on that thread, I'll be happy to discuss it.
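To make the accounting concrete, a minimal sketch of the reward flow just described: fees go straight to the miner, and only the size penalty cycles through the pool. The payout fraction is an illustrative assumption:

Code:
POOL_PAYOUT_FRACTION = 0.01   # slice of the pool released to the miner each block (assumed)

def block_accounting(subsidy, total_fees, size_penalty, pool_balance):
    """Return (miner revenue, new pool balance) for one block: the miner keeps
    all tx fees immediately, pays the size penalty into the pool, and collects
    a small payout from the existing pool."""
    payout = POOL_PAYOUT_FRACTION * pool_balance
    revenue = subsidy + total_fees - size_penalty + payout
    new_pool = pool_balance + size_penalty - payout
    return revenue, new_pool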


Just a comment on the following:

Quote
With a hard cap, the queue of transactions can only clear at a specific rate. Below this rate there is no fee tension, and above it there is instability.

I don't think you can say that - that would be like saying, queues are never a problem as long as utilization is < 1 (which of course is required for stability). But long queues do in fact develop when utilization is < 1, due to variability in service / arrival times (re: bitcoin, the dominant source of variability is in inter-block times).

As long as there are queues, fee tension will be present, as mempool transactions are largely prioritised by feerate. Empirically, we are observing periods of fee tension (i.e. busy periods, when pools' max block size is reached) quite often these days.

Otherwise I like this perspective on the block size problem (even though I can't really comment on the proposed solution), in particular the observation that in the short term transaction demand is inelastic. (Upon further thought, however, proponents of increasing block size would say that the inelastic demand problem doesn't really apply if the block size cap is sufficiently higher than the average transaction rate.)
That's true in general; however, for the specific time scales and fluctuations in queue fill rate we have here, I'd say that "no fee tension" may be an exaggeration, but it captures the essence.

Sure, if the block limit is high enough we can always clear transactions... But then there will be little incentive to pay fees or to conserve space on the blockchain. The post assumes that we agree that too low is bad, too high is bad, and we don't want to be walking a tightrope in between.


(1) bypass vulnerability (where you pay fees, or the like, out of band to avoid the scheme)
I don't think this is an issue here. Transaction fees are paid instantly in full to miners, so users have no reason to pay fees out of band. Miners are forced by the protocol to pay the penalty if they want to include the transactions (and if they don't include the transactions, they're not doing anything they could be paid for). You could talk about miners paying the penalty into the pool and hoping to get it back only in future blocks, but with a pool that clears slowly enough, that's not a problem.

(2) scale invariance  (the scheme should work regardless of Bitcoin's value).
In the space of block sizes, you do need to occasionally update the parameter to match the current situation. I think it's best this way - currently only humans can reliably figure out if the fees are too high or node count is too low. But if a good automatic size adjustment rule can be found, it can be combined with this method.

In the space of Bitcoin value, transaction fees and hardware cost, the proposed function is multi-scaled and thus cares little about all of that. For any combination of the above, the function will find an equilibrium block size for which the penalty structure makes sense.

I think this kind of proposal is a massive improvement on proposals without this kind of control; and while it does not address all of the issues around larger blocks-- e.g. they do not incentive-align miners and non-mining users of the system-- it seems likely that proposals in this class would greatly improve some of them; and as such it is worth a lot more consideration.
I'm still hoping Red Balloons, or something like it, will solve that problem.

Thanks for posting, Meni-- I'm looking forward to thinking more about what you've written.
Thanks, and likewise - I've only recently heard about your effort-penalty suggestion, I hope to be able to examine it and do a comparison.


I expect a fee pool alone will increase block verification cost.
It would not, in any meaningful way.
Right, the proposal here only adds a constant amount of computation per block, taking a few microseconds. It doesn't even grow with the number of transactions.

I would scrap the fee pool and use the function the opposite way: the total sum of fees paid in the block defines the maximum block size. The seeding constant for this function could itself be inversely tied to the block difficulty target, which is an acceptable measure of coin value: i.e. the stronger the network, the higher the BTC value, and reciprocally the lower the nominal fee to block size balance point.

With an exponential function in the fashion of Meni's own, we can keep a healthy cap on the cost of larger blocks, which impedes spam by design, while allowing miners to include transactions with larger fees without outright kicking lower fee transactions out of their blocks.

Like Meni's proposal, this isn't perfect, but I believe it comes with the advantage of lower implementation cost and less disturbance to the current model, while keeping the mechanics behind block size elasticity straightforward. Whichever the community favors, I would personally support a solution that ties the block size limit to fees over any of the current proposals.
This is similar to the idea of eschewing a block limit and simply hardcoding a required fee per tx size. The main issue I have with this kind of idea is that it doesn't give the market enough opportunity to make smart decisions, such as preferring to send txs when traffic is low, or to upgrade hardware to match demand for txs.

Another issue with this is miner spam - a miner can create huge blocks filled with his own fee-paying txs, which he can easily do since he collects the fees himself.

Using difficulty to determine the size/fee ratio is interesting. I wanted to say that you have the problem that difficulty is affected not only by the BTC exchange rate, but also by hardware technology. But then I realized that the marginal resource cost of transactions also scales down with hardware improvements. The two effects partially cancel out, so we can have a wide operational range without much need for tweaking parameters.
legendary
Activity: 1036
Merit: 1000
But short-term, if I have a transaction I'm set on sending right now (e.g. a restaurant tab), I'll be willing to pay very high fees for it if I must. So fees are not effective in controlling the deluge of transactions.

This part seems a bit off. At any given time, some people will have an urgent need for settlement, but many/most won't. So we get smooth scaling for quite a while from a purely economic perspective. Now once we reach a point in adoption where there are so many urgent transactions that they fill the blocks on their own, that kicks up the frustration to unacceptable levels, but even then some will be willing to outbid others and again it's a gradual increase in pain, not a deluge.

Insofar as the prices miners charge do rise properly and users have an easy way of getting their transactions in at some price, fees will limit transactions even in the short term. All you're really describing here is reaching a point that is pretty far along that smooth pain curve, after all the less important transactions have been priced out of the market.

Overall this is a great idea, though!
sr. member
Activity: 352
Merit: 250
https://www.realitykeys.com
If we're making drastic changes, it may be worthwhile to target the size of the unspent output (UTXO) set instead of / as well as block size, since the UTXO set seems to be a more likely bottleneck than the network - unfortunate American core devs still connected to the internet using 1870s-style copper wires notwithstanding.
staff
Activity: 4284
Merit: 8808
I expect a fee pool alone will increase block verification cost.
It would not, in any meaningful way.
legendary
Activity: 3766
Merit: 1364
Armory Developer
I personally think a P2P network cannot rely on magic numbers, in this case 1MB or 20MB blocks. The bar is either set too low, which creates an artificial choke point, or it is set too high, which opens room for both DoS attacks and more centralization. As such, a solution that pins block size to fees is, in my view, the most sensible alternative to explore. Fees are the de facto index for determining transaction size and priority, so they are also the best candidate to determine valid block size.

However I have a couple divergence with Meni's proposal:

First, while I find the fee pool idea intriguing, and I certainly see benefits to it (like countering "selfish" mining), I don't believe it is a necessary device to pin block size limits to fees. Simply put, I think it's a large implementation effort, or if anything a much larger one than is needed to achieve the task at hand. It also requires modifying, or at least amending consensus rules, something the majority of the Core team has been trying to keep to a minimum. I believe there is wisdom in that position.

Second, I expect a fee pool alone will increase block verification cost. If it is tied to block size validity as well, it would increase that cost even further. The people opposing block size growth base their position on the rationale that an increase in the resources needed to propagate and verify blocks effectively raises the barrier to entry to the network, resulting in more centralization. This point has merit, and thus I think any solution to the issue needs to keep the impact on validation cost as low as possible.

Meni's growth function still depends on predetermined constants (i.e. magic numbers), but it is largely preferable to static limits. Meni wants to use it to define revenue penalties to miners for blocks larger than T.

I would scrap the fee pool and use the function the opposite way: the total sum of fees paid in the block defines the maximum block size. The seeding constant for this function could itself be inversely tied to the block difficulty target, which is an acceptable measure of coin value: i.e. the stronger the network, the higher the BTC value, and reciprocally the lower the nominal fee to block size balance point.

With an exponential function in the fashion of Meni's own, we can keep a healthy cap on the cost of larger blocks, which impedes spam by design, while allowing miners to include transactions with larger fees without outright kicking lower fee transactions out of their blocks.

Like Meni's proposal, this isn't perfect, but I believe it comes with the advantage of lower implementation cost and less disturbance to the current model, while keeping the mechanics behind block size elasticity straightforward. Whichever the community favors, I would personally support a solution that ties the block size limit to fees over any of the current proposals.
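A rough sketch of the rule described here, for concreteness. The logarithmic shape, the base size, and the reference difficulty are illustrative assumptions, not a worked-out proposal:

Code:
import math

BASE_SIZE = 1_000_000           # bytes allowed at zero fees (assumed)
REFERENCE_DIFFICULTY = 50e9     # seeding constant (assumed)

def max_size_for_fees(total_fees_btc, difficulty):
    """Total fees in the block buy extra space; higher difficulty (a rough
    proxy for network strength and BTC value) lowers the nominal fee needed
    per byte, so each BTC of fees buys more space."""
    effective_fee = total_fees_btc * difficulty / REFERENCE_DIFFICULTY
    return BASE_SIZE * (1.0 + math.log1p(effective_fee))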


staff
Activity: 4284
Merit: 8808
There is a major shortcoming I can see in the rollover fee pool suggestion: miners are incentivized to accept fees out of band so they can obtain all the highest fees instantly, thus defeating the entire purpose of that feature.
That is the main aspect of what monero/bytecoin does that I complained about-- that you can simply pay fees out of band and bypass it as the subsidy declines (even with the constant inflation, the subsidy might be inconsequential compared to the value of the transactions if the system took off).  In Bitcoin this is not hypothetical: since at least 2011, pools have accepted out-of-band payments, and it's not unusual for various businesses to have express handling deals with large pools; and this is absent any major reason to pull fees out of band.

My proposal (on bitcoin-development and previously on the forum) is effectively (and explicitly credited to) the monero/bytecoin behavior, but rather than transferring fees/subsidy it changes the cost of being successful at the work function.

I haven't had a chance yet to read and internalize the specifics of what Meni is suggesting (or rather, I've read it, but the complete, _precise_ meaning isn't clear to me yet).  The main things to watch out for with solutions of this class are (1) bypass vulnerability (where you pay fees, or the like, out of band to avoid the scheme) and (2) scale invariance (the scheme should work regardless of Bitcoin's value).  My proposal used effort adjustment (it's imprecise to call it difficulty adjustment, though I did, because it doesn't change the best-chain rule; it just changes how hard it is for a miner to meet that rule).
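For concreteness, a minimal sketch of the effort-adjustment idea; the linear scaling is an illustrative assumption, not the actual formula from the mailing-list proposal:

Code:
NOMINAL_SIZE = 1_000_000   # bytes

def required_target(base_target, block_size):
    """Blocks up to NOMINAL_SIZE use the normal target; larger blocks must
    meet a proportionally smaller (harder) target, e.g. a 2 MB block needs
    roughly twice the expected work. The best-chain rule itself is unchanged."""
    if block_size <= NOMINAL_SIZE:
        return base_target
    return base_target * NOMINAL_SIZE // block_size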

I think this kind of proposal is a massive improvement on proposals without this kind of control; and while it does not address all of the issues around larger blocks-- e.g. they do not incentive-align miners and non-mining users of the system-- it seems likely that proposals in this class would greatly improve some of them; and as such it is worth a lot more consideration.

Thanks for posting, Meni-- I'm looking forward to thinking more about what you've written.
full member
Activity: 180
Merit: 100
By George, you have done it, old boy! This solves so many problems.

Quote
We still need some way to determine the optimal block size, but we have much more leeway. The wrong choice will not cause catastrophic failure

Fantastic: we can just take our best guess without pulling our hair out worrying about what disasters will ensue if the guess is wrong. Perhaps an 8MB + 20% per year elastic cap, where those values can be adjusted with a soft fork.

What if the rollover fee pool could become the full node reward?