
Topic: Elastic block cap with rollover penalties - page 5. (Read 24077 times)

legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
Can you explain a bit about the mechanism wherein the miner pays into the rollover pool, and why that is different from the 'original proposal'?
The difference is quantitative. In this version the rollover affects only blocks that exceed a size threshold.

OK, so it's similar to Monero.
But there is a difference between this proposal and what Monero does that appears to need addressing.

The rollover pool creates an incentive for the miner to not use the fee pooling, and instead contract directly with the TX creators.
If implemented as written, this could become a problem.  Large TX creators and large miners would have an incentive to form a cartel because of the way this rollover pool works.

Monero avoids this problem, but most of the rest of this proposal has been implemented and running for quite a long time.  It's not new or novel, except in the ways where it is not as good.
An examination of the prior art is warranted.
sr. member
Activity: 433
Merit: 267
Isn't it precisely what is implemented in Monero? (except you don't have a rollover pool; the penalty is simply deducted from the block reward for good).
No idea what happens in Monero, but if so, more power to them.

Apparently, neither does Gavin.
He said he didn't want to talk to you until there was working code that does it?
Such code has been working for years, but people forget where the experimentation is occurring, the alts.

Can you explain a bit about the mechanism wherein the miner pays into the rollover pool, and why that is different from the 'original proposal'?
The difference is quantitative. In this version the rollover affects only blocks that exceed a size threshold.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
Isn't it precisely what is implemented in Monero? (except you don't have a rollover pool; the penalty is simply deducted from the block reward for good).
No idea what happens in Monero, but if so, more power to them.

Apparently, neither does Gavin.
He said he didn't want to talk to you until there was working code that does it?
Such code has been working for years, but people forget where the experimentation is occurring, the alts.
A good place to start:
https://github.com/monero-project/bitmonero/blob/54fbf2afb3bc029823ed6c200e08bd21fe42ac10/tests/unit_tests/block_reward.cp
and
https://github.com/monero-project/bitmonero/blob/c41d14b2aa3fc883d45299add1cbb8ebbe6c9ed8/src/cryptonote_core/blockchain.cpp#L2230-L2244

Can you explain a bit about the mechanism wherein the miner pays into the rollover pool, and why that is different from the 'original proposal'?  It is not obvious why this distinction makes a difference.  It seems to still incentivize out-of-chain payments to miners for transaction inclusion: regardless of whether the penalty is paid by the miner or deducted from the miner's reward, both depend on the fees in the block (which aren't there in out-of-block payment schemes).
legendary
Activity: 2128
Merit: 1005
ASIC Wannabe
However, the median transaction fee drops drastically as a result, and in a continued attack you could soon see ~10% of transactions paying more than the median fee.

And if we use:
Code:
% of transactions with a fee higher than (median transaction fee of transactions with a fee)

This excludes zero-fee transactions from the calculation of the median transaction fee.
In that case a large quantity of zero-fee transactions (and spam) does not influence the median transaction fee.
Perhaps a threshold? The problem is that spam is usually sent as a single transaction with a 0.001 BTC fee and hundreds of recipients that each get 0.0000001 BTC. There should be protection in place that ensures sending thousands or millions of dust payments becomes prohibitively expensive, without limiting its use for colored coins or increasing fees for the regular user.

Right now, the only sort of filter for this is miners deciding whether or not to include a transaction, and they will include a spam transaction if there's a justifiable fee to do so.


IMO, the simplest way to specify the max size is:
Code:
Max Block Size = [2.50 * (average size of last 6000 blocks)] + [0.50 * (average size of last 600 blocks)]
That provides a max size that's 3x the size of the average block over the past 1,000 hrs (~41 days). If demand suddenly rises, the max size can double within a week.

Say blocks are currently ~0.5MB on average. The new size cap under this formula is 1.5MB (50% larger than the current 1MB cap).

Assume that by the end of 2015 there are 2MB blocks on average, with business hours seeing around 5MB blocks.
Code:
Max Block Size = [2.50 * (2)] + [0.50 * (~3MB during a busy week)] = 6.5MB
Assume that by the end of 2016 there are 12MB blocks on average, with business hours seeing around 30MB blocks.
Code:
Max Block Size = [2.50 * (12)] + [0.50 * (~20MB)] = 40MB
Assume that in the following week Bitcoin gets popular, and suddenly there are 30-40MB blocks almost 24/7. Within 4 days, the block size limit will increase to:
Code:
Max Block Size = [2.50 * (~14)] + [0.50 * (~35MB)] = 52.5MB
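
For clarity, here is a minimal Python sketch of the cap formula above, taking the two averages (in MB) directly; the scenario numbers are the ones used in this post, and the function name is only illustrative.
Code:
def max_block_size(avg_6000_mb, avg_600_mb):
    # Cap = 2.5x the 6000-block average plus 0.5x the 600-block average (sizes in MB).
    return 2.50 * avg_6000_mb + 0.50 * avg_600_mb

# Worked examples from this post:
print(max_block_size(0.5, 0.5))   # 1.5  -- ~0.5MB average blocks
print(max_block_size(2, 3))       # 6.5  -- end-of-2015 scenario
print(max_block_size(12, 20))     # 40.0 -- end-of-2016 scenario
print(max_block_size(14, 35))     # 52.5 -- sudden-popularity scenario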
pvz
newbie
Activity: 53
Merit: 0
However, the median transaction fee drops drastically as a result, and in a continued attack you could soon see ~10% of transactions paying more than the median fee.

And if we use:
Code:
% of transactions with a fee higher than (median transaction fee of transactions with a fee)

This excludes zero-fee transactions from the calculation of the median transaction fee.
In that case a large quantity of zero-fee transactions (and spam) does not influence the median transaction fee.
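
A minimal Python sketch of the metric described above; the fee values are made up and only illustrate that zero-fee transactions cannot drag the median down, because they are excluded from it.
Code:
from statistics import median

def fee_metric(fees):
    # Fraction of all transactions paying more than the median fee of fee-paying
    # transactions. Zero-fee transactions are excluded from the median, so
    # spamming them cannot pull the median down.
    paying = [f for f in fees if f > 0]
    if not paying:
        return 0.0
    m = median(paying)
    return sum(1 for f in fees if f > m) / len(fees)

# Hypothetical fees (satoshis): adding zero-fee spam leaves the median at 20.
fees = [0, 0, 0, 0, 10, 10, 20, 30, 40]
print(fee_metric(fees))   # 2 of 9 transactions pay more than 20 -> ~0.22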
sr. member
Activity: 433
Merit: 267
I think it's interesting to switch the conversation over to a soft-failure rather than trying to find the appropriate answer to the block size for now.

That said, a problem with any kind of rollover fee is that it assumes that moving fees to future blocks is also moving fees to different nodes.

Put differently; centralizing nodes is a way of avoiding the penalties you're trying to introduce with this protocol.

Put differently again; Paying fees over consecutive blocks gives a competitive advantage to larger mining entities when making larger blocks.

Put triply differently; A node that can reliably get another block within X blocks is less penalized than a node that cannot, where "X" is the number of blocks over which the rollover fee is paid out.

So if the goal is to avoid centralization, then the protocol does the opposite of the intention. If the goal is to make Bitcoin fail-safe, I'm not convinced that Bitcoin isn't already. When blocks fill, we will see higher transaction fees, potentially lengthier times before a transaction is included in a block, and as a result more 3rd party transactions.

Rereading Mike Hearn's article [1]: changing Bitcoin to include the highest-fee transactions in the mempool should result in the behavior I described. An end user might see delays before their transaction is included in a block, but I wouldn't call that a "crash landing", considering that the sorts of transactions that would be done at these rates are not as concerned about confirmation speed.

TLDR: How does a fee over "X" blocks not incentivize a centralization of nodes?

[1] https://medium.com/@octskyward/crash-landing-f5cc19908e32
member
Activity: 554
Merit: 11
CurioInvest [IEO Live]
I really like Meni's elastic block cap proposal.

Obviously increasing the block size requires a hard fork, but the fee pool part could be accomplished purely with a soft fork.  

Absolutely. It's worth clarifying that the actual mechanics of how the pool fee works (particularly how it is calculated) do not have to be on the critical path to resolving the 1MB problem.
All that is necessary today for Meni's proposal is to agree that a pool fee, of some form, can usefully exist.

Then it is possible to look at a simple implementation plan which addresses the urgency of dealing with the 1MB limit, does not raise the limit too much, and allows time for an elastic cap with rollover penalties to be fully worked out, modeled, developed, and tested. As mentioned before, the pool fee could incorporate a function of block delta UTXO and sigops.

Phase I:
Hard fork to increase the max block size to 2T, e.g. via block version 4.
2T might be in the region of 6 or 8MB, which then scales either at a fixed percentage each year, say 20%, or as a fixed multiple (e.g. 4x) of the average of the most recent 144 or 2016 blocks. However, the difference this time is that no miner will be able to mine a block larger than T without paying a pool fee, and initially this won't be possible, because varying the pool fee from zero requires a supermajority of version 5 blocks.

Phase II:
Soft fork to implement the full elastic cap, effective by supermajority, e.g. via block version 5, at which point blocks between T and 2T can be mined.

Advantages:
Decoupling the handling of the hard limit from the creation of a graceful decay in network performance as the limit is approached.
No urgency on how best to set the pool fee; lots of time for debate and modelling.
A yearly scaling percentage can be more approximate because it should be easier to schedule hard-fork revisions to this as and when changes in global computing technology dictate.

Since the block size issue is controversial and may take some time to settle, we are better off implementing this elastic cap right now with a soft fork (your Phase II) and skipping the hard-fork part (Phase I).

We could do that by choosing T = 0.5MB (2T = 1MB = the current max block size).
legendary
Activity: 2128
Merit: 1005
ASIC Wannabe
Code:
max block size = 10 * (median size of last 144 blocks) * (% of transactions with a fee higher than median transaction fee for last 144 blocks * 2)

Code:
max block size = 20 * (median size of last 1200 blocks) * (% of transactions with a fee higher than median transaction fee for last 600 blocks)

I think the above simplifies your equation slightly, and uses better timeframes (~200 hrs and ~100 hrs) to compute the averages.

My biggest concern is what would happen in a spam attack, which could easily last several days to as much as a week, if a quick-changing algorithm can be gamed to drastically increase the block size during that time.

Let's look at your original equation: assume that blocks are 0.5MB for the last 144 blocks (so the block limit will probably be set to something like 5-10MB), but then a spam attack starts. You see every block filled with 5MB of spam, with 95% of transactions paying below the median fee. After 144 blocks of this (1 day), you get something like this:
Code:
max block size = 10 * (5MB) * (0.05 * 2) = 50*0.1 = 5MB
which seems to resist growing any larger.

However, the median transaction fee drops drastically as a result, and in a continued attack you could soon see ~10% of transactions paying more than the median fee.

Code:
max block size = 10 * (5MB) * (0.1 * 2) = 10MB
If this attack persisted for 3-4 days you could see the block size increase to >20MB very fast, and >60MB within a week (it would likely require very well-funded spammers though).

Longer timeframes are absolutely necessary; the bare minimum should be 600 blocks for the calculations.
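
To make the arithmetic above easy to check, here is a small Python sketch of the quoted formula with the post's own assumed numbers (5MB spam blocks, then 5% and 10% of transactions above the median fee).
Code:
def max_block_size(median_size_mb, frac_above_median_fee):
    # pvz's formula as quoted above: 10 * median block size * (fraction of txs
    # paying above the median fee, times 2).
    return 10 * median_size_mb * (frac_above_median_fee * 2)

# Spam-attack scenario from this post:
print(max_block_size(5, 0.05))   # 5.0  -- day 1 of the attack
print(max_block_size(5, 0.10))   # 10.0 -- once the median fee has been dragged down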
pvz
newbie
Activity: 53
Merit: 0
Why not consider % of transactions with a fee?

This way users have direct influence on block size by paying a transaction fee (above the median).
By paying a transaction fee, users get a vote on the 'free rider effect'/spam and on whether current levels are acceptable.

1. Small block size -> only transactions with a fee are processed quickly (the others get parked in the mempool)
2. Big block size -> a lot of transactions with a fee and a lot of transactions without a fee (spam?) are accepted
3. Medium block size -> a balance between transactions with and without a fee

I also added a bigger scale (due to near-future (holiday) transaction peaks).

I used Gavin's proposal as a starting point:
Code:
max block size = 10 * median size of last 144 blocks * % of transactions with a fee higher than median transaction fee for last 144 blocks (e.g. 0.36) * 2
donator
Activity: 2772
Merit: 1019
I like this proposal. It takes away much of the destructiveness of hitting the block size limit. It makes the 'impact' softer and actually allows a market to function: with higher aggregate fees (higher tx demand), the economically usable block size can actually increase, which is not the case with the current 'hard' implementation, where no amount of higher aggregate fees will increase block space. That doesn't facilitate a market.

It would be interesting to hear from some core devs. It sounds to me like the proposal could be acceptable to Pieter Wuille and maybe even Peter Todd, for example, since the solution still offers an economic incentive to develop scaling solutions.

Maybe this really could be (worked into) something with broad consensus.

I understand we will still argue about T, but it will be much easier because the consequences of choosing it wrongly aren't as dire.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
Nice proposal. It is exciting to see some carefully thought-out, incentive-based design.


I think the current design already incentivizes smaller blocks: smaller blocks get broadcast much faster and are less likely to be orphaned. If you consider that there are 25 coins to compete for, you want to broadcast your block as fast as possible once you find it.

With a bigger block cap, it becomes more favorable to mine smaller blocks. 10MB blocks have a very high risk of being orphaned by 1MB blocks, since the time needed to broadcast them will be much longer, maybe several minutes longer.

However, if everyone is aiming for the smallest block possible, then most of the transactions will not be included. So far we have not run into this problem because broadcasting is still relatively fast at the current block size. But if one day it happens, a natural result is that bigger blocks will ask for more fees due to the higher risk of being orphaned; how it is formulated mathematically is difficult to say without some real-world cases. I guess it will be very similar to what the OP described: above a certain threshold, the fee will increase exponentially because the risk of being orphaned also increases exponentially.

While searching for block propagation data, I found an article from Gavin. His Invertible Bloom Lookup Tables proposal would incentivize all miners to include a similar set of transactions to speed up block propagation (miners do not need to broadcast transactions that other peers already have in their memory pools, only the block header and some extra transactions), but that seems to be a major change further down the road.

https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2
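
A rough back-of-envelope sketch of the orphan-risk argument above. The model (propagation delay growing linearly with block size, competing blocks arriving as a Poisson process with a 600-second mean) and the 10 seconds/MB figure are illustrative assumptions, not measured values.
Code:
import math

def orphan_probability(block_size_mb, seconds_per_mb=10.0, block_interval=600.0):
    # Probability that someone else finds a block while ours is still propagating,
    # assuming delay grows linearly with size and block discovery is Poisson.
    delay = block_size_mb * seconds_per_mb
    return 1.0 - math.exp(-delay / block_interval)

for size in (1, 10):
    print(size, round(orphan_probability(size), 3))
# 1MB  -> ~0.017
# 10MB -> ~0.154, roughly why much bigger blocks carry a noticeably higher orphan risk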
member
Activity: 129
Merit: 14
Meni

Thanks for this great proposal.  Could you please explain how the rollover fee pool helps, and why the penalty can't just be burnt? That would be simpler.

There is a major shortcoming I can see in the rollover fee pool suggestion: miners are incentivized to accept fees out of band so they can obtain all the highest fees instantly, thus defeating the entire purpose of that feature.
This is only (potentially) a problem in my 2012 rollover fee proposal. Here, tx fees don't go into the pool - fees are paid to the miner instantly. It is only the miner's block size penalty that goes into the pool, and he must pay it if he wants to include all these transactions.

Just so I understand properly: your penalty comes from a formula that excludes the transaction fees, so paying fees out of band won't reduce the penalty.  Is that how the problem is solved?

I think this idea seems really good, but I need to think a bit more.
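
To make the point concrete: under this proposal the penalty is a function of block size alone, so paying fees out of band changes nothing. Below is a minimal Python sketch; the quadratic shape, the constants, and the names are placeholders for illustration, not the penalty function Meni actually specified.
Code:
def size_penalty(block_size, T=1_000_000, hard_cap=2_000_000, max_penalty=25.0):
    # Penalty (in BTC) owed to the rollover pool; it depends only on block size,
    # never on the fees inside (or outside) the block. The shape and numbers
    # here are illustrative placeholders only.
    if block_size <= T:
        return 0.0
    if block_size > hard_cap:
        raise ValueError("invalid: block exceeds the hard cap")
    excess = (block_size - T) / (hard_cap - T)
    return max_penalty * excess ** 2

print(size_penalty(900_000))    # 0.0  -- under the soft cap, no penalty
print(size_penalty(1_500_000))  # 6.25 -- halfway into the elastic range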
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
I really like Meni's elastic block cap proposal.

Obviously increasing the block size requires a hard fork, but the fee pool part could be accomplished purely with a soft fork.  

Absolutely. It's worth clarifying that the actual mechanics of how the pool fee works (particularly how it is calculated) do not have to be on the critical path to resolving the 1MB problem.
All that is necessary today for Meni's proposal is to agree that a pool fee, of some form, can usefully exist.

Then it is possible to look at a simple implementation plan which addresses the urgency of dealing with the 1MB limit, does not raise the limit too much, and allows time for an elastic cap with rollover penalties to be fully worked out, modeled, developed, and tested. As mentioned before, the pool fee could incorporate a function of block delta UTXO and sigops.

Phase I:
Hard fork to increase the max block size to 2T, e.g. via block version 4.
2T might be in the region of 6 or 8MB, which then scales either at a fixed percentage each year, say 20%, or as a fixed multiple (e.g. 4x) of the average of the most recent 144 or 2016 blocks. However, the difference this time is that no miner will be able to mine a block larger than T without paying a pool fee, and initially this won't be possible, because varying the pool fee from zero requires a supermajority of version 5 blocks.

Phase II:
Soft fork to implement the full elastic cap, effective by supermajority, e.g. via block version 5, at which point blocks between T and 2T can be mined.

Advantages:
Decoupling the handling of the hard limit from the creation of a graceful decay in network performance as the limit is approached.
No urgency on how best to set the pool fee; lots of time for debate and modelling.
A yearly scaling percentage can be more approximate because it should be easier to schedule hard-fork revisions to this as and when changes in global computing technology dictate.
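
For illustration, a Python sketch of the two Phase I sizing options floated above (a fixed yearly percentage, or a fixed multiple of the recent average). The 8MB starting value, the 20% rate and the 4x multiple are the figures mentioned in the post; everything else is a placeholder.
Code:
def cap_fixed_growth(start_mb=8.0, yearly_rate=0.20, years=0):
    # Option 1: 2T starts at e.g. 8MB and scales by a fixed percentage each year.
    return start_mb * (1 + yearly_rate) ** years

def cap_multiple_of_average(recent_block_sizes_mb, multiple=4.0):
    # Option 2: 2T as a fixed multiple of the average of the last 144 (or 2016) blocks.
    return multiple * sum(recent_block_sizes_mb) / len(recent_block_sizes_mb)

print(cap_fixed_growth(years=0))              # 8.0
print(cap_fixed_growth(years=3))              # ~13.8 after three 20% steps
print(cap_multiple_of_average([0.5] * 144))   # 2.0 -- 4x a 0.5MB recent average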
legendary
Activity: 3010
Merit: 8114
Well, I think there is some policy about writing comments that don't add new content of their own and just support previous content, or something. I was happy to see your feedback. As I recall, you posted your comment quite early, before there was a real need for bumping, but I appreciate the intent.

Hey no problem. And I always feel slightly smarter after having read one of your theses, so thank you for that. As no doubt others have told you in the past, you have quite a gift for thinking outside the box in a non-profiteering sort of way, a quality which the cryptocurrency community is in dire need of.

Cheers from Hawaii,

Nutildah the Hungry
legendary
Activity: 3766
Merit: 1364
Armory Developer
This is similar to the idea of eschewing a block limit and simply hardcoding a required fee per tx size.

I assume you are referring to the debate on "hard block size limit + organic fees" versus "no block size limit + hard fees", the third option (no block limit and organic fees) being a non-solution. Obviously an "organic block size limit + organic fees" is the ideal solution, but I think the issue is non-trivial, and I have no proposals to achieve it. I don't even know if it's philosophically possible.

In this light, a "pseudo-elastic block size limit + organic fees" is the better and most accessible solution at the moment, and I will argue that my proposal cannot be reduced to "no block size limit + hard fees", and that it actually falls into the same category as yours. Indeed, like your proposal, mine relies on an exponential function to establish the fee-expended-to-block-size ratio. Essentially the T-2T range remains, where any block below T needs no fees to be valid, and the total fee grows exponentially from T to 2T.

In this regard, my proposal uses the same soft-hard cap range mechanics as yours. As I said, ideally I'd prefer a fully scalable solution (without any artificial hard cap), but for now this kind of elastic soft-hard cap mechanic is better than what we have and simple enough to review and implement. The fact that my solution has caps implies there will be competition for fees as long as the seeding constants of the capping function are tuned correctly. On this front it behaves neither worse nor better than your idea.

Since I believe fees should be pegged to difficulty, fees wouldn't be hard-coded either. Rather, the baseline would move inversely to network hashrate, while leaving room for competition over scarce block space.

I expect a fee pool alone will increase block verification cost.
It would not, in any meaningful way.

I try not to be so quick to draw such conclusions. I'm not savvy with the Core codebase, but my experience with blockchain analysis has taught me that the less complicated a design is, the more room for optimization it has. You can't argue that adding a verification mechanic will simplify code or reduce verification cost, although the magnitude of the impact is obviously relevant. I'm not in a position to evaluate that, but I would rather remain cautious.

The point still remains: you don't need a fee pool to establish a relationship between fee, block size, and possibly difficulty.

Don't get me wrong, I believe the idea has merits. What I don't believe is that these merits apply directly to the issue at hand. It can fix other issues, but other issues aren't threatening to split the network. I also don't think this idea is mature enough.

As Gavin says, without an implementation and some tests it is hard to see how the system will perform. If we are going to “theorycraft”, I will attempt to keep it as lean as possible.

It also requires modifying, or at least amending consensus rules, something the majority of the Core team has been trying to keep to a minimum. I believe there is wisdom in that position.

Obviously increasing the block size requires a hard fork, but the fee pool part could be accomplished purely with a soft fork.  

The coinbase transaction must pay BTC to OP_TRUE as its first output.  Even if there is no size penalty, the output needs to exist but pay zero.

The second transaction must be the fee pool transaction.

The fee pool transaction must have two inputs: the coinbase OP_TRUE output from 100 blocks previously, and the OP_TRUE output from the fee pool transaction in the previous block.

The transaction must have a single output that is 99% (or some other value) of the sum of the inputs paid to OP_TRUE.


By ignoring fees paid in the block, it protects against miners using alternative channels for fees.

It seems your implementation pays the fee pool in full to the next block. That partly defeats the pool's purpose. The implementation becomes more complicated when you have to gradually distribute pool rewards to "good" miners while you keep raking in penalties from larger blocks.

Otherwise, a large block paying high penalties could be followed right away by another large block, which would offset its penalties with the fee pool reward. The idea here is to penalize miners going over the soft cap and reward those staying under it. If you let miners going over the cap get a cut of the rewards, they can offset their penalties and never care about the whole system.

As a result you need a rolling fee pool, not just a pool with a one-block lifetime, and that complicates the implementation, because you need to keep track of the pool size across a range of blocks.
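
A toy Python sketch of the rolling pool being discussed: size penalties flow in each block, and only a fixed fraction is paid out per block (the construction quoted above carries 99% of the pool forward, which amounts to roughly a 1% payout per block). All names and numbers are illustrative, not a spec; the real accounting would live in consensus code.
Code:
def step_pool(pool, penalty_paid_in, payout_fraction=0.01):
    # Advance the pool by one block: add this block's size penalty, pay out a
    # fixed fraction to the block's miner, carry the remainder forward.
    pool += penalty_paid_in
    payout = pool * payout_fraction
    return pool - payout, payout

pool = 0.0
for penalty in [5.0, 0.0, 0.0, 2.0]:   # hypothetical per-block penalties (BTC)
    pool, payout = step_pool(pool, penalty)
    print(round(pool, 4), round(payout, 4))
# The pool decays slowly, so the miner of a large block cannot immediately
# reclaim its own penalty just by also mining the next block.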
legendary
Activity: 1036
Merit: 1000
It's difficult to know exactly how the quantitative factors will play out. The inelasticity is not total, but I believe it is significant, and it contributes to the phenomenon. Even if things will not be as catastrophic as Mike describes, I believe they can get rather ugly, so any change that alleviates this is welcome.

Rusty Russell did some modeling on that today: What Transactions Get Crowded Out If Blocks Fill?
donator
Activity: 2058
Merit: 1054
My email correspondence with Gavin so far:
lol, he just called you an ideas man; your efforts are futile, Bitcoin is the El Dorado and has no flaws, no dev will ever adopt something made by other coins, it would imply Satoshi was slightly mistaken in his first attempt at a blockchain Cheesy
That's not really what he said. I am mostly an idea man though, and happy to be one.


Holy Cow I was unaware of this thread, thanks I have some reading to do at work now.

And the initial reason I commented on this post is because nobody else had and it would have been a shame to see it fall off the first page with no responses.
Well, I think there is some policy about writing comments that don't add new content of their own and just support previous content, or something. I was happy to see your feedback. As I recall, you posted your comment quite early, before there was a real need for bumping, but I appreciate the intent.


This is quite the idea... It definitely has legs for the long run.

What bothers me is really how to implement it... I'm also not knowledgeable about code, but the pool seems a pretty complex thing to set up. The funds would have to reside in an address. Who would hold such a private key?
No, the funds don't reside in an address. That's like saying that the 6.8 million bitcoins that haven't been mined yet reside in an address.

The funds just exist as a feature of the network, and the protocol defines how they are paid out to future miners (in the same way that the protocol dictates that each miner currently gets 25 BTC).

I don't believe the implementation is that complicated, but people more familiar with the codebase are in a better position to comment on that.
legendary
Activity: 3010
Merit: 8114

Holy Cow I was unaware of this thread, thanks I have some reading to do at work now.

And the initial reason I commented on this post is because nobody else had and it would have been a shame to see it fall off the first page with no responses.
sr. member
Activity: 350
Merit: 250
What bothers me is really how to implement it... I'm also not knowledgeable about code, but the pool seems a pretty complex thing to set up. The funds would have to reside in an address. Who would hold such a private key?

Gavin, of course; he owns Bitcoin now, like his ideological fellow at Darkcoin/Dash.
legendary
Activity: 1512
Merit: 1012
This is quite the idea... It definitely has legs for the long run.

What bothers me is really how to implement it... I'm also not knowledgeable about code, but the pool seems a pretty complex thing to set up. The funds would have to reside in an address. Who would hold such a private key?