
Topic: Increasing the block size is a good idea; 50%/year is probably too aggressive - page 3. (Read 14303 times)

hero member
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
Much mining can be done with a single node.  Costs are discrete between nodes and mining[,] and [those costs are] asymmetric.
...
People look at hashrate to determine network health and not so much at node population and distribution, but both are essential.

(Text in brackets added by me to indicate what I understood you to be saying.) Agreed.

If costs for node maintenance overwhelm the expected rewards at

We lose nodes per hashrate, which is bad and leads to (or rather continues the practice of) miners selling their votes to node operators, but I don't see how we lose hashrate; we just centralize control of hashrate to amortize node maintenance costs (still bad).

In later years, there is a very long chain, and most coins transacted will be the most recent.
...
We don't know how it will play out; it's uncharted territory.  #3 is more about not creating a perverse incentive to unbalance this (one that wouldn't materialize until the distant future) than about encouraging compensation through an artificial constraint on supply.

So long as the grandma-cap can be maintained, it seems like all of your discussion would already be covered. The hope has always been that new techniques (IBLT, tx pruning, UTXO commitments, etc.) will keep this possible.

However there is no way to see into the distant future. Any chosen grandma-cap could be incorrect, and any cap more restrictive than that to meet #3 could also be incorrect. I don't disagree that #3 is desirable, only that it may not be implementable. Having said that, as long as a more restrictive cap has little to no chance of interfering with #2 (never prevent a miner from including a legitimate tx), I'd have no problem with it.

3) provide conditions conducive for node maintenance and mining when the transaction fees are supporting the network by avoiding perverse incentives.

TL;DR - This goal implies the "only permit an exponential increase in the max blocksize during periods of demand" rule in your initial example, correct?
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
My (ideal) goals, in particular, would be to (1) never kick out grandma, and (2) never prevent a miner from including a legitimate transaction. (edited to add: those are in priority order)

We share these design goals, and priority.  They aren't comprehensive for me though.

I'd add (at least)
3) provide conditions conducive for mining when the transaction fees are supporting the network  
4) avoid future changes on the same issue.
And of course avoid introducing new unmitigated risks (more of a criteria than a goal).

It seems to me that (1) and (2) could both be implemented with either the static (Gavin's) method or some reactive method, although I suspect the reactive method can do (1) more safely/conservatively. If a reactive method can do (2) safely enough (I suspect it could), I'd prefer it. A reactive method seems much more likely to meet (4).

If I understand you correctly, (3) takes us back to an artificial cap on block size to prevent a perceived, as Gavin put it, "Transaction Fee Death Spiral." I've already made my rant on that subject; no need to repeat it.

I'm of the opinion that reaching consensus on (3) is more important, and possibly more difficult, than any static-vs-reactive consensus. (3) is an economic question, whereas static-vs-reactive is closer to an implementation detail.

I think you are missing the point entirely on #3, probably my fault for being overly brief there and not really explaining the point in this context.

The artificial cap on block size would fail the test of #3.  So would a max block size set too high, if node maintenance and storage costs make processing transactions infeasible when supported only by TX fees.  We have never seen a coin yet that survives on transaction-fee-supported mining.  Bitcoin survives on its inflation.  What is sought there is to compensate at an appropriate level.  We don't know what that level is, but it may be something like a fraction of a percent of all coins.
Currently TX fees are 1/300th of miner compensation.  After the next halving, we may be around 1/100 if TX volume continues to grow.  Fees will still be well within marginal costs and so still not significant.
This is fundamentally a centralisation risk, and a security risk, to be addressed by not creating perverse incentives.
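As a quick sanity check on those ratios, a minimal Python sketch follows.  The 1/300 figure and the 25 BTC to 12.5 BTC halving come from the post above; the 50% fee-growth factor is an assumption picked only for illustration.

Code:
# Back-of-the-envelope check of the fee-to-compensation ratio quoted above.
# The 1/300 figure and the 25 -> 12.5 BTC halving come from the post; the
# 1.5x fee-growth factor is an assumption, not a measurement.

subsidy_now = 25.0                    # BTC per block, pre-halving
fee_ratio_now = 1.0 / 300             # fees as a fraction of total miner compensation

fees_now = subsidy_now * fee_ratio_now / (1 - fee_ratio_now)   # ~0.084 BTC per block

subsidy_next = subsidy_now / 2        # 12.5 BTC after the next halving
fees_next = fees_now * 1.5            # assumed fee growth

ratio_next = fees_next / (subsidy_next + fees_next)
print(f"fees now ~{fees_now:.3f} BTC/block; ratio after halving ~1/{1 / ratio_next:.0f}")

With those assumptions the ratio lands right around the 1/100 mentioned above.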

Much mining can be done with a single node.  Costs are discrete between nodes and mining and asymmetric.  If costs for node maintenance overwhelm the expected rewards at
It is not so much an artificial limit created for profitability; it is a technical limit to preserve network resilience through node population and distribution by being sensitive to the ratio.  Much of the discussion on blocksize economics treats mining and node maintenance as the same thing.  They aren't the same thing at all.  It's more a chain length vs hashrate issue.
In later years, there is a very long chain, and most coins transacted will be the most recent.  Old coins are meant to get to move for free; this reduces the UTXO block depth.  We don't know how it will play out; it's uncharted territory.  #3 is more about not creating a perverse incentive to unbalance this (one that wouldn't materialize until the distant future) than about encouraging compensation through an artificial constraint on supply.

For better clarity I should swap #3 for

3) provide conditions conducive for node maintenance and mining when the transaction fees are supporting the network by avoiding perverse incentives.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

Actually I picked the term up from NewLiberty's post, but yes that's what I was assuming it meant. Should the term "grandma-cap" make it into the BIP?

Ah yes, the backstop reference: grandma at the ballpark watching the grandkid play, protected by the backstop.

It's that fence behind the kid that protects them from the wild pitch and the thrown bat.
hero member
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

Actually I picked the term up from NewLiberty's post, but yes that's what I was assuming it meant. Should the term "grandma-cap" make it into the BIP?
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.

I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?
hero member
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
My (ideal) goals, in particular, would be to (1) never kick out grandma, and (2) never prevent a miner from including a legitimate transaction. (edited to add: those are in priority order)

We share these design goals, and priority.  They aren't comprehensive for me though.

I'd add (at least)
3) provide conditions conducive for mining when the transaction fees are supporting the network  
4) avoid future changes on the same issue.
And of course avoid introducing new unmitigated risks (more of a criteria than a goal).

It seems to me that (1) and (2) could both be implemented with either the static (Gavin's) method or some reactive method, although I suspect the reactive method can do (1) more safely/conservatively. If a reactive method can do (2) safely enough (I suspect it could), I'd prefer it. A reactive method seems much more likely to meet (4).

If I understand you correctly, (3) takes us back to an artificial cap on block size to prevent a perceived, as Gavin put it, "Transaction Fee Death Spiral." I've already made my rant on that subject; no need to repeat it.

I'm of the opinion that reaching consensus on (3) is more important, and possibly more difficult, than any static-vs-reactive consensus. (3) is an economic question, whereas static-vs-reactive is closer to an implementation detail.
hero member
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.

I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.

Although we can argue about details, we (or at least I) have been using "grandma" as shorthand for "Bitcoin hobbyist", which Gavin had equated to "somebody with a current, reasonably fast computer and Internet connection, running an up-to-date version of Bitcoin Core and willing to dedicate half their CPU power and bandwidth to Bitcoin." Is that reasonable?
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
I dunno; here I am watching for blocks at or near the limit of 1MB and along comes ... it just seems strange to me https://blockchain.info/block-height/326639 -- apparently the miner couldn't be bothered to include even one transaction other than the coinbase transaction in the block?  Could the pool have been empty from his point of view?
This is a common optimisation virtually all crappy pools use shortly after a new block, since their software can't scale to get miners to work on the new block full of transactions quickly; they just broadcast a blank sheet for the first lot of work after a block change. Most pools blame bitcoind for being so slow to accept a block and generate a new template, and this is actually quite slow, but it's obviously more than just this.

Gee. When gmaxwell said that there was a lot of low-hanging fruit, in terms of possible improvements, perhaps it was not obvious just how low and how dangling some of that fruit actually is.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
I dunno; here I am watching for blocks at or near the limit of 1MB and along comes ... it just seems strange to me https://blockchain.info/block-height/326639 -- apparently the miner couldn't be bothered to include even one transaction other than the coinbase transaction in the block?  Could the pool have been empty from his point of view?
This is a common optimisation virtually all crappy pools use shortly after a new block, since their software can't scale to get miners to work on the new block full of transactions quickly; they just broadcast a blank sheet for the first lot of work after a block change. Most pools blame bitcoind for being so slow to accept a block and generate a new template, and this is actually quite slow, but it's obviously more than just this (since I don't ever include transaction-free blocks in my own pool software).
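In rough Python terms, the "blank sheet" behaviour described here looks something like the sketch below.  The Pool class and every method on it are invented purely for illustration; a real pool talks to bitcoind and to its stratum clients instead.

Code:
# Hypothetical sketch of the "blank sheet" pool behaviour.  Everything here is
# a stand-in: make_template, push_work and validate_and_select are invented
# names, and the sleep stands in for block validation plus mempool rebuilding.

import time

class Pool:
    def make_template(self, prev_hash, transactions):
        return {"prev": prev_hash, "txs": transactions}

    def push_work(self, template):
        print(f"work on {template['prev']} with {len(template['txs'])} txs")

    def validate_and_select(self, prev_hash):
        time.sleep(0.5)                  # stand-in for the slow validation step
        return ["tx1", "tx2", "tx3"]     # stand-in for a fee-sorted tx selection

def on_new_block(prev_hash, pool):
    # Issue coinbase-only work immediately so no hashrate sits idle while the
    # new block is still being validated ...
    pool.push_work(pool.make_template(prev_hash, transactions=[]))
    # ... then rebuild a full, fee-sorted template and switch miners onto it.
    pool.push_work(pool.make_template(prev_hash, pool.validate_and_select(prev_hash)))

on_new_block("000000...example", Pool())

The advantage is purely latency: the pool still earns the subsidy on any block found in those first seconds, at the cost of collecting no fees in it.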
legendary
Activity: 3878
Merit: 1193
If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.

I'd rather not implement a grandma-cap on bitcoin's growth. Grandma doesn't need to run a full node. She can use an SPV or other thin client.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
My (ideal) goals, in particular, would be to (1) never kick out grandma, and (2) never prevent a miner from including a legitimate transaction. (edited to add: those are in priority order)

We share these design goals, and priority.  They aren't comprehensive for me though.

I'd add (at least)
3) provide conditions conducive for mining when the transaction fees are supporting the network  
4) avoid future changes on the same issue.
And of course avoid introducing new unmitigated risks (more of a criteria than a goal).


If we wanted to be brutally pedantic on ourselves we could kick around the definitions of who grandma might be, and what makes a transaction legitimate, but I agree with the sentiment entirely.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
NewLiberty, we can continue back and forth trying to sway one another and who knows how that will turn out. How about the following compromise:

We implement Gavin's plan - go to 20MB blocks and 50% annual increases thereafter. That is the default. However, we add a voting component. We make it possible to restrain the increase by, say, 1/2 if enough blocks contain some flag in the block header. It could also be used to increase the scheduled increase by 1/2 if the model is too conservative relative to computing growth. There was a header variable mentioned before, I think, in the block size debate the first time around.

I think this is the best of both worlds. It provides a measure of predictability, and simplicity, while allowing the community to bend capacity more in line with the growth of the time if needed. What do you think?

I don't recall Gavin ever proposing what you are suggesting here.  The 1st round was 50% per year, the 2nd proposal was 20MB + 40% per year, yes?


I'm less a fan of voting than you might imagine.
It is mostly useful when there are two bad choices rather than one good one, and a choice is forced.  I maintain hope for a good solution yet, one that gives us an easy consensus.

This flag gives only miners the votes?  This doesn't seem better than letting the transactions or the miner fee be the votes?
It's better than a bad idea though, as it does provide some flexibility and sensitivity to future realities and relies on proof of work for voting.
It fails the test of being a self-regulating approach, and remains based on arbitrary guesses.
So I don't think it is the "best" of either world, but also not the worst.  More like an engineering trade-off.

Presumably this is counting years by blocks, yes?
This would give a 100MB max block size in 2018 and a gigabyte 6 years later, but blocks are coming faster than the years, so it likely wouldn't take that long.

At such increases, Bitcoin could support (current) Visa processing peak rates within a decade, and a lot sooner if the votes indicate faster and the block solving doesn't slow too much.  (perhaps as soon as 6 years, by 2020)
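For reference, the compounding works out roughly as in the Python sketch below.  The 250-byte average transaction and 600-second block interval are assumptions used only to convert a block size into an approximate transaction rate; they are not part of any proposal.

Code:
# Sketch of the schedule discussed above: 20 MB to start, compounding 50%/year.
# Average tx size (250 bytes) and the 10-minute interval are assumptions used
# only to turn a block size into a rough transactions-per-second figure.

MB = 1_000_000
avg_tx_bytes = 250          # assumed average transaction size
block_interval = 600        # seconds

size_mb = 20.0
for year in range(13):
    tps = size_mb * MB / avg_tx_bytes / block_interval
    print(f"year {year:2d}: {size_mb:8.1f} MB max block  ~{tps:8.0f} tx/s")
    size_mb *= 1.5

Under those assumptions the cap crosses 100 MB after about four years of growth and passes a gigabyte roughly six years after that, consistent with the spacing mentioned above.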

The idea has a lot of negatives.  Possibly it's fixable.
Thank you for bringing forward the suggestion.
legendary
Activity: 1050
Merit: 1002
NewLiberty, we can continue back and forth trying to sway one another and who knows how that will turn out. How about the following compromise:

We implement Gavin's plan - go to 20MB blocks and 50% annual increases thereafter. That is the default. However, we add a voting component. We make it possible to restrain the increase by, say, 1/2 if enough blocks contain some flag in the block header. It could also be used to increase the scheduled increase by 1/2 if the model is too conservative relative to computing growth. There was a header variable mentioned before, I think, in the block size debate the first time around.

I think this is the best of both worlds. It provides a measure of predictability, and simplicity, while allowing the community to bend capacity more in line with the growth of the time if needed. What do you think?
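If the header flag were ever fleshed out, one way it might work is sketched below in Python.  Nothing here is from an actual proposal: the window length, the 75% threshold, and the exact adjustment applied to the growth rate are all assumptions, chosen only to make the "restrain or boost the increase by half" idea concrete.

Code:
# Hypothetical sketch of a header-flag vote on the growth rate.  All constants
# and names here are invented for illustration; none come from a real proposal.

BASE_SIZE = 20_000_000        # 20 MB starting point
DEFAULT_GROWTH = 1.50         # +50% per year by default
WINDOW = 52_560               # roughly one year of blocks at 10 minutes each
THRESHOLD = 0.75              # fraction of flagged blocks needed to change the rate

def next_period_max_size(current_max, votes_slower, votes_faster):
    """Max block size for the next period, given vote counts over the last window."""
    growth = DEFAULT_GROWTH
    if votes_slower / WINDOW >= THRESHOLD:
        growth = 1 + (DEFAULT_GROWTH - 1) / 2      # restrain the increase by half: +25%
    elif votes_faster / WINDOW >= THRESHOLD:
        growth = 1 + (DEFAULT_GROWTH - 1) * 1.5    # boost the increase by half: +75%
    return int(current_max * growth)

# Example: 80% of blocks in the window carried the "slower" flag.
print(next_period_max_size(BASE_SIZE, votes_slower=int(0.8 * WINDOW), votes_faster=0))

With the 80% "slower" vote in the example, the next cap comes out at 25 MB instead of the default 30 MB.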
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
hero member
Activity: 709
Merit: 503
I'm trying to build the JMT queuing model of Bitcoin.  What is the measured distribution (http://en.wikipedia.org/wiki/Probability_distribution) of times between blocks?  https://blockchain.info/charts/avg-confirmation-time shows points averaged over 24 hours, which isn't helping me see it.  I know the target is 10 minutes but it's clear that is not being achieved consistently.
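For what it's worth: at constant hashrate block discovery is memoryless, so inter-block times should be close to exponentially distributed with a mean near 600 seconds (somewhat shorter in practice while hashrate is rising within a difficulty period).  Below is a minimal sanity check of that model by simulation; real block timestamps could be substituted for the simulated gaps.

Code:
# Simulate inter-block gaps under the usual Poisson-process model and print a
# few summary statistics.  The 600-second mean is the protocol target; real
# measured gaps run slightly shorter whenever hashrate is rising.

import random

MEAN_INTERVAL = 600.0   # seconds

gaps = [random.expovariate(1.0 / MEAN_INTERVAL) for _ in range(100_000)]
gaps.sort()

print(f"mean     : {sum(gaps) / len(gaps):6.1f} s")
print(f"median   : {gaps[len(gaps) // 2]:6.1f} s  (exponential: mean * ln 2 ~ 416 s)")
print(f"> 30 min : {sum(g > 1800 for g in gaps) / len(gaps):.1%}  (exponential: e^-3 ~ 5%)")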
hero member
Activity: 709
Merit: 503
Hmm, it came only 19 seconds (if the timestamps can be trusted) after the previous one; lucky guy.
hero member
Activity: 709
Merit: 503
I dunno; here I am watching for blocks at or near the limit of 1MB and along comes ... it just seems strange to me https://blockchain.info/block-height/326639 -- apparently the miner couldn't be bothered to include even one transaction other than the coinbase transaction in the block?  Could the pool have been empty from his point of view?

Miner algorithm: listen for a block to be broadcast and immediately begin searching for the next block with only their coinbase transaction in it, ignoring all other transactions.  Is there some sort of advantage to ignoring the other transactions?
full member
Activity: 182
Merit: 123
"PLEASE SCULPT YOUR SHIT BEFORE THROWING. Thank U"
Finally, good news; even Satoshi said the 1MB limit is temporary.

But easier would be to square the maxblocksize at every block reward halving, to keep Bitcoin simple...

50 BTC = 1 MB
25 BTC = 2 MB
12.5 BTC = 4 MB
6.25 BTC = 16 MB
3.125 BTC = 256 MB


and so on.

(edit by me in bold)

you're welcome.
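Restating the arithmetic of that list only: the limit doubles at the first halving and is squared at each halving after that.  A trivial Python sketch that just reproduces the quoted numbers, not an endorsement of them.

Code:
# Reproduce the schedule from the quoted post: double the limit at the first
# halving, square it at each subsequent halving.

subsidy, limit_mb = 50.0, 1
print(f"{subsidy:7.3f} BTC -> {limit_mb} MB")
for halving in range(1, 5):
    subsidy /= 2
    limit_mb = limit_mb * 2 if halving == 1 else limit_mb ** 2
    print(f"{subsidy:7.3f} BTC -> {limit_mb} MB")

Squaring grows far faster than the percentage schedules discussed elsewhere in the thread; the step after 256 MB would already be around 65 GB.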
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
NewLiberty, you seem to be ignoring me.

Your sticking point, in my mind, is less about solving this issue than it is that you feel people are not taking adequate time to find an input-based solution to "fix it right".

As I said before, my goal isn't to be right. It's to find a solution which can pass the community so we're not stuck. Ideally it also meets Bitcoin's promises of decentralization and global service. I made a bullet point list outlining my thinking on the two proposals, but please note I didn't refer to any specific plan from you. I said any input-based solution, which includes any that take accurate measurements too; a lack of consideration in uncovering such isn't relevant. I fundamentally think that approach wouldn't work as well, for reasons I outlined.

Would you make a bullet point list of your likes and dislikes on the two proposed paths so we can at least see in a more granular way where our beliefs differ?
Oh?  And I thought you were ignoring me.

I understand your goal, and your ossification fears.  I don't mean to be ignoring you; I only thought this was already fully addressed.

If your ossification fears are justified (and they may be), then (I would argue) it is more important to do it right than to do it fast, as the ossification would be progressive, making changes more difficult in the years to come.
I understand your position to be that a quick fix to patch this element is needed, that we are at a crisis, and it may be now or never.
I disagree.  If it were a crisis (even in an ossified state), consensus would be easy, and even doing something foolish would be justified and accepted broadly.

Unless you are Jeremy Allaire, I probably want this particular issue fixed even more than you do, but I would rather see it fixed for good and all than continuously twiddled with over the decades to come.

To your bullet point assignment...  maybe.
One of my publishers has been pestering me for a paper, so I will likely 'write something'.  I'll try not to point to it and say "but didn't you read this" as if it were the definitive explanation of everything, because it surely will not be that.
legendary
Activity: 1050
Merit: 1002
NewLiberty, you seem to be ignoring me.

Your sticking point, in my mind, is less about solving this issue than it is that you feel people are not taking adequate time to find an input-based solution to "fix it right".

As I said before, my goal isn't to be right. It's to find a solution which can pass the community so we're not stuck. Ideally it also meets Bitcoin's promises of decentralization and global service. I made a bullet point list outlining my thinking on the two proposals, but please note I didn't refer to any specific plan from you. I said any input-based solution, which includes any that take accurate measurements too; a lack of consideration in uncovering such isn't relevant. I fundamentally think that approach wouldn't work as well, for reasons I outlined.

Would you make a bullet point list of your likes and dislikes on the two proposed paths so we can at least see in a more granular way where our beliefs differ?