
Topic: Why not increase the block size limit by 1MB whenever it needs to be increased? (Read 1277 times)

legendary
Activity: 1260
Merit: 1008

Needs to be increased is tricky. The natural and necessary state for blocks is nearly full; defining need is hard.  "Near-universally agreed to be good to increase" would be better, but people are sensibly worried that it would be held back by unreasonable people and so they are unwilling to take that risk.
The essence of Bitcoin's current problem can be found in this post.  The problem isn't the words that I emphasized. It's that a core developer made such a statement.

It is absurd that any real-time system "needs" to run near saturation.  This is so elementary it is hard to believe that a competent system designer would make such a statement.  It is no wonder that some people are questioning motivation rather than technical judgment.

Posts like this make me grind my teeth so hard I think I should start holding a leather strap in my mouth before getting on the internet. Maybe a pillow to scream into would be therapeutic.

Everybody seems to be talking past each other right now, and not getting a good grip on what the actual problems are concerning block size increases.

Here is another analogy, which will probably be useless:
(snipped good box analogy)

I guess the primary problem with this debate is the unknown of whether fees will be capable of supporting the mining required for network security. And it's not necessarily a yes/no, it's a matter of degree. If the box is huge, and it's "all aboard" for a penny, does it add up? And if the box is tiny, and it's "all aboard" for a kajillion dollars, does anyone use the box? There might be economic modeling that could be done, but we won't really know what will happen until there's an adjustment in the block reward. Luckily, there's a built-in experiment coming up with the block halving. Theoretically, the reduced reward should cause the price of bitcoin to increase, because miners need to recoup the same amount of fiat expended to acquire a smaller amount of bitcoin. Theoretically, the reduced block reward should cause miners (who are we kidding, I mean pool operators) to increase the minimum fee required to get into a block.

Furthermore, it's unknown what the ultimate function of bitcoin will be. If it's an everyday money / payment system, large blocks with many tiny fees make sense. If it's a high-tier master-ledger remittance and value-storage network, small blocks with a few large fees make sense.

So, to me, these are the unknowns that make the debate almost moot.

1. Will fees support the network?
2. What is bitcoin?

edited to add: because the inherent value is the decentralized, no-one-controls-this aspect (except, of course, the pool ops), I believe it's a high-tier master-ledger remittance and value-storage network, so I guess I fall in the 1 MB block 4evah camp.
sr. member
Activity: 433
Merit: 267

Needs to be increased is tricky. The natural and necessary state for blocks is nearly full; defining need is hard.  "Near-universally agreed to be good to increase" would be better, but people are sensibly worried that it would be held back by unreasonable people and so they are unwilling to take that risk.
The essence of Bitcoin's current problem can be found in this post.  The problem isn't the words that I emphasized. It's that a core developer made such a statement.

It is absurd that any real-time system "needs" to run near saturation.  This is so elementary it is hard to believe that a competent system designer would make such a statement.  It is no wonder that some people are questioning motivation rather than technical judgment.

Posts like this make me grind my teeth so hard I think I should start holding a leather strap in my mouth before getting on the internet. Maybe a pillow to scream into would be therapeutic.

Everybody seems to be talking past each other right now, and not getting a good grip on what the actual problems are concerning block size increases.

Here is another analogy, which will probably be useless:
We want a community box that can hold as much weight and volume as we might want, and we want the average person to be able to carry the box unaided and without relying on some elite ubermensch to do it for us.
The only method we have to pay people to carry the box is by customers bidding for space in it. If the box becomes too big in relation to the demand to put stuff in the box, then the average person won't have any incentive to carry it.

So somebody manages to make a fairly small box that pretty much works for the most part; however, we'd like to increase the size of the box if the average person will still carry it, and if profits are still good. No one owns the box though, so no one can make that decision. If nearly everyone votes to increase the size of the box, only then will it be increased.

Many of those voters are more than happy to make the box as big as possible, even if it can then only be carried by the elite ubermensch. However, many others refuse, saying that this goes against what the box was designed for in the first place! Not only that, but the profits have been diminishing, and fewer and fewer people have been willing to carry the box lately.

They can't agree, so the people who want a huge box decide they'll hold a vote, and if 75% agree they'll increase the box size anyway. Then the lately absent creator of the box charges in and says, "Hey, whoa, what the Hell, dudes? You can't just impose your will on the 25%. Even a reasonable minority shouldn't be ignored."

And then somebody invented anti-gravity or something, so the box was always weightless. There was much rejoicing. The big-box people insist that their opponents are being "absurd", aren't "competent", and can't be trusted with the box. They're sure they'll get the people together to outvote the ignorant idealists and make the change whether they like it or not!

http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010238.html
http://wiki.mises.org/wiki/Economic_calculation_problem
http://wiki.mises.org/wiki/Tragedy_of_the_Commons
sr. member
Activity: 278
Merit: 254

Needs to be increased is tricky. The natural and necessary state for blocks is nearly full; defining need is hard.  "Near-universally agreed to be good to increase" would be better, but people are sensibly worried that it would be held back by unreasonable people and so they are unwilling to take that risk.
The essence of Bitcoin's current problem can be found in this post.  The problem isn't the words that I emphasized. It's that a core developer made such a statement.

It is absurd that any real-time system "needs" to run near saturation.  This is so elementary it is hard to believe that a competent system designer would make such a statement.  It is no wonder that some people are questioning motivation rather than technical judgment.
legendary
Activity: 3472
Merit: 4801
Sure, it could be done automatically, but it would need to be able to know when someone is trying to force it to update.

Oh.. Okay, so they want 8MB because it would take a while to get everyone the updated version.

What if the client already supports 8MB, but there's a limit on the network that only supports, e.g., 2MB? (I have no idea what I'm talking about)

Any time the current software has a "limit" (meaning that it refuses to accept anything beyond that limit), and new software creates and accepts something that is beyond that limit which affects the state of the blockchain, it is a "hard fork".  That is pretty much the definition of a "hard fork".  It will always result in those running the new software and those running the old software disagreeing on which blocks exist and which transactions have been confirmed.  It will eventually result in those running the new software and those running the old software disagreeing on which transactions are valid.
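The disagreement described above can be sketched in a toy model (illustrative only, not Bitcoin Core's actual validation code; the limits and block size below are made up): a node running old software and a node running new software enforce different hard limits, so the first oversized block is accepted by one and rejected by the other, and their views of the chain split.

```python
# Toy sketch (not consensus code): nodes with different size limits
# disagree on whether an oversized block is valid.

OLD_LIMIT = 1_000_000   # bytes enforced by old software (hypothetical)
NEW_LIMIT = 8_000_000   # bytes enforced by new software (hypothetical)

def accepts(block_size: int, limit: int) -> bool:
    """A node accepts a block only if it fits within its hard size limit."""
    return block_size <= limit

big_block = 2_000_000   # too large for the old limit, fine for the new one

old_node_accepts = accepts(big_block, OLD_LIMIT)  # False: old nodes reject it
new_node_accepts = accepts(big_block, NEW_LIMIT)  # True: new nodes build on it

print(old_node_accepts, new_node_accepts)  # prints: False True
```

From this point on, the two groups extend different chains, which is exactly the "hard fork" behavior described above.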
newbie
Activity: 23
Merit: 0
Sure, it could be done automatically, but it would need to be able to know when someone is trying to force it to update.

Oh.. Okay, so they want 8MB because it would take a while to get everyone the updated version.

What if the client already supports 8MB, but there's a limit on the network that only supports, e.g., 2MB? (I have no idea what I'm talking about)
legendary
Activity: 3472
Merit: 4801
- snip -
Why would it be a new hard fork every time the size is updated? If the market needs it, i.e., enough blocks are above 0.5MB, time to update to 2MB.

Wouldn't it just be a normal update?
- snip -

If the update isn't going to happen automatically, then every current node sees the current limit as exactly that, a limit.  Therefore, they will reject any and all blocks that are larger.

If the software is updated to allow a larger block, then everybody that doesn't immediately update to the new software (or who doesn't update by whatever deadline you might choose) will reject any and all blocks that are created by the new software.  The blockchain will "fork", and if the new software has enough acceptance the blockchain will remain split, since the "old" miners won't be able to overcome the "new" miners' blocks.  This sort of change (where old nodes refuse to accept transactions and/or blocks that are created by new nodes) is the definition of a hard fork.

Each time the software needs to be updated like this, there will be a significant number of individuals that each think that they have the best idea on how to handle it "this time".  Each of those individuals will get a group of others that agree with them and support them.  We will go through this disagreement and discussion again and again.
newbie
Activity: 23
Merit: 0

If you are talking about writing the code so that it automatically increases, then the problem is that any malicious actor that wants to expand the limit VERY quickly could just make sure that ALL of the blocks that they mine are completely full every time (by filling it with transactions that pay no fees and send bitcoins from themselves to themselves).

If you are talking about creating a new hard fork release every time, do you really want to go through this messy debate over, and over, and over, and over every time the block size needs to be expanded again?

I figured that would be the reason not to do it automatically. But I don't see the problem with doing it manually.

Why would it be a new hard fork every time the size is updated? If the market needs it, i.e., enough blocks are above 0.5MB, time to update to 2MB.

Wouldn't it just be a normal update?


Quote
Wouldn't doing it this way increase the block size whenever the market needs it to?

Needs to be increased is tricky. The natural and necessary state for blocks is nearly full; defining need is hard.  "Near-universally agreed to be good to increase" would be better, but people are sensibly worried that it would be held back by unreasonable people and so they are unwilling to take that risk.

Quote
Ex, when it reaches 0.5MB, raise it to 2MB. When it reaches 1.5MB, raise it to 3MB.
That would completely undermine the existence of transaction fees, which are the only long-term security argument we have. It would also allow the system to slip into arbitrary amounts of centralization if those increases were really guaranteed... because it could get to a point where no one but a few API providers bothered to run nodes. Sad

Are these issues also relevant to 8MB blocks?
staff
Activity: 4284
Merit: 8808
Quote
Wouldn't doing it this way increase the block size whenever the market needs it to?

Needs to be increased is tricky. The natural and necessary state for blocks is nearly full; defining need is hard.  "Near-universally agreed to be good to increase" would be better, but people are sensibly worried that it would be held back by unreasonable people and so they are unwilling to take that risk.

Quote
Ex, when it reaches 0.5MB, raise it to 2MB. When it reaches 1.5MB, raise it to 3MB.
That sounds like something that would completely undermine the existence of transaction fees, which are the only long-term security argument we have (what will pay for adequate POW security as the subsidy declines). It would also allow the system to slip into arbitrary amounts of centralization if those increases were really guaranteed... because it could get to a point where no one but a few API providers bothered to run nodes. Sad
legendary
Activity: 3472
Merit: 4801
I've read some articles about the 1MB vs. 8MB, and I still don't know half of it..

Wouldn't doing it this way increase the block size whenever the market needs it to?

Ex, when it reaches 0.5MB, raise it to 2MB. When it reaches 1.5MB, raise it to 3MB.

There's probably a good explanation for why this wouldn't work, but I couldn't find it.

If you are talking about writing the code so that it automatically increases, then the problem is that any malicious actor that wants to expand the limit VERY quickly could just make sure that ALL of the blocks that they mine are completely full every time (by filling it with transactions that pay no fees and send bitcoins from themselves to themselves).

If you are talking about creating a new hard fork release every time, do you really want to go through this messy debate over, and over, and over, and over every time the block size needs to be expanded again?
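The stuffing attack described above can be sketched with a toy trigger rule (the doubling rule, thresholds, and sizes below are illustrative, not any actual proposal's parameters): if the limit automatically rises whenever recent blocks look full, a single miner who pads every block he mines with zero-fee self-to-self transactions can ratchet the limit up without any genuine demand.

```python
# Toy model (not consensus code): an automatic-increase rule gamed by
# one miner stuffing his own blocks with free self-to-self transactions.

def next_limit(current_limit: int, block_size: int) -> int:
    """Toy rule: double the limit when a block fills more than half of it."""
    if block_size * 2 > current_limit:
        return current_limit * 2
    return current_limit

limit = 1_000_000            # start at a 1 MB limit (hypothetical)
for _ in range(10):          # ten consecutive blocks mined by the attacker
    stuffed = limit          # attacker fills each block completely, fee-free
    limit = next_limit(limit, stuffed)

print(limit)  # the limit has grown 1024x with no real transaction demand
```

An honest, half-empty block (say 400 KB against a 1 MB limit) leaves the limit unchanged under the same rule, which is why the attack requires deliberate stuffing rather than organic traffic.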
newbie
Activity: 23
Merit: 0
I've read some articles about the 1MB vs. 8MB, and I still don't know half of it..

Wouldn't doing it this way increase the block size whenever the market needs it to?

Ex, when it reaches 0.5MB, raise it to 2MB. When it reaches 1.5MB, raise it to 3MB.

There's probably a good explanation for why this wouldn't work, but I couldn't find it.