
Topic: Increasing the block size is a good idea; 50%/year is probably too aggressive - page 2. (Read 14297 times)

legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
1) What is the maximum value for MAX_BLOCKSIZE functionally possible given the APIs being used?

2) What is the maximum value which has been tested successfully?  Have any sizes been tested that fail?

3) Why not just set it right now (or soon) to the maximum value which works, and leave it at that?
       3.1) What advantage is there to delaying the jump to maximum tested value?

No miner is consistently filling up even the tiny 1MB blocks possible now.  We see no evidence of self-dealing transactions.  What are we afraid of?

Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there be a need.  How will we know we need to jump it up faster?  A few blocks at the current maximum is hardly a reason to panic but when the pool of transactions waiting to be blocked starts to grow without any apparent limit then we've waited too long.

The first time it was fixed, it was reduced from 32MB to 1MB, temporarily, until other things were fixed.  Pretty much all of the reasons for that have since abated, though.
(backstops in front of backstops)

The maximum successfully "tested" is what we have now, 1MB, and there it sits at the top of the wish list.
https://en.bitcoin.it/wiki/Hardfork_Wishlist


We are at an average of less than 1/3rd of that now?
https://blockchain.info/charts/avg-block-size
https://blockchain.info/charts/avg-block-size?showDataPoints=false&timespan=all&show_header=true&daysAverageString=7&scale=0&address=

If we were to extrapolate the growth rate, we are still far from a crisis, or from getting transactions backed up because of this.
This provides opportunity for still better proposals to emerge in the months ahead.
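
A quick back-of-the-envelope sketch of that extrapolation (the ~0.3MB starting average and the growth rates are illustrative assumptions on my part, not measurements):

Code:
# Years until the average block reaches the 1 MB cap, assuming a starting
# average of ~0.3 MB and a steady compound annual growth rate in demand.
# Both numbers are illustrative assumptions, not measured values.
import math

def years_until_full(avg_mb=0.3, cap_mb=1.0, annual_growth=0.5):
    """Years for the average block size to reach the cap at the given growth rate."""
    return math.log(cap_mb / avg_mb) / math.log(1 + annual_growth)

for growth in (0.3, 0.5, 1.0):  # 30%, 50%, 100% per year demand growth
    print(f"{growth:.0%} annual growth: ~{years_until_full(annual_growth=growth):.1f} years to hit 1 MB")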
full member
Activity: 182
Merit: 123
"PLEASE SCULPT YOUR SHIT BEFORE THROWING. Thank U"
The people at keepbitcoinfree.org don't want to change the 1MB now at all. They think, for Tor and other considerations, it's necessary, but I agree with Syke that not everyone needs to be able to run a full node.

Thank you for bringing this perspective so eloquently. There are, to my limited knowledge, only 6 options for scalability:

1. with the fees, it will be adjusted automatically (don't pay enough, no tx for you) / BEST OPTION
2. bigger blocks
3. faster blocks
4. alts
5. data compression (it will fit in those 640KB btw)
6. dynamic blocks (everything changes depending on usage)

Option 6 is a little bit complex, and with alts there is no need to fork! Maybe the global payment system is just a pipe dream... but a global payment system, why not... (a rough sketch of one possible dynamic-blocks rule follows below)
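
For illustration only, here is a minimal sketch of one way a "dynamic blocks" rule could work. The particular rule (cap = 2 x median of the last N block sizes, floored at 1MB) is an assumption of mine, not any specific proposal:

Code:
# Illustrative "dynamic blocks" rule: the cap adjusts to usage.
# Assumption (not a real proposal): cap = 2 x median size of the last N blocks,
# never below a 1 MB floor.
from statistics import median

FLOOR_BYTES = 1_000_000   # 1 MB floor
WINDOW = 2016             # number of recent blocks considered (one difficulty period)

def next_max_block_size(recent_block_sizes):
    """Compute the next cap from the sizes (in bytes) of recent blocks."""
    if not recent_block_sizes:
        return FLOOR_BYTES
    window = recent_block_sizes[-WINDOW:]
    return max(FLOOR_BYTES, int(2 * median(window)))

# Example: if recent blocks hover around 600 kB, the cap floats up to ~1.2 MB.
print(next_max_block_size([600_000] * 2016))  # -> 1200000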
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
Um, is that it?  How do we know if we've reached consensus?  When will the version with the increased MAX_BLOCKSIZE be available?

I would stipulate that we agree that both Gavin's first and second solutions are an improvement over the current code; I'd further opine that the second is a better guess even than the first.
I would maintain that our best so far is still a horrible miss of an opportunity.  With any luck we won't get another opportunity on this one in quite a while.  It is not a good solution, but it can get us at least up to the next time it has to be adjusted.

It is probably a different question whether to make a change, and if so when.  And another question as to whether there is a consensus to do so.

The answer to both might be in the same little bit of work.

In order to increase predictability, we might want to have some criteria for looking at this parameter, not just for now, but also for the future.
We have done the expedient thing before in changing it.
Each time should continue to be an improvement over the last.  It is a patch, not a fix, and it will probably last longer than what came before.
It is far less than Satoshi's suggestion.  We should recognize that it very well may need to change again.


Your questions, David, are good ones.  They suggest the way to answer them may be in a few other questions:

If the plan is to keep changing MAX_BLOCKSIZE whenever we think MAX_BLOCKSIZE is awry, how does one know when MBS is off? 
What defines a crisis sufficient to get easy consensus?

Or put another way:
How do we measure the risk of preventing legitimate transactions?  When that risk is high enough, we do this again.


Answering these satisfactorily would likely foster an easy consensus.

This would also be a step toward the design goals discussed on the last page.
If we get those defined ahead of hitting the change criteria, we may yet end up with something still better.
hero member
Activity: 709
Merit: 503
Um, is that it?  How do we know if we've reached consensus?  When will the version with the increased MAX_BLOCKSIZE be available?
legendary
Activity: 1050
Merit: 1002
It will be nice to have this 40% solution in pocket as the minimum quality, temporary patch, while a fix may be devised that would not need future adjustment.

Awesome. Like I said, I'm happy for you to keep searching. If I can count you in for passing a 40% solution in the meantime I'll be your best friend Wink
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
Since the monumental debate which occurred in February 2013 (which you will remember), I have despaired for 18 months that nothing will get done until the limit is actually hit, throughput is crippled for days, and the world's press laughs at Bitcoin for years afterward.

I do remember. I'm really hoping it doesn't take running into the limit to provide impetus to take action. Not only would we likely get negative press as you mention, but it would highlight the issue to people completely oblivious to it. If we can't get people with good knowledge of the subject to agree how would we fare after adding even more noise to the signal?

Which is why I even argued for a 3MB cap, just to buy some time.

https://bitcointalksearch.org/topic/m.8129058

I think half-measures only increase the likelihood we can't get a comprehensive solution.

That 40% for 20 years is more than fine by me :-)

Thanks for your feedback and for being IMO reasonable Smiley


The 40% per year starting at 20MB is a half measure.
It's an improvement over the first round of 50%, but it is still picking numbers that are, with some justification, arbitrarily guessed.

We aren't seeking "legendary" nor "ideal", but thank you for your rhetoric, and also for being a solidly reliable, unvarying advocate for whatever the loudest voice says.  
I know I can rely on you for that: if any of the better suggestions catch traction, you will just pile on with whichever you think is likely to get consensus.
You are also very reasonable, and your reasons clear: seek consensus, attack dissension.

I don't need to be right.  I am just as happy to be wrong; the happiness comes from improvement.

It will be nice to have this 40% solution in pocket as the minimum quality, temporary patch, while a fix may be devised that would not need future adjustment.
The max block size was already reduced once for being too large, and also once for being too small.  It isn't as though we haven't been here before; it would be a good one to see solved eventually.
legendary
Activity: 1050
Merit: 1002
2. That said a slow ~40% growth rate gives us time to improve the clients to scale nicely. Again I give full support to this.

Awesome. We're looking good for 40% annual increases Smiley

As Cubic Earth said we don't need 100% consensus. We just need general consensus. Let's try to keep rallying around a 40% increase game plan. The more people trumpet it the more it becomes the agreed upon way forward.
hero member
Activity: 815
Merit: 1000
I will just weigh in on this as someone who has worked with Bitcoin merkle trees in practice:

1. The limit could be infinite and we would be fine. Hence I support any proposal to increase block size as much as possible.
2. That said a slow ~40% growth rate gives us time to improve the clients to scale nicely. Again I give full support to this.
3. The things that make this possible are swarm nodes and aggressive merkle tree pruning.

There are two hard forks needed in Bitcoin; this is the first. The next will be more decimals. Nothing else I know about is needed.
(Made sure of that before I jumped the wagon you know Wink)

Scaling details:
Swarm nodes:
Put/implemented as SIMPLY as possible (can also be trustless, decentralized and peer to peer) -> Two people run a "half" node each and simply tell each other whether their half of the block was valid; boom, 2X network capacity.
(Rinse and repeat/complicate as needed Wink)

Aggressive merkle tree pruning:
1. Spent/provably unspendable TXs are pruned.
2. Dust-size and rarely/never used unspent TXs can ALSO be pruned by miners -> The owner, should he exist, will just later have to provide the merkle branch leading to the header chain, plus the other TX data, at spend time. (Self-storage of TX data, not just keys, basically)
A miner who does not know about a TX will have to either (A) not include it in his blocks or (B) get the information from someone else.
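
For illustration, a minimal sketch of what "providing the merkle branch at spend time" amounts to: the spender supplies the sibling hashes needed to recompute the merkle root kept in the retained block header. Double SHA-256 is used as in Bitcoin, but the exact serialization and byte-order details are simplified assumptions here:

Code:
# Sketch: verifying that a transaction belongs to a block whose body has been
# pruned, using only the header's merkle root plus a merkle branch supplied by
# the spender. Bitcoin-style double SHA-256; serialization details simplified.
import hashlib

def dhash(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_branch(txid: bytes, branch, merkle_root: bytes) -> bool:
    """branch is a list of (sibling_hash, sibling_is_on_right) pairs,
    ordered from the leaf up to just below the root."""
    node = txid
    for sibling, sibling_on_right in branch:
        node = dhash(node + sibling) if sibling_on_right else dhash(sibling + node)
    return node == merkle_root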

Security:
Complex issue, but it will be okay. (In another thread I have described, for instance, how Bitcoin clients can delay blocks from miners that NEVER include their valid TXs.)

In general ->
Bitcoin is consensus based; if the issue is serious enough it will be solved. A Bitcoin software "crash" will never happen because all issues will be solved.
In 2010 anyone could spend anyone else's Bitcoin... you probably didn't even know about that, right? What happened? -> Nothing; solved and "forgotten".
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
Breaking blocks into multiple messages would be a significant code change to go above 32 MB.

So who are we kidding with this?  Are we doing the block segment code now or later?  Bump it to 32MB now to buy us time to do the block segment code.

The block segment code is not even needed for a very long time (edit: apart from node bootstrapping).
IBLT blocks of 32MB would support at least 3GB standard blocks on disk, or 20,000 TPS.
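
The 20,000 TPS figure follows from simple arithmetic, assuming roughly 250 bytes per average transaction and a 10-minute block interval (both rough assumptions):

Code:
# Rough arithmetic behind "3 GB blocks ~ 20,000 TPS"
# (assumes ~250 bytes per transaction and one block every 600 seconds).
BLOCK_BYTES = 3_000_000_000   # 3 GB on-disk block
AVG_TX_BYTES = 250            # rough average transaction size
BLOCK_INTERVAL_S = 600        # target block interval

txs_per_block = BLOCK_BYTES // AVG_TX_BYTES
tps = txs_per_block / BLOCK_INTERVAL_S
print(txs_per_block, tps)     # -> 12000000 20000.0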
hero member
Activity: 709
Merit: 503
So who are we kidding with this?  Are we doing the block segment code now or later?  Bump it to 32MB now to buy us time to do the block segment code.
legendary
Activity: 3878
Merit: 1193
1) What is the maximum value for MAX_BLOCKSIZE functionally possible given the APIs being used?

There was a 32 MB limit to messages (which I assume still exists), so 32 MB is the max it could simply be raised to right now without further code changes. Breaking blocks into multiple messages would be a significant code change to go above 32 MB.
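
For concreteness, a sketch of the two limits being discussed. The values mirror, to the best of my recollection, the MAX_SIZE constant in serialize.h and MAX_BLOCK_SIZE in main.h of the reference client, so treat the exact figures as assumptions:

Code:
# The two constants in play (values are my best recollection of the reference
# client): MAX_BLOCK_SIZE is the consensus cap, MAX_SIZE is the P2P message
# cap that a whole block must also fit under today.
MAX_BLOCK_SIZE = 1_000_000      # 1 MB consensus limit (main.h)
MAX_SIZE       = 0x02000000     # 32 MiB message limit (serialize.h)

print(MAX_SIZE // MAX_BLOCK_SIZE)   # ~33x headroom before blocks must span messages
print(MAX_SIZE / (1024 * 1024))     # 32.0 MiB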
legendary
Activity: 1050
Merit: 1002
Since the monumental debate which occurred in February 2013 (which you will remember), I have despaired for 18 months that nothing will get done until the limit is actually hit, throughput is crippled for days, and the world's press laughs at Bitcoin for years afterward.

I do remember. I'm really hoping it doesn't take running into the limit to provide impetus to take action. Not only would we likely get negative press as you mention, but it would highlight the issue to people completely oblivious to it. If we can't get people with good knowledge of the subject to agree how would we fare after adding even more noise to the signal?

Which is why I even argued for a 3MB cap, just to buy some time.

https://bitcointalksearch.org/topic/m.8129058

I think half-measures only increase the likelihood we can't get a comprehensive solution.

That 40% for 20 years is more than fine by me :-)

Thanks for your feedback and for being IMO reasonable Smiley
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there be a need.

That's fine by me. My last proposal does this. What does everyone think? I say we start building some idea of what can pass community consensus. We may need to leave NewLiberty searching for the legendary ideal solution.

Since the monumental debate which occurred in February 2013 (which you will remember), I have despaired for 18 months that nothing will get done until the limit is actually hit, throughput is crippled for days, and the world's press laughs at Bitcoin for years afterward.

Which is why I even argued for a 3MB cap, just to buy some time.

https://bitcointalksearch.org/topic/m.8129058

That 40% for 20 years is more than fine by me :-)
member
Activity: 81
Merit: 10
Maybe this is a stupid question, but...


Miners are in it for the fees (plus mining coins).

Why not just set a minimum fee in relation to the probability of mining a coin?
legendary
Activity: 1050
Merit: 1002
Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there be a need.

That's fine by me. My last proposal does this. What does everyone think? I say we start building some idea of what can pass community consensus. We may need to leave NewLiberty searching for the legendary ideal solution.
hero member
Activity: 709
Merit: 503
1) What is the maximum value for MAX_BLOCKSIZE functionally possible given the APIs being used?

2) What is the maximum value which has been tested successfully?  Have any sizes been tested that fail?

3) Why not just set it to that value once right now (or soon) to the value which works and leave it at that?
       3.1) What advantage is there to delaying the jump to maximum tested value?

No miner is consistently filling up even the tiny 1MB blocks possible now.  We see no evidence of self-dealing transactions.  What are we afraid of?

Heck, jump to 20MB and grow it at 40% for 20 years; that's fine with me *but* be prepared to alter that if there be a need.  How will we know we need to jump it up faster?  A few blocks at the current maximum is hardly a reason to panic but when the pool of transactions waiting to be blocked starts to grow without any apparent limit then we've waited too long.
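
For a sense of what "jump to 20MB and grow it at 40%" implies numerically, a quick sketch (the schedule follows directly from those two numbers):

Code:
# Cap schedule for "start at 20 MB, grow 40% per year for 20 years".
START_MB = 20
ANNUAL_GROWTH = 0.40

for year in (0, 5, 10, 15, 20):
    cap_mb = START_MB * (1 + ANNUAL_GROWTH) ** year
    print(f"year {year:>2}: ~{cap_mb:,.0f} MB")
# e.g. year 10 -> ~579 MB, year 20 -> ~16,734 MB (about 16.7 GB)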
legendary
Activity: 1050
Merit: 1002
I don't recall Gavin ever proposed what you are suggesting here.  1st round was 50% per year, 2nd proposal was 20MB + 40% per year, yes?

His Scalability Roadmap calls for 50% annually. He has since mentioned 40% being acceptable, but none of his proposals seem to have been accepted by you, so what's the difference? Do you feel 40% is better than 50%? Would 30% be even better? What is your fear?

That's why I asked you for a bullet point list (not a paper), to get an idea of your thinking in specifics of concern.

I'm less a fan of voting than you might imagine.  
It is mostly useful when there are two bad choices rather than one good one, and a choice is forced.  I maintain hope for a good solution yet, one that would give us an easy consensus.

This is the problem I have with you. You seem to think there is some mystical silver bullet that simply hasn't been discovered yet, and you implore everyone to keep searching for it, for once it's found the population will cheer, exalt it to the highest and run smiling to the voting booths in clear favor. That is a pipe dream. Somebody is always going to see things differently. There is no ideal solution because everything is subjective and arbitrary in terms of the priorities of the advocate. The only ideal solution is to remove all areas of concern, meaning no cap at all but with everyone in the world having easy access to enough computing resources to keep up with global transaction numbers. That's not our situation, so we have to deal with things as best we can. Best, again, is completely subjective. The people at keepbitcoinfree.org don't want to change the 1MB now at all. They think, for Tor and other considerations, it's necessary, but I agree with Syke that not everyone needs to be able to run a full node.

The idea has a lot of negatives.  Possibly it's fixable.
Thank you for bringing forward the suggestion.

Then suggest something. At least I tried moving in a direction toward your priority. Can we see that from you? Again - what I think is most important involves some measure of simplicity and predictability. We're building a global payment system, potentially to be used by enormous banks and the like; this isn't aiming to be some arbitrary system for a few geeks trading game points.
hero member
Activity: 672
Merit: 504
a.k.a. gurnec on GitHub
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

Some people want everyone to run a full node. I'm suggesting that's not a good idea. We should not limit bitcoin growth such that everyone can run a full node. Not everyone needs to run a full node to benefit from bitcoin.

A "bitcoin enthusiast" is not everyone. See Gavin's definition I just quoted above. Without at least some limit, bitcoin nodes become centralized. Also, the suggested exponential upper limits seem very unlikely to limit bitcoin growth.
full member
Activity: 182
Merit: 123
"PLEASE SCULPT YOUR SHIT BEFORE THROWING. Thank U"

This is a common optimisation virtually all crappy pools use shortly after a new block since their software can't scale to get miners to work on the new block full of transactions quickly, they just broadcast a blank sheet for the first lot of work after a block change. Most pools blame bitcoind for being so slow to accept a block and generate a new template, and this is actually quite slow, but it's obviously more than just this (since I don't ever include transaction-free blocks in my own pool software).

It's a personal honor to read from you.  Shocked
legendary
Activity: 3878
Merit: 1193
lol @ grandma-cap for Bitcoin
We agree, in this case "grandma" is substituting for "bitcoin enthusiast" WRT bandwidth availability, I think.
I'm guessing he was thinking that our enthusiast might be living with grandma, or maybe is grandma, IDK?

Some people want everyone to run a full node. I'm suggesting that's not a good idea. We should not limit bitcoin growth such that everyone can run a full node. Not everyone needs to run a full node to benefit from bitcoin.