
Topic: What is the best block size limit? (Read 1387 times)

legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
September 15, 2015, 12:24:48 PM
#37
Looking at the debate going on about this and the indecision on this matter, we can conclude that bigger blocks are necessary, but the implementation is questionable.

The Bitcoin scene is full of talented developers and engineers, but nobody has found a way to automate this feature yet? The blockchain could adjust its own block size when specific targets are reached? Let's say transaction volumes reach a certain level: it should be able to adjust the size and request a BIP, or send out an automatic alert? This is just the theory... ffs, we have sent people to the moon and landed on Mars... how difficult can it be?

What if the system becomes self-aware and then kills off any threat to its survival? (Bitcoin is SKYNET Shocked Shocked Shocked)
hero member
Activity: 546
Merit: 500
September 15, 2015, 12:22:39 PM
#36
It is the fact that block propagation is slow that makes it costly for a miner to even attempt to produce an 8 MB block.  I estimate that, given the current network propagation impedance, a miner would only be wise to attempt to publish an 8 MB block if it contained 3 to 6 BTC of fees (due to his increased risk of orphaning).  

As block propagation improves, it gets cheaper for miners to build larger blocks.  There is a natural balance that occurs without the need for a tight limit.  This is how Bitcoin has always worked.  The free market solves the block size problem without centralized intervention.  

Emphasis mine.

1) The key word here is "current network". The 1MB limit was baked into the brand by Satoshi himself; that's why miners don't even dare to think of consolidating on higher-bandwidth lanes, because if they do, they will simply fork themselves out of the system.  Grin

2) A "natural balance" might begin shifting towards higher and higher bandwidth, but because the process is smooth and slow, it will push home users out one by one without them even noticing it. By the time they are almost all out, they will have already lost their favorite toy forever.

3) Bitcoin has "never worked" without a single static hard limit (of 1MB) firmly cemented into the brand itself. The challenge we are currently facing is how to melt it carefully, shift it far enough but safely, and let it solidify, all without the original founder of the project. It's our call. We really need to agree.
1) This is not correct: Satoshi always intended to increase the block size; the one-megabyte limit was only meant as a temporary measure.

2) Increasing the block size will not affect home miners whatsoever, because home miners do not run full nodes for mining. A home miner could literally mine over a 56k connection anywhere in the world, even with 8GB blocks.

3) This is also incorrect, since Bitcoin has worked without this limit: it was set at 32 megabytes earlier in its history. Furthermore, Bitcoin has never operated under an economy of completely full blocks; that would be a radical departure from the original vision and promise of Bitcoin.

1) Of course he did, but the limit in question forces current big miners to target the "current network" only because the rules say so. If we relax that, they might as well begin shifting their operations onto the higher-bandwidth layers, where they can extract more profit. So we must be honest with ourselves here.

2) I was talking about home users who run non-mining full nodes (not home miners). The miners (operating full nodes) are profit-driven and will move themselves wherever they see fit, while other, non-incentivized full nodes will likely just stop operating if it becomes too inconvenient for them to continue.

3) I would consider the period when Satoshi was still around as the "birthing of Bitcoin", and the moment he left as when Bitcoin was "born". So, in that terminology, the 1MB limit has always been there, and I argue that it had more of a psychological effect (targeting a particular bandwidth layer) than anything else.

Now, I'm not saying that we shouldn't increase the block size cap. I personally believe that increasing it to 8MB (or a more gentle 2-4-8 approach) is the best way forward if we decide to change it at all. If our highest priority, however, is to protect home-based full nodes (which already find it somewhat inconvenient to operate), then we probably shouldn't touch the limit any time soon, but that will have other consequences as competitors begin catching up.

It's an interesting time in Bitcoin's history.
It's a question of whether it will stay at home (bandwidth-level) or move out into the unknown, face the uncertainty, survive and redefine itself.
You do make some good points. I do not think that increasing the block size would lead to more centralization compared to leaving it where it is now; there are many factors to consider.

I do not see how increasing the block size would lead to miners being able to extract more profit, unless you are referring to having a higher possible volume of transactions, which would be a good thing.

You are correct in referring to the one-megabyte limit as having a strong psychological effect. These are indeed interesting times in Bitcoin's history.
legendary
Activity: 1904
Merit: 1074
September 15, 2015, 12:18:16 PM
#35
Looking at the debate going on about this and the indecision on this matter, we can conclude that bigger blocks are necessary, but the implementation is questionable.

The Bitcoin scene is full of talented developers and engineers, but nobody has found a way to automate this feature yet? The blockchain could adjust its own block size when specific targets are reached? Let's say transaction volumes reach a certain level: it should be able to adjust the size and request a BIP, or send out an automatic alert? This is just the theory... ffs, we have sent people to the moon and landed on Mars... how difficult can it be?
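The automated trigger imagined above can be sketched in a few lines. This is only an illustration, not a protocol design: the 2,500 transactions-per-MB density, the 144 blocks-per-day figure, and the 90% trigger are assumed round numbers I have picked for the sketch.

```python
def check_capacity(daily_tx_counts, tx_per_mb=2500, cap_mb=1.0, trigger=0.9):
    """Flag when sustained transaction demand nears block capacity.

    tx_per_mb and the 0.9 trigger are illustrative guesses, not protocol values.
    """
    capacity = tx_per_mb * cap_mb * 144  # ~144 blocks are found per day
    utilization = sum(daily_tx_counts) / (len(daily_tx_counts) * capacity)
    if utilization >= trigger:
        return "ALERT: raise the cap (draft a BIP)"
    return "ok"

# A week of ~350k tx/day against a 1 MB cap (~360k tx/day capacity) trips the alert
print(check_capacity([350_000] * 7))
print(check_capacity([100_000] * 7))  # light load: nothing to do
```

The hard part, of course, is not the trigger itself but getting the network to agree on acting when it fires, which is exactly the consensus problem the thread is debating.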
jr. member
Activity: 42
Merit: 1
September 15, 2015, 12:07:30 PM
#34
It is the fact that block propagation is slow that makes it costly for a miner to even attempt to produce an 8 MB block.  I estimate that, given the current network propagation impedance, a miner would only be wise to attempt to publish an 8 MB block if it contained 3 to 6 BTC of fees (due to his increased risk of orphaning).  

As block propagation improves, it gets cheaper for miners to build larger blocks.  There is a natural balance that occurs without the need for a tight limit.  This is how Bitcoin has always worked.  The free market solves the block size problem without centralized intervention.  

Emphasis mine.

1) The key word here is "current network". The 1MB limit was baked into the brand by Satoshi himself; that's why miners don't even dare to think of consolidating on higher-bandwidth lanes, because if they do, they will simply fork themselves out of the system.  Grin

2) A "natural balance" might begin shifting towards higher and higher bandwidth, but because the process is smooth and slow, it will push home users out one by one without them even noticing it. By the time they are almost all out, they will have already lost their favorite toy forever.

3) Bitcoin has "never worked" without a single static hard limit (of 1MB) firmly cemented into the brand itself. The challenge we are currently facing is how to melt it carefully, shift it far enough but safely, and let it solidify, all without the original founder of the project. It's our call. We really need to agree.
1) This is not correct: Satoshi always intended to increase the block size; the one-megabyte limit was only meant as a temporary measure.

2) Increasing the block size will not affect home miners whatsoever, because home miners do not run full nodes for mining. A home miner could literally mine over a 56k connection anywhere in the world, even with 8GB blocks.

3) This is also incorrect, since Bitcoin has worked without this limit: it was set at 32 megabytes earlier in its history. Furthermore, Bitcoin has never operated under an economy of completely full blocks; that would be a radical departure from the original vision and promise of Bitcoin.

1) Of course he did, but the limit in question forces current big miners to target the "current network" only because the rules say so. If we relax that, they might as well begin shifting their operations onto the higher-bandwidth layers, where they can extract more profit. So we must be honest with ourselves here.

2) I was talking about home users who run non-mining full nodes (not home miners). The miners (operating full nodes) are profit-driven and will move themselves wherever they see fit, while other, non-incentivized full nodes will likely just stop operating if it becomes too inconvenient for them to continue.

3) I would consider the period when Satoshi was still around as the "birthing of Bitcoin", and the moment he left as when Bitcoin was "born". So, in that terminology, the 1MB limit has always been there, and I argue that it had more of a psychological effect (targeting a particular bandwidth layer) than anything else.

Now, I'm not saying that we shouldn't increase the block size cap. I personally believe that increasing it to 8MB (or a more gentle 2-4-8 approach) is the best way forward if we decide to change it at all. If our highest priority, however, is to protect home-based full nodes (which already find it somewhat inconvenient to operate), then we probably shouldn't touch the limit any time soon, but that will have other consequences as competitors begin catching up.

It's an interesting time in Bitcoin's history.
It's a question of whether it will stay at home (bandwidth-level) or move out into the unknown, face the uncertainty, survive and redefine itself.
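The "2-4-8" approach mentioned above is a stepped schedule rather than a formula. A minimal sketch, assuming a hypothetical activation date (the date and the final freeze at 8 MB are illustrative choices of mine, not taken from any deployed BIP):

```python
from datetime import date

def scheduled_cap_mb(today, start=date(2016, 1, 1)):
    """Hypothetical '2-4-8' stepping: 2 MB at activation, 4 MB after two
    years, 8 MB after four, then frozen. All dates are illustrative."""
    years = (today - start).days / 365.25
    if years < 0:
        return 1  # schedule not yet active: the old 1 MB cap holds
    if years < 2:
        return 2
    if years < 4:
        return 4
    return 8

print(scheduled_cap_mb(date(2017, 6, 1)))  # 2
print(scheduled_cap_mb(date(2019, 6, 1)))  # 4
```

The appeal of a fixed schedule is predictability: everyone can verify the cap from the clock alone, with no feedback loop for miners to game.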
hero member
Activity: 546
Merit: 500
September 15, 2015, 11:07:35 AM
#33
Any idea on what the best number is? Or will a dynamic one do?
Please share your ideas. Thanks.
People want different block size limits, or don't want changes at all. And I will tell you the best answer: all we need to do is keep Bitcoin running without problems; if the block limit needs to be adjusted in the future, just do it. Don't waste time arguing about how to do it.

Agreeing is pretty much a must for evolutionary stage transitions.
Bitcoiners need to move as a whole with decent coherency, or else...
we will just "accidentally the whole thing".  Grin

It is the fact that block propagation is slow that makes it costly for a miner to even attempt to produce an 8 MB block.  I estimate that, given the current network propagation impedance, a miner would only be wise to attempt to publish an 8 MB block if it contained 3 to 6 BTC of fees (due to his increased risk of orphaning).  

As block propagation improves, it gets cheaper for miners to build larger blocks.  There is a natural balance that occurs without the need for a tight limit.  This is how Bitcoin has always worked.  The free market solves the block size problem without centralized intervention.  

Emphasis mine.

1) The key word here is "current network". The 1MB limit was baked into the brand by Satoshi himself; that's why miners don't even dare to think of consolidating on higher-bandwidth lanes, because if they do, they will simply fork themselves out of the system.  Grin

2) A "natural balance" might begin shifting towards higher and higher bandwidth, but because the process is smooth and slow, it will push home users out one by one without them even noticing it. By the time they are almost all out, they will have already lost their favorite toy forever.

3) Bitcoin has "never worked" without a single static hard limit (of 1MB) firmly cemented into the brand itself. The challenge we are currently facing is how to melt it carefully, shift it far enough but safely, and let it solidify, all without the original founder of the project. It's our call. We really need to agree.
1) This is not correct: Satoshi always intended to increase the block size; the one-megabyte limit was only meant as a temporary measure.

2) Increasing the block size will not affect home miners whatsoever, because home miners do not run full nodes for mining. A home miner could literally mine over a 56k connection anywhere in the world, even with 8GB blocks.

3) This is also incorrect, since Bitcoin has worked without this limit: it was set at 32 megabytes earlier in its history. Furthermore, Bitcoin has never operated under an economy of completely full blocks; that would be a radical departure from the original vision and promise of Bitcoin.
jr. member
Activity: 42
Merit: 1
September 14, 2015, 03:38:02 PM
#32
Any idea on what the best number is? Or will a dynamic one do?
Please share your ideas. Thanks.
People want different block size limits, or don't want changes at all. And I will tell you the best answer: all we need to do is keep Bitcoin running without problems; if the block limit needs to be adjusted in the future, just do it. Don't waste time arguing about how to do it.

Agreeing is pretty much a must for evolutionary stage transitions.
Bitcoiners need to move as a whole with decent coherency, or else...
we will just "accidentally the whole thing".  Grin

It is the fact that block propagation is slow that makes it costly for a miner to even attempt to produce an 8 MB block.  I estimate that, given the current network propagation impedance, a miner would only be wise to attempt to publish an 8 MB block if it contained 3 to 6 BTC of fees (due to his increased risk of orphaning).  

As block propagation improves, it gets cheaper for miners to build larger blocks.  There is a natural balance that occurs without the need for a tight limit.  This is how Bitcoin has always worked.  The free market solves the block size problem without centralized intervention.  

Emphasis mine.

1) The key word here is "current network". The 1MB limit was baked into the brand by Satoshi himself; that's why miners don't even dare to think of consolidating on higher-bandwidth lanes, because if they do, they will simply fork themselves out of the system.  Grin

2) A "natural balance" might begin shifting towards higher and higher bandwidth, but because the process is smooth and slow, it will push home users out one by one without them even noticing it. By the time they are almost all out, they will have already lost their favorite toy forever.

3) Bitcoin has "never worked" without a single static hard limit (of 1MB) firmly cemented into the brand itself. The challenge we are currently facing is how to melt it carefully, shift it far enough but safely, and let it solidify, all without the original founder of the project. It's our call. We really need to agree.
legendary
Activity: 1162
Merit: 1007
September 14, 2015, 03:07:31 PM
#31
I think if block propagation can be sped up, 500MB blocks can easily be handled by most, but I think we should see what China can handle and set that as the limit.
Well, let me explain this in a really simple way. With 1 MB blocks the orphan rates are around 1%. So imagine what happens if the block size were 500 MB. I don't even want to go into this further, as it should be obvious. Even with 8 MB blocks today, it would still drastically increase the orphan rates.

There needs to be improvement in block propagation, or some sort of way of informing the network that a block was found (which would be much faster than the current block propagation time). People really do not understand the orphan rate problem. This is why there was quite some talk about it at the recent conference.

Right, block propagation needs to be faster. Agreed.

It is the fact that block propagation is slow that makes it costly for a miner to even attempt to produce an 8 MB block.  I estimate that, given the current network propagation impedance, a miner would only be wise to attempt to publish an 8 MB block if it contained 3 to 6 BTC of fees (due to his increased risk of orphaning).  



As block propagation improves, it gets cheaper for miners to build larger blocks.  There is a natural balance that occurs without the need for a tight limit.  This is how Bitcoin has always worked.  The free market solves the block size problem without centralized intervention. 

source: https://scalingbitcoin.org/papers/feemarket.pdf
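The quoted 3-6 BTC estimate can be reproduced with a back-of-the-envelope model. This is only a sketch, assuming Poisson block arrivals, the then-current 25 BTC subsidy, and an illustrative propagation speed of 15 seconds per MB (the real figure varied widely, and fees carried by the competing 1 MB block are ignored for simplicity):

```python
import math

def orphan_prob(size_mb, secs_per_mb=15.0, block_interval=600.0):
    """Chance another block is found while ours propagates (Poisson arrivals)."""
    return 1.0 - math.exp(-(size_mb * secs_per_mb) / block_interval)

def break_even_fee(big_mb=8.0, small_mb=1.0, subsidy=25.0, secs_per_mb=15.0):
    """Fee level at which the expected value of a big block matches a small one:
    (1 - p_big) * (subsidy + fee) = (1 - p_small) * subsidy."""
    p_small = orphan_prob(small_mb, secs_per_mb)
    p_big = orphan_prob(big_mb, secs_per_mb)
    return subsidy * ((1.0 - p_small) / (1.0 - p_big) - 1.0)

print(round(break_even_fee(), 2))  # ~4.78 BTC, inside the quoted 3-6 BTC range
```

The same formula also shows the paper's second point: halve `secs_per_mb` (faster propagation) and the break-even fee roughly halves, making larger blocks cheaper to attempt.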
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
September 14, 2015, 02:26:53 PM
#30
I think if block propagation can be sped up, 500MB blocks can easily be handled by most, but I think we should see what China can handle and set that as the limit.
Well, let me explain this in a really simple way. With 1 MB blocks the orphan rates are around 1%. So imagine what happens if the block size were 500 MB. I don't even want to go into this further, as it should be obvious. Even with 8 MB blocks today, it would still drastically increase the orphan rates.

There needs to be improvement in block propagation, or some sort of way of informing the network that a block was found (which would be much faster than the current block propagation time). People really do not understand the orphan rate problem. This is why there was quite some talk about it at the recent conference.

Right, block propagation needs to be faster. Agreed.
legendary
Activity: 1596
Merit: 1005
★Nitrogensports.eu★
September 14, 2015, 01:50:02 PM
#29
Any idea on what the best number is? Or will a dynamic one do?
Please share your ideas. Thanks.
So as you can see after reading many posts, there is no consensus regarding this matter. That is why we are doomed to fail in the long run.
People want different block size limits, or don't want changes at all. And I will tell you the best answer: all we need to do is keep Bitcoin running without problems; if the block limit needs to be adjusted in the future, just do it. Don't waste time arguing about how to do it.
legendary
Activity: 2674
Merit: 2965
Terminated.
September 14, 2015, 01:47:27 PM
#28
I think if block propagation can be sped up, 500MB blocks can easily be handled by most, but I think we should see what China can handle and set that as the limit.
Well, let me explain this in a really simple way. With 1 MB blocks the orphan rates are around 1%. So imagine what happens if the block size were 500 MB. I don't even want to go into this further, as it should be obvious. Even with 8 MB blocks today, it would still drastically increase the orphan rates.

There needs to be improvement in block propagation, or some sort of way of informing the network that a block was found (which would be much faster than the current block propagation time). People really do not understand the orphan rate problem. This is why there was quite some talk about it at the recent conference.
legendary
Activity: 3248
Merit: 1070
September 14, 2015, 01:41:25 PM
#27
A dynamic block size limit is the best any day.

Check BIP 106: https://github.com/bitcoin/bips/blob/master/bip-0106.mediawiki

Does it work this way: if we reach 2MB of traffic, the client automatically changes the limit to 2MB; if we reach 4MB, the client changes it again to 4MB, and so on?

If this is correct and it works as intended, I think it's the best solution out there.

But what is the limit? Otherwise some miner could force the system to adopt 3GB blocks within a year or so.

I guess that, as I see it, the limit would be the current maximum block size; so in that example the first limit would be 2MB until it is saturated, then 4MB, then 5MB or whatever you want as a better example, and so on...
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
September 14, 2015, 12:29:28 PM
#26
I think if block propagation can be sped up, 500MB blocks can easily be handled by most, but I think we should see what China can handle and set that as the limit.
jr. member
Activity: 42
Merit: 1
September 14, 2015, 11:53:26 AM
#25
I personally think that a block size limit is not needed. A simple soft limit set in the client config file will be enough (like the 250kB/500kB/750kB soft limits before). You can see from block size history that most people just use the default value, and the transaction rate increases whenever a new client that ups the default value is released.

Don't forget about the butterfly effect.

The configuration that has proven itself to work is the one where the soft limits (you mentioned) were adjusted in a timely manner, while the whole system was protected by a static hard limit. Introducing a minor change to the configuration (removing the hard limit) may have a tremendous effect on the outcome. "Incentives" is the key word here.
newbie
Activity: 42
Merit: 0
September 14, 2015, 11:29:22 AM
#24
I think block sizes should be increased as & when it is required. Maybe 2MB for a while & then when the network dictates it increase it again to 4MB & so on.

I think it's ridiculous to be thinking about 8MB & 20MB blocks now.

This creates a chicken-and-egg problem: large projects utilizing the blockchain are not built because there is no room, and the limit is not raised because blocks are not filled to the limit. There simply needs to be enough headroom, and a clear schedule for predictability.

Jeff Garzik puts it very well here: https://www.youtube.com/watch?v=TgjrS-BPWDQ&feature=youtu.be&t=12667


My personal opinion is that a block size limit is not needed. A simple soft limit set in the client config file would be enough (like the 250kB/500kB/750kB soft limits before). You can see from block size history that most people just use the default value, and the transaction rate increases whenever a new client that ups the default value is released.

Before this schism, most people on BCT didn't even have a clue that a block size limit existed. But given how strong the feelings it causes now are, I'd be happy to go with an initial 8 MB limit and a predictable schedule (BIP 101). I just think anything too close to the actual block size will stifle and slow down the growth of Bitcoin.
jr. member
Activity: 42
Merit: 1
September 14, 2015, 11:26:50 AM
#23
A dynamic block size limit is the best any day.

Check BIP 106: https://github.com/bitcoin/bips/blob/master/bip-0106.mediawiki

Does it work this way: if we reach 2MB of traffic, the client automatically changes the limit to 2MB; if we reach 4MB, the client changes it again to 4MB, and so on?

If this is correct and it works as intended, I think it's the best solution out there.

But what is the limit? Otherwise some miner could force the system to adopt 3GB blocks within a year or so.
BIP 106 does not dictate any limit. It adjusts the block size cap according to network demand. To force the cap to 3GB within 1 year and keep it there, one miner would have to have majority hash power, with which he could easily do a 51% attack. So it is a most unlikely situation. But even if it happens, the algorithm will bring the cap down as soon as the miner loses majority hash power. It is a demand-driven approach, just like difficulty.


I think block sizes should be increased as & when it is required. Maybe 2MB for a while & then when the network dictates it increase it again to 4MB & so on.

I think it's ridiculous to be thinking about 8MB & 20MB blocks now.
That way you invite frequent forks, which is most undesirable in a sustainable system.

You've just triangulated the truth. Congratulations!
With a few more clarifications we will almost get there. Smiley

Adjusting the limit according to network demand is a slippery slope, as miners are profit-driven. If it is economically viable for them to target higher bandwidth with larger blocks yielding more fees, they will definitely go there (in order to break even at least), while the rest of non-mining home-based full nodes (not explicitly incentivized in the current setup) protecting consensus rules with their sheer quantity (or mass) will be left biting the dust.

Over-protecting the network by selecting a smaller limit without a clear roadmap for further scaling would indeed necessitate frequent debates, and might as well melt the idea of the limit altogether. Peers will simply begin adjusting it all on their own, without listening to each other, and the whole thing will spiral out of our hands.

Now you see that we managed to CounterEntropy Smiley our way to a single static limit (agreed upon via consensus and firmly cemented in our perception of Bitcoin for at least 4-6 years), which is neither too high nor too low (8MB will do), with a gentle yet quick enough schedule to achieve it.
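The demand-driven retargeting debated in this exchange can be sketched roughly along the lines of BIP 106's first proposal. The window size and thresholds below are simplified, and the 1 MB floor is an assumption of mine, so treat this as a sketch of the mechanism, not the BIP's reference logic:

```python
def adjust_cap(cap, recent_block_sizes):
    """One retarget step, loosely following BIP 106's first proposal:
    double the cap when blocks run hot, halve it when they run cold."""
    n = len(recent_block_sizes)
    nearly_full = sum(1 for s in recent_block_sizes if s > 0.9 * cap)
    mostly_empty = sum(1 for s in recent_block_sizes if s < 0.5 * cap)
    if nearly_full > 0.5 * n:
        return cap * 2                    # demand outgrew the cap
    if mostly_empty > 0.9 * n:
        return max(cap // 2, 1_000_000)   # demand shrank; assumed 1 MB floor
    return cap                            # steady state

cap = 1_000_000
cap = adjust_cap(cap, [950_000] * 2000)   # sustained full blocks: cap doubles
print(cap)
cap = adjust_cap(cap, [300_000] * 2000)   # demand collapses: cap comes back down
print(cap)
```

This also illustrates the point made above: a miner stuffing blocks can only push the cap up while he dominates the window, and the rule pulls it back down once organic demand fails to keep blocks full.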
full member
Activity: 214
Merit: 278
September 14, 2015, 10:58:28 AM
#22
A dynamic block size limit is the best any day.

Check BIP 106: https://github.com/bitcoin/bips/blob/master/bip-0106.mediawiki

Does it work this way: if we reach 2MB of traffic, the client automatically changes the limit to 2MB; if we reach 4MB, the client changes it again to 4MB, and so on?

If this is correct and it works as intended, I think it's the best solution out there.

But what is the limit? Otherwise some miner could force the system to adopt 3GB blocks within a year or so.
BIP 106 does not dictate any limit. It adjusts the block size cap according to network demand. To force the cap to 3GB within 1 year and keep it there, one miner would have to have majority hash power, with which he could easily do a 51% attack. So it is a most unlikely situation. But even if it happens, the algorithm will bring the cap down as soon as the miner loses majority hash power. It is a demand-driven approach, just like difficulty.


I think block sizes should be increased as & when it is required. Maybe 2MB for a while & then when the network dictates it increase it again to 4MB & so on.

I think it's ridiculous to be thinking about 8MB & 20MB blocks now.
That way you invite frequent forks, which is most undesirable in a sustainable system.
legendary
Activity: 3556
Merit: 9709
#1 VIP Crypto Casino
September 14, 2015, 09:13:06 AM
#21
I think block sizes should be increased as & when it is required. Maybe 2MB for a while & then when the network dictates it increase it again to 4MB & so on.

I think it's ridiculous to be thinking about 8MB & 20MB blocks now.
jr. member
Activity: 42
Merit: 1
September 14, 2015, 08:53:00 AM
#20
The best would be to increase it slowly over time.

A 25-30% increase per year should be fine.

Yes, but you never know what size of blocks you will need. So in a sense you are playing a guessing game, and you may be wasting resources or centralizing Bitcoin too much if you pick a wrong size. You might even choke it if we get a huge increase in transactions and you didn't raise the limit enough.

This dynamic thing that people are mentioning seems very interesting. Is this really not feasible, and are there some insurmountable difficulties to overcome here? What are those? Some more technical people answering this would be welcome.

First, the idea is to keep it in the KISS category (Keep It Simple, Stupid; no offense, it's an official term), as Gavin pointed out.

Second, if we (who?) push the limit and it gives, then what's the point of having it in the first place?

Third, here. Smiley
hero member
Activity: 798
Merit: 1000
Move On !!!!!!
September 14, 2015, 08:06:38 AM
#19
The best would be to increase it slowly over time.

A 25-30% increase per year should be fine.

Yes, but you never know what size of blocks you will need. So in a sense you are playing a guessing game, and you may be wasting resources or centralizing Bitcoin too much if you pick a wrong size. You might even choke it if we get a huge increase in transactions and you didn't raise the limit enough.

This dynamic thing that people are mentioning seems very interesting. Is this really not feasible, and are there some insurmountable difficulties to overcome here? What are those? Some more technical people answering this would be welcome.
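For scale, the 25-30% per year suggestion compounds quickly. A quick sketch (the 27.5% rate is just the midpoint of the quoted range, chosen for illustration):

```python
def cap_after(years, start_mb=1.0, annual_growth=0.275):
    """Block size cap after compounding at ~27.5%/yr from a 1 MB start."""
    return start_mb * (1.0 + annual_growth) ** years

print(round(cap_after(5), 1))    # ~3.4 MB after five years
print(round(cap_after(10), 1))   # ~11.4 MB after a decade
```

By comparison, BIP 101's doubling every two years corresponds to roughly 41% annual growth, so 25-30% is the gentler curve.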
legendary
Activity: 1148
Merit: 1014
In Satoshi I Trust
September 14, 2015, 06:28:41 AM
#18
A dynamic block size limit is the best any day.

Check BIP 106: https://github.com/bitcoin/bips/blob/master/bip-0106.mediawiki

Does it work this way: if we reach 2MB of traffic, the client automatically changes the limit to 2MB; if we reach 4MB, the client changes it again to 4MB, and so on?

If this is correct and it works as intended, I think it's the best solution out there.

But what is the limit? Otherwise some miner could force the system to adopt 3GB blocks within a year or so.