
Topic: Why all blocksize propositions are round numbers? (Read 3312 times)

full member
Activity: 486
Merit: 104
Basically if we want to process more transactions in Bitcoin there are only two options:

1. Make each transaction smaller
2. Allow more space for transactions (either bigger blocks or more blocks)

And a sort of third:
3. Move some transactions off-chain (which doesn't really process "more" Bitcoin transactions, but does process more in total)

This seems like a fair summary of the issue. However, I still don't agree that letting blocks be generated more often will result in more space taken up on disk at the end of the day. Sure, there will be more coinbase generation transactions, but in every block, there is only one coinbase gen transaction, and all the rest are regular transactions, so I think it would be fair to say that this contribution to disk space consumption is negligible.

Take 250,000 daily transactions (it's a reasonable round number: https://blockchain.info/charts/n-transactions).
Whether these transactions are distributed across today's 144 blocks per day, or 288 blocks per day (if blocks were generated every 5 minutes), or 720 blocks per day (if blocks were generated every 2 minutes), will not make a significant difference in disk space consumed. Yes, each block has its own overhead, but the size of each block depends mostly on how many transactions it carries. If blocks were mostly empty, with only coinbase txs, then generating more of them would make no sense; but they are far from empty. By allowing blocks to be generated more often, the day's transactions are spread more evenly across the time continuum. The data simply grows more smoothly and continuously rather than in spurts, but it is by and large the same amount of data, with the big difference that the absolute limit on transactions that can be processed in a day is significantly raised.
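A quick back-of-the-envelope sketch makes the point. The ~500-byte average transaction size and the per-block overhead figure here are illustrative assumptions, not measured constants:

```python
# Daily chain growth for the same transaction load spread over
# different block frequencies. Transaction and overhead sizes are
# illustrative assumptions.

DAILY_TXS = 250_000        # round number from blockchain.info
AVG_TX_SIZE = 500          # bytes per transaction (assumed)
PER_BLOCK_OVERHEAD = 180   # bytes: header + coinbase tx (assumed)

def daily_growth(blocks_per_day: int) -> int:
    """Total bytes added to the chain in one day."""
    return DAILY_TXS * AVG_TX_SIZE + blocks_per_day * PER_BLOCK_OVERHEAD

for bpd in (144, 288, 720):  # 10-, 5-, and 2-minute blocks
    total = daily_growth(bpd)
    share = bpd * PER_BLOCK_OVERHEAD / total
    print(f"{bpd:>3} blocks/day: {total / 1e6:.2f} MB, overhead {share:.3%}")
```

Under these assumptions, going from 144 to 720 blocks per day adds only about 0.1 MB of per-block overhead to roughly 125 MB of daily transaction data.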

The basic problem presents itself because Bitcoin is growing in adoption and more transactions are occurring, which is, in essence, a good thing.

What we want is to accommodate a growing number of transactions. Information theory tells us there are limits to how tightly transactions can be encoded, so option (1) above will ultimately yield only marginal gains; yes, it should be pursued, but this is not where the solution ultimately lies. Option (2) is the most obvious choice, but it has two prongs: the size of the blocks (where most of the debate has centered) and the frequency of the blocks (the option I am surprised is not given more consideration).

Allowing blocks to be generated more often also has the secondary benefit that transactions are confirmed and processed faster. A payment network's processing capacity is measured in "transactions per second". In the current system, 600 seconds on average go by without any transactions being processed, then a whole batch of them is processed when a block is validated through the proof-of-work system. To process more transactions, it makes good sense to make better use of the time.
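To put numbers on that (reusing the 250,000 transactions/day figure from above; this is a sketch, not a measurement):

```python
# The same daily load expressed as average throughput and as the
# batch of transactions confirmed per block at different intervals.

DAILY_TXS = 250_000
SECONDS_PER_DAY = 86_400

tps = DAILY_TXS / SECONDS_PER_DAY  # ~2.9 tx/s on average, regardless of interval

for interval in (600, 300, 120):   # 10-, 5-, and 2-minute blocks
    blocks_per_day = SECONDS_PER_DAY // interval
    batch = DAILY_TXS / blocks_per_day
    print(f"{interval:>3}s blocks: {blocks_per_day:>3} blocks/day, "
          f"~{batch:.0f} txs per block")

print(f"average throughput: {tps:.2f} tx/s")
```

The average throughput is the same either way; shorter intervals just mean smaller, more frequent batches and less idle time between confirmations.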
hero member
Activity: 1029
Merit: 712
I wonder why there is no discussion of shortening the inter-block time (together with a commensurate reduction in the block reward in order to keep total emission the same). Halving the target block time to 5 minutes would double the capacity of the network. Reducing it to two minutes would increase the transaction capacity five-fold.

I think you have answered your own question. More blocks and transactions mean more storage and bandwidth required to run a node. Over time, these will increase non-linearly and force nodes out, leaving only a few central nodes that would cost $$$ to operate. This is what will kill Bitcoin as we know it.

Keeping the growth below this magic threshold of "decentralization point of no return" is the key here.

Why the round numbers? Because they are guesstimates. People keep throwing numbers out to see which one will be accepted.

Majority is almost always wrong.  Accept it.
 

The problem is that transactions waiting to be validated can't find space in a block. All of the proposed solutions either increase the maximum allowed size of the blocks or try to encode more transactions into the already existing block size limit. There is another resource that is being ignored: time. If, for example, the inter-block time were reduced by half to 5 minutes, this doesn't mean the blocks would be larger; on the contrary, I understand that blocks would likely be smaller, since transactions wouldn't have to wait to be bunched up into the (rarer) every-10-minutes blocks.
As Bitcoin grows in user adoption, it is to some degree inevitable that bandwidth and space requirements will grow. I'm just surprised none of the proposed solutions so far make use of the rarest of resources: time.

Individual blocks may not be larger, but there will be twice as many of them, so the blockchain will grow (up to) twice as fast, just as it would if the size of each block were doubled.

Basically if we want to process more transactions in Bitcoin there are only two options:

1. Make each transaction smaller
2. Allow more space for transactions (either bigger blocks or more blocks)

And a sort of third:
3. Move some transactions off-chain (which doesn't really process "more" Bitcoin transactions, but does process more in total)
full member
Activity: 486
Merit: 104
I wonder why there is no discussion of shortening the inter-block time (together with a commensurate reduction in the block reward in order to keep total emission the same). Halving the target block time to 5 minutes would double the capacity of the network. Reducing it to two minutes would increase the transaction capacity five-fold.

I think you have answered your own question. More blocks and transactions mean more storage and bandwidth required to run a node. Over time, these will increase non-linearly and force nodes out, leaving only a few central nodes that would cost $$$ to operate. This is what will kill Bitcoin as we know it.

Keeping the growth below this magic threshold of "decentralization point of no return" is the key here.

Why the round numbers? Because they are guesstimates. People keep throwing numbers out to see which one will be accepted.

Majority is almost always wrong.  Accept it.
 

The problem is that transactions waiting to be validated can't find space in a block. All of the proposed solutions either increase the maximum allowed size of the blocks or try to encode more transactions into the already existing block size limit. There is another resource that is being ignored: time. If, for example, the inter-block time were reduced by half to 5 minutes, this doesn't mean the blocks would be larger; on the contrary, I understand that blocks would likely be smaller, since transactions wouldn't have to wait to be bunched up into the (rarer) every-10-minutes blocks.
As Bitcoin grows in user adoption, it is to some degree inevitable that bandwidth and space requirements will grow. I'm just surprised none of the proposed solutions so far make use of the rarest of resources: time.
legendary
Activity: 2702
Merit: 1468
I wonder why there is no discussion of shortening the inter-block time (together with a commensurate reduction in the block reward in order to keep total emission the same). Halving the target block time to 5 minutes would double the capacity of the network. Reducing it to two minutes would increase the transaction capacity five-fold.

I think you have answered your own question. More blocks and transactions mean more storage and bandwidth required to run a node. Over time, these will increase non-linearly and force nodes out, leaving only a few central nodes that would cost $$$ to operate. This is what will kill Bitcoin as we know it.

Keeping the growth below this magic threshold of "decentralization point of no return" is the key here.

Why the round numbers? Because they are guesstimates. People keep throwing numbers out to see which one will be accepted.

Majority is almost always wrong.  Accept it.
 
full member
Activity: 486
Merit: 104
I find it misleading though to say that they would both be "secessionist", as if they were both the same type of change, which they are not. A soft fork involves transactions that are already valid under the current consensus rules, but requires a stricter subset, whereas a hard fork requires miners to accept blocks that were previously against consensus rules.

Classic has arbitrarily chosen the number 2 MB as the "right" maximum block size for now, and would force everyone into their decision if they get enough support.

Core has arbitrarily chosen the number 1 MB as the "right" maximum block size for now, and would force everyone into their decision if they get enough support.

As far as I've seen, both sides have nothing to back up their number other than "It feels right to me."  Has anyone done any analysis, testing, calculations, etc to determine what the actual best number is?  Maybe it should be 253358 bytes?  Maybe it should be 9860943 bytes? Maybe it should be 2 MB AND Segregated Witness?

Far too many people seem to think that we should just pick an arbitrary answer that doesn't sound too big or too small based on our own personal relationship with big numbers. I'm happy to go with 1 MB and Segregated Witness if someone can show me why that is the "right" answer.  I'm happy to go with 2 MB or 8 MB or 1GB, or 3.141592653 MB if someone can show me why that is the "right" answer.  Furthermore, if someone can back up their idea with proof that it is the "right" answer today, then they're going to need to explain how their solution will be able to adapt to the possibility that today's "right" answer might not be next year's (or 5 years from now, or 10 years from now) "right" answer.  We really shouldn't have to go through this whole debate over, and over, and over every few years.

A soft fork (or no fork) isn't necessarily better.  I'd much prefer a hard fork if I knew that the result would be the "right" answer than a soft fork to something that would be bad in the long run. Just because it is easier to implement or less painful in the short run doesn't mean that it's the answer we should support.  That's a bit like saying I prefer not to get surgery to remove the tumor because it's cheaper, easier, and less disruptive right now to just let the cancer continue to spread.  Maybe later if the cancer becomes too painful or disruptive on its own I'll reconsider removal.  (Of course by then it might be too late to do anything about it).

I wonder why there is no discussion of shortening the inter-block time (together with a commensurate reduction in the block reward in order to keep total emission the same). Halving the target block time to 5 minutes would double the capacity of the network. Reducing it to two minutes would increase the transaction capacity five-fold.
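The arithmetic behind the "commensurate reduction" is straightforward. A sketch, assuming a 25 BTC subsidy (the actual figure is era-dependent; it's only illustrative here):

```python
# Keeping daily coin emission constant while shortening the target
# block interval, by scaling the block subsidy down proportionally.

BASE_INTERVAL = 600   # seconds (the current 10-minute target)
BASE_REWARD = 25.0    # BTC per block (illustrative, era-dependent)

def scaled_reward(new_interval: int) -> float:
    """Subsidy per block that keeps BTC-per-day unchanged."""
    return BASE_REWARD * new_interval / BASE_INTERVAL

for interval in (600, 300, 120):
    blocks_per_day = 86_400 // interval
    r = scaled_reward(interval)
    print(f"{interval:>3}s blocks: {r:>4.1f} BTC x {blocks_per_day:>3} blocks "
          f"= {r * blocks_per_day:.0f} BTC/day")
```

Daily emission comes out the same in every case; only the granularity of issuance changes.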
legendary
Activity: 1120
Merit: 1004
What is the issue with it not catching up? Is it the amount of data required?

To be honest I am not really sure - initially I just assumed it was due to living in China (as accessing much of the internet can be slow here, especially when connecting to nodes outside of China) but when I was back in Australia for a month it was no better (it never managed to catch up on the one month it was behind, and I let it run for at least ten hours per day).

Perhaps the laptop I'm using is simply too slow to handle all the ECDSA signature verification in a timely fashion - when the next version is released (which will have the much faster ECDSA stuff that has been developed by Core) I will upgrade and see how it goes.

Obviously if the block sizes get much bigger I'm not even going to bother trying to run a full version of Bitcoin (and I suspect there are probably others who are facing the same situation).


The problem with increasing the block size is that many people will think like you and shut down their nodes. If this scenario actually happens after the block size increase, I'll do what I can to help the network, even if it is modest, by running 2 or 3 nodes.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
What is the issue with it not catching up? Is it the amount of data required?

To be honest I am not really sure - initially I just assumed it was due to living in China (as accessing much of the internet can be slow here, especially when connecting to nodes outside of China) but when I was back in Australia for a month it was no better (it never managed to catch up on the one month it was behind, and I let it run for at least ten hours per day).

Perhaps the laptop I'm using is simply too slow to handle all the ECDSA signature verification in a timely fashion - when the next version is released (which will have the much faster ECDSA stuff that has been developed by Core) I will upgrade and see how it goes.

Obviously if the block sizes get much bigger I'm not even going to bother trying to run a full version of Bitcoin (and I suspect there are probably others who are facing the same situation).
full member
Activity: 182
Merit: 107
Personally I am surprised at @DannyHamilton's post myself.

So I guess I will just go on record to say that I support Bitcoin Core (although I can't even run it these days because it never catches up).


What is the issue with it not catching up? Is it the amount of data required?

That's the reason I prefer Core. Of course the block size for Core will have to be increased at some point, but by using solutions that increase transaction capacity without increasing the block size, Core can be run by more people in the world, many of whom do not have stable access to broadband Internet.

The bigger the blocks, the more people will be unable to run a full-node client.

Thus it seems to me that solutions that increase transaction capacity without increasing block size really should be implemented before a block size increase.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
Personally I am surprised at @DannyHamilton's post myself.

So I guess I will just go on record to say that I support Bitcoin Core (although I can't even run it these days because it never catches up).
legendary
Activity: 1120
Merit: 1004
Why should anyone care what I support.

It matters (at least for me) because you're one of the members that I admire most. You always have a reply to my questions, and time to answer them, even if I'm on your ignore list. Definitely, it matters. You probably chose Classic for a precise reason, so maybe you'll be able to convince me Smiley.

My biggest problem with Classic is the way they did their secession, by a hard fork backed by big companies.

Read the rest of the post.

-snip-
I don't prefer either option.  The point is that neither (or both) can really be called "secessionist".  They are BOTH changing the current (0.11.2) behavior.  Therefore either BOTH are "secessionist" or neither are.  Choosing a derogatory name for someone you disagree with might be a great way to influence those that don't take the time to think for themselves, but it's a pretty poor way to demonstrate the strength of the position you support. It also erodes respect from those that actually take the time to understand and make intelligent decisions.

This is about the usage of the term "secessionist". He says that he doesn't prefer either of them, yet he's on Classic. There must be a reason.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer

Nice find (wasn't aware of that myself until now).

OP - perhaps we should be using a number based upon this: https://en.wikipedia.org/wiki/Tonal_system Wink

(it would make Luke-Jr very happy)
sr. member
Activity: 433
Merit: 267
I find it misleading though to say that they would both be "secessionist", as if they were both the same type of change, which they are not. A soft fork involves transactions that are already valid under the current consensus rules, but requires a stricter subset, whereas a hard fork requires miners to accept blocks that were previously against consensus rules.
As far as I've seen, both sides have nothing to back up their number other than "It feels right to me."  Has anyone done any analysis, testing, calculations, etc to determine what the actual best number is?
It's not possible to do that calculation without making at least two big assumptions.

For example, if we assume totally filled blocks:

A = Transaction Fee
B = Transactions Per Block
C = Target Fees Per Block

We want:
A * B >= C

"A" is arrived at by market bidding which is dependent on the maximum transactions per block "B".
"C" is whatever funds the mining to some minimum amount that ensures security, which is nebulous and tied into the desire for node redundancy.
One can make an educated guess at two of the three to get a good handle on the third, but at the end of the day that's what they are: educated guesses.
It would be nice if everyone's guesses would fall in line, but as it turns out it runs the gamut all the way from the status-quo to no block limits at all.
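To make the relation concrete, here's a toy calculation; every input in it is exactly the kind of educated guess described above, not a measurement:

```python
# Toy use of the A * B >= C relation: guess two of the quantities,
# derive the constraint on the third. All inputs are hypothetical.

B_txs_per_block = 4_000   # B: roughly a full block's worth (guess)
C_target_fees = 1.0       # C: BTC per block deemed enough for security (guess)

# Minimum average fee A that satisfies A * B >= C:
min_fee = C_target_fees / B_txs_per_block
print(f"minimum average fee: {min_fee:.5f} BTC/tx")

# Conversely, at a guessed market-clearing fee A, how many
# transactions the block must carry to meet the target:
A_market_fee = 0.0001     # BTC per tx (guess)
needed_txs = C_target_fees / A_market_fee
print(f"txs per block needed at that fee: {needed_txs:.0f}")
```

Change any one guess and the derived number moves with it, which is the point: the formula is easy, the inputs are not.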

The point I'm trying to make is they didn't pick 1MB, 2MB, or 20MB out of some lack of desire to find some numbers that rise mechanically out of some well-reasoned formula. There's no way to make such a formula without littering it with dubious assumptions along the way. Though maybe there would be some benefit to arguing the slightly more granular points like; What kind of network topology is decentralized enough? What price would people be willing to pay for it, and would the bidding process actually achieve that rate? What block size can be expected to achieve the targeted node and hashing redundancy? (As you can see, they're very interrelated questions.)

There are three ways that "Bitcoin" is trying to get out of that rut: try to develop technology that can accommodate more transactions without increasing block sizes (e.g. Lightning Network), have many alts compete among different use cases, and... well, hard-fork a Bitcoin to align it with one's own ideology, and that brings us full circle;

At the risk of beating a dead horse, hard-forking even if it's a "better" solution to some perceived goal, is fundamentally hijacking and changing the rules to align with one's own ideology. It's not just a matter of looking at the options and picking the one that's technically better, regardless of hard-fork, soft-fork, or no fork. There really is a difference between the types of changes, which is why nearly all of the Bitcoin Core developers are very reluctant to go down the path of a hard-fork.

For completeness I probably should argue why resisting political attacks, and populist hard-forks, is important, but I'm not sure how useful that would be, or how useful this post has been so far, so I'll stop ranting.
copper member
Activity: 1498
Merit: 1528
No I dont escrow anymore.
Why should anyone care what I support.

It matters (at least for me) because you're one of the members that I admire most. You always have a reply to my questions, and time to answer them, even if I'm on your ignore list. Definitely, it matters. You probably chose Classic for a precise reason, so maybe you'll be able to convince me Smiley.

My biggest problem with Classic is the way they did their secession, by a hard fork backed by big companies.

Read the rest of the post.

-snip-
I don't prefer either option.  The point is that neither (or both) can really be called "secessionist".  They are BOTH changing the current (0.11.2) behavior.  Therefore either BOTH are "secessionist" or neither are.  Choosing a derogatory name for someone you disagree with might be a great way to influence those that don't take the time to think for themselves, but it's a pretty poor way to demonstrate the strength of the position you support. It also erodes respect from those that actually take the time to understand and make intelligent decisions.
legendary
Activity: 1120
Merit: 1004
Why should anyone care what I support.

It matters (at least for me) because you're one of the members that I admire most. You always have a reply to my questions, and time to answer them, even if I'm on your ignore list. Definitely, it matters. You probably chose Classic for a precise reason, so maybe you'll be able to convince me Smiley.

My biggest problem with Classic is the way they did their secession, by a hard fork backed by big companies.
legendary
Activity: 996
Merit: 1013

Classic has arbitrarily chosen the number 2 MB as the "right" maximum block size for now, and would force everyone into their decision if they get enough support.

Core has arbitrarily chosen the number 1 MB as the "right" maximum block size for now, and would force everyone into their decision if they get enough support.

As far as I've seen, both sides have nothing to back up their number other than "It feels right to me."  Has anyone done any analysis, testing, calculations, etc to determine what the actual best number is?  Maybe it should be 253358 bytes?  Maybe it should be 9860943 bytes? Maybe it should be 2 MB AND Segregated Witness?

The question that most acutely touches on the numerical value of the block size limit is: what is the effect of block size on network topology? That is, if we raise the block size limit, does it noticeably affect the ability to run full nodes on low-end machines (and so reduce decentralization)?

Of course, to measure that ability, one has to take into account multiple factors like bandwidth, memory, and the size of the current UTXO set, and most importantly, the varying structure of transactions inside a block and the time to validate them.

We need a metric, and user-configurable policies to translate these factors into individual block size limit settings that are communicated to peers; then we would have a way of determining what size blocks the majority of full nodes are able to process. A hard-coded limit is an inexpensive but ultimately unsustainable solution.



copper member
Activity: 2996
Merit: 2374
I find it misleading though to say that they would both be "secessionist", as if they were both the same type of change, which they are not. A soft fork involves transactions that are already valid under the current consensus rules, but requires a stricter subset, whereas a hard fork requires miners to accept blocks that were previously against consensus rules.

Classic has arbitrarily chosen the number 2 MB as the "right" maximum block size for now, and would force everyone into their decision if they get enough support.

Core has arbitrarily chosen the number 1 MB as the "right" maximum block size for now, and would force everyone into their decision if they get enough support.

As far as I've seen, both sides have nothing to back up their number other than "It feels right to me."  Has anyone done any analysis, testing, calculations, etc to determine what the actual best number is?  Maybe it should be 253358 bytes?  Maybe it should be 9860943 bytes? Maybe it should be 2 MB AND Segregated Witness?
There is also no rule that says we can only have one hardfork (or any other change) that results in the maximum block size being increased. There is no reason why we cannot hardfork Bitcoin so that the maximum block size increases to 1.5 MB tomorrow and then hardfork Bitcoin a month later so that the maximum block size increases again to 3.75 MB (although it is highly unlikely that consensus could be reached, or code be ready, in that time, the concept holds true).

I think a better set of questions to ask is 'is the status quo sustainable, and if not then does the maximum block size need to be increased in the medium term'. I believe the answer to the first question will almost always be 'no' (at least as long as I believe that adoption of bitcoin will continue to grow), and I believe that the answer to the second question is currently 'yes'. The average block size has been well above 700kb for several months now, with backlogs sometimes lasting several days. I believe that this is sufficient evidence that supports that the maximum block size needs to be increased.
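The 700 kB figure can be turned into a rough utilization estimate (1 MB is the consensus cap; the per-transaction size below is an assumption for illustration):

```python
# Rough headroom estimate: sustained average block size vs. the cap.

MAX_BLOCK = 1_000_000   # bytes, the 1 MB consensus limit
AVG_BLOCK = 700_000     # bytes, the sustained average cited above

utilization = AVG_BLOCK / MAX_BLOCK
print(f"utilization: {utilization:.0%}")

# At an assumed ~500 bytes per transaction, the slack left per block:
spare_txs = (MAX_BLOCK - AVG_BLOCK) // 500
print(f"~{spare_txs} extra txs fit per block before the cap binds")
```

A sustained 70% average, with demand spikes well above it, is the kind of evidence the paragraph above points to: the remaining slack is small relative to growth.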

I do not disagree that it is difficult to determine just how much the maximum block size should be increased in the immediate term. However if there is not evidence that an increase in the maximum block size would be harmful over the both short and long term, then I do not believe that such an increase should be opposed. None of the serious proposals of the raising of the maximum block size would be harmful to Bitcoin over either the short nor long term (IMO), and I therefore have personally supported each of them.
legendary
Activity: 3472
Merit: 4801
I find it misleading though to say that they would both be "secessionist", as if they were both the same type of change, which they are not. A soft fork involves transactions that are already valid under the current consensus rules, but requires a stricter subset, whereas a hard fork requires miners to accept blocks that were previously against consensus rules.

Classic has arbitrarily chosen the number 2 MB as the "right" maximum block size for now, and would force everyone into their decision if they get enough support.

Core has arbitrarily chosen the number 1 MB as the "right" maximum block size for now, and would force everyone into their decision if they get enough support.

As far as I've seen, both sides have nothing to back up their number other than "It feels right to me."  Has anyone done any analysis, testing, calculations, etc to determine what the actual best number is?  Maybe it should be 253358 bytes?  Maybe it should be 9860943 bytes? Maybe it should be 2 MB AND Segregated Witness?

Far too many people seem to think that we should just pick an arbitrary answer that doesn't sound too big or too small based on our own personal relationship with big numbers. I'm happy to go with 1 MB and Segregated Witness if someone can show me why that is the "right" answer.  I'm happy to go with 2 MB or 8 MB or 1GB, or 3.141592653 MB if someone can show me why that is the "right" answer.  Furthermore, if someone can back up their idea with proof that it is the "right" answer today, then they're going to need to explain how their solution will be able to adapt to the possibility that today's "right" answer might not be next year's (or 5 years from now, or 10 years from now) "right" answer.  We really shouldn't have to go through this whole debate over, and over, and over every few years.

A soft fork (or no fork) isn't necessarily better.  I'd much prefer a hard fork if I knew that the result would be the "right" answer than a soft fork to something that would be bad in the long run. Just because it is easier to implement or less painful in the short run doesn't mean that it's the answer we should support.  That's a bit like saying I prefer not to get surgery to remove the tumor because it's cheaper, easier, and less disruptive right now to just let the cancer continue to spread.  Maybe later if the cancer becomes too painful or disruptive on its own I'll reconsider removal.  (Of course by then it might be too late to do anything about it).
legendary
Activity: 996
Merit: 1013

Choosing a derogatory name for someone you disagree with might be a great way to influence those that don't take the time to think for themselves, but it's a pretty poor way to demonstrate the strength of the position you support. It also erodes respect from those that actually take the time to understand and make intelligent decisions.

Very well said..

and back on the topic... in Bitcoin Unlimited you specify your max block setting as bytes. In the upcoming release it will (very probably) show up in the sub-version string expressed as megabytes rounded to one decimal.
sr. member
Activity: 433
Merit: 267
I'm genuinely surprised that you support Bitcoin Classic, that should give anybody pause.
Why should anyone care what I support.  People should learn what their options are, and what the benefits and drawbacks of each option are. Then make the choice that they support.
Because you are well informed and have a history of responding to questions based on technical facts and not on emotional or ideological appeal. That makes me pause for a second and want to know how you reached that conclusion.

Why do you prefer a simple 2MB hardfork over the Segregated Witness softfork?
I don't prefer either option.
I misunderstood.

The point is that neither (or both) can really be called "secessionist".  They are BOTH changing the current (0.11.2) behavior.  Therefore either BOTH are "secessionist" or neither are.  Choosing a derogatory name for someone you disagree with might be a great way to influence those that don't take the time to think for themselves, but it's a pretty poor way to demonstrate the strength of the position you support. It also erodes respect from those that actually take the time to understand and make intelligent decisions.
You're right about the derogatory term, and I don't think it was well chosen for that effect anyway. What's so bad about secession?
I find it misleading though to say that they would both be "secessionist", as if they were both the same type of change, which they are not. A soft fork involves transactions that are already valid under the current consensus rules, but requires a stricter subset, whereas a hard fork requires miners to accept blocks that were previously against consensus rules.
legendary
Activity: 3472
Merit: 4801
I'm genuinely surprised that you support Bitcoin Classic, that should give anybody pause.

Why should anyone care what I support.  People should learn what their options are, and what the benefits and drawbacks of each option are. Then make the choice that they support.

I thought you were joking earlier.

Not so much joking as making a point.

Why do you prefer a simple 2MB hardfork over the Segregated Witness softfork?

I don't prefer either option.  The point is that neither (or both) can really be called "secessionist".  They are BOTH changing the current (0.11.2) behavior.  Therefore either BOTH are "secessionist" or neither are.  Choosing a derogatory name for someone you disagree with might be a great way to influence those that don't take the time to think for themselves, but it's a pretty poor way to demonstrate the strength of the position you support. It also erodes respect from those that actually take the time to understand and make intelligent decisions.