
Topic: 10 minutes block interval and network modelling (Read 203 times)

legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
Well, we're definitely not 100% sure about this decision. Satoshi may have chosen it because of the calculations he/she had done. Given that a block can be 1 MB max size, it'd mean that every 52,500 blocks (~1 year), the chain would weigh up to 52.5 GB more than it did a year ago.

This isn't accurate. In the early days, the block size limit was 32 MB while the block time was already 10 minutes. Only later did Satoshi decide to reduce the block size limit to 1 MB.

Source: https://en.bitcoin.it/wiki/Scalability_FAQ#What_is_the_short_history_of_the_block_size_limit.3F
legendary
Activity: 2730
Merit: 7065
I seem to remember satoshi justifying the 10 minute interval as roughly the amount of time needed to ensure world-wide propagation of a block and reduce the occurrence of stale blocks.
That's what I remember from old threads as well. I managed to find one such thread where DeathAndTaxes explains it nicely. The 10-minute block time and small block size result in only a tiny percentage of orphaned blocks. If the block time were shorter or the blocks bigger, that would result in more orphaned blocks and less network security.

Any block interval is a compromise. 10 min, 1 min, 60 min, etc. There is no right or wrong. It is a compromise. Remember the actual block time will vary. When blocks can't propagate across the network fast enough and competing blocks are produced, that results in orphans. The % of orphans directly reduces the security of the network. Currently Bitcoin w/ 10 minute blocks (and relatively small blocks) has about a 1% orphan rate. That means 1% of hashing power is wasted and doesn't improve security. As blocks get larger the orphan rate will rise (although faster CPUs and higher-bandwidth connections improve the orphan rate).

10 minutes is a compromise between confirmation times and network security. Really, unless you are accepting 1-confirm txs, a faster block interval won't make a tx validate faster. If you wait for 6 confirmations on a 10-min block chain, then with equivalent hashpower you should wait 24 blocks on a 2.5-min block chain. If you are willing to accept 4 confirmations on a 2.5-minute blockchain, then 1 confirmation on a 10-minute blockchain provides equivalent security.

Really, a shorter block interval only helps if your business accepts 1-confirm txs (because 1 confirm is always more secure than 0 confirms regardless of the block interval).
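For what it's worth, the equivalence described in that quote is just a ratio of block intervals. Here is a minimal sketch of the arithmetic (my own illustration, not from the quoted post; it assumes equal total hashpower and ignores orphan-rate differences between the two chains):

Code:
# Equivalent confirmation depth across chains with different block intervals,
# assuming security is roughly proportional to the expected work (and hence
# elapsed time) piled on top of a transaction at equal total hashpower.
def equivalent_confirmations(confirmations: float,
                             source_interval_min: float,
                             target_interval_min: float) -> float:
    """Confirmations on the target chain representing the same expected work."""
    return confirmations * source_interval_min / target_interval_min

print(equivalent_confirmations(6, 10, 2.5))   # 24.0 -> 6 confs at 10 min ~ 24 confs at 2.5 min
print(equivalent_confirmations(4, 2.5, 10))   # 1.0  -> 4 confs at 2.5 min ~ 1 conf at 10 min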
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Who says a blockchain with a lower block interval would necessarily use the same block size? A comparison might be more meaningful if the average block storage size per hour is kept constant.

If you reduced the block size, why would you change the block interval in the first place?
legendary
Activity: 4522
Merit: 3426
I seem to remember satoshi justifying the 10 minute interval as roughly the amount of time needed to ensure world-wide propagation of a block and reduce the occurrence of stale blocks.

Sorry, I don't have a source, maybe this: https://www.reddit.com/r/Bitcoin/comments/30lxo4/replace_by_fee_a_counter_argument_by_mike_hearn/cptwk21/

He just chose binary-based numbers and rounded them to nearby decimal values.

That's just your speculation and I believe it is not correct.
member
Activity: 189
Merit: 16

Back in 2009, I'm not sure how big this was. In the whitepaper, he/she mentioned pruning the chain instead of keeping it, though. I guess that he/she did this to reduce the bandwidth between the nodes and to reduce the chain's size. If blocks were solved every 2 minutes on average (which is a satisfying interval for the user), the block chain would weigh more than 2 TB and thus there'd be fewer nodes.


Who says a blockchain with a lower block interval would necessarily use the same block size? A comparison might be more meaningful if the average block storage size per hour is kept constant.
legendary
Activity: 3038
Merit: 4418
Crypto Swap Exchange
Well, we're definitely not 100% sure about this decision. Satoshi may have chosen it because of the calculations he/she had done. Given that a block can be 1 MB max size, it'd mean that every 52,500 blocks (~1 year), the chain would weigh up to 52.5 GB more than it did a year ago.
The maximum block size of 1MB was only enforced from 2010 onwards. Prior to that, Bitcoin had a different block size limit (32MB?).

1) Do pruned nodes allow incoming connections? If yes, why? They shouldn't be sharing anything, only validating the chain.
Yes. They share the blocks on disk and also relay transactions. Doesn't hurt for them to be accepting incoming connections.
2) What happens to pruned nodes on a 51% attack?
Depends on reorg depth. You cannot reorg beyond the pruned limit and I believe Bitcoin Core will just throw an error in that case.


If Satoshi had done the calculations, you can bet they would be documented somewhere. Even in 2009, the time it took for any computer to validate a single block did not come close to 10 minutes at all, nor was the internet connection that bad. The way the nodes connect to each other should also result in propagation taking much less than 10 minutes. There is obviously quite a lot of room for error in this case, and I'm sure that Satoshi just chose it as an arbitrary number.
copper member
Activity: 909
Merit: 2301
He just chose binary-based numbers and rounded them to nearby decimal values.

Block time: 512 seconds (2^9) -> 600 seconds (10 minutes)
Initial reward: 40.96 coins (2^12) -> 50.00 coins
Total supply: 21,474,836.47 coins (2^31-1) -> 21 million coins
Halving: every 262,144 blocks (2^18) -> 210,000 blocks
Difficulty adjustment: every 2048 blocks (2^11) -> 2016 blocks (2 weeks)

He could have stuck to binary numbers, but humans are more familiar with decimal values, so he chose fairly round decimal figures instead.
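For illustration, here is a small sketch of the rounding speculated about above (my own, not anything documented by Satoshi; the binary-based values and the 0.01-coin unit are the speculation from this post):

Code:
# Compare the speculated binary-based values with the decimal values Bitcoin uses.
candidates = [
    # name,                          binary-based value,    value Bitcoin uses
    ("block time (seconds)",         2 ** 9,                600),
    ("initial reward (coins)",       2 ** 12 / 100,         50.00),
    ("total supply (coins)",         (2 ** 31 - 1) / 100,   21_000_000),
    ("halving interval (blocks)",    2 ** 18,               210_000),
    ("retarget interval (blocks)",   2 ** 11,               2_016),
]

for name, binary_based, actual in candidates:
    print(f"{name:28} {binary_based:>16,.2f} -> {actual:>16,.2f}")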

Edit:
Quote
Do pruned nodes allow incoming connections?
It depends on your configuration. If you enable incoming connections, then they will be accepted.
Quote
If yes, why?
Because there is nothing that can stop you from forcing it if you really want.
Quote
They shouldn't be sharing anything, only validating the chain.
All nodes store at least some of the latest blocks, so they can share them. There are limits to pruning: you cannot prune all blocks unless you force it with some dirty tricks.
Quote
What happens to pruned nodes on a 51% attack?
Every pruned node has to store at least 550 MB, so assuming 1 MB per block that is 550 blocks. Even if blocks were 4 MB, it would still be more than 100 blocks. Coinbase rewards are locked for 100 blocks, and after that it is assumed that they can be safely spent. If that is not the case, then Bitcoin is gone.

Also, you can see what would happen when a pruned chain is reorged below the prune point. You can run regtest, create some heavy blocks, and quickly see that such a client would simply display an error and force you to re-download the whole chain from other nodes. If there are no such nodes having all blocks, then Bitcoin is gone.
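As a quick back-of-the-envelope check of the numbers above (my own sketch, taking the 550 MB minimum prune target and the 100-block coinbase maturity from this post):

Code:
# How many recent blocks a pruned node retains at various block sizes,
# compared with the 100-block coinbase maturity window.
PRUNE_TARGET_MB = 550
COINBASE_MATURITY = 100  # blocks before a coinbase output becomes spendable

for block_size_mb in (1, 2, 4):
    retained = PRUNE_TARGET_MB // block_size_mb
    ok = "above" if retained > COINBASE_MATURITY else "at or below"
    print(f"{block_size_mb} MB blocks: keeps ~{retained} blocks ({ok} coinbase maturity)")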
legendary
Activity: 1512
Merit: 7340
Farewell, Leo
I mean: going forward, storage costs seem to be getting cheaper at a faster rate than computational power and network speed....
Sure, they do. Terabytes back in 2009 were much more expensive than today. He/she may have failed to predict the future.

1) it cannot be a general solution; full nodes are a strict need of the network (btw, if it could be a general solution, needed-storage growth would be an even less important effect to take into account while choosing the block time interval)
Did I say to reduce the bandwidth? Sorry, I meant the storage. The bandwidth remains the same whether you're running a pruned node or not. Yes, full nodes are required by the network, in contrast with pruned ones.

Here are some food-for-thought questions:
1) Do pruned nodes allow incoming connections? If yes, why? They shouldn't be sharing anything, only validating the chain.
2) What happens to pruned nodes on a 51% attack?

2) I believe (correct me if I'm wrong) that it doesn't reduce the IBD, so there is no benefit from the point of view of network speed either
No, I didn't mean the network speed. I meant the bandwidth of each node, i.e. the process of sharing each other's blocks. If the interval were 2 minutes, the required bandwidth would be 5x what it is now.
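To make the assumption behind that 5x explicit (my own sketch, assuming the maximum block size stays the same while only the interval changes, with every block full):

Code:
# Relayed block data per day as a function of the block interval,
# for a fixed maximum block size with full blocks.
MAX_BLOCK_MB = 1.0
MINUTES_PER_DAY = 24 * 60

for interval_min in (10, 5, 2):
    blocks_per_day = MINUTES_PER_DAY / interval_min
    print(f"{interval_min:>2} min interval: {blocks_per_day:.0f} blocks/day, "
          f"~{blocks_per_day * MAX_BLOCK_MB:.0f} MB/day of block data")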
member
Activity: 90
Merit: 91

Well, we're definitely not 100% sure about this decision. Satoshi may have chosen it because of the calculations he/she had done. Given that a block can be 1 MB max size, it'd mean that every 52,500 blocks (~1 year), the chain would weigh up to 52.5 GB more than it did a year ago.

thanks, I was forgetting about the space constraint... even if I must confess I see that as a second-order problem... I mean: going forward, storage costs seem to be getting cheaper at a faster rate than computational power and network speed....

Back in 2009, I'm not sure how big this was. In the whitepaper, he/she mentioned pruning the chain instead of keeping it, though. I guess that he/she did this to reduce the bandwidth between the nodes and to reduce the chain's size.

uhmm, I'm not sure that reference to "pruning" gets the point: for sure it's a way to save storage space for end users and maybe other specific classes of users, but:
1) it cannot be a general solution; full nodes are a strict need of the network (btw, if it could be a general solution, needed-storage growth would be an even less important effect to take into account while choosing the block time interval)
2) I believe (correct me if I'm wrong) that it doesn't reduce the IBD, so there is no benefit from the point of view of network speed either

legendary
Activity: 1512
Merit: 7340
Farewell, Leo
Well, we're definitely not 100% sure about this decision. Satoshi may have chosen it because of the calculations he/she had done. Given that a block can be 1 MB max size, it'd mean that every 52,500 blocks (~1 year), the chain would weigh up to 52.5 GB more than it did a year ago.

Back in 2009, I'm not sure how big this was. In the whitepaper, he/she mentioned pruning the chain instead of keeping it, though. I guess that he/she did this to reduce the bandwidth between the nodes and to reduce the chain's size. If blocks were solved every 2 minutes on average (which is a satisfying interval for the user), the block chain would weigh more than 2 TB and thus there'd be fewer nodes.

The number ten was probably a round assumption rather than a well-thought-out choice. You can't really pick a round number and believe that it'll be the best case for both sides.
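The yearly-growth arithmetic behind those figures is easy to reproduce. A rough sketch (my own illustration, assuming every block is filled to a 1 MB maximum):

Code:
# Worst-case chain growth per year for different block intervals,
# assuming every block is full at a 1 MB maximum size.
MAX_BLOCK_MB = 1.0
MINUTES_PER_YEAR = 365 * 24 * 60

for interval_min in (10, 2):
    blocks_per_year = MINUTES_PER_YEAR / interval_min
    growth_gb = blocks_per_year * MAX_BLOCK_MB / 1000
    print(f"{interval_min:>2}-minute blocks: ~{blocks_per_year:,.0f} blocks/year, "
          f"up to ~{growth_gb:.1f} GB/year of new chain data")
# 10-minute blocks: ~52,560 blocks/year, up to ~52.6 GB/year
#  2-minute blocks: ~262,800 blocks/year, up to ~262.8 GB/year (5x as much)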
member
Activity: 90
Merit: 91

Hi everybody

as far as I have understood, setting the BTC block interval to 10 minutes is a way to accomplish two results:

1) to give the decentralized network enough time to converge to a "common" (even if intrinsically imperfect) state
2) fast confirmation, meaning how quickly your transaction gets "buried" under a given number of blocks

In fact, in Mastering Bitcoin, 2nd Edition, chapter 10, Antonopoulos writes:
"
Bitcoin’s block interval of 10 minutes is a design compromise between fast confirmation times (settlement of transactions) and the probability of a fork. A faster block time would make transactions clear faster but lead to more frequent blockchain forks, whereas a slower block time would decrease the number of forks but make settlement slower
"

I wonder... why 10 minutes? I mean, maybe Satoshi just tried it and luckily it worked... but then, with rising academic interest, has any network-modelling effort ever tried to quantitatively check that number?
I'm thinking of some kind of stochastic model with "era" parameters: the randomness would be given by the network topology/adjacency structure and the computing-power distribution curves, while the "era" parameters would represent average performance (computational and networking) in a given period of time.
IMHO it would be interesting to check whether, given such a model, the most effective block interval is a function of just the "era" parameters, or whether further constraints are introduced by the statistical shape of the network structure. A toy version of such a model is sketched below.
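For what it's worth, even a toy stochastic model shows the compromise Antonopoulos describes. A minimal sketch (my own, not a validated model; the exponential propagation-delay distribution and its 6-second mean are arbitrary stand-ins for the "era" parameters):

Code:
# Toy model: blocks arrive as a Poisson process with mean interval T; a conflict
# (and hence roughly one stale block) occurs whenever the next block is found
# before the previous one has propagated to the whole network.
import random

def stale_rate(block_interval_s: float,
               mean_propagation_s: float,
               n_blocks: int = 200_000,
               seed: int = 1) -> float:
    rng = random.Random(seed)
    stale = 0
    for _ in range(n_blocks):
        gap = rng.expovariate(1.0 / block_interval_s)      # time until the next block
        delay = rng.expovariate(1.0 / mean_propagation_s)  # propagation time of the previous block
        if gap < delay:                                     # competitor found before full propagation
            stale += 1
    return stale / n_blocks

for interval in (600, 150, 60, 15):   # 10 min, 2.5 min, 1 min, 15 s
    print(f"T = {interval:>4} s: stale rate ~ {stale_rate(interval, 6.0):.2%}")
# With a ~6 s mean delay, 10-minute blocks give roughly the ~1% orphan rate
# mentioned earlier in the thread, while shorter intervals waste much more work.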

thanks!