
Topic: Bitcoin Block Size Conflict Ends With Latest Update (Read 3420 times)

legendary
Activity: 2674
Merit: 2965
Terminated.
I think we should increase the size of blocks slowly, doubling every 3 or 4 years. This will increase the fees paid by users to maintain the network and encourage the development of sidechains, so that the main chain grows slowly.
I guess the problem here is that once it's deployed and in use, it would take another fork to change the rules. I'm not sure whether they could code a dynamically adjusting block size; I suspect that might be prone to abuse(?).
What evidence do you have to support a doubling every 3 or 4 years? How do you know that that is the right call?
full member
Activity: 150
Merit: 100
Hold on Nancy, I think we've got a broad consensus forming in the community to exclude Gavin and Mike from the dev team... so what's that shit you've been talking about again?
sr. member
Activity: 434
Merit: 250
I think we should increase the size of blocks slowly, doubling every 3 or 4 years. This will increase the fees paid by users to maintain the network and encourage the development of sidechains, so that the main chain grows slowly.
legendary
Activity: 2674
Merit: 1083
Legendary Escrow Service - Tip Jar in Profile
Start with 2MB next year, double the block size every 2 years
I highly doubt that would be enough. Even though the starting figure is only 4 times smaller, in 20 years we would still end up with 2 GB blocks.
We would only have about 6 transactions per second in practice (14 theoretical) until 2018. I think that we might need more.

This is not a thread about discussing Moore's law, or other similar ones. Let us not drift away too much.

The point is that Bitcoin does not scale well enough to process all transactions natively. It is simply a waste of resources to process every microtransaction on the blockchain. Microtransactions under $1 do not need the same level of security as bigger transactions; they should move off-chain/side-chain/second-layer.

The max_blocksize should be increased conservatively to ensure that decentralization is not hurt, because decentralization gives Bitcoin value. Solutions for microtransactions are being developed right now.

Hearn's and Gavin's plan is simply not well thought out and is outright dangerous for the future of Bitcoin as a decentralized currency. Relying on Moore's / Nielsen's "law" is simply extrapolating past trends (observed over a very limited timespan) without any evidence that they can be sustained in the future. There are natural limits to, e.g., further miniaturization, so betting on these trends for another 20 years is unwise.

ya.ya.yo!

Even though I don't like the way Gavin and Hearn are handling things, with something close to extortion, their solution is probably closest to what will actually happen. Moore's law is the most reliable guess we have for how things will develop.

That said... if things turn out differently in the end, nothing prevents us from changing the protocol again and adjusting to the actual needs at that point.
hero member
Activity: 651
Merit: 518
read below :

An issue that has been the source of months of debate and rancor throughout the Bitcoin mining and developer community over Bitcoin's block size appears to have reached a long-awaited resolution. Within the most recent BitcoinXT update, Gavin Andresen has begun the process of raising the maximum block size from 1 MB to 8 MB starting next year. This is deemed necessary for the overall growth and usability of Bitcoin, as the current limit of roughly seven transactions per second is becoming insufficient for the growing global community as consumer and business interest increases.

These impending updates were revealed on GitHub, and this is what is in store for the upcoming “hard fork”, taken directly from GitHub, posted by Gavin Andresen:

Implement hard fork to allow bigger blocks. Unit test and code for a bigger block hard fork.

Parameters are:

    8MB cap
    Doubling every two years (so 16MB in 2018)
    For twenty years
    Earliest possible chain fork: 11 Jan 2016
    After miner supermajority (code in the next patch)
    And grace period once miner supermajority achieved (code in next patch)

The 1 MB block size debate has been a constant issue for months, with Andresen and Mike Hearn discussing the need to raise the block size to as much as 20 MB. China's major exchanges and mining interests came out against any block size changes initially, deriding the extra operating costs and complexities involved with mostly empty blocks. After further review, an increase to 8 MB was deemed warranted, much smaller than the 20 MB requested by the Core Developers. An accord was reached, and the revisions will take effect next year.

We attempted to contact Hearn and Andresen for more information and will provide updated information as it becomes available. It seems some details are still to be sorted out in the next coding batch within the coming days. We’ll keep our readers informed of any further developments.

What do you think of these new core updates and the automatic changes every two years? Share above and comment below.

Source : https://www.cryptocoinsnews.com/bitcoin-block-size-conflict-ends-latest-update/

Bitcoin XT is a fork of Bitcoin that almost no one uses, so why care what Gavin and Mike are doing with it?
legendary
Activity: 994
Merit: 1035
At first I thought you were being alarmist, but I came to the conclusion that you are absolutely correct once I realized as you did that:

I'm picking up on the sarcasm so let me address your comments:

- Increasing the block limit to X MB guarantees that every block will be exactly X MB

Agreed. Just as now, where most blocks are only 20-30% full, we cannot assume that transaction volume will automatically fill the available block limit. Like Gavin and Hearn, I think it is prudent to prepare for this possibility beforehand, whether it is caused by an attacker "testing" the network or by wide-scale adoption. Thus one should test for and understand the tradeoffs of all the available block space being used, in order to prepare for it.

- Bandwidth is a limiting factor right now, since 1MB barely makes it over a 14.4k baud modem in 10 minutes, and that's the best we now have

Bandwidth is fine right now, but there are other factors to consider, such as miners' concerns about propagation time (some are already deliberately limiting or excluding transactions for a slight edge) and running a node over Tor, and I have shown that historically network bandwidth hasn't scaled at the rate being proposed.

- Miners have absolutely no control over block size whatsoever, and no method (such as transaction fee policies) to decide which transactions to include

Miners ultimately decide what the effective limit will be, as they can choose whether or not to include transactions even if developers raise the block limit. My concern with this is the same concern I have with Garzik's proposal: the fact that 5 Chinese companies control 60% of the hashrate and can set the limit with or without any developers agreeing. I would be much happier giving miners more control if the hashrate were more distributed, but we are a long way off from that at the moment.

- This proposed code change is permanent and can never ever be revised based on future facts

Sure, we can always switch it back with another hard fork... but why not come to consensus on a proposal that is best for the community and properly tested, to avoid the negative PR and a few more years of debating and testing?

Gavin just submitted the BIP a couple of days ago... so it's still early in the testing. We should probably test his suggestion on a testnet under various scenarios before we agree or disagree with it. Initially, I don't think it is a wise proposal, but I am certainly open to having my mind changed by the right evidence.
hero member
Activity: 493
Merit: 500
   8MB cap
   Doubling every two years (so 16MB in 2018)
    For twenty years


It is one thing to temporarily increase the size to give a bit more breathing room for sidechains, payment channels and the Lightning Network to be tested, and another thing to simply double the limit every two years and remove the pressure to discover more efficient means of scaling.

This is a horrible plan and one that I cannot support (coming from someone who has defended Gavin and Hearn in the past).

At first I thought you were being alarmist, but I came to the conclusion that you are absolutely correct once I realized as you did that:

- Increasing the block limit to X MB guarantees that every block will be exactly X MB
- Bandwidth is a limiting factor right now, since 1MB barely makes it over a 14.4k baud modem in 10 minutes, and that's the best we now have
- Miners have absolutely no control over block size whatsoever, and no method (such as transaction fee policies) to decide which transactions to include
- This proposed code change is permanent and can never ever be revised based on future facts
legendary
Activity: 1806
Merit: 1024
Start with 2MB next year, double the block size every 2 years
I highly doubt that would be enough. Even though the starting figure is only 4 times smaller, in 20 years we would still end up with 2 GB blocks.
We would only have about 6 transactions per second in practice (14 theoretical) until 2018. I think that we might need more.

This is not a thread about discussing Moore's law, or other similar ones. Let us not drift away too much.

The point is that Bitcoin does not scale well enough to process all transactions natively. It is simply a waste of resources to process every microtransaction on the blockchain. Microtransactions under $1 do not need the same level of security as bigger transactions; they should move off-chain/side-chain/second-layer.

The max_blocksize should be increased conservatively to ensure that decentralization is not hurt, because decentralization gives Bitcoin value. Solutions for microtransactions are being developed right now.

Hearn's and Gavin's plan is simply not well thought out and is outright dangerous for the future of Bitcoin as a decentralized currency. Relying on Moore's / Nielsen's "law" is simply extrapolating past trends (observed over a very limited timespan) without any evidence that they can be sustained in the future. There are natural limits to, e.g., further miniaturization, so betting on these trends for another 20 years is unwise.

ya.ya.yo!
hero member
Activity: 593
Merit: 500
1NoBanksLuJPXf8Sc831fPqjrRpkQPKkEA
Well, let's see what comes next; it seems we need more information.

.......

Parameters are:

    8MB cap
    Doubling every two years (so 16MB in 2018)
    For twenty years
   ....

Does that mean that 20 years after it starts doubling, blocks will be 8192 MB, which is 8.192 GB per block?

Have I done my maths right?

8 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2 = 8192

To keep it simple I'm not considering the difference between mebibytes and megabytes.

What? That's too much, right? He said doubling every 2 years, so I read it as increasing by 8 MB every 2 years. The parameters will be implemented next year (2016) and then in 2018 it increases by 8 MB, so 8+8=16 MB.
You mean 20 years after it starts. Starting from 2016, the end is 2036.

8 MB (2016) + 8 MB = 16 MB (2018) + 8 MB = 24 MB (2020) + 8 MB = 32 MB (2022) + 8 MB = 40 MB (2024) + 8 MB = 48 MB (2026) + 8 MB = 56 MB (2028) + 8 MB = 64 MB (2030) + 8 MB = 72 MB (2032) + 8 MB = 80 MB (2034) + 8 MB = 88 MB in 2036.
CMIIW.


~iki

No, the 8192 MB figure is correct. You only added 8 MB every 2 years; you need to double the previous period's maximum block size.

Though 8.1 GB per block, every ten minutes, sounds extreme. OK, 20 years in the future is a long way off, but still. I wonder how hard disk space will have developed by then. Or whatever we will be writing data onto by then.
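For what it's worth, here is a quick back-of-the-envelope sketch (Python) of the quoted parameters, assuming the cap doubles cleanly every two calendar years starting at 8 MB in 2016; the actual patch ties the schedule to block timestamps, so this is only an illustration:

Code:
# Sketch of the quoted schedule: 8 MB cap in 2016, doubling every 2 years for 20 years.
cap_mb = 8
for year in range(2016, 2037, 2):
    print(f"{year}: {cap_mb} MB")
    cap_mb *= 2
# The output ends with "2036: 8192 MB" (i.e. 8 * 2**10), not 88 MB --
# the 88 MB figure comes from adding 8 MB per step instead of doubling.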
legendary
Activity: 994
Merit: 1035
Looks like those bandwidth test averages are even more optimistic than we both assumed, as they mainly measure burst speeds, have some flaws, and don't account for packet limiting and other restrictions imposed by ISPs.


I found some excellent data.  Ookla has been empirically measuring upload and download speeds for over a decade and from all over the world, based on its speed test results:

http://explorer.netindex.com/maps

The pricing information is lacking, however.  
The data are far from "excellent". They are mostly bullshit numbers measured using short-term bursts. I checked several markets that I'm very familiar with, and all of them were successfully gamed by DOCSIS cable providers using equipment with "PowerBoost" (or similar marketing names).

"Powerboost" is a ultra-short-term (few to few-teen or sometimes few-ty seconds) bandwidth increase made available to the modems of the customers that haven't maxed out their bandwidth in previous minutes.

The configuration details are highly proprietary and vary by market and by time of day and week. But the overall effect is that the DOCSIS modem approaches 100 Mbps LAN performance for a few packets in bursts.

In some markets that I know, the VDSL2 competitors (which optimize average bandwidth over periods of weeks) didn't even rank among the "TOP ISPS". In reality, under non-bursty loads the VDSL2 providers outperform the DOCSIS providers, especially on the upload side, since VDSL2 is a fundamentally symmetric technology that is sold as asymmetric only for market-segmenting reasons.

I'm not even going to delve into further restrictions on consumer broadband, where providers explicitly limit the number of packet flows the customer's equipment can handle. Ookla (and almost everyone else) measures a couple of single-TCP-connection flows, which have nothing in common with peer-to-peer technologies like Bitcoin or BitTorrent.

Executive summary:

Bullshit marketing numbers; divide by 3, 5, or even 10 to get the real numbers achievable with P2P technologies under continuous operation.

------------------------------------------------

The correct answer is to go with Gavin's idea except double the block size every 4 years instead of 2.

This is better, but I still don't like the idea of creating a framework where we continuously "kick the can down the road", and what that does to de-incentivize better solutions. I do agree with Gavin and Hearn that we should increase the limit, and that it is better to do so before we start hitting the block limit continuously, for the sake of temporarily buying some more time to implement and test other solutions. I also empathize with their desire to incorporate code that scales automatically, so they can avoid this multi-year hard-fork debate in the future... but this "crisis" is actually a good thing, because it is forcing us to think of creative solutions and test more.

 
Q7
sr. member
Activity: 448
Merit: 250
Finally. But it caught me by surprise when they announced that the block size will double every two years, which I think needs reconsidering as to whether it would be practical in the long run. It's not computer specs we are talking about here; in my view, processing power can scale up accordingly. What is more worrying is the internet bandwidth usage. Where I come from, it will mean paying a ton in bills.
legendary
Activity: 2674
Merit: 2965
Terminated.
Start with 2MB next year, double the block size every 2 years
I highly doubt that would be enough. Even though the starting figure is only 4 times smaller, in 20 years we would still end up with 2 GB blocks.
We would only have about 6 transactions per second in practice (14 theoretical) until 2018. I think that we might need more.

This is not a thread about discussing Moore's law, or other similar ones. Let us not drift away too much.
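For context on where figures like "6 in practice, 14 theoretical" come from, here is a rough back-of-the-envelope sketch (Python). The 250-byte and 500-byte average transaction sizes are my own illustrative assumptions, not figures from the thread:

Code:
# Rough throughput estimate: transactions per second for a given block size cap.
BLOCK_INTERVAL_S = 600  # target of roughly 10 minutes between blocks

def tps(block_size_bytes, avg_tx_bytes):
    return block_size_bytes / avg_tx_bytes / BLOCK_INTERVAL_S

for cap_mb in (1, 2, 8):
    cap_bytes = cap_mb * 1_000_000
    print(f"{cap_mb} MB cap: ~{tps(cap_bytes, 500):.1f} tps at 500 B/tx, "
          f"~{tps(cap_bytes, 250):.1f} tps at 250 B/tx")

The 2 MB line roughly reproduces the figures quoted above; the real averages depend on the actual transaction mix.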
sr. member
Activity: 434
Merit: 250
The correct answer is to go with Gavin's idea except double the block size every 4 years instead of 2.

We can fork it again later, but with the more conservative approach, you can give some love to the Chinese and keep them happy.

Start with 2MB next year, double the block size every 2 years
legendary
Activity: 1792
Merit: 1047
Moore's law is not a law but a trend. The Bitcoin Foundation already went almost bankrupt betting on a trend.

We can increase the size when the trend materialises, not before.

By the way, I am pretty sure my internet speed in Australia did not increase 8x in the last 6 years.

Decentralisation is Bitcoin's unique killer feature and I believe it should be the top priority, even if it results in slower user growth in the short term.
 

Moore's law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development. The capabilities of many digital electronic devices are strongly linked to Moore's law: quality-adjusted microprocessor prices, memory capacity, and so on. The trend has continued for more than half a century, but "Moore's law" should be considered an observation or projection, not a physical or natural law.

Moore's law describes a driving force of technological and social change, productivity, and economic growth.
Gordon Moore in 2015 foresaw that the rate of progress would reach saturation: "I see Moore’s law dying here in the next decade or so."

However, The Economist has opined that predictions that Moore's law will soon fail are almost as old as the law itself, with the timing of the trend's eventual end remaining uncertain.

The correct answer is to go with Gavin's idea except double the block size every 4 years instead of 2.

We can fork it again later, but with the more conservative approach, you can give some love to the Chinese and keep them happy.

Gordon E. Moore, co-founder of Intel and Fairchild Semiconductor, revised the forecast doubling time to two years in 1975. The period is often quoted as 18 months because of Intel executive David House, who predicted that chip performance would double every 18 months (a combination of more transistors and those transistors being faster).

How long will it take for the full 21 million BTC subsidy to be unlocked? Perhaps it was too finely tuned to Moore's law.

Physical data storage (e.g. hard disk drive areal density) has followed Kryder's law, and new solid-state memory technology may have accelerated this area of development into Moore's-law territory.

Network bandwidth has roughly tracked Nielsen's law.
legendary
Activity: 2506
Merit: 1030
Twitter @realmicroguy
The correct answer is to go with Gavin's idea except double the block size every 4 years instead of 2.

We can fork it again later, but with the more conservative approach, you can give some love to the Chinese and keep them happy.
legendary
Activity: 2632
Merit: 1023
The real answer is not to invest only in BTC, but to hold some LTC, Peercoin, NXT, even Doge, etc.

If BTC does fail, LTC or another coin will take its place and not make the same mistakes.
sr. member
Activity: 277
Merit: 257
Moore's law is not a law but a trend. The Bitcoin Foundation already went almost bankrupt betting on a trend.

We can increase the size when the trend materialises, not before.

By the way, I am pretty sure my internet speed in Australia did not increase 8x in the last 6 years.

Decentralisation is Bitcoin's unique killer feature and I believe it should be the top priority, even if it results in slower user growth in the short term.
 
legendary
Activity: 994
Merit: 1035
You have to take into account that these measurements are download speeds. However, the real bottleneck is upload speed, which is far lower.

Also, these measurements are short-term and for direct ISP connectivity only (which is what Nielsen's observations are based on). These measurements and growth rates are in no way applicable to a decentralized multi-node network. In addition, most ISPs have explicit or implicit data transfer limits - they will throttle your connection if you exceed a certain transfer volume.

So I fully agree with your assessment that Hearn's and Gavin's plan is horrible - in fact, it's even more horrible than you've shown.
There's no way I will ever support such a fork.

ya.ya.yo!

Correct... I added 2 more items while you were typing. Except, keep in mind those speeds are the total combined upload and download speeds, so the 31.94 Mbps average means 21.88 Mbps download and 9.86 Mbps upload in 12/2014; thus the situation is even worse than you are suggesting.

When you look at many other countries the growth is much slower as well. I was deliberately taking one of the better case scenarios to be generous.


All the numbers I am citing only consider broadband as well, not mobile, which is much slower and has much lower soft caps.
All these users are sharing a node's bandwidth demands with other data, much of it streaming video, which is very demanding. Their proposal will essentially make hosting a full node at home a thing of the past, in time.
 
legendary
Activity: 1806
Merit: 1024
History indicates otherwise. Nielsen's Law of Internet Bandwidth (http://www.nngroup.com/articles/law-of-bandwidth/) is 50% per year compounded, so a doubling every two years is actually below that.

Edit: 1.5^20 > 3325 vs 2^10 = 1024

The link you provided cites advertised plans boasting hypothetical peak bandwidth possibilities and not real life bandwidth averages.

Additionally, this doesn't address all the concerns:

1) Latency caused by larger blocks incentivizes the centralization of mining pools
2) Not everyone worldwide lives in locations which has bandwidth growing at the same rates
3) Advertised bandwidth rates are not the same as real world bandwidth rates
4) ISPs often put soft caps on the total bandwidth used on an account, slowing the user's speed to a crawl. More of them are no longer advertising unlimited monthly bandwidth, instead setting clear transfer limits and hard caps with overage charges.
5) Full nodes at home have to compete with the bandwidth demands of HD video streaming, which most users rely on and which is getting increasingly demanding. Most people don't want to spend most of their bandwidth on supporting a full node and give up streaming Netflix and/or torrenting.
6) Supporting nodes over Tor is a concern

Let's look at the historical record of real-world bandwidth averages:

http://explorer.netindex.com/maps?country=United%20States

1/2008      5.86 Mbps
12/2008    7.05 Mbps
12/2009    9.42  Mbps
12/2010    10.03  Mbps
12/2011    12.36   Mbps
12/2012    15.4   Mbps
12/2013    20.62   Mbps
12/2014    31.94  Mbps

Thus you can see that even if I ignore the many parts of the world where the internet isn't scaling as fast and focus on the "first world", bandwidth speeds aren't scaling up as quickly as you suggest.

You have to take into account that these measurements are download speeds. However, the real bottleneck is upload speed, which is far lower.

Also, these measurements are short-term and for direct ISP connectivity only (which is what Nielsen's observations are based on). These measurements and growth rates are in no way applicable to a decentralized multi-node network. In addition, most ISPs have explicit or implicit data transfer limits - they will throttle your connection if you exceed a certain transfer volume.

So I fully agree with your assessment that Hearn's and Gavin's plan is horrible - in fact, it's even more horrible than you've shown.
There's no way I will ever support such a fork.

ya.ya.yo!
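To put the upload-speed point in concrete terms, here is a rough sketch (Python). The peer count and the 2x allowance for transaction relay and protocol overhead are my own illustrative assumptions, not figures from the thread:

Code:
# Back-of-the-envelope sustained upload a full node would need just to relay blocks to its peers.
BLOCK_INTERVAL_S = 600   # roughly 10 minutes per block
PEERS = 8                # assumed number of peers the node uploads each block to
TX_RELAY_FACTOR = 2      # crude allowance for relaying unconfirmed transactions and overhead

def sustained_upload_mbps(block_mb):
    bits = block_mb * 8_000_000 * PEERS * TX_RELAY_FACTOR
    return bits / BLOCK_INTERVAL_S / 1_000_000

for size_mb in (1, 8, 32, 8192):
    print(f"{size_mb:>5} MB blocks: ~{sustained_upload_mbps(size_mb):,.1f} Mbps sustained upload")

On these assumptions, the later entries in the doubling schedule would not fit within the roughly 10 Mbps real-world upload average quoted earlier in the thread.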
legendary
Activity: 994
Merit: 1035
History indicates otherwise. Nielsen's Law of Internet Bandwidth (http://www.nngroup.com/articles/law-of-bandwidth/) is 50% per year compounded, so a doubling every two years is actually below that.

Edit: 1.5^20 > 3325 vs 2^10 = 1024

The link you provided cites advertised plans boasting hypothetical peak bandwidth possibilities and not real life bandwidth averages.

Additionally, this doesn't address all the concerns:

1) Latency caused by larger blocks incentivizes the centralization of mining pools
2) Not everyone worldwide lives in locations which has bandwidth growing at the same rates
3) Advertised bandwidth rates are not the same as real world bandwidth rates
4) ISPs often put soft caps on the total bandwidth transferable per month on an account, slowing the user's speed to a crawl. More of them are no longer advertising unlimited monthly bandwidth, instead setting clear transfer limits and hard caps with overage charges.
5) Full nodes at home have to compete with the bandwidth demands of HD video streaming, which most users rely on and which is getting increasingly demanding. Most people don't want to spend most of their bandwidth on supporting a full node and give up streaming Netflix and/or torrenting.
6) Supporting nodes over Tor is a concern
7) Most ISP plans are asymmetric, with much slower upload speeds that are also not growing at the same rate as download speeds.
8) There are interesting attacks possible with larger blocks - http://eprint.iacr.org/2015/578.pdf

Let's look at the historical record of real-world bandwidth averages:


http://explorer.netindex.com/maps?country=United%20States

1/2008      5.86 Mbps
12/2008    7.05 Mbps
12/2009    9.42  Mbps
12/2010    10.03  Mbps
12/2011    12.36   Mbps
12/2012    15.4   Mbps
12/2013    20.62   Mbps
12/2014    31.94  Mbps

Thus you can see that even if I ignore the many parts of the world where the internet isn't scaling as fast and focus on the "first world", bandwidth speeds aren't scaling up as quickly as you suggest.
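As a quick sanity check on that table, the following sketch (Python; endpoints taken from the quoted figures, with the span approximated as seven years) computes the implied compound annual growth rate and compares it with Nielsen's 50% per year and the growth implied by doubling every two years:

Code:
# Implied compound annual growth rate (CAGR) of the quoted US average speeds.
start_mbps, end_mbps = 5.86, 31.94   # 1/2008 and 12/2014 from the table above
years = 7                            # roughly seven years between those samples

cagr = (end_mbps / start_mbps) ** (1 / years) - 1
print(f"Implied growth of the measured averages: {cagr:.1%} per year")          # about 27%
print("Nielsen's law:                           50.0% per year")
print(f"Doubling every two years:                {2 ** 0.5 - 1:.1%} per year")  # about 41%

On these figures the measured growth, roughly 27% per year, sits below both Nielsen's 50% and the roughly 41% per year implied by doubling every two years, which is the point being made above.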