
Topic: Increasing the block size is a good idea; 50%/year is probably too aggressive

legendary
Activity: 1050
Merit: 1002
Sure, we were also able to get x.25 and x.75 telecom to run over barbed wire, in the lab.  (There are places in the world that still use these protocols, some of which would deeply benefit from bitcoin in their area.)
The logistical challenges of implementation are not what you find in the lab.
This stuff has to go out in environments where someone backs up their truck into a cross country line so they can cut it and drive off with a few miles of copper to sell as scrap.  We live in the world, not in the lab.

We're in luck then, because one advantage of fiber lines over copper is that they're not good for anything other than telecom :)

I'm no telecommunications specialist, but do have an electronics engineering background. Raise some issue with fundamental wave transmission and maybe I can weigh in. My understanding is it's easier to install fiber lines, for example, because there is no concern over electromagnetic interference. Indeed, the fiber lines I witnessed being installed a week ago were being strung right from power poles.

However, is such theoretical discussion even necessary? We have people being offered 2Gbps bandwidth over fiber not in theory but in practice in Japan, today.

That's already orders of magnitude over our starting bandwidth numbers. I agree with Gavin that demand for more bandwidth is inevitable. It's obvious all networks are converging - telephone, television, radio, internet. We'll eventually send all our data over the internet, as we largely do now, but with ever-increasing bandwidth usage. To imagine progress in technology will somehow stop for no apparent reason, when history is chock full of people underestimating the technological capacity we actually come to experience, is not only shortsighted, it borders on unbelievable.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
Then... who knows? Every prediction of "this will surely be enough technology" has turned out to be wrong so far.
We agree.
legendary
Activity: 1652
Merit: 2301
Chief Scientist
Designing something to work and designing to not fail are entirely different endeavors and someone qualified for one is not necessarily qualified to even evaluate the other.
Pure coincidence, but I had lunch today with a local developer who will be putting up a building in downtown Amherst. They are planning on running fiber to the building, because they want to build for the future and the people they want to sell to (like me in a few years, when we downsize after my kids are in college) want fast Internet.

If I gaze into my crystal ball...  I see nothing but more and more demand for bandwidth.

We've got streaming Netflix now, at "pretty good" quality.  We'll want enough bandwidth to stream retina-display-quality to every family member in the house simultaneously.

Then we'll want to stream HD 3D surround video to our Oculus Rift gizmos, which is probably another order of magnitude in bandwidth. To every member of the family, simultaneously. While our home security cameras stream to some security center off-site that is storing it as potential evidence in case of burglary or vandalism....

Then... who knows? Every prediction of "this will surely be enough technology" has turned out to be wrong so far.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
While (I think we'd all agree that) predicting technology decades ahead is hard, it is not impossible that a group of specialists, after a thorough discussion, could get the prediction about right.

I linked you to the report of Bell Labs achieving 10Gbps over copper wire. Here is the link to them achieving 100 petabits per second over fiber in 2009:

http://www.alcatel-lucent.com/press/2009/001797

Quote
This transmission experiment involved sending the equivalent of 400 DVDs per second over 7,000 kilometers, roughly the distance between Paris and Chicago.

These are demonstrated capacities for these two mediums. The only limiting factors for achieving such rates for individual consumers are the physical and economic considerations of building out the infrastructure. Nonetheless, the technologies for achieving exponential increases in bandwidth over current offerings are proven. Achieving these rates in practice on a scale coinciding with the historical exponential growth of 50% annually, which does take economic and physical realities into consideration, seems well within reason. I'm sure telecommunications specialists would agree.

As a telecommunications specialist: no, I do not agree.

Sure, we were also able to get x.25 and x.75 telecom to run over barbed wire, in the lab.  (There are places in the world that still use these protocols, some of which would deeply benefit from bitcoin in their area.)
The logistical challenges of implementation are not what you find in the lab.
This stuff has to go out in environments where someone backs up their truck into a cross country line so they can cut it and drive off with a few miles of copper to sell as scrap.  We live in the world, not in the lab.

Designing something to work and designing to not fail are entirely different endeavors and someone qualified for one is not necessarily qualified to even evaluate the other.
legendary
Activity: 1176
Merit: 1020
Either we've gone through the looking glass, or else the goal is that Bitcoin should fail and some alt coin take its place?
Why hard fork Bitcoin to enable microtransactions if only to make them too expensive, as well as add risk and cost and also remove functionality in the process?

Gavin's 2nd proposal also seems worse than the first by the arbitrariness factor.  x20 size first year, for years two through ten x1.4, then stop.  

1) It's not about micro-transactions.  It's about the network having enough capacity to handle normal transactions (let's say over $0.01) as adoption grows.

2) It's not arbitrary.  Gavin's revised proposal is better than the first because it is more finely tuned to match current and projected technological considerations.  Remember, the point of the proposal is not to maximize miner revenue or create artificial scarcity.  It is to allow the network to grow as fast as possible while still keeping full-node / solo mining ability within the reach of the dedicated home user.  That is not an economic question, but rather a technical one.  Gavin and the rest of the core devs are computer experts and are as well equipped to make guesses about bandwidth and computing power growth as anyone.

Let's look at each of the three phases of Gavin's revised proposal.  Phase one: raise the MaxBlockSize to 20MB as soon as possible.  That would be a maximum of 87GB per month of chain growth, or about 1 TB per year - easy to store on consumer equipment.  Using myself as an example, I have a 20Mbps cable connection, which could actually handle 1.4 GB every 10 minutes, so 20 MB blocks would use just 1/70th of my current bandwidth.  I think most of us would agree 20MB blocks would not squeeze out interested individuals.
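
For anyone who wants to check the arithmetic, here's a minimal sketch in Python (assuming 20 MB blocks, one block every 10 minutes, and a 20 Mbps link; decimal units, so it lands near the 1.4 GB and 1/70 figures above rather than exactly on them):

Code:
# Rough check of 20 MB blocks against storage and a 20 Mbps connection.
block_size_mb = 20                 # proposed MaxBlockSize, in MB
blocks_per_month = 6 * 24 * 30     # one block per 10 minutes

monthly_growth_gb = block_size_mb * blocks_per_month / 1000
yearly_growth_tb = monthly_growth_gb * 12 / 1000

link_mbps = 20
bytes_per_10min = link_mbps / 8 * 600 * 1e6     # what a 20 Mbps link moves in 10 minutes
fraction_used = (block_size_mb * 1e6) / bytes_per_10min

print(f"chain growth: ~{monthly_growth_gb:.0f} GB/month, ~{yearly_growth_tb:.2f} TB/year")
print(f"20 MB blocks use ~1/{1 / fraction_used:.0f} of a {link_mbps} Mbps link")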

Phase 2 is 40% yearly growth of the MaxBlockSize.  That seems entirely reasonable considering expected improvements in computers and bandwidth.

Phase 3 is stopping the pre-programmed growth after 20 years.  This recognizes that nothing can grow forever at 40%, and that our ability to predict the future diminishes the farther out we look.  Also, let's imagine that in years 16-20 computational resources only grow by 30% per year, and the network becomes increasingly centralized.  After year 20, network capacity would freeze but computer speed growth would not, so the forces of decentralization would have a chance to catch up.
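
To make the schedule concrete, here is a hypothetical sketch of the three phases as described above (20MB at activation, 40% growth per year through year 20, then frozen); the exact year indexing and rounding are my own assumptions, not anything from the proposal text:

Code:
# Hypothetical illustration of the three-phase schedule: 20 MB at activation,
# 40% yearly growth through year 20, then frozen. Indexing is an assumption.
def max_block_size_mb(years_after_activation: int) -> float:
    growth_years = min(max(years_after_activation, 0), 20)
    return 20 * 1.4 ** growth_years

for year in (0, 1, 5, 10, 20, 25):
    print(f"year {year:>2}: {max_block_size_mb(year):,.0f} MB")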

Gavin - I hope you have enough support to implement this.  You are the best chance for reaching consensus.  And thanks for being willing to lobby on its behalf.  100% consensus is impossible, but 75%-80% should be enough to safely move forward.

Everyone else - remember that without consensus we are stuck at 1 MB blocks.  Gavin's proposal doesn't have to be perfect for it to still be vastly better than the status quo.
legendary
Activity: 1050
Merit: 1002
While (I think we'd all agree that) predicting technology decades ahead is hard, it is not impossible that a group of specialists, after a thorough discussion, could get the prediction about right.

I linked you to the report of Bell Labs achieving 10Gbps over copper wire. Here is the link to them achieving 100 petabits per second over fiber in 2009:

http://www.alcatel-lucent.com/press/2009/001797

Quote
This transmission experiment involved sending the equivalent of 400 DVDs per second over 7,000 kilometers, roughly the distance between Paris and Chicago.

These are demonstrated capacities for these two mediums. The only limiting factors for achieving such rates for individual consumers are the physical and economic considerations of building out the infrastructure. Nonetheless, the technologies for achieving exponential increases in bandwidth over current offerings are proven. Achieving these rates in practice on a scale coinciding with the historical exponential growth of 50% annually, which does take economic and physical realities into consideration, seems well within reason. I'm sure telecommunications specialists would agree.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
How would you enforce those non-free transactions? If the transaction spammer is the miner then he can include any fee whatsoever because he pays himself. The only cost for him is that the coins are frozen for about 100 decaminutes.

You're thinking of the current network. We're talking about a hard fork. A hard fork could require all transactions to have some sort of minimum fee included.

Either we've gone through the looking glass, or else the goal is that Bitcoin should fail and some alt coin take its place?
Why hard fork Bitcoin to enable microtransactions if only to make them too expensive, as well as add risk and cost and also remove functionality in the process?

Gavin's 2nd proposal also seems worse than the first by the arbitrariness factor.  x20 size first year, for years two through ten x1.4, then stop.  

Is there some debate method outside Lewis Carroll where you get increasingly absurd until folks stop talking with you, and then you declare victory?
Let's stop this painting-the-roses-red stuff and get back to serious discussion if we want to increase the block size limit at all.
sr. member
Activity: 333
Merit: 252
so if bandwidth growth happens to stop in 10 years

Why would it? Why on earth would it???

It's all about predicting the value of some constants in the future.
I've no idea what they would be in 10 years.
I'm sure there are people who have a much better idea than I do.
While (I think we'd all agree that) predicting technology decades ahead is hard, it is not impossible that a group of specialists, after a thorough discussion, could get the prediction about right.
Maybe we should all bet on them to make it. (No sarcasm intended.)

However, it seems reasonable to me to try to find a solution that would involve predicting fewer constants, and to minimize the impact of not getting those few constants right.
legendary
Activity: 3878
Merit: 1193
Again, what is the point of minimum fee if the transaction isn't propagated? Miner-spammer includes the fee payable to himself. What's the point of that exercise?

Ok, good point, thanks.

It would have to be a massive miner bloating only the blocks he himself solves. Why would someone do that? It's a very limited and not very interesting attack, as it is possible today yet doesn't happen.
legendary
Activity: 2128
Merit: 1073
You're thinking of the current network. We're talking about a hard fork. A hard fork could require all transactions to have some sort of minimum fee included.
Again, what is the point of minimum fee if the transaction isn't propagated? Miner-spammer includes the fee payable to himself. What's the point of that exercise?
legendary
Activity: 3878
Merit: 1193
How would you enforce those non-free transactions? If the transaction spammer is the miner then he can include any fee whatsoever because he pays himself. The only cost for him is that the coins are frozen for about 100 decaminutes.

You're thinking of the current network. We're talking about a hard fork. A hard fork could require all transactions to have some sort of minimum fee included.
legendary
Activity: 3878
Merit: 1193
I'll let Satoshi speak to that: 
Free transactions are nice and we can keep it that way if people don’t abuse them.
Let's not break what needn't be broken in order to facilitate micropayments (the reason for this proposal in the first place, yes?)

I agree with Satoshi. You pointed out that free transactions can be abused, so let's eliminate them when we adjust the max blocksize.
legendary
Activity: 2128
Merit: 1073
And the rest of the miners are free to ignore a block like that. You have yet to convince me there's a problem. Miners can fill blocks with worthless free transactions today.

Then maybe the problem isn't a large maximum blocksize, but the allowance of unlimited feeless transactions. They are not free for the network to store in perpetuity, so why not eliminate them?

Eliminate free transactions and eliminate the maximum blocksize. Problem solved forever.
How would you enforce those non-free transactions? If the transaction spammer is the miner then he can include any fee whatsoever because he pays himself. The only cost for him is that the coins are frozen for about 100 decaminutes.

Are you thinking of making the validity of the block dependent on how well the transactions in it were propagated over the network? I don't think this is going to work without a complete overhaul of the protocol.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
Theory and practice diverge here.
A miner can put as many transactions as they like in a block with no fees.
The cost is then replicated across every full node which must store it in perpetuity.

And the rest of the miners are free to ignore a block like that. You have yet to convince me there's a problem. Miners can fill blocks with worthless free transactions today.

Then maybe the problem isn't a large maximum blocksize, but the allowance of unlimited feeless transactions. They are not free for the network to store in perpetuity, so why not eliminate them?

Eliminate free transactions and eliminate the maximum blocksize. Problem solved forever.
I'll let Satoshi speak to that: 
Free transactions are nice and we can keep it that way if people don’t abuse them.
Let's not break what needn't be broken in order to facilitate micropayments (the reason for this proposal in the first place, yes?)
legendary
Activity: 3878
Merit: 1193
Theory and practice diverge here.
A miner can put as many transactions as they like in a block with no fees.
The cost is then replicated across every full node which must store it in perpetuity.

And the rest of the miners are free to ignore a block like that. You have yet to convince me there's a problem. Miners can fill blocks with worthless free transactions today.

Then maybe the problem isn't a large maximum blocksize, but the allowance of unlimited feeless transactions. They are not free for the network to store in perpetuity, so why not eliminate them?

Eliminate free transactions and eliminate the maximum blocksize. Problem solved forever.
legendary
Activity: 1050
Merit: 1002
so if bandwidth growth happens to stop in 10 years

Why would it? Why on earth would it???

Look, Jakob Nielsen reports his bandwidth in 2014 as 120Mbps, which is in the same range as the 90Mbps figure Gavin mentions for his own calculations. Let's use 100Mbps as a "good" bandwidth starting point, which yields:

1: 100
2: 150
3: 225
4: 338
5: 506
6: 759
7: 1139
8: 1709
9: 2563
10: 3844
11: 5767
12: 8650
13: 12975
14: 19462
15: 29193
16: 43789
17: 65684
18: 98526
19: 147789
20: 221684

Researchers at Bell Labs just set a record for data transmission over copper lines of 10Gbps. So we can use that as a bound for currently existing infrastructure in the U.S. We wouldn't hit that until year 13 above, and that's copper.

Did you not read my earlier post on society's bandwidth bottleneck, the last mile? I talk about society moving to fiber to the premises (FTTP) to upgrade bandwidth. Countries like Japan and South Korea have already installed this at over 60% penetration. The U.S. is at 7.7%, and I personally saw fiber lines being installed to a city block a week ago. Researchers at Bell Labs have achieved over 100 petabits per second of data transmission over fiber-optic lines. Do you realize how much peta is? 1 petabit = 10^15 bits = 1 000 000 000 000 000 bits = 1000 terabits

That's a real world bound for fiber, and that's what we're working toward. Your fears appear completely unsubstantiated. On what possible basis, given what I've just illustrated, would you expect bandwidth to stop growing, even exponentially from now, after only 10 years?!?
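
If anyone wants to reproduce the projection above, it is just 50% compounded yearly from a 100 Mbps starting point:

Code:
# Reproduces the 50%/year projection above, starting from 100 Mbps.
bandwidth_mbps = 100.0
for year in range(1, 21):
    print(f"{year}: {round(bandwidth_mbps)}")
    bandwidth_mbps *= 1.5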
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
Why do we fear hard forks?  They are a highly useful tool/technique.  We do not know for sure what the future will bring.  When the unexpected comes then we must trust those on the spot to handle the issue.
From a security perspective, the more useful something is, the more risk it tends to have.
Hard forks today are much easier than they will be later: there are only a couple million systems to update simultaneously with an error-free code change and no easy back-out process.
Later there will hopefully be a few more systems, with more functionality and complexity.
This is the reason I maintain hope that a protocol can be designed to accommodate the needs of the future with less guesswork/extrapolation.  This is not an easy proposition, and it is not one most developers are accustomed to.  This is because it is not enough to make something that works; we need something that can't be broken in an unpredictable future.

Whatever the result of this, we limit the usability of Bitcoin to some segment of the world's population and limit the use cases.
2.0 protocols have larger transaction sizes.  Some of this comes down to how the revenue gets split, with whom and when.  Broadly the split is between miners capitalizing on the scarce resource of block size to exact fees, and the Bitcoin protocol users who are selling transactions.

Block rewards are mapped out to 2140.  If we are looking at 10-20 years ahead only, I think we can still do better.

If we start with Gavin's proposal and set a target increase of 50% per year, but make this increase sensitive to the contents of the block chain (fee amounts, number of transactions, transaction sizes, etc.) and adjust the increase in maximum size up or down based on actual usage, need, and network capability, we may get a result that can survive as well as accommodate the changes that we are not able to predict.
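
Purely as a hypothetical sketch of what "sensitive to the contents of the block chain" could look like, here is one way to nudge a 50%/year target up or down based on how full recent blocks actually were; every name and threshold below is invented for illustration, not a worked-out proposal:

Code:
# Hypothetical sketch only: adjust the next period's max block size around a
# 50%/year target, nudged by how full recent blocks actually were.
# All names and thresholds here are invented for illustration.
def next_max_block_size(current_max_bytes: int, recent_block_sizes: list) -> int:
    target_growth = 1.50        # baseline 50%/year
    avg_fill = sum(recent_block_sizes) / (len(recent_block_sizes) * current_max_bytes)

    if avg_fill > 0.80:         # blocks nearly full: grow somewhat faster
        growth = target_growth * 1.10
    elif avg_fill < 0.20:       # blocks mostly empty: grow more slowly
        growth = 1.25
    else:
        growth = target_growth

    return int(current_max_bytes * growth)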

50% may be too high, it may be too low, 40% ending after a period likewise, maybe high maybe low.
The problem is that we do not know today what the future holds, these are just best guesses and so they are guaranteed to be wrong.

Gavin is on the payroll of TBF, which primarily represents the protocol users and, to a lesser extent, the miners.  This is not to suggest that his loyalties are suspect; I start from the view that we all want what is best for Bitcoin, but I recognize that he may simply be getting more advice and concern from some interests and less from others.  All I want is the best result we can get, and to have the patience to wait for it.  After all, how often do you get to work on something that can change the world?  It is worth the effort to try for the best answer.

JustusRanvier had a good insight on needing bandwidth price data from the future, which is not available in the block chain today but ultimately may be with 2.0 oracles.  However, depending on those for the protocol would introduce other new vulnerabilities.  The main virtue of Gavin's proposal is its simplicity; its main failures are that it is arbitrary, insensitive to changing conditions, and inflexible.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
A miner can put as many transactions as they like in a block with no fees.

This is solved by implementing IBLT as the standard block transmission method, although this is not a short-term goal.

A miner can get his or her IBLT blocks accepted only if the vast majority of the transactions in it are already known to, and accepted as sensible by, the majority of the network. It shifts the pendulum of power back towards all the non-mining nodes, because miners must treat the consensus tx mempool as work they are obliged to do. It also allows for huge efficiency gains, shifting the bottleneck from bandwidth to disk storage, RAM and CPU (which already have a much greater capacity). In theory, 100MB blocks which get written to the blockchain can be sent using only 1 or 2MB of data on the network. I don't think many people appreciate how fantastic this idea is.
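
For readers who haven't run into IBLTs, here is a toy sketch in Python of the set-reconciliation trick they enable (integer keys stand in for txids). It is a bare-bones illustration with made-up sizes, not the actual proposal's data structure or parameters, and with a table this small decoding can occasionally fail (real IBLTs are sized so that is negligible):

Code:
import hashlib

# Toy invertible Bloom lookup table over integer keys (stand-ins for txids).
# A peer that already knows most transactions can recover the few it is
# missing from a small table instead of downloading the full block.
M, K = 63, 3   # cells and hash functions; partitioned so each key gets K distinct cells

def _idx(key, i):
    h = int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16)
    return i * (M // K) + h % (M // K)

def _chk(key):
    return int(hashlib.sha256(f"chk:{key}".encode()).hexdigest(), 16) & 0xFFFFFFFF

def new_table():
    return [[0, 0, 0] for _ in range(M)]            # [count, key_xor, checksum_xor]

def insert(table, key):
    for i in range(K):
        cell = table[_idx(key, i)]
        cell[0] += 1
        cell[1] ^= key
        cell[2] ^= _chk(key)

def subtract(a, b):
    return [[a[j][0] - b[j][0], a[j][1] ^ b[j][1], a[j][2] ^ b[j][2]] for j in range(M)]

def decode(diff):
    """Peel 'pure' cells to list the keys present in one table but not the other."""
    recovered, progress = [], True
    while progress:
        progress = False
        for cell in diff:
            if abs(cell[0]) == 1 and _chk(cell[1]) == cell[2]:
                key, sign = cell[1], cell[0]
                recovered.append(key)
                for i in range(K):                  # remove the key from all its cells
                    c = diff[_idx(key, i)]
                    c[0] -= sign
                    c[1] ^= key
                    c[2] ^= _chk(key)
                progress = True
    return recovered

# The block creator encodes its transaction set; the receiver encodes what it
# already has; the small difference is recovered from the subtracted tables.
block, mempool = new_table(), new_table()
for tx in range(1, 101):
    insert(block, tx)
for tx in range(1, 99):                             # receiver is missing txs 99 and 100
    insert(mempool, tx)
print(sorted(decode(subtract(block, mempool))))     # expected: [99, 100]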

The debate here is largely being conducted on the basis that the existing block transmission method is going to remain unimproved. This is not the case, and a different efficiency method, tx hashes in relayed blocks, is already live.

Okey dokey.  My latest straw-man proposal is 40% per year growth for 20 years. That seems like a reasonable compromise based on current conditions and trends.

Gavin, I hope you proceed with what you think is best.
sr. member
Activity: 333
Merit: 252

10: 769
...
20: 44337

so if bandwidth growth happens to stop in 10 years, then in 20 years you end up with a max block of 44337 whereas the "comfortable" size (if we consider 1MB comfortable right now) is only 769.
I call that "easy to overshoot" because predicting technology for decades ahead is hard, and the difference between these numbers is huge.
hero member
Activity: 709
Merit: 503
Perhaps I am stating the obvious:

1 MB / 10 min ≈ 1,667 B/s (about 13.3 kbps)

Do not try running Bitcoin on a system with less bandwidth; even 9600 baud isn't enough.  Hmm, what does happen?  Do peers give up trying to catch it up?

A (the?) serious risk of continuing with a block size that is too small: if/when the block size bottlenecks Bitcoin, the backlog of transactions will accumulate.  If the inflow doesn't subside for long enough, the backlog will grow without limit until something breaks; besides, who wants transactions sitting in some queue for ages?
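
To make the backlog point concrete, a tiny sketch with invented numbers (say ~2,000 transactions of capacity per block versus 2,500 arriving per 10-minute interval):

Code:
# Invented numbers for illustration: 2,000 tx of capacity per block,
# 2,500 tx arriving per 10-minute block interval.
capacity_per_block = 2000
arrivals_per_block = 2500

backlog = 0
for block in range(1, 7):                 # one hour of blocks
    backlog += arrivals_per_block - capacity_per_block
    print(f"after block {block}: {backlog} tx waiting")
# The queue grows by 500 tx per block and never drains while demand exceeds capacity.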

What functional limit(s) exist constraining the block size?  2MB, 10MB, 1GB, 10GB, 1TB, 100TB, at some point something will break.  Let's crank up the size on the testnet until it fails just to see it happen.

The alternative to all this is to reduce the time between blocks.  Five minutes between blocks gives us the same thing as jumping up to 2MB.
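
A quick check of the numbers above, including the equivalence between halving the interval and doubling the size:

Code:
def sustained_bandwidth(block_bytes, interval_seconds):
    """Minimum sustained download rate, in bytes per second."""
    return block_bytes / interval_seconds

print(sustained_bandwidth(1_000_000, 600))   # ~1,667 B/s for 1 MB every 10 minutes
print(sustained_bandwidth(2_000_000, 600))   # 2 MB every 10 minutes
print(sustained_bandwidth(1_000_000, 300))   # 1 MB every 5 minutes -- same rate as above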

Why do we fear hard forks?  They are a highly useful tool/technique.  We do not know for sure what the future will bring.  When the unexpected comes then we must trust those on the spot to handle the issue.