
Topic: Increasing the block size is a good idea; 50%/year is probably too aggressive - page 10. (Read 14297 times)

legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer

To spam the 1 MB blocksize takes roughly .2 BTC per block, or 1.2 BTC per hour. That's only $500 per hour.

To spam a 1 GB blocksize takes roughly 200 BTC per block, or 1200 BTC per hour. That's $500,000 per hour!

A 1 GB blocksize is far more costly to attack. We could increase the block size to 1GB now and nothing would happen because there aren't that many transactions to fill such blocks.


I calculated a current cost of $252/hour to spam 1 MB blocks, which scales to $252,000/hour to spam 1 GB blocks.
With 1 MB blocks, if an attacker spams the blocks, the users have no way to counter the attack by raising the fees they pay: to push the attacker's cost to $252,000/hour they would have to pay $10 per transaction.
With 1 GB blocks, if the attacker spams the blocks, the users just need to move from 1 cent to 2 cents and the cost of the attack moves from $252K to $504K per hour. At 4 cents per transaction it becomes $1M per hour.

Remember the cost of spamming the blocks goes directly into the pockets of miners. So they can reinvest the money in better bandwidth and storage and move to 2 GB blocks, doubling the cost for the attacker.
At $1M per hour that is $100M in four days, $750M in a month, and $8.76 billion per year (more than ten times the miners' income today).
Theory and practice diverge here.
A miner can put as many transactions as they like in a block with no fees.
The cost is then replicated across every full node which must store it in perpetuity.
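A rough sketch of the storage burden that replication implies, assuming consistently full blocks and 144 blocks per day (both simplifying assumptions, not figures from the thread):

Code:
# Yearly storage added to EVERY full node if blocks are consistently full.
# Assumes 144 blocks/day; the block sizes are illustrative.
BLOCKS_PER_DAY = 144

for max_block_mb in (1, 20, 1000):          # 1 MB, 20 MB, 1 GB
    gb_per_year = max_block_mb * BLOCKS_PER_DAY * 365 / 1024
    print(f"{max_block_mb:>5} MB blocks -> ~{gb_per_year:,.0f} GB per node per year")
    # roughly 51 GB, 1,026 GB, and 51,328 GB respectively

Even fee-free transactions a miner stuffs into its own blocks impose that cost on every archival node, which is the asymmetry being pointed out here.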
legendary
Activity: 1050
Merit: 1002
I'm not making predictions on constants that we don't know; but when speaking
about exponential growth it is not even necessary. Want to know how fast the exponent
grows? Take your 50% growth and, just out of curiosity, see for which n your (1.5)^n exceeds
the number of atoms in the universe. Gives some idea.

But the proposal isn't to exceed the number of atoms in the universe. It's to increase the block size for 20 years and then stop. If we do that starting with a 20 MB block at 50% per year we arrive at 44,337 MB after 20 years. That's substantially under the number of atoms in the universe.

The point being, with an exponent it's too easy to overshoot.

How so? You can know exactly what value each year yields. It sounds like you're faulting exponents for exponents' sake. Instead, give the reason you feel the resulting values are inappropriate. Here they are (year: max block size in MB):

1: 20
2: 30
3: 45
4: 68
5: 101
6: 152
7: 228
8: 342
9: 513
10: 769
11: 1153
12: 1730
13: 2595
14: 3892
15: 5839
16: 8758
17: 13137
18: 19705
19: 29558
20: 44337
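For anyone who wants to check the table, a quick sketch that reproduces it (the 20 MB starting point and the 50%/year rate are taken from the post; the rounding is mine):

Code:
# Reproduce the 50%/year schedule: year 1 = 20 MB, multiply by 1.5 each year.
size_mb = 20.0
for year in range(1, 21):
    print(f"{year}: {round(size_mb)}")   # matches the table above (values in MB)
    size_mb *= 1.5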
sr. member
Activity: 333
Merit: 252
Of course one can say, let's put it 50% per year until the bandwidth stops growing that fast,
and then we fork again. But this only postpones the problem.  Trying to predict now  exactly when this happens, and to  program for it now, seems futile.

Okey dokey.  My latest straw-man proposal is 40% per year growth for 20 years. That seems like a reasonable compromise based on current conditions and trends.

You seem to be looking hard for reasons not to grow the block size-- for example, yes, CPU clock speed growth has stopped. But number of cores put onto a chip continues to grow, so Moore's Law continues.  (and the reference implementation already uses as many cores as you have to validate transactions)

Actually, I'm not looking for reasons not to grow the block size: I  suggested sub-exponential growth instead, like, for example, quadratic (that was a serious suggestion).

About the 40% over the 20 years - what if you overshoot by, say, 10 years?
And as a result of 40% growth over the 10 extra years the max block size grows so much
that it's effectively infinite? (1.4^10 ~ 30). The point being, with an exponent it's too easy to
overshoot. Then if you want to solve the resulting problem with another fork, it may be much
harder to reach a consensus, since the problem will be of a very different nature (too much centralization
vs. too-expensive transactions).
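To put a number on that overshoot worry, a small sketch (the 20 MB starting point is borrowed from the proposal being discussed; the rest follows from the 40% figure):

Code:
# How much does a 10-year overshoot matter at 40%/year growth?
start_mb = 20
print(f"after 20 years: ~{start_mb * 1.4 ** 20 / 1024:.1f} GB")   # ~16.3 GB
print(f"after 30 years: ~{start_mb * 1.4 ** 30 / 1024:.0f} GB")   # ~473 GB
print(f"overshoot factor for 10 extra years: {1.4 ** 10:.1f}")    # ~28.9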

legendary
Activity: 1652
Merit: 2301
Chief Scientist
Of course one can say, let's put it 50% per year until the bandwidth stops growing that fast,
and then we fork again. But this only postpones the problem.  Trying to predict now  exactly when this happens, and to  program for it now, seems futile.

Okey dokey.  My latest straw-man proposal is 40% per year growth for 20 years. That seems like a reasonable compromise based on current conditions and trends.

You seem to be looking hard for reasons not to grow the block size-- for example, yes, CPU clock speed growth has stopped. But number of cores put onto a chip continues to grow, so Moore's Law continues.  (and the reference implementation already uses as many cores as you have to validate transactions)

PS: I got positive feedback from a couple of full-time, professional economists on my "block size economics" post, it should be up tomorrow or Friday.
sr. member
Activity: 333
Merit: 252
Because exponential growth is unsustainable

Not inherently. It depends on the rate of growth and what is growing. For example, a 1% per year addition to bandwidth is exceedingly conservative, based on historical evidence.

it is bound to cap at some point in the near future.


Physical parameters have physical limits, which are constants.
So unbounded growth is unsustainable. Even linear growth.
However, with less-than-exponential growth one can expect it to become negligible
from some point on (that is, less than x% per year for any x).

Looking at past data and just extrapolating the exponent one sees is myopic
reasoning: the exponential growth is only due to the novelty of the given technology.
It will stop when saturation is reached, that is, when the physical limit of the parameter
in question is near.

If you want a concrete example, look at CPU clock speed growth over the last few decades.

Quote
Define 'near future'. Is that 5 years, 10 years, 40? And what makes you say that? It's easy to make a general unsupported statement. Don't be intellectually lazy. Show the basis for your reasoning, please.
I'm not making predictions on constants that we don't know; but when speaking
about exponential growth it is not even necessary. Want to know how fast the exponent
grows? Take your 50% growth and, just out of curiosity, see for which n your (1.5)^n exceeds
the number of atoms in the universe. Gives some idea. Yes, you can put 1% or (1.01)^n; the difference is
not important.

Of course one can say, let's put it 50% per year until the bandwidth stops growing that fast,
and then we fork again. But this only postpones the problem.  Trying to predict now  exactly when this happens, and to  program for it now, seems futile.
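For concreteness, a sketch of that calculation (using 1e80 as a rough estimate of the number of atoms in the observable universe, which is my assumption, not a figure from the post):

Code:
import math

ATOMS = 1e80   # rough estimate, assumed only for illustration

for rate in (1.5, 1.01):       # 50%/year and 1%/year growth factors
    n = math.ceil(math.log(ATOMS) / math.log(rate))
    print(f"{rate}^n first exceeds 1e80 at n = {n}")   # 455 and 18513

Either way the point stands: any fixed exponential rate eventually blows past any physical constant; the disagreement in this thread is only about whether that happens within the window that matters.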



legendary
Activity: 1050
Merit: 1002
Because exponential growth is unsustainable

Not inherently. It depends on the rate of growth and what is growing. For example, a 1% per year addition to bandwidth is exceedingly conservative, based on historical evidence.

it is bound to cap at some point in the near future.

Define 'near future'. Is that 5 years, 10 years, 40? And what makes you say that? It's easy to make a general unsupported statement. Don't be intellectually lazy. Show the basis for your reasoning, please.
sr. member
Activity: 333
Merit: 252
why not linear growth, like  +n MB per block halving, or quadratic like +n MB per n'th block halving?

Why would we choose linear growth when the trend is exponential growth?


Because exponential growth is unsustainable, it is bound to cap at some point in the near future.
We have no idea at what constant it will reach saturation. Instead  we can try
 a slow growth of the parameter, knowing that it will surpass any constant and thus probably
catch up with the real limit at some point, and hoping that the growth is slow enough to be
at most a minor nuisance after that.
sr. member
Activity: 453
Merit: 254

To spam the 1 MB blocksize takes roughly .2 BTC per block, or 1.2 BTC per hour. That's only $500 per hour.

To spam a 1 GB blocksize takes roughly 200 BTC per block, or 1200 BTC per hour. That's $500,000 per hour!

A 1 GB blocksize is far more costly to attack. We could increase the block size to 1GB now and nothing would happen because there aren't that many transactions to fill such blocks.


I calculated a current cost of $252/hour to spam 1 MB blocks, which scales to $252,000/hour to spam 1 GB blocks.
With 1 MB blocks, if an attacker spams the blocks, the users have no way to counter the attack by raising the fees they pay: to push the attacker's cost to $252,000/hour they would have to pay $10 per transaction.
With 1 GB blocks, if the attacker spams the blocks, the users just need to move from 1 cent to 2 cents and the cost of the attack moves from $252K to $504K per hour. At 4 cents per transaction it becomes $1M per hour.

Remember the cost of spamming the blocks goes directly into the pockets of miners. So they can reinvest the money in better bandwidth and storage and move to 2 GB blocks, doubling the cost for the attacker.
At $1M per hour that is $100M in four days, $750M in a month, and $8.76 billion per year (more than ten times the miners' income today).
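A sketch of the arithmetic behind those figures. The ~250-byte transaction size and the dollar fee levels are my assumptions, chosen only to land near the $252/hour figure above; the shape of the argument does not depend on them:

Code:
# Hourly cost for an attacker who must match the going fee to keep blocks full.
BLOCKS_PER_HOUR = 6
TX_SIZE_BYTES = 250                 # assumed average transaction size

def attack_cost_per_hour(block_size_bytes, fee_usd_per_tx):
    txs_per_block = block_size_bytes / TX_SIZE_BYTES
    return txs_per_block * fee_usd_per_tx * BLOCKS_PER_HOUR

print(attack_cost_per_hour(1_000_000, 0.01))        # 1 MB, 1-cent fees:  ~$240/hour
print(attack_cost_per_hour(1_000_000_000, 0.01))    # 1 GB, 1-cent fees:  ~$240,000/hour
print(attack_cost_per_hour(1_000_000_000, 0.04))    # 1 GB, 4-cent fees:  ~$960,000/hour

The rough inputs don't change the key point: with big blocks, a small fee bump by honest users multiplies the attacker's bill, and the fees themselves flow to miners.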
legendary
Activity: 3878
Merit: 1193
The cost is not that significant.  Heck, the whole BTC market cap is not that significant.

If there were 6 GB block size bloat per hour?
A financial attack could do this independently.
Miners could do this free-ish.
Small miners would fail, as would all hobby miners.

Full nodes would become centralized, 51% risks would increase, etc.
These are just the obvious.  No more decentralisation for Bitcoin.

From the wiki:

Quote
Note that a typical transaction is 500 bytes, so the typical transaction fee for low-priority transactions is 0.1 mBTC (0.0001 BTC), regardless of the number of bitcoins sent.

To spam the 1 MB blocksize takes roughly .2 BTC per block, or 1.2 BTC per hour. That's only $500 per hour.

To spam a 1 GB blocksize takes roughly 200 BTC per block, or 1200 BTC per hour. That's $500,000 per hour!

A 1 GB blocksize is far more costly to attack. We could increase the blocksize to 1 GB now and nothing would happen because there aren't that many transactions to fill such blocks.
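Those figures follow directly from the wiki numbers quoted above; a quick check (the ~$400/BTC exchange rate is my assumption for the period):

Code:
# Back-of-the-envelope check of the spam cost quoted above.
TX_SIZE_BYTES = 500      # from the wiki quote
FEE_BTC = 0.0001         # low-priority fee per transaction, from the wiki quote
BLOCKS_PER_HOUR = 6
USD_PER_BTC = 400        # assumed exchange rate

for block_size in (1_000_000, 1_000_000_000):       # 1 MB and 1 GB
    btc_per_hour = (block_size / TX_SIZE_BYTES) * FEE_BTC * BLOCKS_PER_HOUR
    print(f"{block_size / 1e6:>6.0f} MB blocks: {btc_per_hour:,.1f} BTC/hour "
          f"(~${btc_per_hour * USD_PER_BTC:,.0f}/hour)")
    # 1 MB: 1.2 BTC/hour (~$480);  1 GB: 1,200 BTC/hour (~$480,000)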
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
To answer your question of what would also happen if the block size were increased to 1 GB tomorrow: the introduction of new attack vectors, which, if exploited, would require intervention by miners and developers to resolve.

Like what? What "new" attack vectors? It is already quite cheap to attack the current 1 MB blocksize. What would it cost to attack a 1 GB blocksize vs the current 1 MB blocksize?
The cost is not that significant.  Heck, the whole BTC market cap is not that significant.

If there were 6 GB block size bloat per hour?
A financial attack could do this independently.
Miners could do this free-ish.
Small miners would fail, as would all hobby miners.

Full nodes would become centralized, 51% risks would increase, etc.
These are just the obvious.  No more decentralisation for Bitcoin.
legendary
Activity: 3878
Merit: 1193
To answer your question of what would also happen if the block size were increased to 1 GB tomorrow: the introduction of new attack vectors, which, if exploited, would require intervention by miners and developers to resolve.

Like what? What "new" attack vectors? It is already quite cheap to attack the current 1 MB blocksize. What would it cost to attack a 1 GB blocksize vs the current 1 MB blocksize?
legendary
Activity: 1652
Merit: 2301
Chief Scientist
why not linear growth, like  +n MB per block halving, or quadratic like +n MB per n'th block halving?

Because network bandwidth, CPU, main memory, and disk storage (the potential bottlenecks) are all growing exponentially right now, and are projected to continue growing exponentially for the next couple decades.

Why would we choose linear growth when the trend is exponential growth?

Unless you think we should artificially limit Bitcoin itself to linear growth for some reason. Exponential growth in number of users and usage is what we want, yes?
sr. member
Activity: 333
Merit: 252
It doesn't matter how high the physical limit is: with an exponential growth rule (any x% per year) it is going to be reached and exceeded, whereupon keeping the rule is just as good as making the max size infinite.


why not linear growth, like  +n MB per block halving, or quadratic like +n MB per n'th block halving?
legendary
Activity: 1050
Merit: 1002
As I think it through 50% per year may not be aggressive.

Drilling down into the problem we find the last mile is the bottleneck in bandwidth:

http://en.wikipedia.org/wiki/Last_mile

That page is a great read/refresher for this subject, but basically:

Quote
The last mile is typically the speed bottleneck in communication networks; its bandwidth limits the bandwidth of data that can be delivered to the customer. This is because retail telecommunication networks have the topology of "trees", with relatively few high capacity "trunk" communication channels branching out to feed many final mile "leaves". The final mile links, as the most numerous and thus most expensive part of the system, are the most difficult to upgrade to new technology. For example, telephone trunklines that carry phone calls between switching centers are made of modern optical fiber, but the last mile twisted pair telephone wiring that provides service to customer premises has not changed much in 100 years.

I expect Gavin's great link to Nielsen's Law of Internet Bandwidth is only referencing copper wire lines. Nielsen's experience, which was updated to include this year and prior years (and continues to be in line with his law), tops out at 120 Mbps in 2014. Innovation allowing further increases over copper lines is likely near its end, although DSL is the dominant broadband access technology globally according to a 2012 study.

The next step is fiber to the premises. A refresher on fiber-optics communication:

Quote
Fiber-optic communication is a method of transmitting information from one place to another by sending pulses of light through an optical fiber. The light forms an electromagnetic carrier wave that is modulated to carry information. First developed in the 1970s, fiber-optic communication systems have revolutionized the telecommunications industry and have played a major role in the advent of the Information Age. Because of its advantages over electrical transmission, optical fibers have largely replaced copper wire communications in core networks in the developed world. Optical fiber is used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. Researchers at Bell Labs have reached internet speeds of over 100 petabits per second using fiber-optic communication.

The U.S. has one of the highest ratios of Internet users to population, but is far from leading the world in bandwidth. Being first in technology isn't always advantageous (see iPhone X vs iPhone 1). Japan leads FTTP with 68.5 percent penetration of fiber-optic links, with South Korea next at 62.8 percent. The U.S. by comparison is in 14th place with 7.7 percent. Similar to users leapfrogging to mobile phones for technology-driven services in parts of Africa, I expect many places to go directly to fiber as Internet usage increases globally.

Interestingly, fiber is a future-proof technology in contrast to copper, because once laid future bandwidth increases can come from upgrading end-point optics and electronics without changing the fiber infrastructure.

So while it may be expensive to initially deploy fiber, once it's there I foresee deviation from Nielsen's Law to the upside. Indeed, in 2012 Wilson Utilities, located in Wilson, North Carolina, rolled out their FTTH (fiber to the home) service with speed offerings of 20/40/60/100 megabits per second. In late 2013 they achieved 1 gigabit fiber to the home.
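As a rough sense of what staying on Nielsen's curve would mean, a sketch projecting from the 120 Mbps 2014 endpoint mentioned above (the 50%/year rate is Nielsen's own figure; assuming it continues is exactly the bet under discussion):

Code:
# Project a high-end home connection under Nielsen's Law (+50%/year) from 2014.
speed_mbps = 120.0       # Nielsen's 2014 data point, per the post above
for year in range(2014, 2035, 5):
    print(f"{year}: ~{speed_mbps:,.0f} Mbps")   # 120, ~911, ~6,920, ~52,547, ~399,031
    speed_mbps *= 1.5 ** 5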
full member
Activity: 210
Merit: 100
Ah, this is where the big boys post...ohh.  Impressive.

As for little me, I am downloading in a Third World country the entire > 50 GB blockchain I guess it is, for my Armory client, and it's fun but has an experimental feel to it.  Even with a 1.5 Mbps internet connection, it's close to 24 hours and I'm only two-thirds done.  I understand that subsequent incremental downloads of this blockchain should be a lot quicker and smaller once the initial download is finished.  I do understand however that Bitcoin transactions can take 1 hour to verify, which is probably related to the size of the blockchain.  The Bobos in Paradise (upper middle class) in the developed countries will not like that; for those off the grid this is a minor quibble.

As for compression of the blockchain, it's amazing what different algorithms can do.  For the longest time the difference between WinZip and WinRAR was trivial; then came 7-Zip, and using whatever algorithm that author uses, the shrinkage is dramatically better.  I can now compress a relational database much more using 7-Zip than WinZip on a Windows platform.  But there must be some tradeoff; I imagine 7-Zip is more resource-intensive and hence should take longer (though I've not seen this).

TonyT
legendary
Activity: 1652
Merit: 2301
Chief Scientist
No comment on this?
Quote
One example of a better way would be to use a sliding window of x blocks, 100+ deep, and base the max allowed size on some percentage over the average while dropping anomalous outliers from that calculation.  Using some method that is sensitive to the reality as it may exist in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.

That does not address the core of people's fears, which is that big, centralized mining concerns will collaborate to push smaller competitors off the network by driving up the median block size.

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.  Where we are guessing, we ought to acknowledge that.

Yes, that is a good point, made by other people in the other thread about this. A more conservative rule would be fine with me, e.g.

Fact: average "good" home Internet connection is 250GB/month bandwidth.
Fact: Internet bandwidth has been growing at 50% per year for the last 20 years.
  (if you can find better data than me on these, please post links).

So I propose the maximum block size be increased to 20MB as soon as we can be sure the reference implementation code can handle blocks that large (that works out to about 40% of 250GB per month).
Increase the maximum by 40% per year (really, double every two years -- thanks to whoever pointed out 40% per year is 96% over two years)
Since nothing can grow forever, stop doubling after 20 years.
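Worked out, that rule looks like this (a sketch using only the figures in the post: 20 MB start, doubling every two years, stopping after 20 years):

Code:
# The straw-man schedule above: start at 20 MB, double every two years, stop at year 20.
size_mb = 20
for year in range(0, 21, 2):
    print(f"year {year:>2}: {size_mb:>6} MB  (~{size_mb * 144 * 30 / 1024:,.0f} GB/month if full)")
    size_mb *= 2

So the cap tops out around 20 GB per block (roughly 86 TB/month of bandwidth if blocks were full), which is why the stop-after-20-years clause carries real weight.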

sr. member
Activity: 453
Merit: 254
In my opinion, the 50% increase per year of the block size is too conservative in the short run and too optimistic in the long run.
If Bitcoin sees exponential growth in usage, including usage from fields where it is currently uneconomic to implement a payment service, we would have a faster increase in the short run and a slowdown in the long run.

Spamming the blockchain is not a real issue, for me.
If tomorrow we had 1 GB blocks (max size), some entity might be able to spam the blockchain with dust transactions, because the minimum fee is about 1 cent. The reaction of the users would simply be to raise the fee they pay: moving from 1 cent to 10 cents, the attacker would need to pay ten times as much to the miners (who thank them a lot for this) to produce transactions with the same fee and priority as the real users.

What would he accomplish? A bigger blockchain? People and large operators can buy truckloads of multi-terabyte HDs.
Actually, you don't even need to keep the whole blockchain on disk, just the last few weeks or months. People could download or share the earlier blocks on hardware media, as they will never change.
Just to be clear: although nominally the blockchain could change if someone dedicated enough time and resources to rebuild it from the genesis block (with a larger proof-of-work), any change to a part of the chain more than a day/week/month old would always be rejected.

Large entities will not attack the blockchain:
1) because it is, in any case, against some law to wreak havoc in a computer network and rewrite it for nefarious purposes.
2) because governments would need to justify it. They hold the monopoly on coercion, the monopoly on violence. If they resort to indirect attacks, they are just admitting that the threat of violence, and violence itself, is not working against Bitcoin's users. It would amount to starting to bleed in a shark-infested sea.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.
That is nothing like Gavin's proposal, for good reasons.  To answer your question of what would also happen if the block size were increased to 1 GB tomorrow: the introduction of new attack vectors, which, if exploited, would require intervention by miners and developers to resolve.
It is not enough to design something that works, we must also design so that it does not become more fragile.

Why not strive for a dynamic limit that prevents the need for future hard forks over the same issue?
Gavin's proposal is "the simplest that could possibly work".

I'll argue that it is just too simple, and too inflexible.

This proposal may be opening Bitcoin to new types of coin-killing attacks by assuming that anti-spam fees will always be sufficient to prevent bloating attacks.  Consider that the entire value of all bitcoin is currently less than 1/10th of the wealth of the world's currently richest man, and that man has spoken publicly against Bitcoin.  When you include wealthy institutions and even governments within the potential threat vector, the risks may become more apparent.  We cannot assume Bitcoin's success, and then predicate decisions necessary for that success on that success having already been accomplished.

If Bitcoin has to change due to a crisis, it ought at least be made better... so that the crisis need not be revisited.  (Hard forks get progressively more challenging in the future).  Design for the next 100s of years, not for the next bubble.  Fix it right, and we fix it once.

Designs ought to have safeguards to avoid unintended consequences and the ability to adjust as circumstances change.
My suggestion is that perhaps we can do better than to simply assume an infinite extrapolation, when there exists a means to measure and respond to the actual needs as they may exist in the future, within the block chain.

50% may be too much in some years, too little in others.  The proposal is needlessly inflexible and assumes too much (an indefinite extrapolation of network resources).  Picking the inflating percentage numbers out of a hat by a small group is what CENTRAL BANKERS do; this is not Satoshi's Bitcoin.

I'm not convinced a crisis necessitating a hard fork is at hand, but I am sure that the initial proposal is not the answer to it.  I look forward to its revision and refinement.  
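For illustration only, here is one shape the sliding-window idea quoted earlier in the thread could take. Every constant in it (window length, trim fraction, headroom multiplier) is a placeholder of mine, not part of any actual proposal:

Code:
# Hypothetical dynamic cap: look at the last WINDOW blocks, drop outliers,
# and allow some headroom over the trimmed average.
WINDOW = 2016          # placeholder: roughly two weeks of blocks
TRIM_FRACTION = 0.1    # placeholder: drop the top and bottom 10% as outliers
HEADROOM = 2.0         # placeholder: allow 2x the trimmed average

def next_max_block_size(recent_sizes, floor=1_000_000):
    """Next max block size (bytes) from the sizes of recent blocks (bytes)."""
    sizes = sorted(recent_sizes[-WINDOW:])
    trim = int(len(sizes) * TRIM_FRACTION)
    trimmed = sizes[trim:len(sizes) - trim] or sizes
    average = sum(trimmed) / len(trimmed)
    return max(int(average * HEADROOM), floor)   # never shrink below the floor

Gavin's objection earlier in the thread applies directly to a rule like this: large mining concerns could pad their own blocks to ratchet the trimmed average upward, so the window, trim, and headroom parameters carry real security weight.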
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.

IBLT makes it an issue because there would no longer be a risk/reward tradeoff on tx fees vs propagation delay in building the largest possible blocks. As a result the miner is incentivized to always build the largest possible block to collect maximum tx fees with no propagation risk.

IBLT encourages good behaviour because you can't successfully publish an IBLT full of transactions which the rest of the network doesn't want, unlike now, when a block could be full of rubbish 1sat transactions from a secret spam generator. The whole point of IBLT is that each node knows (and accepts) most of the transactions in advance and has them in its mempool. Only a smallish set of differences is required from the IBLT when processing it. So the fee market should be helped by this development.
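The risk/reward tradeoff both posts refer to can be made concrete with a toy model. The per-megabyte propagation delay and the exponential orphan-risk approximation below are assumptions of mine, not measurements from the thread:

Code:
import math

# Toy model of the fee-vs-orphan tradeoff that IBLT-style relay would remove.
BLOCK_REWARD_BTC = 25         # subsidy at the time of this thread
BLOCK_INTERVAL_S = 600
SECONDS_PER_EXTRA_MB = 10     # assumed extra propagation delay per MB with legacy relay

def marginal_orphan_cost_btc(extra_mb):
    """Expected subsidy lost to orphan risk from relaying extra_mb more data."""
    delay = extra_mb * SECONDS_PER_EXTRA_MB
    p_orphan = 1 - math.exp(-delay / BLOCK_INTERVAL_S)
    return BLOCK_REWARD_BTC * p_orphan

print(marginal_orphan_cost_btc(1.0))   # legacy relay: ~0.41 BTC expected loss per extra MB
print(marginal_orphan_cost_btc(0.0))   # near-instant relay: the penalty vanishes

With legacy relay, an extra megabyte of low-fee transactions only pays if its fees beat the expected orphan loss. If relay cost drops to near zero, the rational move is to include everything that pays any fee at all (the concern raised in the quote), while the counterpoint above is that IBLT only reconciles transactions the rest of the network already has and accepts.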
hero member
Activity: 667
Merit: 500
What would happen if the blocksize were increased to 1 GB tomorrow? Pretty much nothing. Miners will always be able to create blocks less than the maximum blocksize.
What would happen if the blocksize were decreased to 1 KB tomorrow? Bitcoin would come grinding to a halt.

Too small blocksize = death to bitcoin.
Too big blocksize = non-issue.

I'd rather see the blocksize too big than too small.

IBLT makes it an issue because there would no longer be a risk/reward tradeoff on tx fees vs propagation delay in building the largest possible blocks. As a result the miner is incentivized to always build the largest possible block to collect maximum tx fees with no propagation risk.