
Topic: Economics of block size limit - page 2.

legendary
Activity: 3878
Merit: 1193
February 28, 2013, 11:07:16 PM
#55

Bitcoin is not suitable for all possible transaction types that people need, but it can do more than 7 transactions per second, and it needs to do more than that. Otherwise we will at some point hit a brick wall in Bitcoin adoption.

Exactly.

Also, that point is within a year, based upon the long-term growth rate:

http://blockchain.info/charts/n-transactions?timespan=all&showDataPoints=false&daysAverageString=7&show_header=true&scale=1&address=

I calculate 345,000 transactions per day as the level where 1 MB block saturation occurs.

I hesitate to call SatoshiDICE spam or dust, but the vast majority of those transactions are small transfers going back and forth between the same two addresses. It would be a very bad idea if we created a hard fork just so people could transfer one BTC back and forth constantly.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
February 28, 2013, 07:22:54 PM
#54

Bitcoin is not suitable for all possible transaction types that people need, but it can do more than 7 transactions per second, and it needs to do more than that. Otherwise we will at some point hit a brick wall in Bitcoin adoption.

Exactly.

Also, that point is within a year, based upon the long-term growth rate:

http://blockchain.info/charts/n-transactions?timespan=all&showDataPoints=false&daysAverageString=7&show_header=true&scale=1&address=

I calculate 345,000 transactions per day as the level where 1 MB block saturation occurs.
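For illustration, here is that saturation arithmetic as a minimal sketch. The ~417-byte average transaction size is back-derived from the 345,000 figure and is an assumption, not a number stated in the post; the often-quoted 7 tx/s ceiling assumes smaller transactions.

```python
# Back-of-the-envelope block saturation arithmetic (a sketch; the
# average transaction size below is back-derived from the quoted
# 345,000 figure and is an assumption).

BLOCK_SIZE_LIMIT = 1_000_000        # bytes (1 MB)
BLOCKS_PER_DAY = 24 * 6             # one block roughly every 10 minutes
AVG_TX_SIZE = 417                   # bytes per transaction (assumed)

txs_per_day = BLOCK_SIZE_LIMIT * BLOCKS_PER_DAY // AVG_TX_SIZE
print(f"{txs_per_day:,} transactions/day "
      f"(~{txs_per_day / 86_400:.1f} tx/s) saturates 1 MB blocks")
# -> 345,323 transactions/day (~4.0 tx/s)
```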
legendary
Activity: 2184
Merit: 1056
Affordable Physical Bitcoins - Denarium.com
February 25, 2013, 12:58:36 PM
#53
Thank you for this. Thus, I would suggest making any changes to the block size only if absolutely necessary.

I don't think many here are suggesting we do a hard fork unless absolutely necessary. The thing is, it's very likely to become necessary at some point. It isn't necessary now, nor for a while yet, but we anticipate it will be in the future.

It really doesn't matter if Bitcoin never becomes the jack of all trades (which I think it won't, because the blockchain structure is just too heavy for doing everything). The problem is that using it in any mainstream capacity will become practically impossible for anything other than the transfer of very large amounts, unless this is eventually changed. 7 transactions per second is simply not enough.

It's extremely important for people to stop framing this as a choice between keeping Bitcoin a very rigid and inflexible system that stores value well, or "trying to scale it infinitely and risking safe value storage". The real issue is anything but that. There is a clear middle ground, and that is what we should aim for.

Bitcoin is not suitable for all possible transaction types that people need, but it can do more than 7 transactions per second, and it needs to do more than that. Otherwise we will at some point hit a brick wall in Bitcoin adoption.
legendary
Activity: 1764
Merit: 1002
February 25, 2013, 12:50:20 PM
#52
It is crucial to understand the concept and, yes, economic impact of a hard fork before even approaching the economic analysis of changing the max block size.

A hard fork is a significant event that knocks legitimate users off the network, makes coins unspendable, or potentially makes the same coins spendable in two different locations, depending on whether or not you're talking to an updated node.

It is, to pick a dramatic term, an Extinction Level Event.  If done poorly, a hard fork could make it impossible for reasonable merchants to trust the bitcoins they receive, the very foundation of their economic value.

Furthermore, a hard fork is akin to a Constitutional Convention: a hard fork implies the ability to rewrite the ground rules of bitcoin, be it the block size, the 21M limit, the SHA256 hash, or other hard-baked behavior. Thus, there is always the risk of unpredictable miners, users, and devs changing more than just the block size, precisely because it makes the most engineering sense to change other hard-to-change features at the time of a hard fork.

It is a nuclear option with widespread economic consequences for all bitcoin users.



Thank you for this. Thus, I would suggest making any changes to the block size only if absolutely necessary.
full member
Activity: 200
Merit: 104
Software design and user experience.
February 25, 2013, 12:16:46 PM
#51
Could you please provide proof for your assertions?

1. Do you seriously suggest that you are better protected after 9/11? Considering the amount of unjustified warfare in the Muslim world over the last 10 years, I would think that any US citizen is now more vulnerable to random terrorist attacks than before. Also, everybody knows how efficient the TSA is at finding actual bad guys.

2. That unlimited block size is a path to a massive, all-destroying attack. In other words, what is the recipe for bringing down the network when the block size is not limited? Also, I'm really interested in how this situation is different from having a powerful miner who does not include any useful transactions in his blocks.

Also, do not forget that Bitcoin's operation was never based on the good will of people who endlessly cooperate in the name of the common good. Everyone has been running his node for his own benefit since the inception. And anyone who wants to play by different rules is free to do so, and some people do. The network has never needed any appeal to the "common good" or "let's all think of the future".

When each individual user weighs the risk of a theoretical flood attack under a higher block size limit against his actual, measurable transaction costs, the decision will be made. I cannot say how much of an increase will be justified by the vast majority: 2 MB, 10 MB or 1000 MB; this will depend on the lowest common denominator for 99% of users. The remaining 1% will have to play along. It is just like the people who do not like the 6 GB blockchain: they have to either accept it or trust someone else to keep it. The fact that Bitcoin is decentralized does not mean that everybody must always agree with each other on everything. If the vast majority expects benefit from a change, the change will come ;-)
legendary
Activity: 2940
Merit: 1090
February 25, 2013, 10:20:04 AM
#50
Much the same probably applies to the security.

Before the twin towers were knocked down, did people vote in the removal of whatever liberties or privacies they had not already lost by then? Or did it take a catastrophe to make an entire nation, or maybe even a continent, finally upgrade its infrastructure enough to hopefully make it invulnerable to that kind of attack going forward?

Of course, some people are not going to want to be invulnerable to a too-large block, as they are going to want to be the attacker. Others will not be, because they cannot afford the far too excessively huge setting the fat cats think is fine for a max block, and so simply will not put enough resources in place to handle it.

When the attack comes, paralysing the whole network, we'll be going "damn, just leaving the max block size at a reasonable level would have nipped that in the bud, prevented it outright."

We really need it to be achievable all around, not so high that tons of node operators never even bother trying to be capable of it because it just seems like some number the megacorps put in to blow them away, not a reasonable defensive measure at all. If it is there so the megacorps can blow you away, you might as well just wait for them to blow you away, rather than blowing yourself away voluntarily ahead of time by upgrading to prepare for it when you aren't getting anywhere near enough transactions to fill even the current limit.

Remember, final settlement with, for example, PayPal takes six months. So having transactions take up to six months to finally get into the chain is a good thing; it's by design in networks like PayPal.

-MarkM-
full member
Activity: 200
Merit: 104
Software design and user experience.
February 25, 2013, 09:42:25 AM
#49
I was never saying anything about the *schedule* for changing the block limit. I was saying that as soon as the limit is actually limiting, it will be raised. That may happen in 10 years or in 10 months; nobody knows, and I don't care. Even if miners and users agree to some schedule today, if they hit the limit much sooner than the next scheduled increase, they will vote to raise the limit earlier.

Unless you make the limit some sort of function of the number of transactions (the way difficulty is a function of the number of blocks, not of time). But the more complex the schedule you propose, the more unreliable it will be to implement and the less probable it is to be voted in.
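A usage-keyed rule could look something like this minimal sketch, by analogy with the difficulty retarget; the 2016-block window, the fill thresholds, and the step sizes are all illustrative assumptions, not anything proposed in this thread:

```python
# Hypothetical usage-keyed limit adjustment, by analogy with the
# difficulty retarget. The window, thresholds, and step sizes are
# illustrative assumptions, not a concrete proposal from the thread.

def adjust_limit(current_limit: int, block_sizes: list[int]) -> int:
    """Raise the limit when recent blocks run mostly full, shrink it
    (never below 1 MB) when they run mostly empty."""
    avg_fill = sum(block_sizes) / (len(block_sizes) * current_limit)
    if avg_fill > 0.90:                  # sustained congestion: grow 25%
        return int(current_limit * 1.25)
    if avg_fill < 0.25:                  # sustained slack: shrink 20%
        return max(1_000_000, int(current_limit * 0.80))
    return current_limit                 # otherwise leave it alone

# e.g. adjust_limit(1_000_000, sizes_of_last_2016_blocks)
```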
legendary
Activity: 2940
Merit: 1090
February 25, 2013, 08:53:13 AM
#48
Also, think of it this way: imagine the year 1998, when everything was slower. Bitcoin would have an artificial limit of 50 KB per block. Does it seem like everyone would prefer having that limit today, paying $50 in transaction fees and passing all sub-$1000 transactions through escrows? Of course not. The same goes for the 1 MB limit today. In 5 years it will look just silly.

1998 : 00.05 MB (50 KB)
2002 : 00.10 MB
2006 : 00.20 MB
2010 : 00.40 MB
2014 : 00.80 MB

Hey, you're right: with 50 KB in 1998 and the four-year doubling, we're only a little ahead of schedule today, but nowhere near hitting the max. Cool, so start the four-year doubling way back in 1998 then.

If we start it now though, we could instead end up with

2014 : 02.00 MB
2016 : 04.00 MB
2020 : 08.00 MB
2024 : 16.00 MB
2028 : 32.00 MB

So from 2028 onward we'd be getting into some seriously large sizes.
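The doubling arithmetic behind both tables, as a small sketch (the start year, start size, and count are just inputs taken from the tables above):

```python
# Generate a double-every-four-years schedule (a sketch of the
# arithmetic behind the tables above).
def doubling_schedule(start_year, start_mb, doublings):
    return [(start_year + 4 * i, start_mb * 2 ** i)
            for i in range(doublings + 1)]

for year, mb in doubling_schedule(1998, 0.05, 8):
    print(f"{year} : {mb:05.2f} MB")
# Starting at 50 KB in 1998, the schedule passes 1 MB between 2014
# and 2018, close to where the real limit sits today.
```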

As you said, miners are free to voluntarily use smaller blocks if they choose.

-MarkM-
full member
Activity: 200
Merit: 104
Software design and user experience.
February 25, 2013, 08:20:52 AM
#47
People often overlook two things:

1. When there are more transactions than blocks can handle, they will not be dropped; they will go elsewhere, namely through escrows. So instead of consuming bandwidth in one part of the network, they will consume bandwidth in some other part of the network. It's just a question of who gets paid for it: miners or escrows.

2. Bandwidth is not a static environment that we happen to float in. It is a series of tubes ;) that are owned by someone and paid for by the people who use them. If someone wants to send or receive more data, he will pay for it. If he is not willing to pay more than $X, he will not send/receive more than Y bits/s. So there is no question of "let's save precious bandwidth"; everyone decides on his own.

It is absolutely irrelevant what you think about how security "as a whole" is achieved through fees and mining profits. Miners have an interest in keeping transactions validated and blocks pushed to *other* miners as quickly as possible. Users and escrows are interested in having their transactions validated as quickly and cheaply as possible. So it is irrelevant what you, an independent non-mining "validator", think about your own bandwidth. If you cannot keep up with the miners, no one will notice. Yes, maybe security will be worse without you. But it does not matter, because people will continue doing what they do as long as they think the security is "good enough". It is entirely possible that in the future some miners will build their own optical fiber channels between each other to provide the fastest possible validation for the biggest blocks possible. Everyone else will naturally join the network to read the latest data from it at their own pace, with their own bandwidth.

In fact, this happens already. If an owner of mining equipment wants to avoid issues with bandwidth and orphaned blocks, he simply joins a mining pool. So you end up with a small number of well-connected miners who actually collect and validate transactions, while everyone else who has invested in hardware can use slower bandwidth and perform hashing only, without needing to send or receive blocks. If, however, the cost of bandwidth is lower for a miner than pool fees, then he will mine on his own. It is pure economics that will determine how the bandwidth is used.

Some people worry that this may end up as a single global super-mining-pool. It will not, because a pool by itself is useless: it is the actual miners who connect to the pool that make it valuable. They, of course, want to hedge their risks, so they will never all end up in one huge pool. So there is always an opportunity for competitors to establish their own pools. And then every pool owner will strongly desire the best connectivity to his competitors, to reduce his own costs.

I predict that the limit will either be abolished completely (less probable, considering the amount of FUD) or bumped from 1 MB to 2 MB, then to 4 MB, etc., as long as demand for transactions increases. When bandwidth gets in the way of miners, they will reduce block size voluntarily, regardless of the hard limit.

Also, think of it this way: imagine the year 1998, when everything was slower. Bitcoin would have an artificial limit of 50 KB per block. Does it seem like everyone would prefer having that limit today, paying $50 in transaction fees and passing all sub-$1000 transactions through escrows? Of course not. The same goes for the 1 MB limit today. In 5 years it will look just silly.



legendary
Activity: 1232
Merit: 1094
February 25, 2013, 06:35:15 AM
#46
1) leave it alone
2) vote in the block chain

3) base it on some metric

Adding a formal way to measure orphans would be one way to see if the network can handle the current block size. This could be accomplished by adding another field to the header listing the orphans seen since the last difficulty adjustment (maybe as a Merkle root).

That isn't perfect, since a majority of miners might refuse to build on blocks that list orphans (assuming those miners want to increase the size even though there are orphans).
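To picture the proposal, here is a sketch of committing to observed orphans with a Merkle root. Bitcoin headers carry no such field; the helper names and the `orphan_root` field are hypothetical, invented for illustration.

```python
# Sketch: commit to the orphans seen since the last difficulty
# adjustment via a Merkle root. Hypothetical only: Bitcoin headers
# carry no such field, and these helpers are invented for clarity.
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(hashes: list[bytes]) -> bytes:
    """Bitcoin-style Merkle root: hash pairs level by level,
    duplicating the last element when a level has odd length."""
    if not hashes:
        return b"\x00" * 32
    level = hashes
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# orphan_hashes = header hashes of stale blocks seen since the retarget
# new_header.orphan_root = merkle_root(orphan_hashes)  # hypothetical field
```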
legendary
Activity: 2940
Merit: 1090
February 25, 2013, 02:28:22 AM
#45
(this would let it go up and down).

Can we really not get rid of the going-back-down part? If it can come down later, it ought not to have gone up in the first place.

-MarkM-
legendary
Activity: 1064
Merit: 1001
February 25, 2013, 02:22:44 AM
#44
If so then we are down to determining what percent to increase by per referendum and what percent yes vote in the referendum is required to cause that increase...Maybe frequency of referendi too.

The nitpicking details of what percentages to use are just constants; it is still a voting system. I'm sure we could come up with an infinite set of variations on the voting theme. Maybe instead of a yes/no vote, everyone just votes on what they think the new size should be (this would let it go up and down), and then we take some sort of smoothed median? Who knows.

Maybe someone can come up with a better idea than voting. In my mind there are only two workable choices for addressing max block size:

1) leave it alone
2) vote in the block chain

As far as the percentages go, the amount of each increase should be small enough that we always err on the side of being conservative. It is always preferable to raise the block size limit by too little rather than too much; in other words, better that fees are too high (more security) than too low. As for the 90%, well, that would be a politically driven choice.
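A smoothed-median tally could look like this sketch. The 2016-block window, the +/-25% clamp, and the idea of parsing votes from coinbase text are all assumed details, chosen here to err on the conservative side as argued above:

```python
# Sketch of "everyone votes a size, take a smoothed median". The
# window, the clamp percentages, and the vote encoding are all
# assumptions for illustration, not an agreed proposal.
import statistics

def next_block_size_limit(current_limit: int, votes: list[int]) -> int:
    """votes: the size each miner voted for over the last window,
    e.g. parsed from coinbase scriptSig text (assumed encoding)."""
    target = int(statistics.median(votes))
    # Clamp the move so we always err on the conservative side:
    # better to raise the limit too little than too much.
    lower = int(current_limit * 0.75)
    upper = int(current_limit * 1.25)
    return max(lower, min(upper, target))

# e.g. next_block_size_limit(1_000_000, votes_from_last_2016_blocks)
```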
legendary
Activity: 2940
Merit: 1090
February 25, 2013, 02:18:06 AM
#43
If so then we are down to determining what percent to increase by per referendum and what percent yes vote in the referendum is required to cause that increase.

Maybe frequency of referendi too. (Faux Latin! Or is that real Latin? (By some sheer fluke.))

-MarkM-
legendary
Activity: 1064
Merit: 1001
February 25, 2013, 02:10:44 AM
#42
Good point. Okay, how about interleaving the two adjustments: halving the subsidy halfway through each size-doubling period, i.e. doubling the size halfway through each subsidy-halving period?

I think that any fixed block size inflation schedule, or any inflation schedule based on a static measurement of block chain history, would require an oracle to tune correctly. Since we do not have an oracle, it is guaranteed that the block size will either grow too quickly or too slowly. The consequence of growing too slowly is that we incurred the cost of a hard fork for nothing (although it can be argued that this is still better than no change in the limit at all).

But the consequence of growing too quickly is severe: a large drop in fees. Even if you stagger the subsidy cut and the block size increase, there is a mismatch between the two growth curves.

Each successive cut in the subsidy has an exponentially decreasing effect on the network. Consider the last cut, going from 2 satoshis down to 1: it is practically nothing. Most of the coins were created in the first 4 years; over the next 4 years we will create 25% of all the coins that will ever be created. Let's pretend that the subsidy ends in 40 years. By that time the block size would be 512 megabytes. You have an inverted curve compared to the money supply: most of the block size increase happens in the last 4 years of the inflation schedule, at a time when very few coins are created. Just when we need transaction fees the most (at the end of the inflation schedule), we would be creating the largest decrease in the market for fees ever: each doubling of the block size sets a new record for reducing scarcity.

An exponential growth characteristic seems wrong. Now you might say: well, why don't we make it geometric, say a smaller percentage increase each time? The problem, while not as severe, remains: you need an oracle to tune the percentage correctly. Even if we knew the "god percentage" (the globally optimal growth rate), we could have oscillations where sometimes the optimum percentage is too little and sometimes too much.

At the same time, this analysis completely ignores the effect of bandwidth. If we take that into account, every doubling of the block size doubles the minimum bandwidth requirement. If transmitting a block takes 5 seconds for the average node today, it would take about 42 minutes after 40 years (using the example above). Are we going to bet Bitcoin's existence on average bandwidth increasing by a factor of 500 in four decades?
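The divergence is easy to see numerically. A sketch reproducing the arithmetic above, using the post's own 5-second relay baseline and roughly four-decade horizon:

```python
# Reproduce the arithmetic above: double the block size at every
# subsidy halving and watch the two curves diverge (the 5-second
# relay baseline and ~40-year horizon are the post's own figures).

subsidy = 25.0           # BTC per block, as of 2013
block_mb = 1.0           # current limit
relay_s = 5.0            # assumed seconds to relay a 1 MB block

for n in range(1, 10):   # 9 doublings over roughly four decades
    subsidy /= 2
    block_mb *= 2
    relay_s *= 2
    print(f"halving {n}: subsidy {subsidy:8.4f} BTC, "
          f"block {block_mb:6.0f} MB, relay ~{relay_s / 60:4.1f} min")
# The last line shows the 512 MB block and ~42-minute relay time
# from the text, at constant bandwidth.
```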

Again, I think we should drop the idea of a fixed schedule of block size increases. Furthermore, per the analysis I gave elsewhere, we should also drop the idea of any sort of static measuring system to adapt the block size. I think that so far only the voting system is resistant to all the problems I mentioned.

The optimal method for increasing the block size would have to factor in the exchange rate, people's tolerance for fees, and the effects of the increased bandwidth requirement. I do not believe this can be modeled in an automated fashion. Therefore, we solve the problem in parallel by letting miners each work out this complex equation for their own use case and vote on the result.
legendary
Activity: 2940
Merit: 1090
February 25, 2013, 01:58:10 AM
#41
Good point. Okay, how about interleaving the two adjustments: halving the subsidy halfway through each size-doubling period, i.e. doubling the size halfway through each subsidy-halving period?

-MarkM-
legendary
Activity: 1064
Merit: 1001
February 25, 2013, 01:38:51 AM
#40
doubling the size when we halve the subsidy also looks like it would put in place, another four years ahead, a value that will also be a slam-dunk by then

One could argue that doubling the block size at every halving of the subsidy is exactly the wrong thing to do at the wrong time.

When the subsidy gets cut in half, all the miners that were previously at the margins of profitability are kicked off the network (since they would lose money by running their rigs). A drop in the hash rate is the result. Combine the halving of the subsidy with the inevitable reduction in transaction fees (since there is no more scarcity) and you have the perfect storm: a perverse financial incentive to use the newly idle hash power to attack the network rather than secure it.

This doesn't even take into account the possibility of losing additional miners who have insufficient bandwidth for the increased block size (a debate that has not been resolved).

Please, for the love of Bitcoin, drop this idea of doubling the block size on every halving!

For these reasons, I feel it is better to just leave the max block size alone than to double it on every halving.
legendary
Activity: 2940
Merit: 1090
February 24, 2013, 04:57:06 AM
#39
Couldn't you also argue that if we run up against the limit, and it is seen to be to everyone's benefit to raise it, then we don't need to come up with some fancy adaptive dynamic limit (since if we can do it once, we can do it again)?

Yes.

Nonetheless, the double-when-we-halve rule is such a slam-dunk we might as well put it in; as you say, we can always hard fork again later. Hard forks get easier and easier, not harder and harder, as Gavin et al. perfect the techniques for doing them smoothly.

Plus, aren't massive swarms of independent, ordinary nodes typically more agile, in general, than vested interests, old money, political parties, lobby groups, megacorporations and such (the "forks are politically impossible!" crowd)? Especially since that crowd is precisely the kind of folks/entities we want to outmaneuver.

-MarkM-
legendary
Activity: 2940
Merit: 1090
February 24, 2013, 04:47:43 AM
#38
I think that much is a slam-dunk.

But, while we are in there, doubling the size when we halve the subsidy also looks like it would put in place, another four years ahead, a value that will also be a slam-dunk by then, or will motivate us really well to make damn sure that it is a slam-dunk by then; and even four years after that, the next value will either be a pathetically slow increase, yet another slam-dunk, or just another damn good reason to make sure we manage to make it yet another slam-dunk.

So I still think simply doubling the max block size when the subsidy halves looks much better than any fixed constant we could put in when we hard-fork.

-MarkM-
sr. member
Activity: 294
Merit: 250
February 24, 2013, 01:09:05 AM
#37
To put things into perspective, what would a 2 MB limit offer vs a 1 MB limit in terms of theoretical scalability? I guess you could say "twice as much". What would that be like compared to, say, PayPal?

Couldn't you also argue that if we run up against the limit, and it is seen to be to everyone's benefit to raise it, then we don't need to come up with some fancy adaptive dynamic limit (since if we can do it once, we can do it again)? If we can get consensus and fork to increase the limit by 1 MB, then that is good news for the future of Bitcoin, I think.
legendary
Activity: 2940
Merit: 1090
February 23, 2013, 09:27:24 PM
#36
Yeah, but if bitcoins are worth $120 or more each by then, maybe they are just stupid not to have invested in some more bandwidth by then? Or in some more ASICs, if bandwidth is low due to low hashpower resulting in low income?

It's still four times whatever they are making now with the gear they already have, divided by the block halving by then, so double what they are already making now.

So let's get those exchange rates up! :)

-MarkM-