
Topic: 60% of hashrate including 2 major exchanges agree to raise block size to 8MB (Read 3859 times)

hero member
Activity: 588
Merit: 500
Will Bitcoin Rise Again to $60,000?
This could definitely get interesting, to say the least. I see this getting pretty ugly.
sr. member
Activity: 433
Merit: 267
But if inflation stopped tomorrow then the value of that 0.1BTC per block would surely skyrocket.  Or maybe I'm missing something.  I'm not suggesting that the block size should be large enough to accommodate all transactions, but what about a block size which accommodates transactions with at least a certain fee rate in at least a certain time (on average)?
Sure it's possible that the price of BTC could go up by 125x and end up funding the network at equivalent rates, but increasing block sizes would mean fees would have to increase even higher. That's quite a gamble, and not one that's easy to recover from if the assumptions are incorrect.
I'm not saying that you want blocks to accommodate  all transactions, but that's certainly what some people have been suggesting, or suggesting wild exponential increases.

But this is why we want pruning, right?  Won't a lot of the storage issues be dealt with in the next release with pruning?  Or, again, maybe I'm misunderstanding (it happens too often, ha!).
Yes, there are going to be changes that will improve the current situation, for sure. Does that mean we take those gains and then turn around and put the blockchain into the same situation all over again? Maybe it does, but this could be better reasoned about once we're in a comfortable position to make these kinds of decisions.


As an aside, I'm reading a thread on the mailing list and finding quite a bit of discussion about the same sort of stuff I'm mentioning here. It would be worth checking out if you're interested:
http://sourceforge.net/p/bitcoin/mailman/bitcoin-development/thread/554A91BE.6060105%40bluematt.me/#msg34090292
legendary
Activity: 1456
Merit: 1083
I may write code in exchange for bitcoins.

Our current mining infrastructure is rewarded 12.5 BTC per block in inflation and about 0.1 BTC per block in fees. That would mean that if inflation stopped tomorrow, mining  would have to be cut more than a hundred fold. In this sort of environment, is it appropriate to be talking about methods that would reduce fees? It's particularly obscene to insist that the block size should be large enough to accommodate all desired transactions, as this implicitly means near zero  transaction fees as well.
But if inflation stopped tomorrow then the value of that 0.1BTC per block would surely skyrocket.  Or maybe I'm missing something.  I'm not suggesting that the block size should be large enough to accommodate all transactions, but what about a block size which accommodates transactions with at least a certain fee rate in at least a certain time (on average)?
Quote

The blockchain right now is about 36GB in size. This takes many hours to download even on a solid connection in the western world. For this reason, most users don't run a fully verifying Bitcoin node, and the network is the worse for it. While this is annoying, it's still tolerable for the average PC. How many people are going to bother running a node when they need to buy a dedicated hard drive for it, or a more expensive internet connection? Also, increasing block sizes carries an exponential cost to the network that also needs to be covered by fees.
  But this is why we want pruning, right?  Won't a lot of the storage issues be dealt with in the next release with pruning?  Or, again, maybe I'm misunderstanding (it happens too often, ha!).
sr. member
Activity: 433
Merit: 267
I understand that it's good for blocks to be as big as we can get away with, without significantly harming the decentralized Bitcoin network. I'm not convinced, though, that 1MB is too small or that some number greater than that is objectively better.
There is evidence that it's too big as it is: we aren't seeing enough transaction fees to cover network costs as inflation diminishes, and the blockchain is fairly awkward to handle.

Our current mining infrastructure is rewarded 12.5 BTC per block in inflation and about 0.1 BTC per block in fees. That would mean that if inflation stopped tomorrow, mining  would have to be cut more than a hundred fold. In this sort of environment, is it appropriate to be talking about methods that would reduce fees? It's particularly obscene to insist that the block size should be large enough to accommodate all desired transactions, as this implicitly means near zero  transaction fees as well.
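As a rough back-of-the-envelope check of those figures, here is a quick Python sketch using only the per-block numbers quoted in this paragraph, not live network data:

Code:
# Rough check of the subsidy-vs-fees argument above; the per-block figures
# are the ones quoted in this post, not measurements.
subsidy_per_block = 12.5   # BTC per block from inflation
fees_per_block    = 0.1    # BTC per block from transaction fees

total_reward = subsidy_per_block + fees_per_block
print("Fees as a share of miner revenue: %.1f%%" % (100 * fees_per_block / total_reward))
print("Revenue cut if the subsidy vanished: %.0fx" % (total_reward / fees_per_block))
# To keep miners funded at today's level on fees alone, total fees
# (or the BTC price, all else equal) would have to rise roughly:
print("Required rise in total fees: ~%.0fx" % (subsidy_per_block / fees_per_block))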

The blockchain right now is about 36GB in size. This takes many hours to download even on a solid connection in the western world. For this reason, most users don't run a fully verifying Bitcoin node, and the network is the worse for it. While this is annoying, it's still tolerable for the average PC. How many people are going to bother running a node when they need to buy a dedicated hard drive for it, or a more expensive internet connection? Also, increasing block sizes carries an exponential cost to the network that also needs to be covered by fees.
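For a sense of scale, here is a minimal sketch of how long that 36GB takes to pull down at a few assumed connection speeds; the speeds are illustrative, and verification time, which usually dominates on a typical PC, is ignored:

Code:
# Naive sync-time estimate for a 36GB chain; ignores block verification,
# which in practice adds many more hours.
chain_gb = 36

def sync_hours(mbit_per_s):
    gigabits = chain_gb * 8
    return gigabits * 1000 / mbit_per_s / 3600   # 1 Gbit = 1000 Mbit

for speed in (2, 10, 50):                        # Mbit/s, assumed sustained rates
    print("%2d Mbit/s -> %5.1f hours" % (speed, sync_hours(speed)))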

So when people are asking for an increase in block size, they are asking for lower transaction fees, while imposing a larger cost on the network, and making it more onerous to run a node, in an environment where fees are too low and the quantity of nodes has diminished.

We'd be in a much better position to argue about increasing block sizes if fees more closely matched or surpassed inflation, and if a Bitcoin node were trivial to run. With time, both of these things may be true. Prudence would suggest waiting until that time.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
One thing I saw from the "stress test" was that it's far too easy for a joker to backlog the network quite a bit.  Now, as you say, increasing the block size from 1MB to 2MB certainly wouldn't risk Armageddon.  But I'd go further and say that if such a small change increases the price of perpetrating Armageddon (i.e., a stress-test scenario) by 100%, that may be a real gain in robustness for all bitcoin users.  What I'm trying to suggest is that it seems to me that the coinwallet people were able to pull off what they pulled off far too cheaply.  I think making that kind of manoeuvre more expensive could be a real asset to the network.

Exactly. From a starting point of an average block size over one week (ABS) at 400KB, they created a lot of impact. Consider that the ABS will likely never exceed 80% of 1MB, because miners will often create small or empty blocks (even if thousands of legitimate fee-paying tx are backing up). So it does not take a math guru to see that when the ABS is 70%, a bunch of redditards could spam the network 4x more effectively than today for the same effort. When the ABS is 80% of 1MB, then any disturbance, whether 1Sochi, 1Enjoy, Dice site bots, or Greek collapse news, could ramp volumes, making Bitcoin a joke for thousands of users and the world's press. We are sleep-walking into a nightmare.
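A minimal sketch of that headroom arithmetic, assuming (as above) that roughly 80% of 1MB is the effective ceiling in practice:

Code:
# Headroom left for spam as average block size (ABS) rises, using the
# post's assumption that ~80% of 1MB is the practical ceiling.
effective_ceiling_kb = 0.8 * 1000   # 800 KB

def spam_advantage(abs_now_kb, abs_later_kb):
    # How much further the same spam volume goes once baseline usage rises.
    return (effective_ceiling_kb - abs_now_kb) / (effective_ceiling_kb - abs_later_kb)

print("%.0fx" % spam_advantage(400, 700))   # ~4x, matching the 70%-ABS example above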

And the fastest way to get a dog-leg-down move in the chart of full node counts? Execute a fast hard-fork in a few weeks under emergency conditions.
Just brilliant! /sarc
legendary
Activity: 1456
Merit: 1083
I may write code in exchange for bitcoins.
@DumbFruit,

I think I understand the metaphor you're using and your analysis of the costs of increased transactions to the network as a whole vs the costs to the user of a network which is very expensive to send transactions on.  However, I think you're being a little bit extreme and that it's quite possible that a lot of the problems could be eliminated by taking a middle way.  Why shouldn't block size increase a little bit?  Pruning technology should eliminate a lot of the costs to the network which are parallel to your blood-swelling analogy.  If we can prune blocks and set a dynamic limit on block sizes such that a target confirmation time on a well-formed transaction with a "standard" fee is achieved on average, wouldn't this fit with the way we do things w.r.t. difficulty?  Other aspects of the network have these dynamic controls; it seems to me that block size could do this as well and we'd be on the way to worrying about something else.

Sure, it's a bit extreme. It's not like increasing the block size from 1MB to 2MB will usher in Armageddon. As others have pointed out many times, the progress of economies and technology could make it so a blockchain of a larger size in the future would be less burdensome on the network than the current blockchain today.

That's all well and good, the issue that I would like to stress is that in all scenarios a centralized agency will always be better equipped to handle large amounts of transactions quickly and cheaply. Bitcoin will never be able to out-compete them on that field.
But like all things, these qualities are gradient.  If bitcoin isn't able to handle some volume of transactions with some amount of quickness and cheapness then it's going to be useless for any purpose.  I think you're correct to take the mentality of "don't try to be all things to all people" or "focus on your strengths", etc; but again, if we focus on our strengths to the point of ignoring our weaknesses altogether then that's not really a winning strategy either.

One thing I saw from the "stress test" was that it's far too easy for a joker to backlog the network quite a bit.  Now, as you say, increasing the block size from 1MB to 2MB certainly wouldn't risk Armageddon.  But I'd go further and say that if such a small change increases the price of perpetrating Armageddon (i.e., a stress-test scenario) by 100%, that may be a real gain in robustness for all bitcoin users.  What I'm trying to suggest is that it seems to me that the coinwallet people were able to pull off what they pulled off far too cheaply.  I think making that kind of manoeuvre more expensive could be a real asset to the network.
sr. member
Activity: 433
Merit: 267
@DumbFruit,

I think I understand the metaphor you're using and your analysis of the costs of increased transactions to the network as a whole vs the costs to the user of a network which is very expensive to send transactions on.  However, I think you're being a little bit extreme and that it's quite possible that a lot of the problems could be eliminated by taking a middle way.  Why shouldn't block size increase a little bit?  Pruning technology should eliminate a lot of the costs to the network which are parallel to your blood-swelling analogy.  If we can prune blocks and set a dynamic limit on block sizes such that a target confirmation time on a well-formed transaction with a "standard" fee is achieved on average, wouldn't this fit with the way we do things w.r.t. difficulty?  Other aspects of the network have these dynamic controls; it seems to me that block size could do this as well and we'd be on the way to worrying about something else.

Sure, it's a bit extreme. It's not like increasing the block size from 1MB to 2MB will usher in Armageddon. As others have pointed out many times, the progress of economies and technology could make it so a blockchain of a larger size in the future would be less burdensome on the network than the current blockchain today.

That's all well and good, the issue that I would like to stress is that in all scenarios a centralized agency will always be better equipped to handle large amounts of transactions quickly and cheaply. Bitcoin will never be able to out-compete them on that field.

So how does Bitcoin, and PoW cryptocurrency in general, compete?

1.) Fungible
2.) Anonymous
3.) Free Entry
4.) Trustless
5.) Irreversible
6.) Robust

Not

1.) Cheap
2.) Fast
3.) Arbitration

So I feel comfortable saying that there are always going to be centralized and decentralized transactions. Rather than focusing on trying to be a jack of all trades, it would be better to focus on how a cryptocurrency can allow users to maneuver between these opposed feature sets. From that perspective I look at the 1MB block limit of Bitcoin and I say to myself, "Maybe smaller blocks might compete better in this space?"

TLDR:
Bigger isn't automatically better. Do bigger blocks really make Bitcoin more competitive?
legendary
Activity: 1456
Merit: 1083
I may write code in exchange for bitcoins.
@DumbFruit,

I think I understand the metaphor you're using and your analysis of the costs of increased transactions to the network as a whole vs the costs to the user of a network which is very expensive to send transactions on.  However, I think you're being a little bit extreme and that it's quite possible that a lot of the problems could be eliminated by taking a middle way.  Why shouldn't block size increase a little bit?  Pruning technology should eliminate a lot of the costs to the network which are parallel to your blood-swelling analogy.  If we can prune blocks and set a dynamic limit on block sizes such that a target confirmation time on a well-formed transaction with a "standard" fee is achieved on average, wouldn't this fit with the way we do things w.r.t. difficulty?  Other aspects of the network have these dynamic controls; it seems to me that block size could do this as well and we'd be on the way to worrying about something else.
sr. member
Activity: 433
Merit: 267
The human body contains a haphazard network of arteries and veins, from the large at the inner thighs and neck for instance, to the small capillaries that reach out delicately all the way to the tips of fingers and toes, getting as small as a handful of micrometers.
If the blood cells swell, the condition is called macrocytosis. The network is effectively shorter, unable to reach the tiny capillaries. This leads to systemic damage as a whole: fatigue, tingling, dementia, and brain damage.

This is the way it is with any distributed system. As the difficulty to participate becomes more onerous the network atrophies.

As I've said before, the mechanism of centralization concerning larger blocks, is significantly different from the centralization we see when we are simply overloaded with transactions.

As blocks get bigger we guarantee a weaker decentralized system, ceteris paribus. Hosting a node simply becomes more onerous.

On the other hand, what are we looking at when blocks are totally filled? The Bitcoin network itself isn't damaged. It's not any more difficult to run a node. There aren't fewer transactions. There aren't fewer people with direct access to the blockchain (though the sorts of people change). So what do we mean when we say that Bitcoin will centralize if there are more transactions being done than Bitcoin can handle? All it means is that a higher *proportion* of transactions are done off the chain as opposed to on it.

In the meantime, what drove the value of Bitcoin? What's driving the increase of transactions? In no small part it is the strength of the decentralized Bitcoin infrastructure... Which is directly damaged by an increase in block sizes.

So in the desire of absorbing more nutrients in the bloodstream, the doctor prescribes vodka to induce macrocytosis? Not only is that damaging to the person, it doesn't even help accomplish the objective, as even if we assume larger blood cells can absorb more nutrients, they wouldn't be able to reach where they need to go. (Stretching this metaphor pretty hilariously.)

There seem to be a large number of people who believe that every Bitcoin transaction should be done between Bitcoin nodes. Sure, this is desirable, but is it even practical?
Suppose that Bitcoin were doing thousands of transactions per second, competing toe-to-toe with credit card companies like Visa. The only nodes that could afford to do this would be indistinguishable from the competition, except that they have the added overhead of Proof of Work and the costs of pseudo-decentralized infrastructure. To the end users this would mean slow, expensive, insecure transactions that are irreversible.

Keep in mind that even right now, Coinbase offers offchain transactions, at any size, near immediately, and for free. It is physically impossible for a PoW cryptocurrency to compete with that.

The only conclusion I can come to is that large volume transactions should not be done on Bitcoin proper, and that the only achievable objective with today's technology is to engineer a method of graceful failure. We need a way to easily maneuver from highly decentralized, secure, low-volume, slow, expensive transactions to centralized, relatively insecure, high-volume, fast, cheap ones.

I don't see how bigger blocks, or even an algorithm for increasing block sizes, help us reach that objective, despite the overwhelming desire for bigger blocks.
legendary
Activity: 994
Merit: 1035
TierNolan's helical chains idea was a good proposal to achieve more equal opportunity mining.

Thank you. Very interesting. Seems like something that could be rolled out in a sidechain for testing.
This could increase P2P pool use, but that is only part of my concern, as my concern also deals with the centralization of ASIC manufacturing (mainly in China) and the centralization of hardware. 21's IoT miners or Bitfury's lightbulb miners could solve this, but it is unclear whether they will force mining to be conducted on their own pools or not.
legendary
Activity: 3430
Merit: 3083
Quote from: Evan Mo, CEO of Huobi
The pool operators could actually make such a change themselves without proposing that the core developers do such a thing. Instead, we would like to express our views and concerns to the core developers, and let the community form a discussion rather than rudely cast a divergence. We are happy to see the consensus on the final improvement plan. After all, a 'forked' community is not what we are chasing after.

It is great that these 5 mining pools are doing the right thing and trying to develop a consensus with developers, but it is disturbing to realize that 5 companies can completely decide Bitcoin's fate. We seriously need to work on decentralizing mining and hash power globally.

TierNolan's helical chains idea was a good proposal to achieve more equal opportunity mining.
legendary
Activity: 994
Merit: 1035
Quote from: Evan Mo, CEO of Huobi
The pool operators could actually make such a change themselves without proposing that the core developers do such a thing. Instead, we would like to express our views and concerns to the core developers, and let the community form a discussion rather than rudely cast a divergence. We are happy to see the consensus on the final improvement plan. After all, a 'forked' community is not what we are chasing after.

It is great that these 5 mining pools are doing the right thing and trying to develop a consensus with developers, but it is disturbing to realize that 5 companies can completely decide Bitcoin's fate. We seriously need to work on decentralizing mining and hash power globally.
sr. member
Activity: 252
Merit: 250
The 60% doesn't enable double spends the way a 60% quota of mining power (that number of TH/s, I mean) would, does it? And if China is such a developer of BTC, why does the government prohibit the coin?
hero member
Activity: 706
Merit: 500
https://twitter.com/CryptoTrout
That's better than sticking to 1MB, but I think venture capital wants to take Bitcoin mainstream very quickly, so they want it larger.
legendary
Activity: 3430
Merit: 3083
Core Dev also needs to make a decision. They can either:

a) delay v0.11 until it gets a scaling improvement which replaces the max block size constant (whether by a patch reflecting Jeff's BIP 100, or a functionally comparable patch from Gavin). After all, Gavin gave notice 2 months ago that he wanted to submit a patch for v0.11

b) release v0.11 without the above, which is effectively a declaration that they are prepared to allow the 1MB limit to be maxed out (noticeably affecting user confirmation times) before considering releasing a patch for it. This might not be "1MB 4EVR", but it is practically equivalent as far as those of us are concerned who want to see the limit raised/modified/improved/removed before the inevitable PR disaster from inaction.

Maybe the choice will be between using the pruned database feature or the 20MB block fork feature. Like so much in this debate, the answer is that you actually want both, and that these supposedly opposing ideas, in fact, support one another.

For me, this kind of observation leads me to think that 8MB might be a nice compromise between block size and storage.  You could also compromise on the pruning side, say 1GB instead of 500MB; that would be meeting in the middle, right?

I kind of agree with the comment from induktor: storage is cheap and set to get vastly cheaper and more capacious still. Bandwidth is getting faster, but the prices per unit are not falling at the rate storage is. The optimistic part of me says it won't matter in 5 years; mesh technology will be too easy, too cheap and too ubiquitous for this to matter. A lot else could happen given that scenario, though lol.
legendary
Activity: 1456
Merit: 1083
I may write code in exchange for bitcoins.
Core Dev also needs to make a decision. They can either:

a) delay v0.11 until it gets a scaling improvement which replaces the max block size constant (whether by a patch reflecting Jeff's BIP 100, or a functionally comparable patch from Gavin). After all, Gavin gave notice 2 months ago that he wanted to submit a patch for v0.11

b) release v0.11 without the above, which is effectively a declaration that they are prepared to allow the 1MB limit to be maxed out (noticeably affecting user confirmation times) before considering releasing a patch for it. This might not be "1MB 4EVR", but it is practically equivalent as far as those of us are concerned who want to see the limit raised/modified/improved/removed before the inevitable PR disaster from inaction.

Maybe the choice will be between using the pruned database feature or the 20MB block fork feature. Like so much in this debate, the answer is that you actually want both, and that these supposedly opposing ideas, in fact, support one another.

For me, this kind of observation leads me to think that 8MB might be a nice compromise between block size and storage.  You could also compromise on the pruning side, say 1GB instead of 500MB; that would be meeting in the middle, right?
hero member
Activity: 710
Merit: 502
IMHO, I am more concerned about using 20MB blocks than about having to store the full blockchain.
Nowadays HDD space is cheap, but bandwidth is still a problem in several countries like mine.

A typical upload speed here is 512Kbit/s; a 1Mbit/s upload speed is fantastic here, and not very common.
So an increase to 20MB could cause some problems, as the China letter claims; 8MB seems more reasonable, I think.
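To put those connection speeds in perspective, here is a naive sketch of how long a single block takes to push to one peer at the line rates mentioned above (no compression or relay optimisations assumed):

Code:
# Time to upload one block to a single peer at a given sustained line rate.
def upload_seconds(block_mb, upload_kbit_s):
    return block_mb * 8 * 1000 * 1000 / (upload_kbit_s * 1000.0)

for block_mb in (1, 8, 20):
    for speed in (512, 1024):            # Kbit/s
        t = upload_seconds(block_mb, speed)
        print("%2dMB block @ %4d Kbit/s: %5.1f s" % (block_mb, speed, t))
# Note: a node relays to several peers, so real-world figures are worse.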

To be honest I would prefer not to change anything, but I understand that something must be done.
legendary
Activity: 3430
Merit: 3083
Core Dev also needs to make a decision. They can either:

a) delay v0.11 until it gets a scaling improvement which replaces the max block size constant (whether by a patch reflecting Jeff's BIP 100, or a functionally comparable patch from Gavin). After all, Gavin gave notice 2 months ago that he wanted to submit a patch for v0.11

b) release v0.11 without the above, which is effectively a declaration that they are prepared to allow the 1MB limit to be maxed out (noticeably affecting user confirmation times) before considering releasing a patch for it. This might not be "1MB 4EVR", but it is practically equivalent as far as those of us are concerned who want to see the limit raised/modified/improved/removed before the inevitable PR disaster from inaction.

Maybe the choice will be between using the pruned database feature or the 20MB block fork feature. Like so much in this debate, the answer is that you actually want both, and that these supposedly opposing ideas, in fact, support one another.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
Those of us who want to see an increased block size limit are at a disadvantage because "doing nothing" achieves the same result as a consensus decision to keep the 1MB limit and see what happens to the Bitcoin ecosystem when confirmation times blow out from the 10-minute average which everyone expects today.

Yet, Wladimir commented last month that he was "weakly against" making this change.

So, since then, we know there is a clear majority for this change on all user polls, lists of businesses and wallet providers, and now mining opinion.

Mike and Gavin didn't follow consensus procedures? Boo hoo, too bad. They knew that there was a lot of entrenched opinion against changing the 1MB (whether misguided about Satoshi's original vision or not), so they probably tried very long and very hard to obtain consensus among the github commit access developers. They failed, so rightly took all the arguments public, where they found overwhelming support for the change.

Core Dev need to ask themselves "If the 1MB limit did not exist, would it get any ACK to put it in place in an upcoming change (e.g. v0.11)?"
This is not a rhetorical question; it is a valid question to ask. I bet that this type of change, a blunt hard limit with unknown consequences, would get zero support in Bitcoin Dev. There would be all sorts of objections about how it's a naive attempt, and how the vague goals of "increasing fees", "stopping spam", or "slowing the decline in full nodes" could be achieved far more effectively in far more elegant ways. Implementing the 1MB limit today would get a unanimous NACK.

Core Dev also needs to make a decision. They can either:

a) delay v0.11 until it gets a scaling improvement which replaces the max block size constant (whether by a patch reflecting Jeff's BIP 100, or a functionally comparable patch from Gavin). After all, Gavin gave notice 2 months ago that he wanted to submit a patch for v0.11

b) release v0.11 without the above, which is effectively a declaration that they are prepared to allow the 1MB limit to be maxed out (noticeably affecting user confirmation times) before considering releasing a patch for it. This might not be "1MB 4EVR", but it is practically equivalent as far as those of us are concerned who want to see the limit raised/modified/improved/removed before the inevitable PR disaster from inaction.

We have heard Gregory's opinion loud and clear on Bitcointalk and Reddit, so what does Wladimir think today?
legendary
Activity: 1316
Merit: 1481
Why exactly 8 MB? Should Bitcoin really be dictated by archaic and irrational Chinese superstition? Or is there more substance to this number?
Anyone remember the Olympics???

Quote
The 2008 Summer Olympics opening ceremony was held at the Beijing National Stadium, also known as the Bird's Nest. It began at 20:00 China Standard Time (UTC+8) on Friday, 8 August 2008, as the number 8 is considered to be auspicious.

 Roll Eyes Roll Eyes Roll Eyes
legendary
Activity: 1232
Merit: 1094
Latest as in 0.11? 0.10.x does not allow running a node in pruning mode, AFAIK.

Right, sorry, meant 0.11, so next release.

I was under the impression this would make it difficult to follow transactions from the beginning?

Yes, but each node only needs to download it once and doesn't need to keep everything.

[Edit]

There is a suggestion on the mailing list for each node to store some of the blocks.

If everyone stores 1% of the blockchain, then you can download each block from lots of different nodes.  Once you are synced, you can prune your block store.
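A small sketch of how redundant that scheme would be, under illustrative assumptions (the node count and the independent-random-selection model are mine, not from the mailing list post):

Code:
# Redundancy of "every node stores a random 1% of blocks", under the
# hypothetical assumption of 6000 reachable nodes choosing independently.
fraction_stored = 0.01
nodes = 6000

expected_copies = fraction_stored * nodes
p_nobody_has_block = (1 - fraction_stored) ** nodes

print("Expected copies of any given block: ~%.0f" % expected_copies)
print("Chance a given block is held by no one: %.1e" % p_nobody_has_block)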
legendary
Activity: 1722
Merit: 1000
20MB is way too much of a jump. The blockchain is already very large, which prevents people from running a full node.  This is just kicking the can down the road; we should not think of this as a fix to the problem.  We are supposed to be better than the central banks. I am on board for 8MB, but after that we need to stop the idea of just kicking the problem to our children.

They have added pruning to the latest release.  That reduces the amount of disk space required to run a full node.  It stores at least 288 blocks or 500MB, whichever is larger.  At 20MB per block, that is 5.7GB.

I was under the impression this would make it difficult to follow transactions from the beginning?
copper member
Activity: 1498
Merit: 1562
No I dont escrow anymore.
20MB is way too much of a jump. The blockchain is already very large, which prevents people from running a full node.  This is just kicking the can down the road; we should not think of this as a fix to the problem.  We are supposed to be better than the central banks. I am on board for 8MB, but after that we need to stop the idea of just kicking the problem to our children.

They have added pruning to the latest release.  That reduces the amount of disk space required to run a full node.  It stores at least 288 blocks or 500MB, whichever is larger.  At 20MB per block, that is 5.7GB.

Latest as in 0.11? 0.10.x does not allow running a node in pruning mode, AFAIK.
legendary
Activity: 1232
Merit: 1094
20MB is way too much of a jump. The blockchain is already very large, which prevents people from running a full node.  This is just kicking the can down the road; we should not think of this as a fix to the problem.  We are supposed to be better than the central banks. I am on board for 8MB, but after that we need to stop the idea of just kicking the problem to our children.

They have added pruning to the latest release.  That reduces the amount of disk space required to run a full node.  It stores at least 288 blocks or 500MB, whichever is larger.  At 20MB per block, that is 5.7GB.
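A minimal sketch of that disk-footprint rule (keep the last 288 blocks or the prune floor, whichever is larger), using the 500MB floor from the post:

Code:
# Pruned-node disk usage under the rule described above.
def pruned_gb(block_mb, floor_mb=500):
    return max(288 * block_mb, floor_mb) / 1000.0

for block_mb in (1, 8, 20):
    print("%2dMB blocks -> ~%.2f GB kept" % (block_mb, pruned_gb(block_mb)))
# 20MB blocks give ~5.76 GB, i.e. the ~5.7GB figure quoted above.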
legendary
Activity: 1722
Merit: 1000
20MB is way too much of a jump. The blockchain is already very large, which prevents people from running a full node.  This is just kicking the can down the road; we should not think of this as a fix to the problem.  We are supposed to be better than the central banks. I am on board for 8MB, but after that we need to stop the idea of just kicking the problem to our children.
legendary
Activity: 3430
Merit: 3083
Why exactly 8 MB? Should Bitcoin really be dictated by archaic and irrational Chinese superstition? Or is there more substance to this number?

Yes, yes it should, because 8MB is an arbitrary number just like 20MB. And just like 20MB, no one is predicting 8MB to be consumed any time soon, and definitely not so quickly that another consensus could not be reached to increase it. Other arbitrary Bitcoin stuff: a block targeted every 10 minutes, the block reward halved every 4 years, the 50 BTC block reward, yada yada yada. None of these things are set in stone, based on anything in particular, or vitally important.

The 1 MB limit, and certainly the 10 minute discovery target, were objectively chosen, albeit still in a can-kicking guesswork category of design decisions. There are *ahem* several arbitrary numbers that wouldn't work in their place, so that description isn't very apt.
legendary
Activity: 1666
Merit: 1185
dogiecoin.com
Why exactly 8 MB? Should Bitcoin really be dictated by archaic and irrational Chinese superstition? Or is there more substance to this number?

Yes, yes it should, because 8MB is an arbitrary number just like 20MB. And just like 20MB, no one is predicting 8MB to be consumed any time soon, and definitely not so quickly that another consensus could not be reached to increase it. Other arbitrary Bitcoin stuff: a block targeted every 10 minutes, the block reward halved every 4 years, the 50 BTC block reward, yada yada yada. None of these things are set in stone, based on anything in particular, or vitally important.
sr. member
Activity: 266
Merit: 250
Personally, I think the hard limit should be removed completely. Remember the "640K is enough memory for everyone" quote? Look at where we are today. Technology advances at a very quick pace, and we'll have more than enough storage and bandwidth to handle the increases as adoption grows.
legendary
Activity: 1232
Merit: 1094
I think 8MB is just a compromise between 1MB and 20MB.

I think they are worried about the bandwidth between China and the rest of the world.  Very large blocks could cause problems for them.

There are a few different network simulators that give different results and it depends on what parameters you set.

They are concerned that pools outside China might produce large blocks and it will take longer for those blocks to reach them and that would mean they waste hashing power.  

Under some conditions they might benefit from lower bandwidth into China.  Assuming >50% of the hashing power is using Chinese pools and a Chinese pool and a non-Chinese pool both find a block at the same time, then the Chinese pool's block will reach a majority of the hashing power before the non-Chinese pool.  If the non-Chinese block is 20MB, then it would take even longer to enter China.
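One way to see why that propagation delay translates into wasted hashing power: treat block discovery as a Poisson process with a 10-minute mean and ask how likely a competing block is to appear while yours is still in transit. A hedged sketch, with illustrative delay values:

Code:
# Stale-block risk as a function of propagation delay, assuming block
# discovery is a Poisson process with a 600-second mean interval.
import math

def stale_probability(delay_s, block_interval_s=600):
    return 1 - math.exp(-delay_s / block_interval_s)

for delay_s in (2, 30, 300):   # e.g. well-connected vs. a 20MB block over a slow link
    print("%3ds delay -> ~%.1f%% risk of being orphaned" % (delay_s, 100 * stale_probability(delay_s)))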

Mining farms and mining pools don't have to be at the same location.  It would be possible for miners in China to use mining pools outside China, if it ever became a problem.  This could shift the majority of the hashing power out of China, and then mining pools would have to leave China in order to have a good connection to the majority of the hashing power.
member
Activity: 99
Merit: 10
Why exactly 8 MB? Should Bitcoin really be dictated by archaic and irrational Chinese superstition? Or is there more substance to this number?
legendary
Activity: 1792
Merit: 1121
The 5 largest mining pools in China, including 2 of the busiest exchanges in the world, have released a joint declaration supporting raising MAX_BLOCK_SIZE to 8MB. They currently control 60% of the network hashrate.

https://imgur.com/a/LlDRr

Chinese companies are operating in a very oppressive environment. All internet activities are strictly censored and outbound bandwidth is limited. They still agree to increase the block size.

Although one may argue that on this issue the merchants' view is more important than the miners', the hard fork won't be successful without miners' support. And don't forget, this statement includes 2 major exchanges, BTCChina and Huobi.

I hope this will conclude the debate around "raise the block size or not" and "how much to raise it".

I hope we can focus on the pathway leading to 8MB. Should that be a simple raise or a step function? If a step function, how? Should we limit other parameters, e.g. sigop count and UTXO growth?
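To make the question concrete, here is a purely hypothetical sketch of what a step-function pathway could look like; the heights and sizes are illustrative placeholders, not anything proposed in this thread:

Code:
# Hypothetical step-function schedule for the maximum block size.
STEPS = [
    # (activation_height, max_block_size_bytes)
    (0,      1000000),   # today's 1MB limit
    (400000, 2000000),   # illustrative first step
    (450000, 4000000),
    (500000, 8000000),   # the 8MB target discussed here
]

def max_block_size(height):
    limit = STEPS[0][1]
    for activation_height, size in STEPS:
        if height >= activation_height:
            limit = size
    return limit

print(max_block_size(350000))   # 1000000
print(max_block_size(480000))   # 4000000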

The hard fork should also consider the pathway beyond 8MB if we don't want to repeat the debate (too early).