What do you mean by the bolded? I can't really figure out what you're saying. Are you suggesting that, in a 75% hard fork scenario, multiple surviving chains are impossible?
How is that hypocrisy? Why would we do a 2mb hard fork when segwit gives us nearly that or more? If miners move forward with 2mb without Core -- and I think that is highly unlikely (I think you and others greatly overestimate the stupidity of large-scale miners) -- that would be their fault. It would be their fault for running a barely-tested version of bitcoin and expecting the rest of the ecosystem to do the same on very little notice, on the word of a ragtag minority of devs who have done nothing to suggest they are capable of maintaining bitcoin.
And you might just find that, in such a situation, a good deal of that hashpower makes its way back to the original (1mb limit) blockchain shortly before or after the hard fork is triggered.
A 70% trigger just changes a 1000000 into a 2000000... which sits as a buffer. Miners can still make small blocks; nothing will force miners to make blocks over 1mb until they personally choose to (which could be months or years away, whenever they choose). It's not a nuclear trigger, just a buffer increase when the consensus shows there is a possibility that capacity may grow soon. Even after the 28 days are up, if miners think the risk of orphans is still high due to many other things, they won't push the envelope, and that 2000000 will just sit there as nothing more than a buffer.
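A minimal sketch in Python of the distinction being claimed here, assuming the consensus limit only caps what nodes will accept while each miner separately picks a soft cap on what they produce. All names and numbers are illustrative, not anyone's actual code:

```python
# Toy model of the "buffer" claim: the consensus limit is an upper bound
# on what nodes ACCEPT; a miner's soft cap bounds what that miner PRODUCES.

CONSENSUS_LIMIT = 2_000_000  # hard cap enforced by nodes after activation
MINER_SOFT_CAP = 1_000_000   # one miner's personal policy; can stay at 1mb

def build_block(mempool_tx_sizes):
    """Greedily fill a block template up to the miner's own soft cap,
    which may sit well below the consensus limit indefinitely."""
    block, size = [], 0
    for tx_size in mempool_tx_sizes:
        if size + tx_size > MINER_SOFT_CAP:
            break
        block.append(tx_size)
        size += tx_size
    return block, size

_, size = build_block([400] * 5000)  # plenty of 400-byte transactions waiting
print(size <= MINER_SOFT_CAP <= CONSENSUS_LIMIT)  # True: the 2mb cap is unused headroom
```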
What are you talking about? You can't talk about "miners" as a single entity. A node is either running one version of the software or the other, assuming they are incompatible (in this case, they are). That means that after the hypothetical 28 days, if 70% are running node software that accepts > 1MB blocks, once any single miner or pool publishes a block that is valid based on 2mb parameters but not 1mb, we have passed the point of no return. "They won't push the envelope?" How could you pretend to predict the actions of every single CPU contributing hashpower to the network?
You've already predicted that 0% out of 100% of hashing power will publish a block breaking the old consensus rules -- that the new limit "is nothing more than a buffer," even if 70% ran the node software at some point in order to activate the new rules. On its face, that is extremely unlikely given that it only takes one actor with a modicum of hashing power to cause the node software of a majority of miners to enforce the new consensus rules.
So once Gavin's 28 days are up and any one miner or pool publishes a >1MB block, hashpower ceases to be the question at all. The question becomes which chain node operators consider valid.
Do you then go on to predict that 100% of nodes will be running one version of the software (1mb limit) or the other (2mb limit)? Because if not, we will inevitably have an irreparable chain fork.
If you want to know whether a hard fork activating with 70% of hashing power can break bitcoin into multiple blockchains (presumably forever, as too much value will have changed hands to conceivably "roll back")... the answer is unequivocally YES.
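To make the mechanics concrete, here is a toy sketch in Python, assuming the only consensus rule that differs between the two versions is the size limit. The same >1MB block is valid to a 2MB node and invalid to a 1MB node, and from that moment the two groups extend different chains:

```python
# One block, two rule sets: validity depends entirely on which limit
# the validating node enforces.

def is_valid(block_size, node_limit):
    # in this toy model, the size limit is the only consensus rule
    return block_size <= node_limit

big_block = 1_400_000  # a single 1.4MB block published after activation

print(is_valid(big_block, 1_000_000))  # False: 1mb nodes reject it as invalid
print(is_valid(big_block, 2_000_000))  # True:  2mb nodes accept and build on it
```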
1. Blockstream tells the doomsday scenario without putting it in the context that orphans also happen, and without the context that they themselves, by not adding in the buffer, will be the cause of their said doomsday. If even at a 25% minority they don't finally say "oh crap, there are going to be issues, so we must adapt," but instead try to hold strong and refuse to add a buffer, THEY will be the cause of their own nodes lagging behind.
In short, it's safer to have the setting there as a buffer and not need to use it for X time than to wait for X time and still refuse to add the buffer.
2. The hypocrisy is that they pretend segwit will allow more capacity, but within months they will fill in any gained data space in the main block with their added new opcodes, new variables, and new things like confidential transactions, which will add 250+ bytes of data to each transaction. Thus capacity DECREASES as a result of their roadmap, so they are not the cure for capacity (see the rough arithmetic sketched after this list). Especially if it takes a while for people to move to segwit, those late segwit adopters won't see the capacity advantage because confidential transactions will take it away again.
3. Even if there were a 2mb setting, miners could still make 1mb blocks. There is no harm in small-blockers still making small blocks, but there is harm in nodes not allowing an excess buffer to cope with change. E.g. 2mb is backward compatible because those nodes will still accept small blocks, but 1mb is not future-proof and causes forks if things change and nodes have not adapted.
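Taking the figures claimed in point 2 at face value, the back-of-envelope arithmetic looks like this in Python. Both the ~250-byte ordinary transaction and the ~250-byte confidential-transaction overhead are the poster's assumptions, not measured values:

```python
# Capacity under the claimed numbers: throughput halves if every
# transaction grows by the asserted CT overhead.

BLOCK_SPACE = 1_000_000  # bytes of base block space
TX_PLAIN    = 250        # assumed size of an ordinary transaction (bytes)
CT_OVERHEAD = 250        # claimed extra bytes per confidential transaction

plain_capacity = BLOCK_SPACE // TX_PLAIN                  # 4000 txs per block
ct_capacity    = BLOCK_SPACE // (TX_PLAIN + CT_OVERHEAD)  # 2000 txs per block

print(plain_capacity, ct_capacity)  # under these assumptions, capacity halves
```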
You make a lot of long-winded responses but they don't seem to convey a basic understanding of the protocol. Comparing everyday orphans to an intentional chain fork based on incompatible software is ridiculous. Whether the minority is "the cause of their own nodes to lag behind" does not address the fact that such a contentious hard fork could break bitcoin into multiple blockchains forever. You are assigning blame; I don't care who is to blame. I'm talking about the reality of a contentious hard fork resulting in multiple surviving blockchains.
Confidential transactions are a separate issue entirely. Firstly, it's incumbent on you to prove with data exactly how this will negatively affect capacity. More importantly, whether future features of the protocol add load to the system isn't relevant to the current question of capacity now: hard fork to 2mb or segwit? This is the current choice, and hard forking to 2mb cannot be argued to give much more capacity than segwit at all.
It doesn't matter if miners can produce 1MB blocks. If they are running node software with a 2MB limit, then if any miner produces a > 1MB block, it will chain fork. It doesn't matter if the vast majority of miners do not produce > 1MB blocks. All it takes is one miner with a modicum of relative hashpower to produce a single block that violates the 1MB limit. This talk of a "buffer" is meaningless -- it only takes one 2MB block to be mined to show all your "miners can still make 1mb blocks" talk to be nonsense. Once a single 2MB block is mined, nodes will begin building two disparate chains.
WRONG. Miners can receive anything from 0 bytes to 2000000 bytes. There is nothing forcing miners to be over 1mb and never below 1mb, and nothing forcing 2mb-limit miners to reject under-1mb blocks.
It's 100% meaningless to say "miners aren't forced to accept big blocks" -- the operative issue is whether "nodes are forced to accept big blocks," and running a node with code for a 2MB block limit means the answer to that is YES. No one is "forced" to mine 2MB blocks any more than they are "forced" to mine 1MB blocks. That is not a safeguard from anything. The issue is that if any miner does produce a single block > 1MB, nodes running 1MB software will reject it as invalid and nodes running 2MB software will accept it, giving birth to a separate, surviving blockchain with different consensus rules. If you can't understand that, then I'm sorry, but you have some basic misunderstandings of the protocol.
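A drastically simplified chain-following simulation in Python illustrates the claim: each node keeps only the blocks that pass its own size rule, so a single oversized block permanently diverges the two views. This is a toy model under those assumptions, not how chain selection works in full detail:

```python
# Two node populations applying different size limits to the same
# sequence of published blocks end up with different chains.

def follow_chain(published_sizes, node_limit):
    """Append each published block to this node's chain only if it
    passes the node's size rule; oversized blocks are ignored."""
    chain = []
    for height, size in enumerate(published_sizes):
        if size <= node_limit:
            chain.append((height, size))
    return chain

# block sizes published after activation: one miner pushes past 1MB
published = [900_000, 1_000_000, 1_400_000, 950_000, 1_200_000]

core_chain    = follow_chain(published, 1_000_000)  # rejects blocks 2 and 4
classic_chain = follow_chain(published, 2_000_000)  # accepts all five

print(core_chain != classic_chain)  # True: two disparate chains from one oversized block
```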
If one miner did that? Sorry, but it won't happen.
One miner would need to make 700 out of 1000 blocks. Good luck trying that.
Secondly, even if the setting were activated, the other dozen miners could still make small blocks. Nothing is forcing any miner to make a bigger block; they decide, as a human choice, to add more transactions. How many transactions go into a block is not a consensus rule, it's an individual preference.
Again, you are misunderstanding the basics of my argument. The argument is that 70%+ of miners would activate the rule change by mining 750 of the last 1000 blocks. After the 28 days have passed, a hard fork becomes possible.
Once the threshold has been activated and the proposed 28-day grace period has passed, yes, it takes one miner mining one block to hard fork the protocol based on the new 2MB rules. The rest is up to how much of the network is comprised of nodes running the 1MB vs 2MB rules. If 100% of nodes are running the 2MB rules, there is no risk of multiple surviving chain forks. If 50% of nodes are running 2MB rules, there is a virtually 100% chance of multiple surviving chain forks.
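For reference, a sketch of the activation logic as described in this thread (750 of the last 1000 blocks, then a ~28-day grace period) might look like the following Python. The names, structure, and blocks-per-day figure are illustrative assumptions, not Classic's actual code:

```python
# Sliding-window supermajority trigger with a grace period before the
# new size limit is enforced.

THRESHOLD    = 750       # supermajority within the window
WINDOW       = 1000      # number of most recent blocks considered
GRACE_BLOCKS = 28 * 144  # ~28 days at roughly 144 blocks per day

def activation_height(signals):
    """signals[h] is True if the block at height h signalled support.
    Return the first height at which >1MB blocks become acceptable,
    or None if the threshold is never met."""
    support = 0
    for h, signalled in enumerate(signals):
        support += signalled
        if h >= WINDOW:
            support -= signals[h - WINDOW]  # slide the 1000-block window
        if h >= WINDOW - 1 and support >= THRESHOLD:
            return h + GRACE_BLOCKS  # enforced only after the grace period
    return None

# 80% of miners signalling: the trigger fires as soon as a full window exists
print(activation_height(([True] * 8 + [False] * 2) * 200))  # 5031
```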
The size of a block is a consensus rule, whether that rule is 1MB or 2MB. It is not an individual preference to enforce the consensus rules. If a miner produces an otherwise valid block that is > 1MB, all 1MB nodes will reject it as invalid, hence creating a separate blockchain considered valid by those enforcing the 2MB rules.
The real fun begins when people realize that it's not about temporarily convincing miners to mine on their preferred fork, as if the dominant fork will be decided in the 10 minutes following the hard fork. It's not about the hashing power -- that only decides when a fork can be triggered. After that, miners can only follow node operators. Well, what happens when 60% of nodes are running Classic and 40% are running Core, or vice versa? How about 50-50? I'll tell you what happens. Shit hits the fan for those that said a contentious hard fork is not a risk to bitcoin.
6. If there were an (unofficial) view that 50% of blocks are made by implementations that have the 2mb buffer, then blockstream should at least have (unofficial) discussions to start getting their act together. Remaining blind and not even discussing changes would be stupid on their part.
By 60%+ they would need to have started (hopefully finished) coding an implementation with 2mb, and make it publicly available before 70%. That way the setting is there before any OFFICIAL thresholds are hit, thus not causing problems for users.
But flatly refusing to even have the setting available to the community, no matter what, is just blockstream being narrow-minded.
Blockstream does not control commit access to Core. That ad hominem is getting old. Coding an emergency increase to 2MB can be done in a couple of days or less. If Classic actually approached consensus thresholds -- 90%+ -- upping the limit so that Core nodes enforce the consensus rules at the appropriate time would be very simple and could be done quickly.
But there is no reason for them to acknowledge a 75% trigger as consensus. That is nothing "official." It's a number made up by Gavin. Whether you want to take the literal definition of consensus or the historical definition as it has applied to bitcoin forks, 75% is laughable. That trigger is literally made up from nothing other than transforming the definition of "consensus" into "democracy" -- there is zero precedent for it. Core devs have no responsibility to force Core node operators to submit to the will of the majority. In the highly unlikely event that Classic achieved consensus, I believe that Core would implement a fork to make Core compatible. Gavin would never allow his forks to activate at 95%, because he knows they have no chance in hell of achieving that. That doesn't mean it is incumbent on Core to acknowledge this re-definition of consensus as democracy. Quite the opposite, in fact. And their resolve to remain true to the consensus mechanism in the face of these never-ending political attacks is commendable.