What do you mean by the bolded part? I can't really figure out what you're saying. Are you suggesting that in a 75% hard fork scenario, multiple surviving chains are impossible?
How is that hypocrisy? Why would we do a 2MB hard fork when segwit gives us nearly that or more? If miners move forward with 2MB without Core -- and I think that is highly unlikely (I think you and others greatly overestimate the stupidity of large-scale miners) -- that would be their fault: their fault for running a barely-tested version of bitcoin and expecting the rest of the ecosystem to do the same on very little notice, on the word of a ragtag minority of devs who have done nothing to suggest they are capable of maintaining bitcoin.
And you might just find that, in such a situation, a good deal of that hashpower makes its way back to the original (1mb limit) blockchain shortly before or after the hard fork is triggered.
A 70% trigger just changes 1,000,000 into 2,000,000... which sits as a buffer. Miners can still make small blocks; nothing will force miners to make blocks over 1MB until they personally choose to (which could be months or years, whenever they choose). It's not a nuclear trigger, just a buffer increase when the consensus shows there is a possibility that capacity may grow soon. Even after the 28 days are up, if miners think the orphan risk is still high due to many other factors, they won't push the envelope, and that 2,000,000 will just sit there as nothing more than a buffer.
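A minimal sketch of what "just a buffer" means here, assuming block validity is nothing more than a size check; the names and values are illustrative, not Bitcoin Core's actual code:

```python
# Sketch of the "buffer" idea (illustrative names/values only):
# the consensus limit is an upper bound, and each miner still chooses
# its own soft cap below it.

CONSENSUS_MAX_BLOCK_SIZE = 2_000_000   # raised from 1_000_000 after activation
MINER_SOFT_CAP = 1_000_000             # an individual miner's own preference

def block_is_consensus_valid(block_size_bytes):
    # Nodes only enforce the upper bound; smaller blocks are always acceptable.
    return 0 <= block_size_bytes <= CONSENSUS_MAX_BLOCK_SIZE

def miner_template_size(pending_tx_bytes):
    # A miner can keep building <=1MB blocks even after the limit is raised.
    return min(pending_tx_bytes, MINER_SOFT_CAP)

print(block_is_consensus_valid(900_000), miner_template_size(1_500_000))  # True 1000000
```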
What are you talking about? You can't talk about "miners" as a single entity. A node is either running one version of the software or the other, assuming they are incompatible (in this case, they are). That means that after the hypothetical 28 days, if 70% are running node software that accepts > 1MB blocks, once any single miner or pool publishes a block that is valid under 2MB parameters but not 1MB, we have passed the point of no return. "They won't push the envelope?" How could you pretend to predict the actions of every single CPU contributing hashpower to the network?
You've already predicted that 0% out of 100% of hashing power will publish a block breaking the old consensus rules -- that the new limit "is nothing more than a buffer," even if 70% ran the node software at some point in order to activate the new rules. On its face, that is extremely unlikely, given that it only takes one actor with a modicum of hashing power to cause the node software of a majority of miners to enforce the new consensus rules.
So once Gavin's 28 days are up and any one miner or pool publishes a >1MB block, hashpower ceases to be the question at all. The question becomes which chain node operators consider valid.
Do you then go on to predict that 100% of nodes will be running one version of the software (1mb limit) or the other (2mb limit)? Because if not, we will inevitably have an irreparable chain fork.
If you want to know whether a hard fork activating with 70% of hashing power can break bitcoin into multiple blockchains (presumably forever, as too much value will have changed hands to conceivably "roll back")... the answer is unequivocally YES.
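To make the split mechanics concrete, here is a minimal sketch under the assumption that validity is just a size check against each node's hard limit (illustrative values, not real consensus code):

```python
# Why a single >1MB block is a point of no return, assuming two node
# populations that differ only in their hard size limit (illustrative only).

OLD_LIMIT = 1_000_000
NEW_LIMIT = 2_000_000

def accepts(block_size, node_limit):
    return block_size <= node_limit

first_big_block = 1_200_000  # any block over 1MB, published by any miner

print(accepts(first_big_block, OLD_LIMIT))  # False -> 1MB nodes reject it
print(accepts(first_big_block, NEW_LIMIT))  # True  -> 2MB nodes build on it

# From that block onward the two groups extend different chains; the split
# persists unless one side abandons its rules, regardless of hashpower share.
```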
1. Blockstream tell the doomsday scenario without the context that orphans also happen, and without the context that they themselves, by not adding in the buffer, will be the cause of their said doomsday. If even at a 25% minority they don't finally say "oh crap, there are going to be issues, so we must adapt", but instead try to hold strong and refuse to add a buffer, THEY will be the cause of their own nodes lagging behind.
In short, it's safer to have the setting there as a buffer and not need to use it for X time than to wait for X time and still refuse to add the buffer.
2. The hypocrisy is that they pretend segwit will allow more capacity, but within months they will fill any gained space in the main block with their added new opcodes, new variables and new things like confidential transactions, which will add 250+ bytes of data to each transaction. Thus capacity DECREASES as a result of their roadmap, so they are not the cure for capacity. Especially if it takes a while for people to move to segwit, the late segwit adopters won't see the capacity advantage, because confidential transactions will take it away again (see the rough arithmetic sketch after this list).
3. Even if there was a 2MB setting, miners can still make 1MB blocks. There is no harm in small blockers still making small blocks, but there is harm in nodes not allowing an excess buffer to cope with change. E.g. 2MB is backward compatible, as those nodes will still accept small blocks, but 1MB is not future proof and causes forks if things change and they have not adapted.
4. You say:
" That means that after the hypothetical 28 days, if 70% are running node software that accepts > 1MB blocks,"
WRONG. Miners can produce anything from 0 bytes to 2,000,000 bytes. There is nothing forcing miners to go over 1MB and never below 1MB, and nothing forcing 2MB-limit miners to reject under-1MB blocks.
5. You say:
"You've already predicted that 0% out of 100% of hashing power will publish a block breaking the old consensus rules -- that the new limit "is nothing more then a buffer," even if 70% ran the node software at some point in order to activate the new rules. On its face, that is extremely unlikely given that it only takes one actor with a modicum of hashing power to cause the node software of a majority of miners to enforce the new consensus rules."
If one miner did that? Sorry, but it won't happen.
One miner would need to make 700 out of 1000 blocks (see the counting sketch after this list)... good luck trying that.
Secondly, even if the setting was activated, the other dozen miners can still make small blocks; nothing is forcing any miner to make a bigger block. They decide, as a human choice, to add more transactions. How many transactions go in a block is not a consensus rule, it's an individual preference.
6. If there was an (unofficial) view that 50% of blocks are made by implementations that have a 2MB buffer, then Blockstream should at least have (unofficial) discussions to start getting their act together. Remaining blind and not even discussing changes would be stupid on their part.
By 60%+ they would need to have started (hopefully finished) coding an implementation with 2MB, and before 70% make it publicly available. That way the setting is there before any OFFICIAL thresholds are hit, thus not causing problems for users.
But flatly refusing to even have the setting available to the community, no matter what, is just Blockstream being narrow-minded.
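Rough arithmetic behind point 2, using the post's own 250-byte overhead figure and an assumed ~500-byte average transaction; both numbers are illustrative, not measured values:

```python
# Back-of-the-envelope capacity arithmetic for point 2 (assumed numbers:
# ~500-byte average transaction, plus the 250-byte overhead claimed above).

block_space = 1_000_000   # bytes of base block space
avg_tx_size = 500         # assumed average transaction size today
ct_overhead = 250         # extra bytes per transaction claimed for CT

tx_per_block_now = block_space // avg_tx_size                      # ~2000
tx_per_block_with_ct = block_space // (avg_tx_size + ct_overhead)  # ~1333

print(tx_per_block_now, tx_per_block_with_ct)
```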
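And the counting sketch referenced in point 5: a 70%-of-the-last-1000-blocks trigger cannot be tripped by one miner alone. Signalling is mocked here as a simple True/False flag per block; the real version-signalling encoding is not shown.

```python
# Counting sketch for point 5: a lone miner cannot hit a 70%-of-1000-blocks
# trigger (illustrative mock of block signalling).
import random

WINDOW = 1000
THRESHOLD = 700  # 70% of the window

def activated(last_blocks):
    # last_blocks[i] is True if that block signalled support for 2MB
    return sum(last_blocks[-WINDOW:]) >= THRESHOLD

# A miner with ~10% of hashpower signalling alone finds ~100 of 1000 blocks,
# nowhere near the 700 needed.
lone_signaller = [random.random() < 0.10 for _ in range(WINDOW)]
print(sum(lone_signaller), activated(lone_signaller))
```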