
legendary
Activity: 2674
Merit: 2965
Terminated.
February 08, 2016, 04:50:05 AM
By the way, you know that pre-fork coins could also be sold off on majority-fork exchanges? Particularly because early adopters might be a little pissed off at the commit keys for bitcoin's dominant implementation being in the hands of a junior dev who wants to make the question of inflating the money supply a democratic one (jtoomim). How do you know who controls millions of pre-fork coins? You can be sure that I'll be dumping everything the second Toomim gets the keys to the kingdom, and I know several likeminded people.
This is the most likely outcome. Once we start selling everything and the price starts crashing, everyone will join in. I believe that if a non-Core hard fork wins, major holders will sell BTC, driving the price into the ground. (28113.50234684 Ƀ, 84.89%)
sr. member
Activity: 400
Merit: 250
February 08, 2016, 04:42:07 AM
However, when the majority of hash power supports an upgrade, the miners of the minority chain in a hard fork cannot mine without suffering a huge loss of mining income (blocks come far too slowly), similar to the minority miners in a soft fork suffering a huge loss when all their mined blocks are orphaned. So hash power will abandon the minority chain quickly and basically achieve the same result: after a few days, no one would be able to extend that old chain in any meaningful time. This is what happened on the "50 bitcoin forever" fork during the first reward halving.

This, of course, assumes that nodes reconcile and agree upon a single, valid ledger. Even if a majority of hashing power supports Hard Fork A and a minority supports the Original Fork, the relative hashing power of the minority increases by the amount of hashing power that is mining Hard Fork A, since they are building on separate blockchains. For example:

Pre-fork:
Minority = 30% hashing power                                                  Majority = 70% hashing power

Post-fork:
30% minority = 3.333x relative hashing power (Original Fork)        70% majority = 1.429x relative hashing power (Hard Fork A)

In a hard fork situation where there is a significant disparity between the proportions of nodes enforcing different rule sets, it becomes a game of speculation for miners: which chain should they mine, and by what factor does their relative hashing power increase? And the only way to choose is to judge -- based on node proportion -- which blockchain is likely to store value against their mining expenditure.
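
For anyone who wants to sanity-check the arithmetic, here is a rough back-of-the-envelope sketch (Python, illustrative only; the 30/70 split and the 10-minute target are carried over from the example above, and it assumes neither chain has hit its next difficulty retarget yet):

Code:
# Back-of-the-envelope: what a clean split does to each side's block times,
# assuming a 30%/70% hash power split and that neither chain has reached its
# next difficulty retarget (2016 blocks) yet.

TARGET_MINUTES = 10      # pre-fork average block interval
RETARGET_BLOCKS = 2016   # blocks per difficulty adjustment period

def post_fork_stats(share):
    """share: fraction of total pre-fork hash power mining this chain."""
    relative_gain = 1.0 / share               # e.g. 0.30 -> 3.333x
    block_interval = TARGET_MINUTES / share   # minutes per block until retarget
    days_to_retarget = block_interval * RETARGET_BLOCKS / (60 * 24)
    return relative_gain, block_interval, days_to_retarget

for name, share in [("Original Fork (minority)", 0.30),
                    ("Hard Fork A (majority)", 0.70)]:
    gain, interval, days = post_fork_stats(share)
    print(f"{name}: {gain:.3f}x relative hash power, "
          f"~{interval:.1f} min/block, ~{days:.0f} days to next retarget")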

It's gonna be fun to watch -- I'll tell ya that. Tongue

I guess the minority chain will end very quickly. If they dare to set up an exchange, they will face a sell-off of pre-fork coins counted in millions of bitcoin, and the price of that coin will crash to zero in a couple of minutes. In fact, I guess no exchange would dare to list a minority-chain coin. The incentive to mine the minority chain will be almost zero.

Why would you assume that? The most important thing to consider here is that miners are working off incomplete information. They don't really know how many nodes are running what implementation as it's very easy to run fake nodes. And it's nodes -- not hashing power -- that determine the validity of a blockchain. It's a more diverse and interesting question than most realize. Miners are pretty centralized. I think this is why Gavin is targeting them: it's much easier to trick a small number of highly centralized mining pools than it is to trick thousands of node operators. And if the 2MB implementation is capable of triggering the rule change based on hashing power (at 75% or whatever bullshit "democratic" threshold Gavin & Co. come up with -- 51%, etc.), then everyone else will crumble in submission, right?

Well... the dozen nodes that I run won't. The definition of "majority" and "minority" chain can change in a heartbeat; that's just a matter of miners temporarily pointing their hashing power at one chain or the other. It doesn't matter what Coinbase and Bitstamp say now, or where Bitfury points its hashing power. What really matters is the nodes that determine block validity, and what proportion of them enforce the new fork's consensus rules. Because if a significant proportion of them enforce the old rules, we will have an irreparable chain fork.

These irrelevant musings about how a majority of hashing power will render all other blockchains instantly dead are amusing but not very informative. If nodes do not approach consensus, miners will have to choose which fork to build on top of. But which one? All of the Classic/XT rhetoric says that a temporary majority of hashing power will surely solve everything. But what the hell does that have to do with nodes? What proof do you have that Classic nodes will comprise a majority of nodes -- simply because Bitfury and a few mining pools upgraded (if that happens at all)? If a majority of nodes continue to enforce the 1MB rule, you may quickly find that "majority chain" isn't a very meaningful phrase.

It's all about validity. Miners will point their hashing power at the longest valid chain. If it isn't clear which chain is the longest valid one (because there is no clear consensus among nodes), we will have multiple blockchains, and that will be irreconcilable. IMO, the most likely outcome of that is for mining farms to shut down en masse and for difficulty to drop significantly at the next adjustment, as miners cannot risk expending resources to build on potentially invalid blockchains. The market would likely never recover -- probably rightfully so. For this to happen would mean that the only mechanism to enforce rules within the bitcoin protocol was broken, and all it took was the prodding of a loud minority.

By the way, you know that pre-fork coins could also be sold off on majority-fork exchanges? Particularly because early adopters might be a little pissed off at the commit keys for bitcoin's dominant implementation being in the hands of a junior dev who wants to make the question of inflating the money supply a democratic one (jtoomim). How do you know who controls millions of pre-fork coins? You can be sure that I'll be dumping everything the second Toomim gets the keys to the kingdom, and I know several likeminded people.
legendary
Activity: 2674
Merit: 2965
Terminated.
February 08, 2016, 04:21:48 AM
You are aware that the soft fork also has activation requirements, right? Good luck seeing that in "April" (Those quote marks are sarcastic by the way.)
You are aware that in a soft fork, clients that do not update remain functional, while in a hard fork this is not true? Of course you are.

everyone agrees that segwit adds necessary functionality, even if Classic supporters want it implemented after a hard fork. Segwit has massive support among the dev and miner communities, up to and including Classic devs. (Yes, it's a little pathetic that Toomim refuses to work on making Classic compatible and wants Core to do it for him, but that's a separate issue)
He barely knows how to apply necessary changes that Core has made. If he did, they wouldn't be going with 0.11.2.

Just heard that there is a consensus among Chinese miners: they will not favor a change that moves transactions off-chain, since that would reduce their mining fee income.
"If you tell a lie big enough and keep repeating it, people will eventually come to believe it." Keep trying.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
February 08, 2016, 03:42:44 AM
However, when the majority of hash power supports an upgrade, the miners of the minority chain in a hard fork cannot mine without suffering a huge loss of mining income (blocks come far too slowly), similar to the minority miners in a soft fork suffering a huge loss when all their mined blocks are orphaned. So hash power will abandon the minority chain quickly and basically achieve the same result: after a few days, no one would be able to extend that old chain in any meaningful time. This is what happened on the "50 bitcoin forever" fork during the first reward halving.

This, of course, assumes that nodes reconcile and agree upon a single, valid ledger. Even if a majority of hashing power supports Hard Fork A and a minority supports the Original Fork, the relative hashing power of the minority increases by the amount of hashing power that is mining Hard Fork A, since they are building on separate blockchains. For example:

Pre-fork:
Minority = 30% hashing power                                                  Majority = 70% hashing power

Post-fork:
30% minority = 3.333x relative hashing power (Original Fork)        70% majority = 1.429x relative hashing power (Hard Fork A)

In a hard fork situation where there is a significant disparity between the proportions of nodes enforcing different rule sets, it becomes a game of speculation for miners: which chain should they mine, and by what factor does their relative hashing power increase? And the only way to choose is to judge -- based on node proportion -- which blockchain is likely to store value against their mining expenditure.

It's gonna be fun to watch -- I'll tell ya that. Tongue

I guess the minority chain will end very quickly. If they dare to set up an exchange, they will face a sell-off of pre-fork coins counted in millions of bitcoin, and the price of that coin will crash to zero in a couple of minutes. In fact, I guess no exchange would dare to list a minority-chain coin. The incentive to mine the minority chain will be almost zero.
full member
Activity: 167
Merit: 100
February 08, 2016, 03:21:07 AM
Just heard that there is a consensus among Chinese miners: they will not favor a change that moves transactions off-chain, since that would reduce their mining fee income.

This is reasonable, since miners provide the value and security of the network, so they deserve to be rewarded for each transaction that passes through the network. Any off-chain solution reduces miners' fee income and siphons value out of the service they provide.

If true, it makes complete sense. Imagine what bitcoin owners would say if the maximum number of bitcoins was doubled by a hard fork. They bought their bitcoins with the expectation that there was a hard limit to the number of bitcoins and that's the "deal" they feel they signed up for.

The same goes for miners and transaction fees. If the bitcoin architecture starts moving transactions out of the blockchain or causes the miners to get lower and lower fees over time for more and more work, that's not the "deal" they feel they signed up for.

sr. member
Activity: 400
Merit: 250
February 08, 2016, 03:19:03 AM
You seem to be again fundamentally misunderstanding what it means to run incompatible versions of software. It doesn't matter what you think rational miners will do. Once the 750 of 1000 blocks are found, there is no going back. It only takes a modicum of hashing power to start publishing blocks that are incompatible with 1MB nodes. This is not about "pushing Blockstream to get their act together." It's about avoiding the risk of breaking bitcoin forever.

and that "modicum of hashing power to start publishing blocks that are incompatible with 1MB nodes" wont be accepted by 1mb blockers and 1mb blockers wont stale their attempts. they would carry on hashing their own blocks and make blocks a few seconds later.. eventually even if it takes several blocks when 1mb gain height, they cause the C big blocks to orphan off..

Yes, 1MB nodes won't recognize a > 1MB block as valid. It doesn't follow that 1MB miners will make blocks "a few seconds later," especially if they have a relative minority of hashing power. Again, if you are suggesting that a 70% majority of hashing power cannot find 3 blocks in a row, you are completely wrong. Here is an example from tonight:

Miners won't risk it at 70%.. yes, the setting will be active, but miners won't push for more than 1MB at such low levels while the orphan risk is still apparent.
They would wait for a higher number.. and just treat the 2MB setting as an unused buffer for the future, until they are comfortable.

And when that time comes (I'm guessing 90%), then and only then would the small miners be unable to catch up and cause orphans, and the small miners should upgrade or be left behind --

which they should have done earlier, as they had enough warning.

If [rational] miners won't risk 70%, why the hell are we activating new consensus rules at 75%? You keep talking about miners like a single entity that you can predict. Again, it doesn't matter that you believe miners will use it as an "unused buffer" (whatever the hell that means). Gavin's code activates the rule change at 75% of mined blocks; after that, any CPU contributing hashing power can determine whether there is a chain fork based on the node software it runs:

You can't talk about "miners" as a single entity. A node is either running one version of the software or the other, assuming they are incompatible (in this case, they are). That means that after the hypothetical 28 days, if 70% are running node software that accepts > 1MB blocks, once any single miner or pool publishes a block that is valid based on 2mb parameters but not 1mb, we have passed the point of no return. "They won't push the envelope?" How could you pretend to predict the actions of every single CPU contributing hashpower to the network?

You've already predicted that 0% out of 100% of hashing power will publish a block breaking the old consensus rules -- that the new limit "is nothing more then a buffer," even if 70% ran the node software at some point in order to activate the new rules. On its face, that is extremely unlikely given that it only takes one actor with a modicum of hashing power to cause the node software of a majority of miners to enforce the new consensus rules.

So once Gavin's 28 days are up and any one miner or pool publishes a >1MB block, hashpower ceases to be the question at all. The question becomes which chain node operators consider valid.

Do you then go on to predict that 100% of nodes will be running one version of the software (1mb limit) or the other (2mb limit)? Because if not, we will inevitably have an irreparable chain fork.

If you want to know whether a hard fork activating with 70% of hashing power can break bitcoin into multiple blockchains (presumably forever, as too much value will have changed hands to conceivably "roll back")... the answer is unequivocally YES.
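
To make the mechanics being argued about here concrete, here is a minimal sketch of the activation logic as it is described in this thread (75% of the last 1,000 blocks, a 28-day grace period, then the first oversize block splits the chains). The constants and function names are assumptions for illustration, not Classic's actual source:

Code:
# Illustrative sketch of the activation logic described in this thread:
# 750 of the last 1000 blocks signal the new rules, then a grace period,
# then the first oversize block splits the chains.

ACTIVATION_WINDOW = 1000
ACTIVATION_THRESHOLD = 750
GRACE_PERIOD_BLOCKS = 28 * 144   # ~28 days at 10-minute blocks (assumption)
OLD_LIMIT = 1_000_000            # bytes
NEW_LIMIT = 2_000_000            # bytes

def threshold_reached(signalled):
    """signalled: list of booleans, True where a block signalled the new rules."""
    return sum(signalled[-ACTIVATION_WINDOW:]) >= ACTIVATION_THRESHOLD

def max_accepted_size(signalled, blocks_since_lock_in):
    """Block-size limit a *new-rules* node enforces at this point in the chain."""
    if threshold_reached(signalled) and blocks_since_lock_in >= GRACE_PERIOD_BLOCKS:
        return NEW_LIMIT
    return OLD_LIMIT

# The point being made above: once the threshold is locked in and the grace
# period has passed, a single block with OLD_LIMIT < size <= NEW_LIMIT is
# valid to new-rules nodes and invalid to old-rules nodes, and the two node
# populations diverge at that block -- regardless of hash power from then on.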
legendary
Activity: 4214
Merit: 4458
February 08, 2016, 02:53:01 AM
You seem to be again fundamentally misunderstanding what it means to run incompatible versions of software. It doesn't matter what you think rational miners will do. Once the 750 of 1000 blocks are found, there is no going back. It only takes a modicum of hashing power to start publishing blocks that are incompatible with 1MB nodes. This is not about "pushing Blockstream to get their act together." It's about avoiding the risk of breaking bitcoin forever.

and that "modicum of hashing power to start publishing blocks that are incompatible with 1MB nodes" wont be accepted by 1mb blockers and 1mb blockers wont stale their attempts. they would carry on hashing their own blocks and make blocks a few seconds later.. eventually even if it takes several blocks when 1mb gain height, they cause the C big blocks to orphan off..

Miners won't risk it at 70%.. yes, the setting will be active, but miners won't push for more than 1MB at such low levels while the orphan risk is still apparent.
They would wait for a higher number.. and just treat the 2MB setting as an unused buffer for the future, until they are comfortable.

And when that time comes (I'm guessing 90%), then and only then would the small miners be unable to catch up and cause orphans, and the small miners should upgrade or be left behind --

which they should have done earlier, as they had enough warning.
sr. member
Activity: 400
Merit: 250
February 08, 2016, 02:44:26 AM
And that's only on the assumption that the C miners ALWAYS and FOREVER mine faster.. (which you failed to take into the equation: processing time, propagation time, etc.)

No, it only needs to happen once in order to break consensus into multiple blockchains. Not always and forever. Also, are you suggesting that propagation time will prevent a 70% hashing majority from finding 3 blocks in a row?
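
For a sense of scale, treating each block as an independent trial and ignoring propagation and difficulty effects:

Code:
# Chance the 70% side wins the next k blocks in a row, treating each block
# as an independent trial (ignores propagation delay and difficulty changes).
p_majority = 0.70
for k in (1, 2, 3, 6):
    print(f"P(majority finds the next {k} blocks in a row) = {p_majority ** k:.3f}")
# k = 3 gives ~0.343 per attempt, so a 3-block lead by the majority is an
# expected, frequent event over many block-height races, not a fluke.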

But with all that said, my point is that there is a risk of orphans, and so miners are not going to risk adding more data to blocks and risk the extra few milliseconds of propagation time if their HARDER work gets orphaned.

So even at the 70% magic number.. miners won't automatically push forward, making their life harder and riskier.. they would wait until conditions are better, and that extra limit will just sit as a buffer for when things are BETTER THAN THE MINIMUM CONSENSUS.

Again: they won't jump forward as soon as the minimum consensus is reached. It's too risky.. the minimum consensus is just a wet-fish slap in Blockstream's face to finally get their act together if they haven't already, because it's a signal that miners may soon start pushing harder. It's not a signal for miners to force the issue, just a signal that Blockstream should do something because miners may move forward.

You seem to be again fundamentally misunderstanding what it means to run incompatible versions of software. It doesn't matter what you think rational miners will do. Once the 750 of 1000 blocks are found, there is no going back. It only takes a modicum of hashing power to start publishing blocks that are incompatible with 1MB nodes. This is not about "pushing Blockstream to get their act together." It's about avoiding the risk of breaking bitcoin forever.
legendary
Activity: 4214
Merit: 4458
February 08, 2016, 02:37:37 AM

This is all complete nonsense. Are you really suggesting that nodes with differing consensus rules would "re-align" with one another? Why? There is nothing enforcing consensus between 2mb and 1mb nodes.

The picture is silly -- consider another common situation, post-rules activation:

Group C (2MB miners) finds and publishes Blocks 400,000c, 400,001c and 400,002c (all > 1MB) at 12:05-12:15, making it the longest valid chain by 3 blocks for all nodes enforcing 2MB rules. Groups A or B (1MB miners) finds and publishes Block 400,000a at 12:16.

Now, why would nodes enforcing 2MB rules orphan any of Blocks 400,000c-400,002c? They are valid blocks built on the longest valid chain by timestamp. Block 400,000a was found well after Block 400,000c. There is no reason for 2MB nodes to "re-align" with Groups A or B. To 2MB nodes, Block 400,000a is a stale block and nothing more. 2MB nodes will continue to recognize blocks built on top of 400,002c as valid.

Meanwhile, Blocks 400,000c-400,002c break the consensus rules enforced by 1MB nodes. So for them, the next valid block on the longest valid chain is 400,000a. 1MB nodes will continue to recognize blocks built on top of 400,000a as valid.

When you break consensus, there is no incentive for nodes with different rules to "re-align" their blockchains. Miners running one set of rules or the other will continue to build on the longest valid chain. If a different chain is the longest but is built on top of invalid blocks, they cannot be re-aligned. Miners running incompatible software will continue to mine on top of separate chains, and so on, and so forth.

Having the 2mb setting "as a buffer" does nothing to avoid orphans or headaches; it could help to trigger a hard fork that results in multiple blockchains built on top of differing consensus rules.

Maybe it's worth you looking at how orphans work.
Then realise that 2MB miners can accept blocks below 1MB as well.. it's a 0-to-2,000,000 rule, not a 1,000,001-to-2,000,000 rule.

So at 400,002, where the <1MB side gets a block solved first.. the 1MB block wins because it solved a block first (wins the block-height race)
(I wrote it twice for emphasis).

And when it sees the chain of previous blocks behind that winning block.. they do not contain any over-1MB blocks.. so that causes C to orphan its chain to realign..

Please learn about orphans.

Please don't tell people to learn how things work when nearly everything you've said is factually incorrect.

Sure, 2MB nodes can accept blocks smaller than 1MB. So what? That a 900kb block found 3 blocks later is valid doesn't mean nodes will orphan the 3 valid blocks that came before it based on block height.

Why would the 1MB block (according to 2MB nodes) win the block height race? Group C mined Blocks 400,000-400,002 before Group A mined Block 400,000. 2MB nodes recognize Blocks 400,000-400,002 as valid. Why would they possibly orphan them? Because some other software version disagrees? LOL.

Group C won't orphan any blocks because they have built on the longest valid chain, according to their 2MB rules and according to block height. And that's how a chain fork occurs.


And that's only on the assumption that the C miners ALWAYS and FOREVER mine faster.. (which you failed to take into the equation: processing time, propagation time, etc.)

But with all that said, my point is that there is a risk of orphans, and so miners are not going to risk adding more data to blocks and risk the extra few milliseconds of propagation time if their HARDER work gets orphaned. Even if there is a setting to allow them to work harder, they won't do it unless they know it's safe to.

So even at the 70% magic number.. miners won't automatically push forward, making their life harder and riskier.. they would wait until conditions are better, and that extra limit will just sit as a buffer for when things are BETTER THAN THE MINIMUM CONSENSUS.

Again: they won't jump forward as soon as the minimum consensus is reached. It's too risky.. the minimum consensus is just a wet-fish slap in Blockstream's face to finally get their act together if they haven't already, because it's a signal that miners may soon start pushing harder. It's not a signal for miners to force the issue, just a signal that Blockstream should do something because miners may move forward.

Again:
I would say that 50% is an amber warning for Blockstream to do something (a human decision to write some code) and 70% is a flashing red light. And even after the 28 days, if Blockstream has ignored the warnings, miners still won't automatically jump forward that instant. They will wait until it's safer (I'm presuming a 90% safety threshold before pushing forward). So if and when miners finally do move forward, the only people to blame are Blockstream for being left behind, because at that point they would have lost any considerable hashpower needed to keep up with the block height, and that would be the fault of Blockstream for not acting on the warnings.
sr. member
Activity: 400
Merit: 250
February 08, 2016, 02:27:19 AM

This is all complete nonsense. Are you really suggesting that nodes with differing consensus rules would "re-align" with one another? Why? There is nothing enforcing consensus between 2mb and 1mb nodes.

The picture is silly -- consider another common situation, post-rules activation:

Group C (2MB miners) finds and publishes Blocks 400,000c, 400,001c and 400,002c (all > 1MB) at 12:05-12:15, making it the longest valid chain by 3 blocks for all nodes enforcing 2MB rules. Groups A or B (1MB miners) finds and publishes Block 400,000a at 12:16.

Now, why would nodes enforcing 2MB rules orphan any of Blocks 400,000c-400,002c? They are valid blocks built on the longest valid chain by timestamp. Block 400,000a was found well after Block 400,000c. There is no reason for 2MB nodes to "re-align" with Groups A or B. To 2MB nodes, Block 400,000a is a stale block and nothing more. 2MB nodes will continue to recognize blocks built on top of 400,002c as valid.

Meanwhile, Blocks 400,000c-400,002c break the consensus rules enforced by 1MB nodes. So for them, the next valid block on the longest valid chain is 400,000a. 1MB nodes will continue to recognize blocks built on top of 400,000a as valid.

When you break consensus, there is no incentive for nodes with different rules to "re-align" their blockchains. Miners running one set of rules or the other will continue to build on the longest valid chain. If a different chain is the longest but is built on top of invalid blocks, they cannot be re-aligned. Miners running incompatible software will continue to mine on top of separate chains, and so on, and so forth.

Having the 2mb setting "as a buffer" does nothing to avoid orphans or headaches; it could help to trigger a hard fork that results in multiple blockchains built on top of differing consensus rules.

Maybe it's worth you looking at how orphans work.
Then realise that 2MB miners can accept blocks below 1MB as well.. it's a 0-to-2,000,000 rule, not a 1,000,001-to-2,000,000 rule.

So at 400,002, where the <1MB side gets a block solved first.. the 1MB block wins because it solved a block first (wins the block-height race)
(I wrote it twice for emphasis).

And when it sees the chain of previous blocks behind that winning block.. they do not contain any over-1MB blocks.. so that causes C to orphan its chain to realign..

Please learn about orphans.

Please don't tell people to learn how things work when nearly everything you've said is factually incorrect.

Sure, 2MB nodes can accept blocks smaller than 1MB. So what? That a 900kb block found 3 blocks later is valid doesn't mean nodes will orphan the 3 valid blocks that came before it based on block height.

Why would the 1MB block (according to 2MB nodes) win the block height race? Group C mined Blocks 400,000-400,002 before Group A mined Block 400,000. 2MB nodes recognize Blocks 400,000-400,002 as valid. Why would they possibly orphan them? Because some other software version disagrees? LOL.

Group C won't orphan any blocks because they have built on the longest valid chain, according to their 2MB rules and according to block height. And that's how a chain fork occurs.

In your scenario, miners A and B will ignore and instantly reject 400,000c as it doesn't fit the rules.. miners A and B do not stale their attempts, but instead continue to mine their own blocks until they reach a solution that fits them, which gives the timestamps:
400,000a at 12:05:05
400,001b at 12:15:15
400,001a at 12:25:15

Please learn how orphans and stales work, and how miners choose when to give up and when to keep trying.

Oh, and by the way: the more transactions you add to a block, the more processing time it takes. So hash for hash.. the chances of a miner with the 2MB rule hitting first are reduced just because of processing time, so the chances of the 1MB rule hitting first are higher.

Yes, 1MB nodes (Groups A and B) will instantly reject 400,000c. That doesn't magically mean that Groups A and B instantly find blocks as fast as Group C. The premise is simple -- if Group C finds 3 blocks (or even 2 blocks) before Groups A and B find 1 block, what happens?

And throwing latency time into the mix as you are is completely ridiculous. You're merely arguing that since something very slightly reduces the chances of a chain fork (and miners generally mitigate this reduction by SPV mining), that it is therefore safe? Are you kidding?


Even in a case where miner C makes 20 blocks in a row first, and the other 1MB miners ignore that and mine their own, seconds later, height for height..
when that minority miner does get a block-height win... then the C miners' 20 blocks get orphaned, because the C miner still treats <1MB blocks as valid, because the rule is 0 to 2,000,000.. not 1,000,001 to 2,000,000.

Why does it matter that  <1mb blocks are valid? That doesn't make all the blocks Group C mined before invalid. Explain to me: Why would Group C (and 2MB nodes) orphan all blocks found by Group C? They are building on the longest valid chain. Their blocks are valid according to consensus rules and earliest based on block height. Why would they be orphaned? It doesn't matter if another "valid" block was found later by Group A -- that's just a stale block.
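
What the two sides are really disagreeing about is the chain-selection rule. A node follows the chain with the most cumulative work among chains whose blocks are all valid under its own rules; it never reorganizes onto a chain containing a block it considers invalid, however long that chain gets. A stripped-down sketch (block size is the only rule checked here, and work is approximated as one unit per block):

Code:
# Stripped-down chain selection: a node follows the most-work chain that is
# fully valid under ITS rules; work is approximated here as one unit per block.

def best_chain(chains, max_block_size):
    """chains: list of candidate chains, each a list of block sizes in bytes."""
    valid = [c for c in chains if all(size <= max_block_size for size in c)]
    return max(valid, key=len) if valid else []

chain_c = [1_500_000, 1_200_000, 1_100_000]   # blocks 400,000c-400,002c (>1MB)
chain_a = [900_000]                            # block 400,000a

print(len(best_chain([chain_c, chain_a], 2_000_000)))  # 2MB node: follows C, 3 blocks ahead
print(len(best_chain([chain_c, chain_a], 1_000_000)))  # 1MB node: follows A, height 1

# A 2MB node only reorgs onto the small-block chain if that chain ever
# accumulates MORE work than the big-block chain -- "also valid" is not enough.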
legendary
Activity: 4214
Merit: 4458
February 08, 2016, 02:12:29 AM

This is all complete nonsense. Are you really suggesting that nodes with differing consensus rules would "re-align" with one another? Why? There is nothing enforcing consensus between 2mb and 1mb nodes.

When you break consensus, there is no incentive for nodes with different rules to "re-align" their blockchains. Miners running one set of rules or the other will continue to build on the longest valid chain. If a different chain is the longest but is built on top of invalid blocks, they cannot be re-aligned. Miners running incompatible software will continue to mine on top of separate chains, and so on, and so forth.

Having the 2mb setting "as a buffer" does nothing to avoid orphans or headaches; it could help to trigger a hard fork that results in multiple blockchains built on top of differing consensus rules.

Maybe it's worth you looking at how orphans work.
Then realise that 2MB miners can accept blocks below 1MB as well.. it's a 0-to-2,000,000 rule, not a 1,000,001-to-2,000,000 rule.

So at 400,002, where the <1MB block gets solved first.. the 1MB block wins because it solved a block first (wins the block-height race) and is also valid, because even the C miner accepts it as a valid block
(I wrote it twice for emphasis).

And when it sees the chain of previous blocks behind that winning block.. they do not contain any over-1MB blocks.. so that causes C to orphan the large blocks to realign..

Please learn about orphans.

Quote
Group C (2MB miners) finds and publishes Blocks 400,000c, 400,001c and 400,002c (all > 1MB) at 12:05-12:15, making it the longest valid chain by 3 blocks for all nodes enforcing 2MB rules. Groups A or B (1MB miners) finds and publishes Block 400,000a at 12:16.

Now, why would nodes enforcing 2MB rules orphan any of Blocks 400,000c-400,002c? They are valid blocks built on the longest valid chain by timestamp. Block 400,000a was found well after Block 400,000c. There is no reason for 2MB nodes to "re-align" with Groups A or B. To 2MB nodes, Block 400,000a is a stale block and nothing more. 2MB nodes will continue to recognize blocks built on top of 400,002c as valid.

Meanwhile, Blocks 400,000c-400,002c break the consensus rules enforced by 1MB nodes. So for them, the next valid block on the longest valid chain is 400,000a. 1MB nodes will continue to recognize blocks built on top of 400,000a as valid.

In your scenario, miners A and B will ignore and instantly reject 400,000c as it doesn't fit the rules.. miners A and B do not stale their attempts, but instead continue to mine their own blocks until they reach a solution that fits them, which gives the timestamps:
400,000a at 12:05:05
400,001b at 12:15:15
400,001a at 12:25:05

Please learn how orphans and stales work, and how miners choose when to give up and when to keep trying.

Oh, and by the way: the more transactions you add to a block, the more processing time it takes. So hash for hash.. the chances of a miner with the 2MB rule hitting first are reduced just because of processing time, so the chances of the 1MB rule hitting first are higher.

Even in a case where miner C makes 20 blocks in a row first, and the other 1MB miners ignore that and mine their own, seconds later, height for height..
when that minority miner does get a block-height win... then the C miners' 20 blocks get orphaned, because the C miner still treats <1MB blocks as valid, because the rule is 0 to 2,000,000.. not 1,000,001 to 2,000,000.
sr. member
Activity: 400
Merit: 250
February 08, 2016, 02:06:39 AM
However, when the majority of hash power supports an upgrade, the miners of the minority chain in a hard fork cannot mine without suffering a huge loss of mining income (blocks come far too slowly), similar to the minority miners in a soft fork suffering a huge loss when all their mined blocks are orphaned. So hash power will abandon the minority chain quickly and basically achieve the same result: after a few days, no one would be able to extend that old chain in any meaningful time. This is what happened on the "50 bitcoin forever" fork during the first reward halving.

This, of course, assumes that nodes reconcile and agree upon a single, valid ledger. Even if a majority of hashing power supports Hard Fork A and a minority supports the Original Fork, the relative hashing power of the minority increases by the amount of hashing power that is mining Hard Fork A, since they are building on separate blockchains. For example:

Pre-fork:
Minority = 30% hashing power                                                  Majority = 70% hashing power

Post-fork:
30% minority = 3.333x relative hashing power (Original Fork)        70% majority = 1.429x relative hashing power (Hard Fork A)

In a hard fork situation where there is a significant disparity between the proportions of nodes enforcing different rule sets, it becomes a game of speculation for miners: which chain should they mine, and by what factor does their relative hashing power increase? And the only way to choose is to judge -- based on node proportion -- which blockchain is likely to store value against their mining expenditure.

It's gonna be fun to watch -- I'll tell ya that. Tongue
sr. member
Activity: 400
Merit: 250
February 08, 2016, 01:42:59 AM
Again, you are misunderstanding the basics of my argument. The argument is that 70%+ of miners would activate the rule change by mining 750 of the last 1000 blocks. After the 28-day grace period has passed, a hard fork becomes possible. Once the threshold has been activated and the grace period has passed, yes, it takes one miner mining one block to hard fork the protocol based on the new 2MB rules. The rest is up to how much of the network is comprised of nodes running the 1MB vs. 2MB rules. If 100% of nodes are running the 2MB rules, there is no risk of multiple surviving chain forks. If 50% of nodes are running 2MB rules, there is virtually a 100% chance of multiple surviving chain forks.

OK, imagine this:
It's a month after the 75%.. and it is block height 400,000.
Then "one miner with a small modicum of relative hashpower to produce a single block that violates the 1MB limit" decides to make block 400,001 at 1.1MB.
Guess what:
the minority miners (under-1MB) don't stale their attempt; they reject the 1.1MB block, and so the minority miner makes its own 400,001 of 0.9MB.

Next the minority miner makes 400,002, and because it came first and does not build on any blocks over 1MB.. when the miner that made the bigger block receives it..

it says "oh crap, I don't have the same block headers, let's orphan off the large block and get the valid block-height version of the chain."
Here is some animation, because some people like pretty pictures and don't understand words:

A = <1MB miner, B = <1MB miner, C = <2MB miner


2MB implementations can orphan their large blocks and rejoin the small blocks if small-blockers mine a block first (at any time).. which means if small-blockers have a 20% chance of solving a block first.. then the orphan rate would be 80%, because eventually the chain would rejoin to be small blocks and 80% of the large blocks would get thrown out when the chains re-align... in short, some blocks that have up to 4 confirmations could then be thrown out..
yet those on the small-block chain don't get orphans.

But the flip side is that if the small-blockers never solve a block first, then not only are they wasting hashpower that doesn't result in a block they can spend, but they also won't get to sync and will be left behind. Which means that small-blockers would then give in and upgrade, or be left wasting their own time.

Which is something they should be prepared to do long before miners decide it's time to do more than 1MB.

So large-blockers won't push forward until they are absolutely sure their attempts won't get orphaned.. even if the 2MB setting is active, it will just sit there until the time is right.. even way after the 28 days have passed.

So although there is a proposal for 70-95% (whatever magic number no one agrees on).. people should start to seriously think about having the buffer of 2MB well before that magic number, even if it takes 2 years for miners to finally add more transactions. Having the 2MB setting as a buffer at least causes less chance of orphans or headaches when that time finally comes.

Nothing forces a miner to push out more than 1MB after the consensus magic number has been reached.. it's not in the code to generate blocks over 1MB.. and so it's the miner's choice to do so, when they think they can handle it and are sure it won't end up as an orphan at some time..
(I repeated it for emphasis.)

Also, where I was saying it is a buffer in another post: the 1MB limit has been a buffer setting while miners were making 0-500KB blocks in 2009-2013 and 0.2MB-0.95MB blocks from 2013-2016, yet miners were not forced to only make 0.99MB blocks.. were they?!?

Miners can, no matter what the block-limit setting is.. independently choose how many transactions they want to include. So even if it is a month after 75% or 95% or whatever.. miners can still make sub-1MB blocks and just have the 2MB limit sat there, without issue.

This is all complete nonsense. Are you really suggesting that nodes with differing consensus rules would "re-align" with one another? Why? There is nothing enforcing consensus between 2mb and 1mb nodes.

The picture is silly -- consider another common situation, post-rules activation:

Quote
Group C (2MB miners) finds and publishes Blocks 400,000c, 400,001c and 400,002c (all > 1MB) at 12:05-12:15, making it the longest valid chain by 3 blocks for all nodes enforcing 2MB rules. Groups A or B (1MB miners) finds and publishes Block 400,000a at 12:16.

Now, why would nodes enforcing 2MB rules orphan any of Blocks 400,000c-400,002c? They are valid blocks built on the longest valid chain by timestamp. Block 400,000a was found well after Block 400,000c. There is no reason for 2MB nodes to "re-align" with Groups A or B. To 2MB nodes, Block 400,000a is a stale block and nothing more. 2MB nodes will continue to recognize blocks built on top of 400,002c as valid.

Meanwhile, Blocks 400,000c-400,002c break the consensus rules enforced by 1MB nodes. So for them, the next valid block on the longest valid chain is 400,000a. 1MB nodes will continue to recognize blocks built on top of 400,000a as valid.

When you break consensus, there is no incentive for nodes with different rules to "re-align" their blockchains. Miners running one set of rules or the other will continue to build on the longest valid chain. If a different chain is the longest but is built on top of invalid blocks, they cannot be re-aligned. Miners running incompatible software will continue to mine on top of separate chains, and so on, and so forth.

Having the 2mb setting "as a buffer" does nothing to avoid orphans or headaches; it could help to trigger a hard fork that results in multiple blockchains built on top of differing consensus rules.
hero member
Activity: 709
Merit: 501
February 08, 2016, 01:29:52 AM
Just heard that there is a consensus among Chinese miners: they will not favor a change that moves transactions off-chain, since that would reduce their mining fee income.

This is reasonable, since miners provide the value and security of the network, so they deserve to be rewarded for each transaction that passes through the network. Any off-chain solution reduces miners' fee income and siphons value out of the service they provide.
Sweet.  Greed is an awesome motivator.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
February 08, 2016, 12:53:46 AM
Just heard that there is a consensus among Chinese miners: they will not favor a change that moves transactions off-chain, since that would reduce their mining fee income.

This is reasonable, since miners provide the value and security of the network, so they deserve to be rewarded for each transaction that passes through the network. Any off-chain solution reduces miners' fee income and siphons value out of the service they provide.
legendary
Activity: 4214
Merit: 4458
February 08, 2016, 12:45:08 AM
Again, you are misunderstanding the basics of my argument. The argument is that 70%+ of miners would activate the rule change by mining 750 of the last 1000 blocks. After the 28-day grace period has passed, a hard fork becomes possible. Once the threshold has been activated and the grace period has passed, yes, it takes one miner mining one block to hard fork the protocol based on the new 2MB rules. The rest is up to how much of the network is comprised of nodes running the 1MB vs. 2MB rules. If 100% of nodes are running the 2MB rules, there is no risk of multiple surviving chain forks. If 50% of nodes are running 2MB rules, there is virtually a 100% chance of multiple surviving chain forks.

OK, imagine this:
It's a month after the 75%.. and it is block height 400,000.
Then "one miner with a small modicum of relative hashpower to produce a single block that violates the 1MB limit" decides to make block 400,001 at 1.1MB.
Guess what:
the minority miners (under-1MB) don't stale their attempt; they reject the 1.1MB block, and so the minority miner makes its own 400,001 of 0.9MB.

Next the minority miner makes 400,002, and because it came first and does not build on any blocks over 1MB.. when the miner that made the bigger block receives it..

it says "oh crap, I don't have the same block headers, let's orphan off the large block and get the valid block-height version of the chain."
Here is some animation, because some people like pretty pictures and don't understand words:

A = <1MB miner, B = <1MB miner, C = <2MB miner


2MB implementations can orphan their large blocks and rejoin the small blocks if small-blockers mine a block first (at any time).. which means if small-blockers have a 20% chance of solving a block first.. then the orphan rate would be 80%, because eventually the chain would rejoin to be small blocks and 80% of the large blocks would get thrown out when the chains re-align... in short, some blocks that have up to 4 confirmations could then be thrown out..
yet those on the small-block chain don't get orphans.

But the flip side is that if the small-blockers never solve a block first, then not only are they wasting hashpower that doesn't result in a block they can spend, but they also won't get to sync and will be left behind. Which means that small-blockers would then give in and upgrade, or be left wasting their own time.

Which is something they should be prepared to do long before miners decide it's time to do more than 1MB.

So large-blockers won't push forward until they are absolutely sure their attempts won't get orphaned.. even if the 2MB setting is active, it will just sit there until the time is right.. even way after the 28 days have passed.

So although there is a proposal for 70-95% (whatever magic number no one agrees on).. people should start to seriously think about having the buffer of 2MB well before that magic number, even if it takes 2 years for miners to finally add more transactions. Having the 2MB setting as a buffer at least causes less chance of orphans or headaches when that time finally comes.

Nothing forces a miner to push out more than 1MB after the consensus magic number has been reached.. it's not in the code to generate blocks over 1MB.. and so it's the miner's choice to do so, when they think they can handle it and are sure it won't end up as an orphan at some time..
(I repeated it for emphasis.)

Also, where I was saying it is a buffer in another post: the 1MB limit has been a buffer setting while miners were making 0-500KB blocks in 2009-2013 and 0.2MB-0.95MB blocks from 2013-2016, yet miners were not forced to only make 0.99MB blocks.. were they?!?

Miners can, no matter what the block-limit setting is.. independently choose how many transactions they want to include. So even if it is a month after 75% or 95% or whatever.. miners can still make sub-1MB blocks and just have the 2MB limit sat there, without issue.
sr. member
Activity: 400
Merit: 250
February 08, 2016, 12:28:57 AM
Good luck gaining consensus (albeit 75% isn't consensus) for the HF before April. I'll push veto myself.

You are aware that the soft fork also has activation requirements, right? Good luck seeing that in "April" (Those quote marks are sarcastic by the way.)

Pretty much everyone agrees that segwit adds necessary functionality, even if Classic supporters want it implemented after a hard fork. Segwit has massive support among the dev and miner communities, up to and including Classic devs. (Yes, it's a little pathetic that Toomim refuses to work on making Classic compatible and wants Core to do it for him, but that's a separate issue)
legendary
Activity: 1260
Merit: 1115
February 08, 2016, 12:24:01 AM
Quote
In the highly unlikely event that Classic achieved consensus, I believe that Core would implement a fork to make Core compatible.

This is some comfort at least.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
February 08, 2016, 12:21:43 AM
I made a more complete list of the two types of fork scenarios:

A hard fork is a loosening of the rules:
If a minority of hash power upgrades to the new version, the new blocks will be orphaned
If a majority of hash power upgrades to the new version, there will be two incompatible chains (like the March 2013 fork)

A soft fork is a tightening of the rules:
If a minority of hash power upgrades to the new version, there will be two incompatible chains (like the July 2015 fork)
If a majority of hash power upgrades to the new version, the old blocks will be orphaned

So both upgrades can possibly fork into two incompatible chains, depending on the situation. But if you always make the upgrade when a majority of hash power supports it, then a soft fork can make sure there will not be two incompatible chains, while a hard fork will create two incompatible chains at first.
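
A compact restatement of those four cases (a toy sketch only, not protocol code):

Code:
# Toy restatement of the four cases listed above -- not protocol code.
def fork_outcome(kind, majority_upgraded):
    """kind: 'hard' (loosens the rules) or 'soft' (tightens the rules)."""
    if kind == "hard":
        return ("two incompatible chains (cf. the March 2013 fork)"
                if majority_upgraded else "new-style blocks get orphaned")
    return ("old-style blocks get orphaned"
            if majority_upgraded else "two incompatible chains (cf. the July 2015 fork)")

for kind in ("hard", "soft"):
    for majority_upgraded in (False, True):
        who = "majority" if majority_upgraded else "minority"
        print(f"{kind} fork, {who} upgraded -> {fork_outcome(kind, majority_upgraded)}")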

However, when the majority of hash power supports an upgrade, the miners of the minority chain in a hard fork cannot mine without suffering a huge loss of mining income (blocks come far too slowly), similar to the minority miners in a soft fork suffering a huge loss when all their mined blocks are orphaned. So hash power will abandon the minority chain quickly and basically achieve the same result: after a few days, no one would be able to extend that old chain in any meaningful time. This is what happened on the "50 bitcoin forever" fork during the first reward halving.
sr. member
Activity: 400
Merit: 250
February 08, 2016, 12:14:08 AM

What do you mean by the bolded? I can't really figure out what you're saying. Are you suggesting that in a 75% hard fork scenario, multiple surviving chains are impossible?

How is that hypocrisy? Why would we do a 2mb hard fork when segwit gives us nearly that or more? If miners move forward with 2mb without Core -- and I think that is highly unlikely (I think you and others greatly overestimate the stupidity of large-scale miners) -- that would be their fault. It would be their fault for running a barely-tested version of bitcoin and expecting the rest of the ecosystem to do the same on very little notice, on the word of a ragtag minority of devs who have done nothing to suggest they are capable of maintaining bitcoin.

And you might just find that, in such a situation, a good deal of that hashpower makes its way back to the original (1mb limit) blockchain shortly before or after the hard fork is triggered.

A 70% trigger just changes a 1,000,000 into 2,000,000... which sits as a buffer. Miners can still make small blocks; nothing will force miners to make blocks over 1MB until they personally choose to (which could be months or years away, whenever they choose). It's not a nuclear trigger.. just a buffer increase when consensus shows there is a possibility that capacity may grow soon. Even after the 28 days are up, if miners think the orphan risk is still high due to many other things, they won't push the envelope, and that 2,000,000 will just sit there as nothing more than a buffer.

What are you talking about? You can't talk about "miners" as a single entity. A node is either running one version of the software or the other, assuming they are incompatible (in this case, they are). That means that after the hypothetical 28 days, if 70% are running node software that accepts > 1MB blocks, once any single miner or pool publishes a block that is valid based on 2mb parameters but not 1mb, we have passed the point of no return. "They won't push the envelope?" How could you pretend to predict the actions of every single CPU contributing hashpower to the network?

You've already predicted that 0% out of 100% of hashing power will publish a block breaking the old consensus rules -- that the new limit "is nothing more then a buffer," even if 70% ran the node software at some point in order to activate the new rules. On its face, that is extremely unlikely given that it only takes one actor with a modicum of hashing power to cause the node software of a majority of miners to enforce the new consensus rules.

So once Gavin's 28 days are up and any one miner or pool publishes a >1MB block, hashpower ceases to be the question at all. The question becomes which chain node operators consider valid.

Do you then go on to predict that 100% of nodes will be running one version of the software (1mb limit) or the other (2mb limit)? Because if not, we will inevitably have an irreparable chain fork.

If you want to know whether a hard fork activating with 70% of hashing power can break bitcoin into multiple blockchains (presumably forever, as too much value will have changed hands to conceivably "roll back")... the answer is unequivocally YES.

1. Blockstream tell the doomsday scenario without the context that orphans also happen, and without the context that they themselves, by not adding in the buffer, will be the cause of their said doomsday. If even at a 25% minority they don't finally say "oh crap, there are going to be issues so we must adapt".. but instead try to hold strong and refuse to add a buffer, THEY will be the cause of their own nodes lagging behind.

In short, it's safer to have the setting there as a buffer and not need to use it for X time.. than to wait for X time and still refuse to add the buffer.

2. The hypocrisy is that they pretend segwit will allow more capacity.. but within months they will fill in any gained data space in the main block with their added new opcodes, new variables and new things like confidential transactions, which will add 250+ bytes of data to each transaction.. thus capacity DECREASES as a result of their roadmap. So they are not the cure for capacity. Especially if it takes a while for people to move to segwit, those late segwit adopters won't see the capacity advantage because confidential transactions will take it away again.

3. Even if there were a 2MB setting, miners could still make 1MB blocks.. there is no harm in small-blockers still making small blocks, but there is harm in nodes not allowing an excess buffer to cope with change. E.g. 2MB is backward compatible, as those nodes will still accept small blocks.. but 1MB is not future-proof and causes forks if things change and they have not adapted.

You make a lot of long-winded responses but they don't seem to convey a basic understanding of the protocol. Comparing everyday orphans to an intentional chain fork based on incompatible software is ridiculous. Whether the minority is "the cause of their own nodes lagging behind" does not address the fact that such a contentious hard fork could break bitcoin into multiple blockchains forever. You are assigning blame; I don't care who is to blame. I'm talking about the reality of a contentious hard fork resulting in multiple surviving blockchains.

Confidential transactions are a separate issue entirely. Firstly, it's incumbent on you to prove with data exactly how this will negatively affect capacity. More importantly, whether future features of the protocol add load to the system isn't relevant to the current question of capacity now: hard fork to 2mb or segwit? This is the current choice, and hard forking to 2mb cannot be argued to give much more capacity than segwit at all.

It doesn't matter if miners can produce 1MB blocks. If they are running node software with a 2MB limit, then if any miner produces a > 1MB block, the chain will fork. It doesn't matter if the vast majority of miners do not produce > 1MB blocks. All it takes is one miner with a small modicum of relative hashpower to produce a single block that violates the 1MB limit. This talk of a "buffer" is meaningless -- it only takes one 2MB block to be mined to show all your "miners can still make 1mb blocks" talk to be nonsense. Once a single 2MB block is mined, nodes will begin building two disparate chains.

Quote
WRONG. Miners can receive anything from 0 bytes to 2,000,000 bytes.. there is nothing forcing miners to be over 1MB and never below 1MB, nothing forcing 2MB-limit miners to reject under-1MB blocks.

It's 100% meaningless to say "miners aren't forced to accept big blocks" -- the operative issue is whether "nodes are forced to accept big blocks," and running a node with code for a 2MB block limit means the answer to that is YES. No one is "forced" to mine 2MB blocks any more than they are "forced" to mine 1MB blocks. That is not a safeguard from anything. The issue is that if any miner does produce a single block > 1MB, nodes running 1MB software will reject it as invalid and nodes running 2MB software will accept it, giving birth to a separate, surviving blockchain with different consensus rules. If you can't understand that, then I'm sorry, but you have some basic misunderstandings of the protocol.

Quote
If one miner did that? Sorry, but it won't happen.
One miner would need to make 700 out of 1000 blocks.. good luck trying that.
Secondly, even if the setting were activated.. the other dozen miners can still make small blocks; nothing is forcing any miner to make a bigger block.. they decide, as a human choice, to add more transactions. How many transactions per block is not a consensus rule, it's an individual preference.

Again, you are misunderstanding the basics of my argument. The argument is that 70%+ of miners would activate the rule change by mining 750 of the last 1000 blocks. After the 28-day grace period has passed, a hard fork becomes possible. Once the threshold has been activated and the grace period has passed, yes, it takes one miner mining one block to hard fork the protocol based on the new 2MB rules. The rest is up to how much of the network is comprised of nodes running the 1MB vs. 2MB rules. If 100% of nodes are running the 2MB rules, there is no risk of multiple surviving chain forks. If 50% of nodes are running 2MB rules, there is virtually a 100% chance of multiple surviving chain forks.

The size of a block is a consensus rule, whether that rule is 1MB or 2MB. It is not an individual preference to enforce the consensus rules. If a miner produces an otherwise valid block that is > 1MB, all 1MB nodes will reject it as invalid, hence creating a separate blockchain considered valid by those enforcing the 2MB rules.
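
The whole point reduces to a one-line validity check that differs between the two rulesets (illustrative only, not actual Core or Classic code):

Code:
# Illustrative only: the same block judged under two different consensus rules.
def block_is_valid(block_size_bytes, max_block_size):
    return block_size_bytes <= max_block_size

oversize_block = 1_100_000   # any block over the old limit, e.g. ~1.1 MB

print(block_is_valid(oversize_block, 1_000_000))  # 1MB node: False -> rejects it
print(block_is_valid(oversize_block, 2_000_000))  # 2MB node: True  -> builds on it

# From that block onward the two node populations track different chain tips,
# regardless of what fraction of miners "choose" to keep their blocks small.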

The real fun begins when people realize that it's not about temporarily convincing miners to mine on their preferred fork, as if the dominant fork will be decided in the 10 minutes following the hard fork. It's not about the hashing power -- that only decides when a fork can be triggered. After that, miners can only follow node operators. Well, what happens when 60% of nodes are running Classic and 40% are running Core, or vice versa? How about 50-50? I'll tell you what happens. Shit hits the fan for those that said a contentious hard fork is not a risk to bitcoin.

Quote
6. If there were an (unofficial) view that 50% of blocks are made by implementations that have the 2MB buffer.. then Blockstream should at least have (unofficial) discussions to start getting their act together. Remaining blind and not even discussing changes would be stupid on their part.

By 60%+ they would need to have started (hopefully finished) coding an implementation with 2MB, and before 70% make it publicly available. That way the setting is there before any OFFICIAL thresholds are hit, thus not causing problems for users.

But flatly refusing to even have the setting available to the community, no matter what, is just Blockstream being narrow-minded.

Blockstream does not control commit access to Core. That ad hominem is getting old. Coding an emergency increase to 2MB can be done in a couple of days or less. If Classic actually approached consensus thresholds -- 90%+ -- upping the limit so that Core nodes enforce the consensus rules at the appropriate time would be very simple and could be done quickly.

But there is no reason for them to acknowledge a 75% trigger as consensus. That is nothing "official." It's a number made up by Gavin. Whether you want to take the literal definition of consensus or the historical definition as it has applied to bitcoin forks, 75% is laughable. That trigger is literally made up from nothing other than transforming the definition of "consensus" into "democracy" -- there is zero precedent for it. Core devs have no responsibility to force Core node operators to submit to the will of the majority. In the highly unlikely event that Classic achieved consensus, I believe that Core would implement a fork to make Core compatible. Gavin would never allow his forks to activate at 95%, because he knows they have no chance in hell of achieving that. That doesn't mean it is incumbent on Core to acknowledge this re-definition of consensus as democracy. Quite the opposite, in fact. And their resolve to remain true to the consensus mechanism in the face of these never-ending political attacks is commendable.