
Topic: The real disaster that could happen (forking Bitcoin)... (Read 4886 times)

legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
My understanding is that the Chinese miners are now not going to back Classic, so it is thankfully REKT, and therefore I feel no need to continue this (now thankfully unnecessary) topic.

I am going to go back to focusing on working in this field rather than trying to herd cats.
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
And imbues my focus with hysteria?   uhm, ok  Roll Eyes

As compared to you attempting to ridicule me for pointing out that this particular analysis is inapplicable to the topic under discussion? Certainly.

Quote
Meanwhile, anyone who actually gives a damn about understanding the issue will still have access to the link I posted.

Except, of course, that the linked material is irrelevant to 'the issue'.

eta:
Quote
I can only assume you didn't read the link to OrganOfCorti's blog I posted, since he demonstrates quite rigorously how that can, in fact, happen.

I can only assume you did not read it in its entirety, as in the comments OrganOfCorti admits that his analysis is off in its conclusions by roughly two orders of magnitude, and makes a still-unfulfilled promise that the post will be updated to fix this error.

I can only assume you did not read the follow-on info, which shows that it would take at least six years for any discernible chance that the 75% fork would falsely trigger at anything less than 70% actual hashrate.

But by all means, keep repeating the same inapplicable statement.
legendary
Activity: 4424
Merit: 4794
ok, a lot of people are discussing the consensus debate about separate chains, and not many seem to be using logic.

so here goes


A.
this scenario is where just one miner has the most hash power compared to any other single miner.
although it appears the miner has 27% of the hash power, in effect, because it is adding more transactions, the processing time is longer, so the 'feel' of the hashing speed is more like 24%. thus it's not going to be racing ahead making longer chains, and on average it only gets 1 chance in every 4 blocks to win.
thus with 90% consensus against it, that 1 block out of 4 would get orphaned, because the other 90% would ignore it, and the next block they add won't have the over-1MB block in its chain.

so there are 2 defenses: not enough hashpower to race ahead with over-1MB block heights, and not enough peer consensus (nodes running the same implementation).
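
here is a rough toy simulation of scenario A (my illustrative sketch only, using the hypothetical ~24% effective share from above; not real mining code):

Code:
import random

# hypothetical figures from scenario A: a lone miner with 27% of raw
# hashrate whose fuller blocks slow it to an effective ~24% share.
EFFECTIVE_SHARE = 0.24
BLOCKS = 100_000

random.seed(1)
accepted = orphaned = 0
for _ in range(BLOCKS):
    if random.random() < EFFECTIVE_SHARE:
        # the big-block miner wins this round, but ~90% of nodes reject
        # its over-1MB block and keep building on the old tip, so the
        # block is orphaned.
        orphaned += 1
    else:
        accepted += 1

print(f"blocks on the majority chain: {accepted}")
print(f"orphaned over-1MB blocks: {orphaned} (~1 in {BLOCKS / orphaned:.1f})")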

B.
this is where there is hashrate power of over 75%, but even in this case only 40% peer consensus. so with all 10 miners connecting together to show each other a solved block, 6 out of 10 would orphan off that over-1MB block. it's a bit more complicated to explain, but basically the end result is that eventually there is still just 1 chain of under-1MB blocks, once the miners communicate and sort out the orphans.
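
and a toy filter for scenario B (again my own sketch with made-up numbers; it ignores relay timing and difficulty, but shows why a node enforcing the 1MB rule ends up on a chain of only under-1MB blocks):

Code:
import random

random.seed(7)

# scenario B's hypothetical split: 75% of hashrate produces over-1MB
# blocks, but a node enforcing the 1MB rule never accepts them.
blocks = ["big" if random.random() < 0.75 else "small" for _ in range(20)]

# from that node's viewpoint the big blocks are orphans; its chain is
# just the subsequence of under-1MB blocks.
small_chain = [b for b in blocks if b == "small"]

print("all mined blocks:", blocks)
print("1MB-rule node's chain:", small_chain)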

C.
this is where there is 75% peer consensus and the hashrate is over 90%. in this case the majority wins on both counts, and the minority (under-1MB) gets left behind with a chain that does not sync up.

remember: the 2MB rule allows both under-1MB blocks and over-1MB blocks. so even if 2MB consensus has not been reached, the 2MB implementations would still happily accept blocks valid under the 1MB rules. there would not be any hard forks, as orphaning will rejoin any disputing blocks.
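
to illustrate the buffer idea (a minimal sketch with hypothetical names and numbers, not actual bitcoin code):

Code:
ONE_MB = 1_000_000
TWO_MB = 2_000_000

def size_ok(block_size: int, limit: int) -> bool:
    """a block passes the size rule if it fits within the node's limit."""
    return 0 < block_size <= limit

# every block that passes the old 1MB rule also passes the 2MB rule, so a
# 2MB node accepts today's blocks unchanged; the extra room sits unused
# until (and unless) miners ever produce bigger blocks.
for size in (300_000, 990_000, 1_500_000):
    print(size, "bytes | 1MB rule:", size_ok(size, ONE_MB),
          "| 2MB rule:", size_ok(size, TWO_MB))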

however, old 1MB implementations, if they lose dominance on both counts, will end up:
1. having a chain that doesn't sync up, because it rejects the majority of blocks
2. with only a limited number of peer nodes on their alternative chain
3. with minority miners unable to spend their mining rewards, as their blocks are rejected
4. having to upgrade to regain sync ability and earning potential.

but as i say, this is only a concern for those that don't upgrade to a 2MB implementation, and only then if scenario C came into play.

so there is no issue in users having a 2MB implementation, where the setting just sits there as a buffer, doing nothing until scenario C comes about.
what is more disastrous is that bitcoin-core refuses to code a 2MB implementation, which means that if scenario C came about, the entire community would be left behind with unsync-able nodes.

so just as a line of safety for the possibility of something happening in the next 3 years, it's far better to allow users to have the setting there, just as a buffer, than to refuse to give people the buffer and scream that the disaster is someone else's fault.

as i said, having the setting for users is not a call to arms. it's not a red nuclear button. it's just a setting that sits there as a buffer, just like the 1MB limit was just a buffer while miners were making only 0-500KB blocks in 2009-2013.

miners' logic and psychology show that they won't risk adding more transactions (increasing processing time) if they think their blocks will get orphaned, so they are not going to jump the gun. it's not in their financial interest to do so unless the community is ready to accept their blocks, so that they can at least spend their rewards.
and to anyone trying to imply nefarious intent on the part of some miners, i'll refer you to scenario A, and possibly B (if collusion and nefarious intent are combined). but in general, long-term reward earnings outweigh the possibility of short-term nefarious gains. so relax

now onto bitcoin-core
we all know bitcoin-core is a full node client. anyone interested in being a full node in april will need to upgrade to retain full node status, due to segwit. otherwise they are automatically letting their non-upgraded clients become "compatible" nodes, not full validation nodes.

so knowing full nodes will be upgrading just to keep full node status, i think the April update should also include the 2MB setting buffer. that way, whether 6 months or 3 years later, whenever the miners feel comfortable bringing about scenario C, it won't impact the community, because the community is already prepared.

remember, the 2MB rule is not a rule that says "you have to accept blocks over 1MB". it's not a nuclear launch code of bloat.
it's a buffer setting of "anything from 0 bytes to 2MB", which includes normal blocks as they stand today.
again, just like the 1MB was a buffer while miners were happily below 500KB a few years back

so while miners stick safely to their 1MB rule, the community can happily be prepared for the 2MB possibilities of the future.
if bitcoin-core refuses to prepare the community before miners act, then it would be bitcoin-core's fault that users are unable to sync once consensus has been reached.

i'll just leave this image here for Lauda

legendary
Activity: 1066
Merit: 1098
Meanwhile, anyone who actually gives a damn about understanding the issue will still have access to the link I posted.

Many understand the statistics; that's why the conservative value of 75% was chosen, so you can't claim the change is triggered by a minority, or, to be exact, that there is a real chance of this happening by variance.


At anything less than 70% of steady hashrate, triggering a fork would take at least 6 years, and gets exponentially less likely as miner share decreases.


I can only assume you didn't read the link to OrganOfCorti's blog I posted, since he demonstrates quite rigorously how that can, in fact, happen.

http://organofcorti.blogspot.com/2015/08/bip101-implementation-flaws.html
sr. member
Activity: 294
Merit: 250
Meanwhile, anyone who actually gives a damn about understanding the issue will still have access to the link I posted.

Many understand the statistics; that's why the conservative value of 75% was chosen, so you can't claim the change is triggered by a minority, or, to be exact, that there is a real chance of this happening by variance.


At anything less than 70% of steady hashrate, triggering a fork would take at least 6 years, and gets exponentially less likely as miner share decreases.
legendary
Activity: 3430
Merit: 1142
Ιntergalactic Conciliator
I will add to this a real bank run on the exchanges and cloud wallets like Coinbase and blockchain.info. The media will be really happy to report that their corrupt fiat world's failures also apply to a future system like bitcoin.
After that bank run drama, I am very sure that the media will again tell all the global citizens why we must keep the banking system and centrally controlled governance.
Some more points on what will happen if this fork is near:

1. Bank run on cloud wallets and exchanges
2. Collapse of mining for a short or long period of time
3. Collapse of Bitcoin's value
4. After this disaster I am very sure the core developers will leave the project, and Bitcoin will remain in the hands of a small developer group
5. Core developers will transform the old bitcoin into a new bitcoin mining system without ASICs
6. After that we will have two bitcoin versions, each with 21 million coins, without any value at all or with a penny of value each, especially for the first months
7. The nodes will shrink to a few data centers, because no one will want to keep running a blockchain system that takes up so much space
8. Collapse of the decentralized Bitcoin system and the beginning of centrally controlled digital money

In my opinion, developers like Gavin Andresen are great minds at creating code but poor minds at understanding how complex economic systems work.
It is in our hands, guys, whether we destroy the first global social experiment.
legendary
Activity: 1066
Merit: 1098
OrganOfCorti's post seems to focus upon mathematics, yours seems to focus upon hysteria.
Dude, I posted a link.  Who's the hysterical one here again?

A link, coupled with a statement implying that the material in the link was applicable to the particular 75% fork threshold being discussed, and that it necessarily indicated a "problem":

Quote
If you are really interested in why 75% is not enough, the definitive answer...

And imbues my focus with hysteria?   uhm, ok  Roll Eyes

You obviously are not interested in any kind of a conversation here, so I will just leave you to what you were doing...

Meanwhile, anyone who actually gives a damn about understanding the issue will still have access to the link I posted.

legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
OrganOfCorti's post seems to focus upon mathematics, yours seems to focus upon hysteria.
Dude, I posted a link.  Who's the hysterical one here again?

A link, coupled with a statement implying that the material in the link was applicable to the particular 75% fork threshold being discussed, and that it necessarily indicated a 'problem':

Quote
If you are really interested in why 75% is not enough, the definitive answer...
legendary
Activity: 1066
Merit: 1098
OrganOfCorti's post seems to focus upon mathematics, yours seems to focus upon hysteria.

Dude, I posted a link.  Who's the hysterical one here again?

legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
If you are really interested in why 75% is not enough, the definitive answer is found here:

http://organofcorti.blogspot.com/2015/08/bip101-implementation-flaws.html

Because the stated 75% criterion produces a negligible yet discernible chance that we get a false trigger at an actual 67% adoption rate due to variance? Yawn. Troll harder.

Uhm, no.  He shows pretty conclusively that

Quote
5. Summary
As it stands, the BIP101 has implementation flaws that could cause BIP101 activation with a significantly sub-supermajority, or (in the presence of fake BIP101 voters) a minority. It is almost certain that if BIP101 is activated, it will be with a sub-supermajority, or even a minority.

It also allows true proportions of fake voters to be sufficiently low that it becomes quite possible for one large mining pool or a couple of smaller ones in collusion contributing fake BIP101 votes to cause premature BIP101 activation.

Emphasis is mine.  If you want to present math that disproves OrganOfCorti's, feel free.  Calling me a troll for pointing out OrganOfCorti's excellent blog post is just childish.

It is not so much the math, but interpreting what it means in real terms. OrganOfCorti's post seems to focus upon mathematics, yours seems to focus upon hysteria. Shall we analyze this together?

The damning part of the analysis is specific to BIP 101, and assumes 'vote spoofing' on the part of nefarious actors. That is fine, and important in regards to analyzing the BIP 101 situation. However, you present it as if it were universally applicable to any 75% proposal. 'Flaw #3' is inapplicable to any situation where the tabulation of 75% is strictly based upon hash power.

So yes, it seems to me that you are trolling.

Even so, there is an error, admitted in the comments and off by roughly two orders of magnitude, that has gone unaddressed for months.

Quote
Anonymous31 August 2015 at 03:46
> The number of failure attempts before a success occurs in trials of this type is called a geometrically distributed random variable, and can be used to find the probability of some arbitrary true proportion resulting in more than 749 blocks of a sequential 1,000, after that true proportion has been present for some number of blocks.

This is incorrect, as overlapping sequences are extremely correlated. Treating overlapping sequences as independent trials will massively overestimate the chances of success. The expected time for a 0.7 proportion of hashrate to result in a 0.75 proportion of blocks is closer to 300,000.

http://bitco.in/forum/threads/triggering-the-bip101-fork-early-with-less-than-75-miners.13/

Reply

Organ Ofcorti31 August 2015 at 16:18
Yes and I feel a bit silly about missing that! I realised it after a redditor commented: https://www.reddit.com/r/Bitcoin/comments/3ilwq1/bip101_implementation_flaws/cuhy71q

I'll be posting an update after I get the weekly stats out. I haven't had time to figure out an analytical approach, but I'll generate some nice plots based on simulations.

More to the point, the amount of time it would take for variance to produce a false trigger matters. The other analysis linked in the quoted comment above puts it thus: "TL;DR: At anything less than 70% of steady hashrate, triggering a fork would take at least 6 years, and gets exponentially less likely as miner share decreases."
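
For those who want to probe this themselves, here is a quick Monte Carlo sketch of my own construction (not OrganOfCorti's code; it assumes a steady per-block signalling probability and ignores hashrate drift):

Code:
import random
from collections import deque

def blocks_until_trigger(p, window=1000, threshold=750,
                         max_blocks=500_000, seed=0):
    """Count blocks mined until `threshold` of the last `window` blocks
    signal, where each block signals independently with probability p."""
    rng = random.Random(seed)
    recent = deque(maxlen=window)
    signalling = 0
    for n in range(1, max_blocks + 1):
        if len(recent) == window:
            signalling -= recent[0]   # vote about to fall out of the window
        vote = 1 if rng.random() < p else 0
        recent.append(vote)
        signalling += vote
        if len(recent) == window and signalling >= threshold:
            return n
    return None

for p in (0.70, 0.73, 0.75):
    hit = blocks_until_trigger(p, seed=42)
    print(f"p={p}:", f"trigger after {hit} blocks" if hit
          else "no trigger within 500,000 blocks")

Even a few points of true hashrate below the threshold stretch the expected wait enormously, which is the shape of both the quoted TL;DR and the 300,000-block estimate above.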

Even the bulk of Core devs seem to claim that it will be necessary in the not-too-distant future to increase the block size. Just not now. For some unstated reason. If variance results in a trigger after Core would have already increased the block size anyhow, then the trigger is a non-event. No chain fork results. So what?
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
Yes, they can. And they must process more than 1MB of data in order to validate what they are calling a 1MB block. The difference in quantity of data, of course, being the amount of data in the signatures.
That was what I initially said. This depends on the exact quantity of data, though: just how much more are we talking about here?
So tell me again why this requirement for more data does not lead to node centralization, when many SegWit boosters (perhaps not yourself - I can't keep track) rely upon 'increasing the block size to 2MB will lead to node centralization' as one of their strongest arguments? Or at least as their only argument for their claim that a simple block size increase is unsafe?
I don't rely on that argument but I have surely used it a number of times (can't recall all the discussions). I'm just not aware of an estimated factor of increase and have chosen to ignore this information until I have one. Has someone done the math?

In Wuille's presentation at Scaling Bitcoin Hong Kong, I believe* he stated that, looking at recent transactions, the current scaling factor would seem to be about 1.75x. Or that signature data is slightly less than half the block data.

*my memory is at times faulty. But I'll stick to a claim that some prominent Core dev stated that analysis of recent blocks led to this figure.

Others I have seen use a 4x figure - totally dependent upon an assumption that multisig becomes a much larger portion of the transaction volume.

For this little branch of the discussion, the salient point is that any node that performs validation (i.e. any node operating in a trustless manner) must process not the claimed 1MB block size worth of data, but an amount of data that reflects the signature data as well - 1MB multiplied by this (instantaneous) scaling factor, be it 1.75x, 4x, or whatever else represents the proportion of signature data associated with the transactions included in that block.
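
Putting rough numbers on it (a back-of-the-envelope sketch; both factors are estimates quoted above, not measurements of mine):

Code:
BASE_BLOCK_MB = 1.0

# scaling factors mentioned in the discussion: ~1.75x from analysis of
# recent transactions, ~4x if multisig grows to dominate volume.
for factor in (1.75, 4.0):
    print(f"claimed block size: {BASE_BLOCK_MB:.2f} MB -> "
          f"data a validating node processes: ~{BASE_BLOCK_MB * factor:.2f} MB "
          f"(factor {factor}x)")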
legendary
Activity: 1066
Merit: 1098
If you are really interested in why 75% is not enough, the definitive answer is found here:

http://organofcorti.blogspot.com/2015/08/bip101-implementation-flaws.html

Because the stated 75% criterion produces a negligible yet discernible chance that we get a false trigger at an actual 67% adoption rate due to variance? Yawn. Troll harder.

Uhm, no.  He shows pretty conclusively that

Quote
5. Summary
As it stands, the BIP101 has implementation flaws that could cause BIP101 activation with a significantly sub-supermajority, or (in the presence of fake BIP101 voters) a minority. It is almost certain that if BIP101 is activated, it will be with a sub-supermajority, or even a minority.

It also allows true proportions of fake voters to be sufficiently low that it becomes quite possible for one large mining pool or a couple of smaller ones in collusion contributing fake BIP101 votes to cause premature BIP101 activation.

Emphasis is mine.  If you want to present math that disproves OrganOfCorti's, feel free.  Calling me a troll for pointing out OrganOfCorti's excellent blog post is just childish.

legendary
Activity: 2674
Merit: 3000
Terminated.
Yes, they can. And they must process more than 1MB of data in order to validate what they are calling a 1MB block. The difference in quantity of data, of course, being the amount of data in the signatures.
That was what I initially said. This depends on the exact quantity of data, though: just how much more are we talking about here?
So tell me again why this requirement for more data does not lead to node centralization, when many SegWit boosters (perhaps not yourself - I can't keep track) rely upon 'increasing the block size to 2MB will lead to node centralization' as one of their strongest arguments? Or at least as their only argument for their claim that a simple block size increase is unsafe?
I don't rely on that argument but I have surely used it a number of times (can't recall all the discussions). I'm just not aware of an estimated factor of increase and have chosen to ignore this information until I have one. Has someone done the math?
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
If you are really interested in why 75% is not enough, the definitive answer is found here:

http://organofcorti.blogspot.com/2015/08/bip101-implementation-flaws.html

Because the stated 75% criterion produces a negligible yet discernible chance that we get a false trigger at an actual 67% adoption rate due to variance? Yawn. Troll harder.
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
(bold not in original)

Please define what you mean by 'upgraded client'.
A client that supports Segwit after the activation occurs. Those clients can download and validate the data.

Yes, they can. And they must process more than 1MB of data in order to validate what they are calling a 1MB block. The difference in quantity of data, of course, being the amount of data in the signatures.

So tell me again why this requirement for more data does not lead to node centralization, when many SegWit boosters (perhaps not yourself - I can't keep track) rely upon 'increasing the block size to 2MB will lead to node centralization' as one of their strongest arguments? Or at least as their only argument for their claim that a simple block size increase is unsafe?
legendary
Activity: 1066
Merit: 1098
At the start, Satoshi chose 32MB as the consensus rule, then changed it to a 1MB consensus rule; now it can be changed to 2MB if the majority chooses. If you don't want that, you're free to use 1MB the same way people were free to choose to continue using 32MB before. No one forces you anywhere, but you force me to keep using 1MB. Do you see the difference between dictatorship versus freedom here?
No. Stop with the "majority" nonsense when talking about 75%. For an upgrade of the network we need 'almost everyone' (obviously you can't achieve 100%, but you can come close to it) on board, else you break consensus. You are essentially forcing 1/4 of the current network (and this is a lot of people, hashpower and possibly even more merchants) to join the fork or be left on a slow and dying chain. This is not freedom.

Could you link me to the source of the boldface above? I keep searching https://bitcoin.org/bitcoin.pdf and getting zero hits.
Perhaps alternative spelling?
Or are you simply making stuff up/using air quotes?

If you are really interested in why 75% is not enough, the definitive answer is found here:

http://organofcorti.blogspot.com/2015/08/bip101-implementation-flaws.html
legendary
Activity: 2674
Merit: 3000
Terminated.
(bold not in original)

Please define what you mean by 'upgraded client'.
A client that supports Segwit after the activation occurs. Those clients can download and validate the data.

If such a client is getting a 'capacity boost', the only way this can be accomplished is by that node ignoring signature data. Ignoring signature data in and of itself makes that node dependent upon others to perform validation. Accordingly, such a node cannot operate in a trustless manner. It is insecure.
I never said that old nodes would be secure, did I?
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
I can't tell what is subject and what is object in your reply. But if I have your words parsed properly, then I believe you are making a false statement. Let me try again.
My post is valid. In Segwit, transacting between upgraded clients becomes more efficient; there is no increase in capacity for nodes that have not upgraded (i.e. non-Segwit nodes). They are able to receive the data but are unable to validate it. If a client is able to validate it, then it is a Segwit node. Not sure if this was changed in any way since the last time I read about it (there's also a proposal in regards to it by Peter Todd which I've yet to fully read).

(bold not in original)

Please define what you mean by 'upgraded client'.

If such a client is getting a 'capacity boost', the only way this can be accomplished is by that node ignoring signature data. Ignoring signature data in and of itself makes that node dependent upon others to perform validation. Accordingly, such a node cannot operate in a trustless manner. It is insecure.
legendary
Activity: 2674
Merit: 3000
Terminated.
I can't tell what is subject and what is object in your reply. But if I have your words parsed properly, then I believe you are making a false statement. Let me try again.
My post is valid. In Segwit, transacting between upgraded clients becomes more efficient; there is no increase in capacity for nodes that have not upgraded (i.e. non-Segwit nodes). They are able to receive the data but are unable to validate it. If a client is able to validate it, then it is a Segwit node. Not sure if this was changed in any way since the last time I read about it (there's also a proposal in regards to it by Peter Todd which I've yet to fully read).
legendary
Activity: 3080
Merit: 1688
lose: unfind ... loose: untight
Yes. Seriously. Did you buy this account?
Would you like me to create a signed message using the 1ciyam address to prove it?

No need. I'll interpret this merely as you replying 'No'.

It is just that I remember interactions in years past in which I thought you typically provided solid reasoning. Your posts of the last month or so don't seem so to me - more along the lines of axioms stated as proven fact, followed by conclusions with no intervening reasoning. Sorry - just calling it as I see it.