Full article here:
http://cointimes.tech/2016/08/29/hard-forks-and-consensus-networks-meta-questions-and-limitations/

This article has been making the rounds today. Seems appropriate now that Gavin Andresen and Roger Ver are lobbying the industry with their "If we don't hard fork immediately, Bitcoin = MySpace" meme.
Some notable quotes:
Proponents of hard forks in Bitcoin and Ethereum have sought to replace the definition of consensus with that of “social consensus” – the idea that if most users agree with a certain plan, it should be enacted even if it breaks the consensus rules. The underlying logic is that if most people agree to a hard fork, the existing consensus can be subverted, and the majority can economically coerce the minority into migrating networks against their will.
Generally, this understates the risks associated with contentious hard forks by falsely assuming that only one blockchain will survive. Of note, opting in to the forked protocol does not revoke your consent to the original protocol’s rules, and individual users may seek to maximize the value of their tokens held on both chains.
After last month’s hard fork, Ethereum users now understand that it is very dangerous to assume that a minority fork will simply die. If users remain on the original network – suggesting existing demand for its newly minted tokens – mining the original chain remains rational. In this way, multiple blockchains emerge from a contentious hard fork. As the ETH/ETC debacle demonstrated, speculators can further challenge a hard fork by establishing market demand for the original chain’s token, ensuring a network split by incentivizing miners to secure the original network.
Perhaps more importantly, this method of governance is in direct contradiction with the basic security premises of Bitcoin (or any similar distributed consensus network). Even if we accept the practical argument that the fear of economic loss associated with mining/transacting on the minority chain is enough to force the minority to migrate to the hard-forked network, the idea should be opposed on philosophical grounds. When you opt in to the network, you and all participants enforce the consensus rules. This entails rejecting invalid blocks – not abandoning the consensus rules anytime 51% (or 75%) of miners tell you to. Such attempts to break consensus are an attack on the very idea of participating in a consensus network. If a majority of miners can coerce the network into abandoning the rules every user has agreed to, only by virtue of its hash power, then Nick Szabo is correct to call this “technologically equivalent to a 51% attack.”
For years, many Bitcoin users have complained about the lack of a “killer app” that promotes Bitcoin adoption to the mainstream. On the contrary, Bitcoin’s “killer app” was released at inception: math-based, censorship-resistant money on a decentralized inflation-controlled ledger. This is Bitcoin’s primary use case; this is what drives demand and serves as a basis for its value. The risk of a network split initiated by a contentious hard fork is a significant threat to that use case, and to the very idea of a cohesive, global ledger. Pieter Wuille elaborates:
“No matter how you determine the switchover date, there is no way of knowing when (and whether at all) everyone changes their full nodes (and perhaps other software), and even very high hash power votes cannot prevent an actual fork from appearing afterwards. At best, people lose the guarantee that their confirmations are meaningful (because at some point it becomes clear that the other side will get adopted, and they need to switch). At worst, a fork persists, and two partitions appear, in each of which you can spend every pre-existing coin. This defeats the primary purpose Bitcoin was designed for: double spend protection.”
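Wuille's worst case – two persistent partitions, each of which inherits every pre-fork coin – can be illustrated with a toy model. This is a sketch, not how any real node tracks UTXOs: both forks start from a copy of the shared pre-fork ledger, so the same coin can be spent independently (to different recipients) on each chain.

```python
# Toy model of a persistent chain split: both forks inherit the same
# pre-fork ledger, so every pre-existing coin is spendable once on
# EACH chain -- defeating the single-ledger double-spend guarantee.
# (Hypothetical coin/owner names; real UTXO tracking is far richer.)

pre_fork_utxos = {"coin-1": "alice", "coin-2": "bob"}

# Each fork continues from an independent copy of the shared history.
chain_a = dict(pre_fork_utxos)
chain_b = dict(pre_fork_utxos)

def spend(chain, coin, new_owner):
    """Transfer a coin on one fork; each fork validates independently."""
    assert coin in chain, "coin unknown on this chain"
    chain[coin] = new_owner

# Alice pays two different parties with the *same* pre-fork coin:
spend(chain_a, "coin-1", "carol")   # valid on fork A
spend(chain_b, "coin-1", "dave")    # equally valid on fork B

print(chain_a["coin-1"], chain_b["coin-1"])  # carol dave
```

Neither spend is invalid on its own chain – which is exactly why a persisting fork, rather than either individual transaction, is what breaks the double-spend guarantee.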
Readers should ask themselves: Do you believe that miners ought to be able to change the rules that we, the users, consented to? If the answer is yes, then you have imbued miners with the power of central banks. Non-mining node operators do not have identical interests to those of miners; non-mining nodes serve as a check on the power of miners. Refusing to trust miners and individually enforcing the protocol’s rules is an individual’s only protection against collusion by miners (or others) against him/her. In the absence of decentralized node validation, there is no effective difference between miners and central banks; there are no rules by which they must abide. If you grant miners authority over consensus rules, you have sacrificed the fundamental security provided by the full node security model – your money is no longer safe. It is tempting to use miner distribution as a voting mechanism, but it simply has no relation to user consent and thus, should have no bearing on the consensus rules.
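The full-node security model described above can be reduced to a very small sketch: a node checks every block against its own locally enforced consensus rules and rejects violations no matter how much hash power stands behind the block. The rule values and block fields below are illustrative only; real Bitcoin validation is vastly more involved.

```python
# Toy full-node validator: consensus rules are enforced locally,
# independent of how much hash power backs a block.
# (Illustrative constants and fields -- not real Bitcoin validation.)

MAX_BLOCK_SIZE = 1_000_000       # bytes (the pre-SegWit size rule)
MAX_BLOCK_REWARD = 1_250_000_000 # 12.5 BTC in satoshis (2016 subsidy)

def is_valid_block(block: dict) -> bool:
    """Accept a block only if it obeys *our* consensus rules."""
    if block["size"] > MAX_BLOCK_SIZE:
        return False              # oversized block: rejected
    if block["coinbase_value"] > MAX_BLOCK_REWARD:
        return False              # inflates the supply: rejected
    return True

# A 2 MB block backed by 75% of hash power is still rejected:
big_block = {"size": 2_000_000,
             "coinbase_value": 1_250_000_000,
             "miner_share": 0.75}  # hash power never enters the check
print(is_valid_block(big_block))   # False
```

Note that `miner_share` is never consulted: that is the point. Hash power decides transaction ordering within the rules, not the rules themselves.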
To be clear, retaining immutable consensus and therefore favoring soft fork solutions to protocol limitations does not mean that progress or development must stagnate. On the contrary, with regard to the block size debate, soft forks will allow for incredible improvements to both bandwidth and non-bandwidth scaling without the risks associated with hard forks. Instead of merely allowing more transaction throughput by increasing maxblocksize, we can drastically optimize transaction size to increase capacity through mechanisms like Schnorr signatures. Once malleability fixes are in place, the doors are opened for smart contracts that contribute via non-bandwidth scaling: Lightning Network will allow trustless contracts with no custodial risk, directly relieving pressure on mainchain throughput. MAST can further optimize the size of complex smart contracts. Mining pre-validation (weak blocks) can drastically reduce critical bandwidth, resulting in fast propagation / latency mitigation and “weak” confirmations for transactions, addressing concerns over mining centralization in the context of increased throughput. Improvements like committed bloom maps, batch validation and archive nodes can further reduce resource requirements for nodes, mitigating centralization pressures as throughput increases.
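As a rough back-of-envelope for the Schnorr point above: with cross-input signature aggregation, the per-input signatures in a transaction can collapse into a single aggregate signature. The byte sizes below are assumptions (a typical DER-encoded ECDSA signature is around 72 bytes; a Schnorr signature is 64 bytes), so treat this as an order-of-magnitude sketch, not exact wire-format accounting.

```python
# Back-of-envelope estimate of bytes saved by Schnorr cross-input
# signature aggregation. Sizes are assumed typical values, not an
# exact serialization of real Bitcoin transactions.

ECDSA_SIG_BYTES = 72    # typical DER-encoded ECDSA signature
SCHNORR_SIG_BYTES = 64  # one aggregated Schnorr signature

def sig_bytes_saved(num_inputs: int) -> int:
    """Bytes saved when n per-input ECDSA sigs become one Schnorr sig."""
    return num_inputs * ECDSA_SIG_BYTES - SCHNORR_SIG_BYTES

for n in (1, 2, 10):
    print(n, sig_bytes_saved(n))  # savings grow with input count
```

The savings scale with the number of inputs, which is why aggregation helps most for the large consolidation transactions that currently dominate block space.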
Brilliant scaling solutions are before us – solutions which will directly enhance capacity while mitigating the externalities created by increased throughput. Why would we break consensus simply to increase capacity? The idea is absurd!
Compare to Roger's arguments: