
Topic: can someone point me to hard (objective) technical evidence AGAINST SegWit? (Read 2597 times)

legendary
Activity: 4214
Merit: 4458
Also interesting to note that bitcoin switched back to BerkeleyDB after bugs in LevelDB were found and people's databases were getting corrupted...

impossible (sarcasm)
core are immortal and indestructible gods.
they are needed or bitcoin would die

core never makes mistakes. and everyone should (sarcasm again) run core, get rid of diversity, give core their Tier network and then bow down to the overlord of perfection.

P.S. sipa was involved in the 2013 transition from BerkeleyDB to LevelDB and didn't spot the lock bug. hmm. sipa, oh yeah, the head honcho of segwit..
hero member
Activity: 686
Merit: 504
When you say "block chain protocol hard fork", you likely mean "bitcoin consensus protocol changes". Which is exactly what happened in 2013, on an emergency basis. The errant code (due to levelDB vs. BerkeleyDB locking differences) validated malformed transactions and blocks, using 60% of the global hashpower. The 60% was overwhelming the correct blocks, and people noticed a large deviation in broadcasted blocks.
  As a fix, the blockchain was voluntarily rolled back in time by 12 hours, new blocks were built on top of it with a patched miner, and a ton of broken blocks were discarded and invalidated. (These blocks incidentally could be considered orphaned)

Also interesting as you point out, that the block size was limited to 512K during this bugfix.

My question is: was any aspect of the block chain itself modified ?  I have the impression (didn't look into the details) that this was an internal bug in core code not respecting the correct construction of BLOCKS and accepting them as valid *while in fact they weren't*.  So one just found out somewhat late that one had been building *an invalid chain*, but discarding (even late) an invalid chain is not the same thing as *modifying how a chain should be built* (THAT is a hard fork in protocol).


the block structure and chain were fine. none of the bitcoin consensus rules were broken.

the bug was in how the blocks got saved to people's hard drives in their local databases.
people using (at the time) LevelDB could save the blockchain.. but those who hadn't upgraded off BerkeleyDB had issues saving the blockchain, because blocks growing beyond 500k used up all the locks of their BerkeleyDB (even though consensus had a 1mb limit)..

so it was not about consensus rules. but a bad database issue


What's interesting is that Gavin stated at the time:

Code:
With the insufficiently high BDB lock configuration, it implicitly had become a network consensus rule determining block validity 
(albeit an inconsistent and unsafe rule, since the lock usage could vary from node to node).
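The situation Gavin describes can be sketched in a few lines (a hypothetical simplification, not actual Bitcoin Core or BDB code — `max_locks` and `pages_touched` are illustrative stand-ins for the real lock configuration):

```python
# Hypothetical sketch: a node whose database can take at most `max_locks`
# page locks while connecting a block.  Whether the *same* block is
# "valid" then differs between nodes with different limits -- an
# inconsistent, unsafe, implicit consensus rule.

def connect_block(pages_touched: int, max_locks: int) -> bool:
    """Return True if the block can be applied within this node's lock budget."""
    return pages_touched <= max_locks

big_block_pages = 12_000  # pages a large (>500 KB) block might touch (made-up number)

old_node_accepts = connect_block(big_block_pages, max_locks=10_000)     # BDB-style cap
new_node_accepts = connect_block(big_block_pages, max_locks=1_000_000)  # LevelDB: effectively unlimited

print(old_node_accepts, new_node_accepts)  # False True -> the two nodes split
```

The point of the sketch: no consensus rule in the source code was violated, yet a local resource limit behaved exactly like one.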

I guess it boils down to semantics... nonetheless it's very interesting to think about what would've happened if not everybody agreed on the solution to the fork. For example, the 0.7 fork could've been sustained. I suppose in that case the 0.8 nodes would've lost coins, or bitcoin could've split. Also interesting to note that bitcoin switched back to BerkeleyDB after bugs in LevelDB were found and people's databases were getting corrupted...
hero member
Activity: 770
Merit: 629
When you say "block chain protocol hard fork", you likely mean "bitcoin consensus protocol changes". Which is exactly what happened in 2013, on an emergency basis. The errant code (due to levelDB vs. BerkeleyDB locking differences) validated malformed transactions and blocks, using 60% of the global hashpower. The 60% was overwhelming the correct blocks, and people noticed a large deviation in broadcasted blocks.
  As a fix, the blockchain was voluntarily rolled back in time by 12 hours, new blocks were built on top of it with a patched miner, and a ton of broken blocks were discarded and invalidated. (These blocks incidentally could be considered orphaned)

Also interesting as you point out, that the block size was limited to 512K during this bugfix.

My question is: was any aspect of the block chain itself modified ?  I have the impression (didn't look into the details) that this was an internal bug in core code not respecting the correct construction of BLOCKS and accepting them as valid *while in fact they weren't*.  So one just found out somewhat late that one had been building *an invalid chain*, but discarding (even late) an invalid chain is not the same thing as *modifying how a chain should be built* (THAT is a hard fork in protocol).


the block structure and chain were fine. none of the bitcoin consensus rules were broken.

the bug was in how the blocks got saved to people's hard drives in their local databases.

That's what I thought too.  So it doesn't matter, it is internal business of core code, has nothing to do with the block chain protocol.  Someone having written his own version of bitcoin code, implementing the bitcoin rules, would already have stalled from the first bad block onwards, just to resume when core got its software in agreement with the protocol again, under the assumption that all miners are running core.

For instance, suppose that because of a bug in core software, the coinbase of a block jumps to 200 coins per block, and that this goes on for a month.  During this month, in fact, these blocks are simply *wrong* but are nevertheless, erroneously, accepted.  In fact, the error made a hard fork.  When the bug is *repaired*, suddenly, all these blocks are correctly seen as invalid, and the chain is seen as stalled a month ago.  So now, people can start building blocks upon the stalled, correct chain.   If that happens, I don't call it a hard fork, because the true protocol is still valid on the rebuilt chain. (while in fact it wasn't on the erroneous chain for more than a month).

coinbase jumping to 200 coins for a month??

well if everyone (your utopia) was running the exact same core software.. then it would go on for months.
but if nodes were diverse, then they would orphan off the block because it breaks the real consensus rule of 12.5btc right now. so if the network were diverse enough that no majority ran the buggy core, those blocks would get orphaned

Point is, if all the miners were running that code, then there wouldn't be any other block chain around.  So all those diverse nodes would simply come to a halt, until the miners started making correct blocks again on the last one that was correct.  All the time they were making an erroneous block chain, no good one was around, and those diverse nodes would simply stop for that time, not finding one correct block on the network.

Quote
and the pools running non-core won't be making 200-coin blocks. so the network won't be running for a month of 200 coins.

The POOLS, yes.  If they were not running core, they would make good blocks, but not many, because they would have very low hash rate if the majority was wasting their hash rate on erroneous blocks, and difficulty can only adjust after 2016 blocks, so there would then be a good block every few hours or so.  I made the assumption that all miners (say, the 20 of them) were running core.

Quote
but in your utopia of everyone running core, then yea, expect more issues where after a month those 4032 blocks of 200 coins each would eventually get orphaned, and anyone making transactions during that month will see their transactions of that month disappear. causing merchants who accepted those 4032 blocks and thought they got paid to find out they no longer got paid, because their customers' transactions of the month no longer exist.

Indeed.  The joys of non-bilateral hard forks Smiley
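The figures in the scenario above can be checked quickly (the 200-coin reward is the hypothetical from the posts, not anything that happened):

```python
# Blocks in four weeks at one block per ten minutes, and the excess coins
# that would be minted if each block paid 200 BTC instead of 12.5 BTC.

blocks_per_day = 24 * 60 // 10          # 144
blocks_per_month = blocks_per_day * 28  # 4032, the number quoted above

excess_per_block = 200 - 12.5
total_excess = blocks_per_month * excess_per_block

print(blocks_per_month)  # 4032
print(total_excess)      # 756000.0 excess coins, all vanishing when orphaned
```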
hero member
Activity: 770
Merit: 629

Actually you do have to keep the old chain; checkpoints stop any reorgs before them, but they don't hold the transaction data.
If you figure out a way to add a checkpoint and have it truly be the new genesis block while keeping everyone's correct amount in their wallets.

That is really not difficult on a transparent chain you know !

The miner who makes the block that is going to be the check point simply also includes a public key of which only he has the secret key, makes a new "Genesis block" and signs it cryptographically with his secret key.  That new block is a big genesis block, with "premine UTXO" as a very big coinbase, such that the "new coin" gives coinbase coins to all previously existing UTXO.  Once he has published both the normal block that is going to be the checkpoint (a small, normal block that lets him compete normally with other miners, but with his public key inside) and, somewhat later, his new genesis block (signed with his secret key), all nodes and miners can CHECK that he didn't cheat in making this genesis block (in other words, that all the UTXO in the previous chain up to that point are correctly carried over).  He will get a bigger reward in that genesis block.  The same mechanism that secures the check point then validates the new genesis block.

From that point onward, you can forget the old chain.  Well, you can keep it a while if you want to check the validity of the genesis block for yourself. But if the check point was hard, that's in any case equivalent to accepting the chain up to that point: one can just as well accept the new genesis block.

Hup, 150 GB liberated.  
I'm interested in this topic.
How does this differ from simply launching a new coin with a monetary base that is pre-loaded with a previous currency's balance sheet?

Also, pruning has worked since 0.12; how is this different? I suppose that pruned nodes are not fully trusted.

Well, it is of course pruning, but with a "clean new genesis block", that can be taken as a new chain start, or as a "summary" of the previous chain.

The pruning in bitcoin is about the internal data structure of the running node, not about the distributed block chain.  I don't think the bitcoin block chain has "big pruning blocks".  You can simply choose to not download the whole of the chain if you want to, but the actual block chain is still the big and growing one.

With a new genesis block, you can simply throw away the old block chain as if it were a new coin, it is part of the block chain protocol by itself.

Note that it can be designed such that the blocks that followed upon the "check point block" are also the first valid blocks that follow upon the genesis block (same "previous hash" : the genesis block is defined that way), so that in fact you can choose to continue using the old chain, or you can choose starting from the new genesis block.

You could program such an official "check point" every year, or every week.   If you do it every week, you have a kind of "account updates", and the actual chain never needs to be longer than a week.  Although, for your comfort, you can hold a few weeks' worth of block chain.

The nice thing about the *synchronized* version of pruning (every week, at a specific block) is that everybody is using the same "account basis". 

Whether it has to be every week or every year depends on the ratio between the number of transactions per week, and the number of accounts "live". 

Hell, you could even put a lower limit on account contents, and clean out "dust".  It would mean that accounts containing less than a certain amount (say, less than the minimum fee) are NOT taken in the new genesis block, eliminating dust and the burden that goes with it.

This is already a much more scalable system than the full block chain (it puts a somewhat bigger burden on network capacity, although there is no hurry in getting the genesis block ; you can jump a few genesis blocks if you want to).

In fact, it is a transition from an "only transactions" system to an "account" system.
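The snapshot scheme described above can be sketched in a few lines (a hypothetical design, not an existing Bitcoin feature — the dust limit, the JSON serialization, and the field names are all illustrative choices):

```python
# Minimal sketch of the weekly "snapshot genesis block" idea: collapse the
# UTXO set at the checkpoint into one big coinbase-like block, dropping
# dust outputs below a minimum value, and commit to the content by hash.

import hashlib
import json

DUST_LIMIT = 1000  # satoshi; outputs below this are not carried over (assumption)

def make_snapshot_block(utxos: dict, checkpoint_hash: str) -> dict:
    """utxos maps "txid:vout" -> (script, value_in_satoshi)."""
    carried = {k: v for k, v in sorted(utxos.items()) if v[1] >= DUST_LIMIT}
    body = json.dumps({"prev": checkpoint_hash, "utxos": carried}, sort_keys=True)
    return {"prev": checkpoint_hash,               # same "previous hash" as the
            "utxos": carried,                      # checkpoint, so old and new
            "hash": hashlib.sha256(body.encode()).hexdigest()}  # chains both continue

utxos = {"aa:0": ("script_a", 50_000),
         "bb:1": ("script_b", 120),      # dust: eliminated in the snapshot
         "cc:0": ("script_c", 7_000)}
snap = make_snapshot_block(utxos, checkpoint_hash="00" * 32)
print(sorted(snap["utxos"]))  # ['aa:0', 'cc:0'] -- the 120-satoshi output is gone
```

Any node can recompute the same snapshot from its own copy of the chain and compare hashes, which is what makes the "account basis" a shared one.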
legendary
Activity: 2618
Merit: 1252
What is the status of the "lets do nothing camp"?

They are still running the original Satoshi client from January 3, 2009.  Grin Grin Grin
legendary
Activity: 4214
Merit: 4458
When you say "block chain protocol hard fork", you likely mean "bitcoin consensus protocol changes". Which is exactly what happened in 2013, on an emergency basis. The errant code (due to levelDB vs. BerkeleyDB locking differences) validated malformed transactions and blocks, using 60% of the global hashpower. The 60% was overwhelming the correct blocks, and people noticed a large deviation in broadcasted blocks.
  As a fix, the blockchain was voluntarily rolled back in time by 12 hours, new blocks were built on top of it with a patched miner, and a ton of broken blocks were discarded and invalidated. (These blocks incidentally could be considered orphaned)

Also interesting as you point out, that the block size was limited to 512K during this bugfix.

My question is: was any aspect of the block chain itself modified ?  I have the impression (didn't look into the details) that this was an internal bug in core code not respecting the correct construction of BLOCKS and accepting them as valid *while in fact they weren't*.  So one just found out somewhat late that one had been building *an invalid chain*, but discarding (even late) an invalid chain is not the same thing as *modifying how a chain should be built* (THAT is a hard fork in protocol).


the block structure and chain were fine. none of the bitcoin consensus rules were broken.

the bug was in how the blocks got saved to people's hard drives in their local databases.
people using (at the time) LevelDB could save the blockchain.. but those who hadn't upgraded off BerkeleyDB had issues saving the blockchain, because blocks growing beyond 500k used up all the locks of their BerkeleyDB (even though consensus had a 1mb limit)..

so it was not about consensus rules. but a bad database issue

devs didn't peer review what would happen in such cases.

For instance, suppose that because of a bug in core software, the coinbase of a block jumps to 200 coins per block, and that this goes on for a month.  During this month, in fact, these blocks are simply *wrong* but are nevertheless, erroneously, accepted.  In fact, the error made a hard fork.  When the bug is *repaired*, suddenly, all these blocks are correctly seen as invalid, and the chain is seen as stalled a month ago.  So now, people can start building blocks upon the stalled, correct chain.   If that happens, I don't call it a hard fork, because the true protocol is still valid on the rebuilt chain. (while in fact it wasn't on the erroneous chain for more than a month).

coinbase jumping to 200 coins for a month??

well if everyone (your utopia) was running the exact same core software.. then it would go on for months.
but if nodes were diverse, then they would orphan off the block because it breaks the real consensus rule of 12.5btc right now. so if the network were diverse enough that no majority ran the buggy core, those blocks would get orphaned

and the pools running non-core won't be making 200-coin blocks. so the network won't be running for a month of 200 coins.

but in your utopia of everyone running core, then yea, expect more issues where after a month those 4032 blocks of 200 coins each would eventually get orphaned, and anyone making transactions during that month will see their transactions of that month disappear. causing merchants who accepted those 4032 blocks and thought they got paid to find out they no longer got paid, because their customers' transactions of the month no longer exist.

this is why diversity matters

hero member
Activity: 770
Merit: 629
When you say "block chain protocol hard fork", you likely mean "bitcoin consensus protocol changes". Which is exactly what happened in 2013, on an emergency basis. The errant code (due to levelDB vs. BerkeleyDB locking differences) validated malformed transactions and blocks, using 60% of the global hashpower. The 60% was overwhelming the correct blocks, and people noticed a large deviation in broadcasted blocks.
  As a fix, the blockchain was voluntarily rolled back in time by 12 hours, new blocks were built on top of it with a patched miner, and a ton of broken blocks were discarded and invalidated. (These blocks incidentally could be considered orphaned)

Also interesting as you point out, that the block size was limited to 512K during this bugfix.

My question is: was any aspect of the block chain itself modified ?  I have the impression (didn't look into the details) that this was an internal bug in core code not respecting the correct construction of BLOCKS and accepting them as valid *while in fact they weren't*.  So one just found out somewhat late that one had been building *an invalid chain*, but discarding (even late) an invalid chain is not the same thing as *modifying how a chain should be built* (THAT is a hard fork in protocol).

For instance, suppose that because of a bug in core software, the coinbase of a block jumps to 200 coins per block, and that this goes on for a month.  During this month, in fact, these blocks are simply *wrong* but are nevertheless, erroneously, accepted.  In fact, the error made a hard fork.  When the bug is *repaired*, suddenly, all these blocks are correctly seen as invalid, and the chain is seen as stalled a month ago.  So now, people can start building blocks upon the stalled, correct chain.   If that happens, I don't call it a hard fork, because the true protocol is still valid on the rebuilt chain. (while in fact it wasn't on the erroneous chain for more than a month).

hero member
Activity: 686
Merit: 504

Actually you do have to keep the old chain; checkpoints stop any reorgs before them, but they don't hold the transaction data.
If you figure out a way to add a checkpoint and have it truly be the new genesis block while keeping everyone's correct amount in their wallets.

That is really not difficult on a transparent chain you know !

The miner who makes the block that is going to be the check point simply also includes a public key of which only he has the secret key, makes a new "Genesis block" and signs it cryptographically with his secret key.  That new block is a big genesis block, with "premine UTXO" as a very big coinbase, such that the "new coin" gives coinbase coins to all previously existing UTXO.  Once he has published both the normal block that is going to be the checkpoint (a small, normal block that lets him compete normally with other miners, but with his public key inside) and, somewhat later, his new genesis block (signed with his secret key), all nodes and miners can CHECK that he didn't cheat in making this genesis block (in other words, that all the UTXO in the previous chain up to that point are correctly carried over).  He will get a bigger reward in that genesis block.  The same mechanism that secures the check point then validates the new genesis block.

From that point onward, you can forget the old chain.  Well, you can keep it a while if you want to check the validity of the genesis block for yourself. But if the check point was hard, that's in any case equivalent to accepting the chain up to that point: one can just as well accept the new genesis block.

Hup, 150 GB liberated.  
I'm interested in this topic.
How does this differ from simply launching a new coin with a monetary base that is pre-loaded with a previous currency's balance sheet?

Also, pruning has worked since 0.12; how is this different? I suppose that pruned nodes are not fully trusted.

hero member
Activity: 686
Merit: 504

(apart from the hard fork in 2014 ?  Hard fork in the block chain protocol ??)


Yes there was an emergency hard fork in March 2013 (not 2014, I stand corrected). 60% of mining hashpower was building on a broken blockchain due to an incompatibility introduced by the switch to LevelDB. The blockchain was rolled back and nodes quickly downgraded to the previous 0.7 version.
 
https://github.com/bitcoin/bips/blob/master/bip-0050.mediawiki

New code with new rules for validation was rolled out on an emergency basis. Network and pool operators worked together and the issue was resolved in 3 days.


Of course devs don't like to admit that hard forking can work just fine, even on an emergency basis with no warning. They will tell you that "the network is too big now", "there are malicious actors", "you can't get consensus with this many people now", etc. And yet they claim that soft forking is "completely safe"...

Some have argued that every time a block gets orphaned there is a hard fork. Semantics.

It didn't occur to me that that was a block chain protocol hard fork, in the sense that the blocks made afterwards were incompatible with the valid protocol before (definition of a hard fork) ?

(and no, orphaned blocks are not a "hard fork" Smiley )

That said, what happened just before WAS strictly speaking a hard fork, when the block size limit of 500 KB was raised !  Funny how that was possible back then when it was a purely technical parameter, and causes so much trouble right now !




When you say "block chain protocol hard fork", you likely mean "bitcoin consensus protocol changes". Which is exactly what happened in 2013, on an emergency basis. The errant code (due to levelDB vs. BerkeleyDB locking differences) validated malformed transactions and blocks, using 60% of the global hashpower. The 60% was overwhelming the correct blocks, and people noticed a large deviation in broadcasted blocks.  As a fix, the blockchain was voluntarily rolled back in time by 12 hours, new blocks were built on top of it with a patched miner, and a ton of broken blocks were discarded and invalidated. (These blocks incidentally could be considered orphaned)

Also interesting as you point out, that the block size was limited to 512K during this bugfix.

hero member
Activity: 770
Merit: 629
Actually you do have to keep the old chain; checkpoints stop any reorgs before them, but they don't hold the transaction data.
If you figure out a way to add a checkpoint and have it truly be the new genesis block while keeping everyone's correct amount in their wallets.

That is really not difficult on a transparent chain you know !

The miner who makes the block that is going to be the check point simply also includes a public key of which only he has the secret key, makes a new "Genesis block" and signs it cryptographically with his secret key.  That new block is a big genesis block, with "premine UTXO" as a very big coinbase, such that the "new coin" gives coinbase coins to all previously existing UTXO.  Once he has published both the normal block that is going to be the checkpoint (a small, normal block that lets him compete normally with other miners, but with his public key inside) and, somewhat later, his new genesis block (signed with his secret key), all nodes and miners can CHECK that he didn't cheat in making this genesis block (in other words, that all the UTXO in the previous chain up to that point are correctly carried over).  He will get a bigger reward in that genesis block.  The same mechanism that secures the check point then validates the new genesis block.

From that point onward, you can forget the old chain.  Well, you can keep it a while if you want to check the validity of the genesis block for yourself. But if the check point was hard, that's in any case equivalent to accepting the chain up to that point: one can just as well accept the new genesis block.

Hup, 150 GB liberated.  

Note that this genesis block is huge, but much smaller than the old chain.  The miner winning the "check point block" is also the only one able to publish a genesis block (and a special reward for that miner) with his secret key signature.  If he makes a wrong genesis block, with his signature, he has lost his "turn" and the next block is then the "check point" block.  So in fact, from the check point onward, all blocks carry the public keys of the corresponding miners.  The miner publishing the correct genesis block signed with the earliest key is the winner.  It can take a few block periods to publish this block because it is big, but there is no hurry. Normally, it will be the first miner, if he's not an idiot publishing a wrong genesis block.
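The selection rule described here can be sketched as follows (a hypothetical protocol sketch; the actual signature verification is abstracted away, and every name in it is illustrative):

```python
# Sketch of "earliest checkpoint key wins": each candidate snapshot names
# the miner key committed in a checkpoint block, and nodes accept the
# candidate from the earliest checkpoint whose content matches the UTXO
# set they recompute themselves.  A cheating miner forfeits his turn.

import hashlib
import json

def content_hash(utxos: dict) -> str:
    return hashlib.sha256(json.dumps(utxos, sort_keys=True).encode()).hexdigest()

def pick_genesis(candidates, checkpoint_keys, expected_utxos):
    """candidates: list of (miner_key, utxos) published so far.
    checkpoint_keys: keys in checkpoint order, earliest first.
    Returns the winning miner key, or None if no honest candidate yet."""
    expected = content_hash(expected_utxos)
    for key in checkpoint_keys:                  # earliest checkpoint wins
        for miner_key, utxos in candidates:
            if miner_key == key and content_hash(utxos) == expected:
                return miner_key
    return None                                  # wait for the next checkpoint block

good = {"aa:0": 50_000}
bad = {"aa:0": 999_999}                          # a cheating snapshot
winner = pick_genesis([("k1", bad), ("k2", good)], ["k1", "k2"], good)
print(winner)  # k2 -- k1 forfeited its turn by publishing a wrong snapshot
```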
legendary
Activity: 1092
Merit: 1000
Strictly theoretically, these things don't make sense in a consensus-finding algorithm.  In practice, they do, because crypto is much, much more centralized than people want to admit.  But in perfect decentralization, there's no way in which check points can be used, because once you have disagreeing check points, there's no way to come to consensus.

In fact, if there are check points, there's not even a reason to keep the block chain before that check point.  You can keep a hard-signed list of all UTXO by the software editor at that point too, as a big genesis block at that point.  A hard coded check point is in fact nothing else but a new genesis block.  No need to keep the old chain before that.

And if consensus diverged at that point, you simply have two new genesis blocks and two chains that can never merge again.


Actually you do have to keep the old chain; checkpoints stop any reorgs before them, but they don't hold the transaction data.
If you figure out a way to add a checkpoint and have it truly be the new genesis block while keeping everyone's correct amount in their wallets.
Then you have something very valuable, as there would be buyers that would pay a lot for a running genesis block technology.
It would solve the issue of blockchain bloat almost overnight.  Smiley

 Cool
hero member
Activity: 770
Merit: 629

Whether you agree or disagree with hard-coded checkpoints, they are used in BTC, LTC, and every other altcoin.
Whether or not someone coding their own wallet disables the hard-coded checkpoint is up to them.
But if the majority of users leave them in, the network will follow the hard-coded checkpoints.

 Cool

FYI:
Some coins use rolling checkpoints, where reorgs deeper than 12 hours are not allowed.
Some coins use checkpoint servers, which is very centralized compared to the other ways.

Strictly theoretically, these things don't make sense in a consensus-finding algorithm.  In practice, they do, because crypto is much, much more centralized than people want to admit.  But in perfect decentralization, there's no way in which check points can be used, because once you have disagreeing check points, there's no way to come to consensus.

In fact, if there are check points, there's not even a reason to keep the block chain before that check point.  You can keep a hard-signed list of all UTXO by the software editor at that point too, as a big genesis block at that point.  A hard coded check point is in fact nothing else but a new genesis block.  No need to keep the old chain before that.

And if consensus diverged at that point, you simply have two new genesis blocks and two chains that can never merge again.

legendary
Activity: 1092
Merit: 1000
It didn't occur to me that that was a block chain protocol hard fork, in the sense that the blocks made afterwards were incompatible with the valid protocol before (definition of a hard fork) ?

(and no, orphaned blocks are not a "hard fork" Smiley )

That said, what happened just before WAS strictly speaking a hard fork, when the block size limit of 500 KB was raised !  Funny how that was possible back then when it was a purely technical parameter, and causes so much trouble right now !


All they have to do is hard-code a checkpoint into the client code that corresponds to a block number.
No reorganization can occur before that checkpoint, even if you controlled 100% of the mining.

But you start from the idea that people are using a centrally designed code.  The idea of a decentralized system is that there are hundreds or thousands of different client codes in principle, in other words, that ideally, everybody writes his own code.

Putting a hard coded check point is a bit like thinking that one can ban certain web sites from being visited by putting a hard coded ban in the official, centralized, unique web browser software, no ?  A priori everybody can recompile his version of the web browser, with his preferred check points.

It seems that people attach a lot of importance to the rules in the code, but that is because most crypto currencies are totally centralized in their coding, and only one entity is writing the code.  Core had that centralized power until recently in bitcoin, and in most other coins the "dev team" is the centralized decision force ; usually about the *protocol*.  However, if they start putting block chain check points in the code, they are also doing transaction consensus centralization.



Whether you agree or disagree with hard-coded checkpoints, they are used in BTC, LTC, and every other altcoin.
Whether or not someone coding their own wallet disables the hard-coded checkpoint is up to them.
But if the majority of users leave them in, the network will follow the hard-coded checkpoints.

 Cool

FYI:
Some coins use rolling checkpoints, where reorgs deeper than 12 hours are not allowed.
Some coins use checkpoint servers, which is very centralized compared to the other ways.
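The rolling-checkpoint rule mentioned above can be sketched in a couple of lines (illustrative only, not any specific coin's code; the 72-block window is an assumption based on ~12 hours at a 10-minute block time):

```python
# Sketch of a rolling checkpoint: a node refuses any reorganisation whose
# fork point is deeper than a fixed window behind its current tip.

ROLLING_WINDOW = 72  # blocks; roughly 12 hours at 10 minutes per block (assumption)

def reorg_allowed(current_tip_height: int, fork_height: int) -> bool:
    """Reject reorgs that would rewind past the rolling checkpoint."""
    return current_tip_height - fork_height <= ROLLING_WINDOW

print(reorg_allowed(500_000, 499_950))  # True  -- 50-block reorg, inside window
print(reorg_allowed(500_000, 499_900))  # False -- 100-block rewrite, rejected
```

The centralization objection in the reply below follows directly: two nodes that crossed the window on different sides of a deep fork can never converge again, so in practice somebody's choice of chain (or a checkpoint server) decides.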
hero member
Activity: 770
Merit: 629
It didn't occur to me that that was a block chain protocol hard fork, in the sense that the blocks made afterwards were incompatible with the valid protocol before (definition of a hard fork) ?

(and no, orphaned blocks are not a "hard fork" Smiley )

That said, what happened just before WAS strictly speaking a hard fork, when the block size limit of 500 KB was raised !  Funny how that was possible back then when it was a purely technical parameter, and causes so much trouble right now !


All they have to do is hard-code a checkpoint into the client code that corresponds to a block number.
No reorganization can occur before that checkpoint, even if you controlled 100% of the mining.

But you start from the idea that people are using a centrally designed code.  The idea of a decentralized system is that there are hundreds or thousands of different client codes in principle, in other words, that ideally, everybody writes his own code.

Putting a hard coded check point is a bit like thinking that one can ban certain web sites from being visited by putting a hard coded ban in the official, centralized, unique web browser software, no ?  A priori everybody can recompile his version of the web browser, with his preferred check points.

It seems that people attach a lot of importance to the rules in the code, but that is because most crypto currencies are totally centralized in their coding, and only one entity is writing the code.  Core had that centralized power until recently in bitcoin, and in most other coins the "dev team" is the centralized decision force ; usually about the *protocol*.  However, if they start putting block chain check points in the code, they are also doing transaction consensus centralization.
legendary
Activity: 1092
Merit: 1000
It didn't occur to me that that was a block chain protocol hard fork, in the sense that the blocks made afterwards were incompatible with the valid protocol before (definition of a hard fork) ?

(and no, orphaned blocks are not a "hard fork" Smiley )

That said, what happened just before WAS strictly speaking a hard fork, when the block size limit of 500 KB was raised !  Funny how that was possible back then when it was a purely technical parameter, and causes so much trouble right now !


All they have to do is hard-code a checkpoint into the client code that corresponds to a block number.
No reorganization can occur before that checkpoint, even if you controlled 100% of the mining.

Here is a link to what happened on March 13th.
https://bitcoinmagazine.com/articles/bitcoin-network-shaken-by-blockchain-fork-1363144448/

 Cool

FYI:
What happened on March 13 was more of a history rewrite, by overwriting the shorter chain, done by a collusion of ~70% of the miners (requested by BTC Core devs).
hero member
Activity: 1092
Merit: 520

HAHA, I got to the first line of the first link and it said, and I quote, "instead, bitcoin unlimited is safe and simple"... lol, I gave up reading after that. The OP asked for no bias. Smiley
hero member
Activity: 770
Merit: 629

(apart from the hard fork in 2014 ?  A hard fork in the block chain protocol ??)


Yes, there was an emergency hard fork in March 2013 (not 2014, I stand corrected). 60% of the mining hashpower was building on a broken blockchain due to an incompatibility introduced by the switch to LevelDB. The blockchain was rolled back and nodes quickly downgraded to the previous 0.7 version.
 
https://github.com/bitcoin/bips/blob/master/bip-0050.mediawiki

New code with new validation rules was rolled out on an emergency basis. Network and pool operators worked together and the issue was resolved in 3 days.


Of course devs don't like to admit that hard forking can work just fine, even on an emergency basis with no warning. They will tell you that "the network is too big now", "there are malicious actors", "you can't get consensus with this many people now", etc. And yet they claim that soft forking is "completely safe"...

Some have argued that every time a block gets orphaned there is a hard fork. Semantics.

It didn't occur to me that that was a block chain protocol hard fork, in the sense that the blocks made afterwards were incompatible with the previously valid protocol (the definition of a hard fork) ?

(and no, orphaned blocks are not a "hard fork" Smiley )

That said, what happened just before WAS, strictly speaking, a hard fork: the block size limit of 500 KB was raised !  Funny how that was possible back then, when it was a purely technical parameter, and causes so much trouble right now !
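A block size limit change is the textbook hard fork, and the mechanics fit in a few lines. This is a hedged sketch with illustrative sizes (the exact historical limits are debated in this thread): blocks valid under the raised limit are invalid under the old one, so un-upgraded nodes reject them and the network splits.

```python
# Illustrative limits only, not the exact historical values.
OLD_LIMIT = 500_000    # bytes: the old cap discussed above
NEW_LIMIT = 1_000_000  # bytes: a raised cap

def valid_under(limit, block_size):
    """Consensus size check as seen by a node with the given limit."""
    return block_size <= limit

big_block = 700_000  # produced by an upgraded miner

print(valid_under(NEW_LIMIT, big_block))  # True: upgraded nodes accept
print(valid_under(OLD_LIMIT, big_block))  # False: old nodes reject -> chain split
```

That asymmetry (new-valid but old-invalid) is exactly what distinguishes a hard fork from a soft fork, where the new rules are a strict subset of the old ones.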

hero member
Activity: 686
Merit: 504

(apart from the hard fork in 2014 ?  A hard fork in the block chain protocol ??)


Yes, there was an emergency hard fork in March 2013 (not 2014, I stand corrected). 60% of the mining hashpower was building on a broken blockchain due to an incompatibility introduced by the switch to LevelDB. The blockchain was rolled back and nodes quickly downgraded to the previous 0.7 version.
 
https://github.com/bitcoin/bips/blob/master/bip-0050.mediawiki

New code with new validation rules was rolled out on an emergency basis. Network and pool operators worked together and the issue was resolved in 3 days.


Of course devs don't like to admit that hard forking can work just fine, even on an emergency basis with no warning. They will tell you that "the network is too big now", "there are malicious actors", "you can't get consensus with this many people now", etc. And yet they claim that soft forking is "completely safe"...

Some have argued that every time a block gets orphaned there is a hard fork. Semantics.
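The BIP 50 incident described above can be caricatured in a toy model (heavily simplified; the lock numbers are invented). The consensus rules were nominally identical, but 0.7's BerkeleyDB backend had an implicit lock-table ceiling that 0.8's LevelDB backend did not, so one large block validated on 0.8 nodes and failed on 0.7 nodes:

```python
# Stand-in for 0.7's implicit BerkeleyDB resource ceiling (invented value).
BDB_LOCK_LIMIT = 10_000

def node_07_accepts(locks_needed):
    # 0.7: validation effectively fails if the block needs more DB locks
    # than the lock table provides.
    return locks_needed <= BDB_LOCK_LIMIT

def node_08_accepts(locks_needed):
    # 0.8 (LevelDB): no such ceiling, so the same block validates fine.
    return True

large_block_locks = 12_000  # a big block touching many inputs

print(node_08_accepts(large_block_locks))  # True
print(node_07_accepts(large_block_locks))  # False -> accidental chain split
```

The point of the toy model: the divergence came from an undocumented implementation limit, not from anyone intentionally changing the written rules, which is why the thread argues about whether to call it a "hard fork" at all.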
hero member
Activity: 994
Merit: 544
I could find none. Only political whimsical yadda yadda.
In order to exclude a cognitive BIAS, I hereby ask to be pointed to $SUBJECT.



Rico

SegWit is good, and I can say it is much better than a hard fork, since a hard fork is a danger to bitcoin. But even if SegWit is safe, problems may still come later on. Let us just watch how SegWit performs, since it is already clear that SegWit will be used by the Lightning Network. At this time it is hard to say whether there are serious advantages.
hero member
Activity: 770
Merit: 629
But what is the alternative: to sit and do nothing beyond hoping that endless blocksize increases can somehow cope with user adoption rates? I suppose an altcoin could be cajoled into doing SW first, but that should have been done a year ago if it was really needed.

They already tried to ram Segwit down Litecoin users' throats:

https://www.litecoinpool.org/pools
How much of the hashing power is signaling SegWit support? NEW!
    About 540 GH/s, or 22.0% of the network (110 out of the last 500 blocks).

Total fail. No one wants this!
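For what it's worth, the 22.0% figure quoted from litecoinpool.org is just the signaling blocks over the observation window, which anyone can reproduce:

```python
# Reproduce the quoted signaling share: 110 signaling blocks
# out of the last 500 observed.
signaling_blocks = 110
window = 500

share = signaling_blocks / window
print(f"{share:.1%}")  # 22.0%
```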

The dilemma Bitcoin is now in runs as follows: Cryptocurrency is rapidly innovating, and coins that lack the feature set and capabilities of newer coins are likely to die out no matter how popular they once were. (That's why we no longer see Model T's on the road.) On the flip side, technical issues or exploits associated with changes can kill a coin real fast.

I agree with this assessment. However, bitcoin has always had flaws other than the artificially constrained blocksize and the resulting high fees and slow confirmations: difficulty re-targeting is broken, the currency emission rate is not optimal, etc. But Bitcoin has the largest market cap and a huge lead in adoption, credibility, and investment.  That said, its first-mover advantage is fading as we speak.


Core and Bitcoin Unlimited now exemplify the dichotomy in how Bitcoin approaches these two risks. On the one hand we have the "Go-too-slow" approach of Core, which has been patiently waiting for miners to adopt SW/LN while the rest of the crypto space keeps advancing. On the other hand we have the buggy development of hard-charging Bitcoin Unlimited and their "You have to fork it to find out what's in it" approach handing over protocol power to the miners in an unprecedented and risky manner.

You present a false dichotomy: firstly, they are not opposites; secondly, they are not the only options on the table. SegWit is a wonky code fork that will be forgotten in 2 years. BU is a wacky attempt to challenge Core's status quo, and will also likely be forgotten in two years. A hard fork to 2 MB blocks with no other changes from 0.12 is still possible at any time. A bitcoin hard fork was done in 2014 without issue.

Wise words.  Agree with everything in this post.
(apart from the hard fork in 2014 ?  Hard fork in the block chain protocol ??)