
Topic: Last Time. In small words. Why 2MB impossible. Why Soft Fork. Why SegWit First..

member
Activity: 101
Merit: 10
3) Bigger Blocks would ALREADY be here IF we had just upgraded to segwit 6 fucking months ago. This ridiculous stalemate is what is causing this total cluster fuck of a situation.  Once we get SegWit.. oh mama.. ALL the clever things people have dreamed about can START to happen. AND Bigger Blocks!.. Safely.

..

CORE are NOT your enemy.


I think OP has conveniently realized by now that CORE ACTUALLY are your enemy, given the fact that most Core members are against a HF even months after SegWit.
hero member
Activity: 718
Merit: 545
Bitcoin is fine, confirmed by price action. Segwit or BU will spook investors. Bitcoin is the gold standard of cryptocurrency; it is too big to change.
If we don't do anything we might as well bury Bitcoin right away. Altcoins will overtake quickly, just look at Litecoin.

As for the price action it could mean:
  • current investors have no idea what they are doing
  • current investors don't care
  • current investors are optimistic a good solution will be found in time
  • Bitcoin is undervalued despite the scaling discussion deadlock

I guess a mixture of the above...

^^ Totally agree with this.


No, Litecoin and alts aren't taking over the safe haven status of Bitcoin IF Bitcoin remains as it is today. It's the safe haven status that investors are after. Buying coffee can be done with alts or a credit card.

^^ Totally disagree with this.
sr. member
Activity: 378
Merit: 250
Bitcoin is fine, confirmed by price action. Segwit or BU will spook investors. Bitcoin is the gold standard of cryptocurrency; it is too big to change.
If we don't do anything we might as well bury Bitcoin right away. Altcoins will overtake quickly, just look at Litecoin.

As for the price action it could mean:
  • current investors have no idea what they are doing
  • current investors don't care
  • current investors are optimistic a good solution will be found in time
  • Bitcoin is undervalued despite the scaling discussion deadlock

I guess a mixture of the above...

No, Litecoin and alts aren't taking over the safe haven status of Bitcoin IF Bitcoin remains as it is today. It's the safe haven status that investors are after. Buying coffee can be done with alts or a credit card.
legendary
Activity: 4214
Merit: 4458

Quote
other non blockstream endorsed implementations are just plodding along not making threats and even laughing at blockstream's attempt to get non-blockstream implementations to split, by simply saying 'no thanks we wanna stay as a peer network'
Sorry, I can't figure out what you are saying here.  Undecided


lets kill 2 birds with one stone
1. blockstream wants the network to split to give blockstream dominance
2. non blockstream implementations refusing to split.. wanting to keep the diverse implementations on a level/even playing field

(gmaxwell is a founder and CTO of blockstream, and also the lead dev of core and the main moderator of technical discussions)

but in his own words
What you are describing is what I and others call a bilateral hardfork-- where both sides reject the other.

I tried to convince the authors of BIP101 to make their proposal bilateral ... Sadly, the proposal's authors were aggressively against this.

The ethereum hardfork was bilateral, probably the only thing they did right--
full member
Activity: 128
Merit: 107
Bitcoin is fine, confirmed by price action. Segwit or BU will spook investors. Bitcoin is the gold standard of cryptocurrency; it is too big to change.
If we don't do anything we might as well bury Bitcoin right away. Altcoins will overtake quickly, just look at Litecoin.

As for the price action it could mean:
  • current investors have no idea what they are doing
  • current investors don't care
  • current investors are optimistic a good solution will be found in time
  • Bitcoin is undervalued despite the scaling discussion deadlock

I guess a mixture of the above...
sr. member
Activity: 378
Merit: 250
Bitcoin is fine, confirmed by price action. Segwit or BU will spook investors. Bitcoin is the gold standard of cryptocurrency; it is too big to change.
full member
Activity: 128
Merit: 107
Please explain how a softfork would cause a chain split. A pool ignoring/banning/rejecting blocks/communication means it has diverted from the chain protocol, which equals a chain split in my point of view.
FTFY

a hard chain split is about the nodes (users) doing something to cause 2 chains.
a soft chain split is about the pools doing something to cause 2 chains.

EG a soft chain split happens even though nodes don't have to do anything: pools ignore each other purely based on a version number, and then decide to either join X, or ignore X and build on Y

leading to the nodes that, without doing anything, end up following whichever chain they can happily accept
A node will always follow the longest (strictly, the most-work) chain of blocks it considers valid. What do pools have to do with it? Once again, ignoring version numbers sounds like a protocol violation = hard fork.
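To put it in code terms: chain selection is local rule-checking plus a most-work comparison, nothing more. A toy sketch (illustrative only, not Core's actual logic):

Code:
#include <cstdint>
#include <vector>

// Toy sketch of node-side chain selection (not Core's actual code):
// among fully validated candidate tips, follow the most-work one.
struct Tip {
    uint64_t chainWork;  // cumulative proof-of-work up to this tip
    bool valid;          // passes THIS node's consensus rules
};

const Tip* BestTip(const std::vector<Tip>& tips) {
    const Tip* best = nullptr;
    for (const Tip& t : tips)
        if (t.valid && (!best || t.chainWork > best->chainWork))
            best = &t;
    return best;  // pools decide which tips exist; the node decides validity
}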

But it grows quadratically with block size, meaning that at 16MB blocks or so a 30% miner might still be able to stall all nodes permanently.

No. As I explained (did you even read?), parallel validation routes around the quadratic hash time issue.
Yes I did but I realize now I took a wrong train of thought at some point, sorry.

It's difficult to embrace a solution from someone with a track record as bad as BU's recent one when there is another, more sustainable solution available from someone whose code I have used without issue for years.

Quote
I oppose The SegWit Omnibus Changeset mostly due to considerations other than segwit itself.
OK, this moves the discussion forward.

Quote
Namely: the SF nature;
As above: a SF with a hashrate majority is safer than a hardfork, and replay protection is difficult

Quote
the backdoor of versionbit changes;
From the other point of view it allows for easy upgrades. I can't imagine core could pull off anything that really goes against the will of the user majority. People would simply hard fork away. The same goes if they refuse to increase the block size when it is safely possible.
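For reference, versionbits signalling (BIP9) boils down to a bit test on the header version, which is what makes such upgrades cheap to deploy. A minimal sketch (not Core's actual implementation):

Code:
#include <cstdint>

// Minimal BIP9-style check (sketch, not Core's implementation): a block
// signals for deployment `bit` when the top 3 version bits are 001 and
// the deployment's bit is set. 95% signalling within a 2016-block window
// locks the rule in; old nodes keep following the chain regardless.
bool IsSignalling(int32_t nVersion, int bit) {
    const uint32_t TOP_MASK = 0xE0000000;  // top 3 bits of the version
    const uint32_t TOP_BITS = 0x20000000;  // must equal 001
    return ((uint32_t)nVersion & TOP_MASK) == TOP_BITS
        && (nVersion & (1 << bit)) != 0;
}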

Quote
and the centrally-planned magic 4:1 ratio.
Based on historic tx data and expected SW tx sizes, I guess? This is concern trolling.

Quote
But mostly because the 1.7x or 2x or whatever capacity increase it ends up being is too little
Better than nothing! And you are completely ignoring that we need SegWit regardless of tx capacity.

Quote
, too late,
Faster than anything else.

Quote
and we'll just be back at this same argument before the year is up.
Possibly, but by then we will have learned more about larger blocks in a safe way. Also we will know more about Lightning and how much time it will be able to buy us. This could make a difference of a factor of ten or more and buy us quite some time. Think of it as an opportunity. We can always hardfork later on.

legendary
Activity: 3024
Merit: 1640
lose: unfind ... loose: untight
But it grows quadratically with block size, meaning that at 16MB blocks or so a 30% miner might still be able to stall all nodes permanently.

No. As I explained (did you even read?), parallel validation routes around the quadratic hash time issue.

Also let me remind you of the resource discussion further up. Of course it is relevant to this debate. Why do you oppose the technically sound and sustainable solution? Particularly as it happens to also bring other important benefits?

There was no resource 'discussion' upthread; an inadequate strawman of resource consumption was used to cast aspersions upon parallel validation.

I oppose The SegWit Omnibus Changeset mostly due to considerations other than segwit itself. Namely: the SF nature; the backdoor of versionbit changes; and the centrally-planned magic 4:1 ratio. But mostly because the 1.7x or 2x or whatever capacity increase it ends up being is too little, too late, and we'll just be back at this same argument before the year is up.
legendary
Activity: 4214
Merit: 4458
Please explain how a softfork would cause a chain split. A pool ignoring/banning/rejecting blocks/communication means it has diverted from the chain protocol, which equals a chain split in my point of view.
FTFY

a hard chain split is about the nodes (users) doing something to cause 2 chains.
a soft chain split is about the pools doing something to cause 2 chains.

EG a soft chain split happens even though nodes don't have to do anything: pools ignore each other purely based on a version number, and then decide to either join X, or ignore X and build on Y

leading to the nodes that, without doing anything, end up following whichever chain they can happily accept
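in code terms the pool-side 'join X or ignore X' choice is trivial.. a toy sketch (REQUIRED_BIT is made up, this is not any real deployment):

Code:
#include <cstdint>

// Toy sketch of a pool-side soft split: an enforcing pool only extends
// blocks whose version signals the bit; a dissenting pool inverts the
// test and keeps building on the other branch. REQUIRED_BIT is made up.
const int REQUIRED_BIT = 1;

bool EnforcingPoolBuildsOn(int32_t nVersion) {
    return (nVersion & (1 << REQUIRED_BIT)) != 0;   // join X
}

bool DissentingPoolBuildsOn(int32_t nVersion) {
    return (nVersion & (1 << REQUIRED_BIT)) == 0;   // ignore X, build on Y
}
// nodes do nothing here: both branches satisfy the rules old nodes check,
// so each node simply follows whichever branch ends up with more work.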


i just read your reddit summary and laughed my head off..

you do realise the only establishment causing drama is the portfolio of DCG (blockstream, btcc, coinbase, and more) with all their REKT campaigns, accusations, PoW-killing proposals, deadlines, blackmails, bribes.

other non blockstream endorsed implementations are just plodding along not making threats and even laughing at blockstream's attempt to get non-blockstream implementations to split, by simply saying 'no thanks we wanna stay as a peer network'

put it this way: right now blockstream could make a 1-merkle version that actually unites the community, that can offer a lot more features, and set off a 6 month timeline. (after all, core think it's ok to release 5 versions of software a year (0.13-0.13.1-0.13.2-0.14-0.14.1))

but blockstream's motives are just to push the cludgy soft 2-merkle version, and even if still veto'd in november they will push on with the cludge version right up to the end of 2018 and try making it mandatory.
in short they cannot take no for an answer or decide they should do something better
full member
Activity: 128
Merit: 107
OK, but I would call that a hardfork ('ignoring consensus orphaning mechanism').

soft forks do not need to result in a chain split
hard forks do not need to result in a chain split

soft involves just pool agreements to change something; that's just a network upgrade with one chain
hard involves nodes agreeing to change something; that's just a network upgrade with one chain

again
soft forks do not need to result in a chain split
hard forks do not need to result in a chain split

when some pools disagree and decide to intentionally ignore/ban/reject blocks/communication and the opposition continues, that's a chain split
when some nodes disagree and decide to intentionally ignore/ban/reject blocks/communication and the opposition continues, that's a chain split

soft can intentionally cause a split
hard can intentionally cause a split

and again
soft forks do not need to result in a chain split
hard forks do not need to result in a chain split

by thinking all "hard" actions = split.. and all "soft" actions = utopia.. you are taking soft's best-case scenario and hard's worst-case scenario, and avoiding talking about the opposite
Please explain how a softfork would cause a chain split. A node ignoring/banning/rejecting blocks/communication means it has diverted from the chain protocol, which equals a hard fork in my point of view.
legendary
Activity: 4214
Merit: 4458
OK, but I would call that a hardfork ('ignoring consensus orphaning mechanism').

soft forks do not need to result in a chain split
hard forks do not need to result in a chain split

soft involves just pool agreements to change something; that's just a network upgrade with one chain
hard involves nodes agreeing to change something; that's just a network upgrade with one chain

again
soft forks do not need to result in a chain split
hard forks do not need to result in a chain split

when some pools disagree and decide to intentionally ignore/ban/reject blocks/communication and the opposition continues, that's a chain split
when some nodes disagree and decide to intentionally ignore/ban/reject blocks/communication and the opposition continues, that's a chain split

soft can intentionally cause a split
hard can intentionally cause a split

and again
soft forks do not need to result in a chain split
hard forks do not need to result in a chain split

by thinking all "hard" actions = split.. and all "soft" actions = utopia.. you are taking soft's best-case scenario and hard's worst-case scenario, and avoiding talking about the opposite
full member
Activity: 128
Merit: 107
How would you implement replay protection for a soft fork? There is only a single chain...

soft or hard.
there are scenarios of staying as one chain (just orphan drama, being either small drama or a mega clusterf*ck of orphans before settling down to one chain) dependent on the % of majority..

but in both soft or hard a second chain can be produced. but this involves intentionally ignoring the consensus orphaning mechanism.. in layman's terms: not connecting to opposing nodes to see their different rules/chain, to then build your own chain without protocol arguing (orphaning)
OK, but I would call that a hardfork ('ignoring consensus orphaning mechanism').

5. Because of a block verification processing time vulnerability that increases quadratically with block size, increasing the block size is only possible AFTER SegWit is active and only for SegWit transactions.

False. Parallel validation routes around quadratic hash time issues, by naturally orphaning blocks that take an inordinate time to verify.
I did not look into it, but from what I hear it sounds more like a resource-consuming band-aid. Why not a proper fix with fewer CPU cycles?

It is not so much a resource-consuming band-aid as it is harnessing the natural incentive of greed on the part of the miners (you know, the same force that makes bitcoin work at all) to render the issue a non-problem.
Seems like it gives an incentive to mine small blocks? One would have to check the implications of this change really thoroughly...

Quote
Yes, it takes more memory to validate multiple blocks on different threads at the same time than a single block on a single thread. But not only does this create an incentive not to make blocks that take long to validate due to the O(n^2) hashing issue; it also provides a natural backpressure on excessively long-to-validate blocks for any reason whatsoever. Perhaps merely blocks that contain huge numbers of simple transactions. And the resource requirements only increase linearly with the number of blocks currently being hashed concurrently by a single node.
But it grows quadratically with block size, meaning that at 16MB blocks or so a 30% miner might still be able to stall all nodes permanently.
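Back-of-envelope on how badly that scales, assuming 41-byte minimal inputs and a worst-case transaction filling the whole block (rough figures, not measurements):

Code:
#include <cstdint>
#include <iostream>

// Rough numbers for the quadratic sighash problem: with legacy signing,
// each input re-hashes (roughly) the whole transaction, so a worst-case
// transaction that fills the block hashes ~size^2 bytes in total.
int main() {
    const int sizes[] = {1, 2, 8, 16};           // block sizes in MB
    for (int mb : sizes) {
        int64_t size   = int64_t(mb) * 1000000;  // worst-case tx ~ block size
        int64_t inputs = size / 41;              // ~41 bytes per minimal input
        int64_t hashed = inputs * size;          // total bytes through SHA256
        std::cout << mb << "MB: ~" << hashed / 1000000000 << " GB hashed\n";
    }   // 16MB hashes ~256x the 1MB case: quadratic, not linear
    return 0;
}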

Quote
More importantly, as miners who create blocks exhibiting this quadratic hash time issue have their blocks orphaned, they will be bankrupted. Accordingly, the creation of these blocks will be disincentivized to the point where they just plain won't be built.
For an attacker, disrupting the network for a while might pay off via puts or rising altcoins, or just by hurting Bitcoin.

Quote
Further, parallel validation is the logical approach to the problem. When one receives a block while still validating another, you need to consider that the first block under validation may be fraudulent. The sooner you find a valid block, the sooner you can get mining on the next block. Parallel validation allows one to find the valid block without having to wait for the fraudulent block to be detected as fraudulent. Not to mention the stunning fact that other miners do not currently mine at all while validating a block which may be fraudulent.
See above; it might give an unfair advantage to small blocks.

Quote
Last, in the entire 465,185 block history of Bitcoin, there has been (to my knowledge) exactly one such aberrant block ever added to the chain. And parallel validation was not available at the time. But the network did not crash. It paused for a slight bit, then carried on as if nothing untoward ever happened. The point is that, while such blocks are a nuisance, they are not a systemic problem even without parallel validation. And parallel validation routes around this one-in-a-half-million (+/-) event.
This is because blocks were and are small.

Quote
By all means, the O(n^2) hash time is suboptimal. We should replace it with a better algorithm at some date. But to focus on it as if it is even relevant to the current debate is ludicrous. It would be ludicrous even without the availability of parallel validation. The fact that BU implements parallel validation makes putting this consideration at the center of this debate ludicrous^2.
The superior solution is on the table, well tested and ready to be deployed. Parallel validation still requires additional limitations, as suggested by franky1, for larger blocks. Also let me remind you of the resource discussion further up. Of course it is relevant to this debate. Why do you oppose the technically sound and sustainable solution? Particularly as it happens to also bring other important benefits?




legendary
Activity: 3024
Merit: 1640
lose: unfind ... loose: untight
5. Because of a block verification processing time vulnerability that increases quadratically with block size, increasing the block size is only possible AFTER SegWit is active and only for SegWit transactions.

False. Parallel validation routes around quadratic hash time issues, by naturally orphaning blocks that take an inordinate time to verify.
I did not look into it, but from what I hear it sounds more like a resource-consuming band-aid. Why not a proper fix with fewer CPU cycles?

It is not so much a resource-consuming band-aid as it is harnessing the natural incentive of greed on the part of the miners (you know, the same force that makes bitcoin work at all) to render the issue a non-problem.

Yes, it takes more memory to validate multiple blocks on different threads at the same time than a single block on a single thread. But not only does this create an incentive not to make blocks that take long to validate due to the O(n^2) hashing issue; it also provides a natural backpressure on excessively long-to-validate blocks for any reason whatsoever. Perhaps merely blocks that contain huge numbers of simple transactions. And the resource requirements only increase linearly with the number of blocks currently being hashed concurrently by a single node.

More importantly, as miners who create blocks exhibiting this quadratic hash time issue have their blocks orphaned, they will be bankrupted. Accordingly, the creation of these blocks will be disincentivized to the point where they just plain won't be built.

Further, parallel validation is the logical approach to the problem. When one receives a block while still validating another, you need to consider that the first block under validation may be fraudulent. The sooner you find a valid block, the sooner you can get mining on the next block. Parallel validation allows one to find the valid block without having to wait for the fraudulent block to be detected as fraudulent. Not to mention the stunning fact that other miners do not currently mine at all while validating a block which may be fraudulent.

Last, in the entire 465,185 block history of Bitcoin, there has been (to my knowledge) exactly one such aberrant block ever added to the chain. And parallel validation was not available at the time. But the network did not crash. It paused for a slight bit, then carried on as if nothing untoward ever happened. The point is that, while such blocks are a nuisance, they are not a systemic problem even without parallel validation. And parallel validation routes around this one-in-a-half-million (+/-) event.

By all means, the O(n^2) hash time is suboptimal. We should replace it with a better algorithm at some date. But to focus on it as if it is even relevant to the current debate is ludicrous. It would be ludicrous even without the availability of parallel validation. The fact that BU implements parallel validation makes putting this consideration at the center of this debate ludicrous^2.
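Mechanically, parallel validation is nothing exotic. A minimal sketch of the race (my illustration, not BU's actual code):

Code:
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Sketch of parallel validation (illustration, not BU's actual code):
// competing blocks validate on separate threads, the first valid one to
// finish becomes the tip, and the slow quadratic block loses the race.
struct Block {
    int id;
    std::chrono::milliseconds cost;  // stand-in for real script validation
    bool valid;
};

int main() {
    std::vector<Block> candidates = {
        {1, std::chrono::milliseconds(50),   true},  // normal block
        {2, std::chrono::milliseconds(5000), true},  // quadratic-sighash block
    };
    std::atomic<int> winner{-1};
    std::vector<std::thread> workers;
    for (const Block& b : candidates)
        workers.emplace_back([&winner, b] {
            std::this_thread::sleep_for(b.cost);     // "validate"
            int none = -1;
            if (b.valid) winner.compare_exchange_strong(none, b.id);
        });
    for (auto& w : workers) w.join();
    std::cout << "new tip: block " << winner.load() << "\n";  // block 1 wins
    return 0;
}
// Memory grows linearly with concurrent candidates; the incentive effect
// is that slow-to-validate blocks get orphaned, bankrupting their miners.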
legendary
Activity: 3024
Merit: 1640
lose: unfind ... loose: untight
Yup. This is exactly the nonsense that they are preaching. Let's make Bitcoin a very centralized system in which you can't achieve financial sovereignty unless you buy server-grade hardware costing thousands of USD.

You have an incredibly myopic sense of scale. Allowing the system to keep up with demand requires an investment of well under 1.0 BTC. And what will you say when transaction fees rise above $10 due to the stupid artificial centrally-planned production quota? Over $100? Over 1 BTC?
legendary
Activity: 4214
Merit: 4458
How would you implement replay protection for a soft fork? There is only a single chain...

soft or hard.
there are scenarios of staying as one chain (just orphan drama, being either small drama or a mega clusterf*ck of orphans before settling down to one chain) dependent on the % of majority..

but in both soft or hard a second chain can be produced. but this involves intentionally ignoring the consensus orphaning mechanism.. in layman's terms: not connecting to opposing nodes to see their different rules/chain, to then build your own chain without protocol arguing (orphaning)

all the reddit doomsday FUD is about trying to only mention soft's best case and hard's worst case.
but never the other way around, because then people would wise up to knowing that bitcoin's consensus orphaning mechanism is a good thing and that doing things as a hard consensus is a good thing.
full member
Activity: 128
Merit: 107
5. Because of a block verification processing time vulnerability that increases quadratically with block size, increasing the block size is only possible AFTER SegWit is active and only for SegWit transactions.

False. Parallel validation routes around quadratic hash time issues, by naturally orphaning blocks that take an inordinate time to verify.
I did not look into it, but from what I hear it sounds more like a resource-consuming band-aid. Why not a proper fix with fewer CPU cycles?


4. There are two possible ways to deploy/implement SegWit, as a softfork or as a hardfork. SegWit as a hardfork would allow a slightly cleaner implementation but would also require replay protection (as the exchanges have specifically asked for lately). SWSF does not require replay protection assuming a hashrate majority. Replay protection is difficult, thus SegWit as a hardfork would altogether cause more technical debt than SWSF. Also, a hardfork is generally considered of higher risk and would take a longer preparation time.

Sorry, it seems people have had their heads FOHK'ed with (Fear of Hard Fork).
It is not fear but the expectation of 'clusterfuck' (as you put it).

Quote
There is little difference between the dangers of a soft fork and a hard fork.

In the event of a soft fork we have:
1.) The old chain exists with a more permissive set of rules.
2.) The new chain exists with a more restrictive set of rules.
Wait a second, there only exists a single chain as the old chain blocks are being orphaned (I am explicitly talking about a softfork with a hashrate majority as stated above).

Quote
In a hard fork we have:
1.) The old chain exists with a more restrictive set of rules.
2.) The new chain exists with a more permissive set of rules.

So they look exactly the same during a chain split.
No, not at all. With the hard fork the old chain is not 'corrected' to follow the new chain.

Quote
The only difference is that a soft fork is backwards compatible because of its more restrictive set of rules.

In the event of a successful soft fork, older nodes continue to operate as normal.
In the event of a successful hard fork, older nodes become unsynced and have to upgrade.
This is a big difference, isn't it?
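The restrictive-vs-permissive distinction is the whole taxonomy in miniature. A toy sketch (hypothetical limits, not real consensus rules):

Code:
// Toy model of the fork taxonomy (hypothetical limits, not real rules):
// a soft fork tightens the rule set, a hard fork loosens it.
const int MAX_OLD = 100;

bool OldRules(int x)      { return x <= MAX_OLD; }  // what legacy nodes check
bool SoftForkRules(int x) { return x <= 50;  }      // subset: stricter
bool HardForkRules(int x) { return x <= 200; }      // superset: looser

// Every soft-fork block (x <= 50) also passes OldRules, so legacy nodes
// follow the new chain unupgraded. A hard-fork block with x = 150 fails
// OldRules, so legacy nodes reject it and fall out of sync until upgraded.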

Quote
In the event of a contentious fork, hard or soft, it becomes an economically damaging clusterfuck until the winning fork is determined (the longest chain) or a bilateral split occurs (the minority chain implements replay protection)*.
Does a 70% hashrate majority still count as contentious? I don't think that would be a big problem for a softfork (the old chain would be forced to go along), but with a hardfork two chains would certainly remain.

Quote
* Strictly speaking, the software forking away from the existing protocol (hard or soft) should be the version that implements replay protection, as you cannot demand the existing protocol chain to change its behaviour. In practice though, the aim is not to create a permanent chain split but to achieve consensus, so the minority chain should end up orphaned off, and any transactions that occur during any temporary chain split should end up confirmed on the main chain.
How would you implement replay protection for a soft fork? There is only a single chain...
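For a hard fork, the standard trick is to commit a fork identifier inside the data being signed, so a signature is only ever valid on one chain. A sketch, with FORK_ID and the serialization as placeholders rather than any client's real code:

Code:
#include <cstdint>
#include <string>

// Sketch of hard-fork replay protection (placeholders, not real code):
// mix a fork id into the signature hash so old-chain nodes reject the
// new sighash type and new-chain nodes require it. One chain per tx.
const uint32_t FORK_ID = 0x40;

std::string SighashPreimage(const std::string& txData, bool newChain) {
    uint32_t sighashType = 0x01;                  // SIGHASH_ALL
    if (newChain) sighashType |= (FORK_ID << 8);  // commit the fork id
    return txData + std::to_string(sighashType);  // stand-in serialization
}
// A soft fork with a hashrate majority never needs this: only one chain
// survives, so there is nothing to replay a transaction onto.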

I am considering making my list above a reddit thread as I think it sums up the current situation nicely  Grin
legendary
Activity: 924
Merit: 1000
i don't even know why i'm interacting with someone that can't even read c++
I don't even do C++ and it seems rather obvious that I understand more of it than someone claiming that he knows it (you). That is just sad.

Eh? For real? Then how can one understand something when one has admitted to not knowing it?

A bit like me saying:-

I don't know how to make a nuclear bomb, but I understand more than the nuclear scientists. Huh Huh
legendary
Activity: 4214
Merit: 4458
that's maths cludge OUTSIDE of network consensus rules..
but from a network consensus rule it's what you think
No. The 4x sigops counting for legacy transactions is enforced by SW rules.

pools can ignore the 4x sigop count just like they ignored the priority fee formulae, by not following all the wasteful cludgy maths stuff outside of consensus
which is where your hopes and expectations lie..

that's why having <4 maxtxsigops in the consensus.h header file would solve the issue so easily
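in code terms the suggestion is just a consensus-level per-transaction cap next to the block-level one.. a sketch (MAX_TX_SIGOPS_COST and its value are hypothetical; core's real consensus.h only defines the block-wide MAX_BLOCK_SIGOPS_COST):

Code:
#include <cstdint>

// Sketch of the suggestion: a per-transaction sigop cap enforced at the
// consensus level. MAX_TX_SIGOPS_COST is hypothetical; Core's actual
// consensus.h only defines the block-wide MAX_BLOCK_SIGOPS_COST.
static const int64_t MAX_BLOCK_SIGOPS_COST = 80000;  // exists in Core today
static const int64_t MAX_TX_SIGOPS_COST    = 4000;   // hypothetical per-tx cap

bool CheckTxSigOps(int64_t txSigOpsCost) {
    // every node would reject a tx over the cap, regardless of pool policy
    return txSigOpsCost <= MAX_TX_SIGOPS_COST;
}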


maybe best you spend more time managing sig-spammers and taking a cut.
because you have made it clear you won't take time to learn c++, and prefer just to spam topics with empty word baiting for an income
legendary
Activity: 2674
Merit: 2965
Terminated.
that's maths cludge OUTSIDE of network consensus rules..
but from a network consensus rule it's what you think
No. The 4x sigops counting for legacy transactions is enforced by SW rules.