How would you implement replay protection for a soft fork? There is only a single chain...
Soft or hard, there are scenarios of staying as one chain (just orphan drama, either minor or a mega clusterf*ck of orphans before settling down to one chain) depending on the % of majority..
But in both soft and hard forks a second chain can be produced. This involves intentionally ignoring the consensus orphaning mechanism.. in layman's terms: not connecting to opposing nodes to see their different rules/chain, and then building your own chain without the protocol arguing (orphaning).
OK, but I would call that a hardfork ('ignoring consensus orphaning mechanism').
5. Because block verification time increases quadratically with block size (a known vulnerability), increasing the block size is only possible AFTER SegWit is active, and only for SegWit transactions.
False. Parallel validation routes around the quadratic hash time issue by naturally orphaning blocks that take an inordinate time to verify.
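For context on what is being routed around: under the legacy (pre-SegWit) sighash scheme, verifying each input re-hashes a serialization roughly the size of the whole transaction, so the total bytes hashed grow as O(n^2) in the number of inputs. A toy back-of-envelope in Python (illustrative assumed sizes, not actual consensus code):

```python
# Toy model of legacy (pre-SegWit) sighash cost -- illustrative only.
# Each input's signature check re-hashes ~the whole transaction, so the
# total bytes hashed grow quadratically with the number of inputs.

def legacy_sighash_bytes(num_inputs, input_size=148):
    """Rough total bytes hashed to verify all inputs of one transaction."""
    tx_size = num_inputs * input_size   # assume inputs dominate tx size
    return num_inputs * tx_size         # every input hashes ~tx_size bytes

for n in (1_000, 5_000, 25_000):
    gb = legacy_sighash_bytes(n) / 1e9
    print(f"{n:>6} inputs -> ~{gb:5.1f} GB hashed")
```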
I did not look into it, but from what I hear it sounds more like a resource-consuming band-aid. Why not a proper fix with fewer CPU cycles?
It is not so much a resource-consuming band-aid as it is harnessing the natural incentive of greed on the part of the miners (you know, the same force that makes Bitcoin work at all) to render the issue a non-problem.
Seems like it gives an incentive to mine small blocks? One would have to check the implications of this change really thoroughly...
Yes, it takes more memory to validate multiple blocks on different threads at the same time than a single block on a single thread. But this not only creates an incentive not to make blocks that take long to validate due to the O(n^2) hashing issue; it also provides natural backpressure on excessively long-to-validate blocks for any reason whatsoever, perhaps merely blocks containing huge numbers of simple transactions. And the resource requirements only increase linearly with the number of blocks a single node is validating concurrently. A rough sketch of the mechanism follows below.
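A minimal sketch of the idea, assuming a check_block() function that performs the expensive script/sighash validation (the names here are mine, not BU's actual code):

```python
# Minimal sketch of parallel validation: competing blocks at the same
# height are validated concurrently, the first fully valid block wins,
# and slower-validating rivals are simply orphaned.
import threading

def validate_in_parallel(blocks, check_block):
    """Return the first of `blocks` that validates successfully.
    Assumes at least one candidate block is valid."""
    lock = threading.Lock()
    done = threading.Event()
    winner = []

    def worker(block):
        if check_block(block):       # the expensive, O(n^2)-prone part
            with lock:
                if not winner:       # first valid block takes the race
                    winner.append(block)
                    done.set()

    for block in blocks:
        threading.Thread(target=worker, args=(block,), daemon=True).start()

    done.wait()       # resume mining the moment any candidate checks out;
    return winner[0]  # a still-grinding poison block just loses the race
```

The point being: a block that takes an inordinate time to validate never wins this race, so producing one costs its miner the block reward.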
But validation time grows quadratically with block size, meaning that at 16MB blocks or so a 30% miner might still be able to stall all nodes more or less permanently.
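Back-of-envelope, assuming validation time really scales with the square of block size, and taking the one historical ~1MB worst-case block (reportedly around 25 seconds to verify) as the baseline:

```python
# Quadratic extrapolation from an assumed 25 s baseline for a worst-case
# 1 MB block (the figure usually cited for the 2015 incident).
base_mb, base_s = 1, 25
for mb in (2, 4, 8, 16):
    t = base_s * (mb / base_mb) ** 2
    print(f"{mb:>2} MB worst case -> ~{t / 60:4.0f} min to verify")
```

At 16MB that is roughly 107 minutes per poison block, while a 30% miner finds a block about every 33 minutes on average; hence "permanently".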
More importantly, as miners who create blocks exhibiting this quadratic hash time issue have their blocks orphaned, they will be bankrupted. Accordingly, the creation of these blocks will be disincentivized to the point where they just plain won't be built.
For an attacker, disrupting the network for a while might pay off via puts, rising altcoins, or simply by hurting Bitcoin.
Further, parallel validation is the logical approach to the problem. When you receive a block while still validating another, you have to consider that the first block under validation may be fraudulent. The sooner you find a valid block, the sooner you can get mining on the next one. Parallel validation lets you find the valid block without waiting for the fraudulent block to be detected as fraudulent. Not to mention the stunning fact that other miners currently do not mine at all while validating a block that may be fraudulent.
See above; this might give an unfair advantage to small blocks.
Last, in the entire 465,185-block history of Bitcoin, there has been (to my knowledge) exactly one such aberrant block ever added to the chain. And parallel validation was not available at the time. But the network did not crash. It paused briefly, then carried on as if nothing untoward had happened. The point is that, while such blocks are a nuisance, they are not a systemic problem even without parallel validation. And parallel validation routes around this one-in-half-a-million (+/-) event.
This is because blocks were and are small.
By all means, the O(n^2) hash time is suboptimal. We should replace it with a better algorithm at some date; a sketch of what that looks like is below. But to focus on it as if it were even relevant to the current debate is ludicrous. It would be ludicrous even without the availability of parallel validation. The fact that BU implements parallel validation makes putting this consideration at the center of the debate ludicrous^2.
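For concreteness, the "better algorithm" is straightforward: hash the transaction-wide data once and reuse it, so each input only adds a constant-size preimage. This is essentially what BIP143 does for SegWit transactions; a heavily simplified sketch (field ordering and other details omitted):

```python
# Linear-time sighash sketch in the style of BIP143 (heavily simplified).
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def linear_sighashes(prevouts, sequences, outputs, per_input_data):
    # Transaction-wide digests are computed exactly once: O(n) total...
    shared = (sha256d(b"".join(prevouts))
              + sha256d(b"".join(sequences))
              + sha256d(b"".join(outputs)))
    # ...then each input hashes only a constant-size preimage: O(1) each.
    return [sha256d(shared + item) for item in per_input_data]
```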
The superior solution is on the table, well tested and ready to be deployed. Parallel validation still requires additional limitations for larger blocks, as suggested by franky1. Also let me remind you of the resource discussion further up. Of course it is relevant to this debate. Why do you oppose the technically sound and sustainable solution, particularly as it also happens to bring other important benefits?