First of all, because of the complexity of SegWit, agreement will be difficult to reach. In principle I do actually like SegWit and would like to see it implemented.
-snip-
I don't see the problem here. Opting to implement it via a SF or HF is just a preference, and the Core developers do not like HFs (for various reasons). I think that SegWit is receiving adequate testing. It is not as complex as you think.
Whether you agree with it or not, the big blockists consider the current situation to be hurting adoption, and dire if there is a sudden spike in transaction volume. Keep in mind that some small blockists want blocks to be full in general, while the big blockists consider this undesirable. Try to appreciate the other side's perspective; that way it is more likely that we will be able to reach a good compromise.
But you can't push a HF like Classic proposes; those rules make it controversial. Even Garzik suggests a 3 to 6 month minimum grace period. Even if everyone were on board with it by April, that would mean it would get deployed sometime in Q3. SegWit would have already given us more breathing room by then.
SegWit also just does not achieve enough of a throughput increase to be a satisfactory compromise. There is also much disagreement over how much of a throughput benefit SegWit will actually give, since it relies on the speed of adoption of the new transaction types. This is just another example of why a block size increase to two megabytes is a much simpler solution that everyone can agree with and be happy with for now.
But it does. The main difference is that SegWit does not provide a capacity increase instantly; it grows over time. If the users/services really wanted it, they would adopt it as soon as possible.
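To make the "grows over time" point concrete, here is a back-of-the-envelope sketch of how SegWit's effective capacity scales with adoption, based on the BIP 141 weight rule (block weight = non-witness bytes × 4 + witness bytes × 1, capped at 4,000,000 weight units). The witness-byte fraction used below is an assumption for illustration, not a measured figure.

```python
def effective_multiplier(w: float, a: float) -> float:
    """Rough capacity multiplier relative to the legacy 1 MB limit.

    Assumptions (hypothetical, for illustration):
      w -- fraction of a SegWit transaction's bytes that are witness data
      a -- fraction of transactions that have adopted SegWit

    A legacy transaction of size S bytes weighs 4*S units; a SegWit
    transaction of the same size weighs S*(4 - 3*w).  Averaging the
    per-transaction weight over the adoption fraction a and dividing
    it into the legacy per-transaction weight gives:
        4 / (4 - 3*a*w)
    """
    return 4.0 / (4.0 - 3.0 * a * w)

# With zero adoption, capacity stays at the legacy 1 MB equivalent.
print(effective_multiplier(0.6, 0.0))  # 1.0
# With full adoption and ~60% witness bytes, roughly 1.8x capacity.
print(round(effective_multiplier(0.6, 1.0), 2))  # 1.82
```

This is why estimates of SegWit's throughput benefit vary: the multiplier depends directly on how quickly wallets and services adopt the new transaction types.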
However, for now we can at least both agree that two megabytes is possible; it is something that both sides can accept, which means we can continue without a split. This is important. I do not think it is worth endangering this possible peace over the small blockist camp's preference for deploying SegWit first.
See, this is where the understanding is wrong. You think a 2 MB block size limit would be a compromise, but it is not. Anything higher is currently deemed unsafe, and even 2 MB blocks would be if it were not for that workaround by Gavin (for Classic).
Specifically cost, and I would argue censorship resistance and decentralization as well. However, I know you would disagree on these last two points, which we can leave aside given this narrative of both visions existing together.
With cost I can agree (albeit many tend to be hyperbolic when it comes to the actual numbers), but I disagree with the other two.
I think adoption is important for increasing the node count, which relates back to the block size. I realize this is a push-and-pull relationship, but this is how I perceive it and why I think increasing the block size is what is best for decentralization.
Again, let's leave it at 'we don't have the data to prove this'.
May I dare say it: we are actually having a more civil discussion here.
It does seem like it.