The thing to bear in mind is that Core has an exemplary record for testing, bugfixing, and just generally having an incredibly stable and reliable codebase. So while people may run SegWit2x code in the interim to make sure it's activated, I envision many of them would switch back to Core the moment Core releases compatible code. As such, any loss in Core's dominance would probably only be temporary.
In short, I agree there's probably enough support to activate a 2MB fork, but I disagree that Core will lose any significant market share over the long term, even if the 2MB fork creates the longest chain and earns the Bitcoin mantle.
Nokia was also good at testing and reliability; where are they now?
And Core's code is shit. Anyone experienced in writing kernels, drivers, or ultra-low-latency communication, financial, military, or security systems would instantly notice:
1. The general lack of regard for L0/L1/TLB/L2/L3/DRAM latency and data locality.
2. Lack of cache line padding and alignment.
3. Lack of inline assembly in critical loops.
4. Lack of CPU and platform specific speed ups.
5. Inefficient data structures and data flow.
6. Not replacing simple if/else with branchless operations.
7. Not using __builtin_expect() to make branch predictions more accurate.
8. Not breaking bigger loops into smaller ones to make use of the L0 cache (loop tiling).
9. Not coding in a way that deliberately helps the CPU prefetcher hide memory latency.
10. Unnecessary memory copying.
11. Unnecessary pointer chasing.
12. Using pointers instead of registers in performance sensitive areas.
13. Inefficient data storage (LevelDB? Come on, the best LevelDB devs moved on to RocksDB years ago).
14. Lack of simplicity.
15. Lack of clear separation of concerns.
16. The general pile-togetherness commonly seen in projects involving too many people of different skill levels.
The bottleneck of performance today is memory: a CPU register is 150-400 times faster than main memory, and up to ten times that again if you use the newest CPUs and code in a way that uses all the execution units in parallel and makes use of SIMD (out-of-order execution window size: 168 entries in Sandy Bridge, 192 in Haswell, 224 in Skylake).
One simple cache miss and you waste the time of 30-400 CPU instructions. Even moving one byte from one core to another takes about 40 nanoseconds; that's 160 cycles on a 4GHz CPU.
You take one look at Core's code and you know instantly that most of the people who wrote it know only software, not hardware. They know how to write the logic, and how to allocate and release memory, but they don't understand the hardware their code runs on: how data moves from one place to another inside the CPU at the nanometer level. If you don't have instinctive knowledge of hardware, you'll never be able to write great code. Good, maybe, but not great.
Since inception, Core has been written by amateurs or semi-professionals and picked up by other amateurs or semi-professionals. It works, and there are small nuggets of good code here and there, contributed by people who knew what they were doing, but overall the code is nowhere near good. Not even close; really just a bunch of slow crap code written by people of different skill levels.
There are plenty of gurus out there who could make Core's code run two to four times faster without even trying. But most of them won't bother; if they're going to work for the bankers, they'd expect to get paid handsomely for it.
So even a Core fanboy has to agree that Core must fall in line to stay relevant.
A fanboy can fantasize about everyone flocking back to Core after they lose the first-to-market advantage.
But the key is that even if Core decides to fall in line to stay relevant, they can no longer play god like before.
So what's your point?
Not to dispute what you wrote, but memory is faster than an HDD, and using memory to store temporary data is faster than using an HDD.