Changing a few lines of code to remove a temporary limit (1MB) is a tiny change to return to the original bitcoin.
I don't think you have read any source code for any of the proposed hard forks to increase the block size limit. It most certainly is not just "a few lines of code". This is Gavin's original implementation PR that he submitted to Bitcoin Core:
https://github.com/bitcoin/bitcoin/pull/6341. It is most certainly more than just a few lines of code. Why? Because it must include proper deployment of the hard fork and unit tests (tests are always necessary regardless of the change). Furthermore, IIRC that PR did not include anything about fixing the O(n^2) signature validation problem, which needs to be fixed separately with another set of code changes. Lastly, to help reduce bandwidth so that people can actually still run full nodes, you need something like XThin or Compact Blocks, which is yet another large code change. Suffice it to say, it is most certainly not just "a few lines of code" or a "tiny change".
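For a sense of why the O(n^2) problem matters: under the legacy signature hashing scheme, every input's sighash covers a serialization of roughly the whole transaction, so validating n inputs touches on the order of n times the transaction size in bytes. Here's a rough back-of-the-envelope sketch in Python; the byte counts are illustrative assumptions, not exact serializations.

```python
# Rough illustration of legacy O(n^2) sighash cost vs a segwit-style O(n) scheme.
# All byte counts here are illustrative assumptions, not exact serializations.

INPUT_SIZE  = 150   # ~bytes per input in the serialized transaction
OUTPUT_SIZE = 34    # ~bytes per output

def legacy_sighash_bytes(n_inputs, n_outputs=2):
    """Legacy scheme: each input's sighash re-hashes (a modified copy of)
    the whole transaction, so total work grows quadratically."""
    tx_size = n_inputs * INPUT_SIZE + n_outputs * OUTPUT_SIZE
    return n_inputs * tx_size                             # O(n^2)

def segwit_sighash_bytes(n_inputs, n_outputs=2):
    """BIP143-style scheme: the prevouts/sequences/outputs digests are hashed
    once and reused, and each input hashes a roughly fixed-size preimage."""
    shared    = n_inputs * 40 + n_outputs * OUTPUT_SIZE   # hashed once
    per_input = n_inputs * 200                            # ~fixed preimage each
    return shared + per_input                             # O(n)

for n in (10, 100, 1000):
    print(n, legacy_sighash_bytes(n), segwit_sighash_bytes(n))
```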
How in the world did anyone ever believe SegWit was a good thing?
Because it is a good thing and it fixes a ton of issues. Segwit fixes malleability issues, which have been a problem in the past when people have maliciously attacked Bitcoin transactions by malleating them. It also fixes the O(n^2) signature validation issue and makes it O(n), which is much, much better. It introduces script versioning and allows for further improvements to the scripting system. And of course, it can also help with increasing the number of transactions that will fit into a block.
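To make the malleability fix concrete: segwit moves signatures into a separate witness section that is not hashed into the txid, so re-encoding a signature can no longer change a transaction's id. A toy sketch; the serialization is deliberately simplified and not Bitcoin's real format, only the double-SHA-256 and the "witness excluded from the txid" idea are the actual mechanism.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256 (this part is real)."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Toy transaction split into the part committed by the txid and the witness.
tx_core = b"version|inputs|outputs|locktime"  # committed to by the txid
witness = b"signature-encoding-A"             # segwit keeps this outside the txid

legacy_txid = dsha256(tx_core + witness)      # pre-segwit: sigs are inside the txid
segwit_txid = dsha256(tx_core)                # segwit: sigs excluded from the txid

# A third party re-encodes the signature (same key, same message, different bytes):
malleated = b"signature-encoding-B"
print(dsha256(tx_core + malleated) == legacy_txid)  # False -> txid changed
print(dsha256(tx_core)             == segwit_txid)  # True  -> txid unchanged
```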
Segwit uses different keypairs, where other older implementations cannot validate signatures of segwit keypairs. Segwit keypairs cannot directly move funds back to traditional keypairs without having to spend funds twice to get it back to a traditional configuration that can be validated by traditional implementations.
Segwit does not use different keypairs. It still uses the exact same Elliptic Curve Cryptography with the secp256k1 curve. Segwit uses different scripts, which are completely separate from keypairs.
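To illustrate "different scripts, not different keypairs": a legacy P2PKH output and a native segwit P2WPKH output can both commit to the HASH160 of the very same secp256k1 public key; only the script template around that hash differs. Sketch below with a placeholder 20-byte key hash; the opcode byte values are the standard ones from Bitcoin script.

```python
# Standard opcode byte values from Bitcoin script.
OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG, OP_0 = 0x76, 0xa9, 0x88, 0xac, 0x00

key_hash = bytes(20)  # placeholder for HASH160(an ordinary secp256k1 pubkey)

# Legacy P2PKH: OP_DUP OP_HASH160 <20-byte key hash> OP_EQUALVERIFY OP_CHECKSIG
p2pkh = bytes([OP_DUP, OP_HASH160, 20]) + key_hash + bytes([OP_EQUALVERIFY, OP_CHECKSIG])

# Native segwit P2WPKH: OP_0 <20-byte key hash> -- same key hash, different template
p2wpkh = bytes([OP_0, 20]) + key_hash

print(p2pkh.hex())
print(p2wpkh.hex())
```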
Meaning it's a headache to transact with others.
In what way? You can still send to traditional outputs. Segwit also supports nested (P2SH-wrapped) outputs so that people with older wallets can still send to segwit wallets and the receiver can still take advantage of segwit.
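Here is roughly what a nested output looks like: the segwit program becomes the redeem script of an ordinary P2SH output, so an old wallet sends to it like any other P2SH address. The key hash below is a placeholder, and hash160 may need OpenSSL's legacy provider for RIPEMD-160 on some systems.

```python
import hashlib

def hash160(data: bytes) -> bytes:
    """SHA-256 then RIPEMD-160, as Bitcoin uses for script/key hashes.
    (ripemd160 may be missing from some OpenSSL builds.)"""
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

key_hash = bytes(20)  # placeholder for HASH160(a secp256k1 pubkey)

# The segwit program (OP_0 <20-byte key hash>) becomes the P2SH redeem script.
redeem_script = bytes([0x00, 20]) + key_hash

# What goes on-chain is a plain P2SH output: OP_HASH160 <hash160(redeemScript)> OP_EQUAL
script_pubkey = bytes([0xa9, 20]) + hash160(redeem_script) + bytes([0x87])

print(script_pubkey.hex())  # looks like any other P2SH output to an old wallet
```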
Segwit makes traditional implementations no longer fully validating nodes, and instead just limp-wristed relay nodes.
And so did every other soft fork.
200GB is nothing. Super fucking tiny. The total size of the chain isn't at all important. You only download it once. You can buy 10 terabyte for very cheap. So, 200GB is laughably small.
The real issue is passing 8MB around to all the nodes every ten minutes. Some effects occur there. No big deal. Internet is freaking fast and getting freaking faster. Netflix bandwidth load is >>>>>>> than bitcoin with 8MB.
If we want people to be able to run a node behind a 1200 baud modem, then 8MB is problematic. If we abandon those having 1200 baud and less, then the only reason to keep 1MB is to drive need for Blockstream's bullshit solutions.
8MB blocks are very lightweight for nearly all modern systems.
What about your bandwidth? It's not storage that's the issue, it's bandwidth. You have to download the entire blockchain in order to run a full node, and then you have to upload and download all of the new blocks. If you have a bandwidth cap (e.g. you have Comcast), then you are royally screwed and can't run a full node.
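Some rough numbers for the bandwidth point, assuming 8MB blocks, one block every ~10 minutes, and a node that relays each block to a handful of peers (the peer count is my assumption, not a protocol constant):

```python
# Back-of-the-envelope monthly traffic for a full node with 8MB blocks.
block_mb       = 8
blocks_per_day = 144     # one block every ~10 minutes
days           = 30
upload_peers   = 8       # assumption: peers the node relays each block to

download_gb = block_mb * blocks_per_day * days / 1000
upload_gb   = download_gb * upload_peers

print(f"download: ~{download_gb:.0f} GB/month")  # ~35 GB just for new blocks
print(f"upload:   ~{upload_gb:.0f} GB/month")    # ~276 GB relaying them
# Against a 1 TB/month residential cap that is a big chunk, before the
# initial sync, transaction relay, or any normal household usage.
```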
For those of you who claim that Bitcoin Core is "centrally planning" Bitcoin, you should take a look at Bitcoin Classic. Thomas Zander commits directly to the development branch of Classic. He doesn't follow a pull request and code review process like Bitcoin Core does. He is centrally planning the direction of Classic by taking its development directly into his own hands, bypassing code review and putting his changes straight into the repo.