An important thing to keep in mind when designing large, robust systems is that you must always assume the worst-case scenario can and will happen.
Firstly, there is the problem of quadratic sighashing: the time to validate a transaction grows with the square of its size. Increasing the block size to 4 MB means we would be allowing a theoretical 4 MB transaction which, due to its size, can take a long time to validate. A block was mined a while back that took ~30 seconds to validate because it was just one gigantic 1 MB transaction. Since sighashing is quadratic, a similar 4 MB transaction in a 4 MB block is 4 times the size and would therefore take roughly 16 times as long, about 480 seconds, to validate.
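A quick back-of-envelope sketch of that scaling, anchored to the ~30-second figure above (the baseline is the quoted observation, not a measured benchmark):

```python
# Back-of-envelope estimate of quadratic sighash validation time.
# Assumes validation time scales with tx_size^2, anchored to the
# ~30 s observed for the single 1 MB transaction mentioned above.

BASELINE_SIZE_MB = 1.0   # size of the known slow transaction
BASELINE_TIME_S = 30.0   # observed validation time for it

def est_validation_time(tx_size_mb: float) -> float:
    """Estimated validation time, assuming purely quadratic scaling."""
    scale = (tx_size_mb / BASELINE_SIZE_MB) ** 2
    return BASELINE_TIME_S * scale

print(est_validation_time(4.0))  # -> 480.0 seconds for a 4 MB transaction
```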
Secondly, increasing the block size in general increases the burden on full nodes in terms of bandwidth and disk space. The blockchain is already fairly large and growing quickly, gaining roughly 1 GB every week. In the worst case, 4 MB blocks would mean the blockchain grows at about 4 GB per week. That growth is hard to sustain: full nodes need to download that much data every week and upload it to multiple peers, which consumes a lot of bandwidth, and people will likely stop running full nodes due to the extra cost. Furthermore, it will become more and more difficult to bring new nodes online, since the initial sync consumes so much bandwidth and disk space, so it is unlikely that people will start up new full nodes. Overall, this extra cost is a centralizing pressure that will result in fewer full nodes and an increased burden on those currently running them. And larger blocks don't just affect bandwidth and disk space; they also require more processing power and memory to fully process, which raises the minimum machine requirements as well.
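The weekly growth numbers check out if you assume every block is full and blocks arrive on the usual 10-minute average:

```python
# Worst-case chain growth at a given block size, assuming every
# block is full and the average block interval is 10 minutes.

BLOCKS_PER_WEEK = 6 * 24 * 7  # 1008 blocks at one per 10 minutes

def weekly_growth_gb(block_size_mb: float) -> float:
    """Approximate weekly chain growth in GB for full blocks."""
    return block_size_mb * BLOCKS_PER_WEEK / 1000.0

print(weekly_growth_gb(1.0))  # ~1 GB/week, matching current growth
print(weekly_growth_gb(4.0))  # ~4 GB/week with full 4 MB blocks
```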
There was a paper published a year or so ago that analyzed the network's ability to support blocks of various sizes. I think it concluded that, based on network bandwidth alone, the network might have been able to support 4 MB blocks while still keeping most of the full nodes. However, it did not consider machine specs and requirements for larger blocks, so if you factor those in, the maximum handleable block size is likely smaller.
Lastly, such a change would require a hard fork. Hard forks are hard to coordinate, since everyone has to get on board and upgrade at the same time. By this point, two block size increase hard forks have been attempted, and both have failed. With all of the politics, contention, and toxicity going on right now, I highly doubt that we could reach the consensus required to activate such a hard fork. Additionally, planning, implementing, and testing a safe fork (hard or soft) takes a long time, so such a fork would not be ready for months, if not a year or more.
Mostly correct, but these problems are solvable.
Flextrans (bundled in Bitcoin Classic) offers an already-coded solution to quadratic sighashing. We could also limit the maximum transaction size or the number of sigops per transaction.
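As a sketch of what such a limit might look like (purely illustrative; the constants and the Tx type here are assumptions, not actual Bitcoin Classic or Core rules):

```python
# Hypothetical consensus-style check capping transaction size and
# sigop count, so no single transaction can trigger pathological
# quadratic sighashing regardless of the block size limit.
# The limits below are illustrative assumptions only.

from dataclasses import dataclass

MAX_TX_SIZE_BYTES = 1_000_000   # assumed cap: 1 MB per transaction
MAX_TX_SIGOPS = 4_000           # assumed cap on signature operations

@dataclass
class Tx:
    size_bytes: int
    sigops: int

def passes_limits(tx: Tx) -> bool:
    """True if the transaction stays under both illustrative caps."""
    return tx.size_bytes <= MAX_TX_SIZE_BYTES and tx.sigops <= MAX_TX_SIGOPS
```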
Bandwidth and disk space requirements would increase, naturally, but Gavin did testing on 8 MB blocks, and if you think about it, even consistently full 32 MB blocks only represent about 1.68 TB of storage per year.
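Checking that figure with the same full-blocks, 10-minute-interval assumption as above:

```python
# Verifying the 1.68 TB/year claim: full 32 MB blocks at one block
# per 10 minutes, for 365 days.

blocks_per_year = 6 * 24 * 365           # 52,560 blocks per year
storage_tb = 32 * blocks_per_year / 1e6  # MB -> TB (decimal units)
print(storage_tb)                        # ~1.68 TB/year
```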
Hard forks require coordination, but many altcoins have successfully hard forked without any major issues that I'm aware of, and I don't think it would take a year. Even if it were done very abruptly, miners kicked off the main chain unexpectedly could simply rejoin.
OK... I see the quadratic issue seems solvable, either by part of SegWit or by Flextrans, so this at least seems to be a reasonable argument that this part of the solution should be acceptable to all parties.
As to the size issue, well, I feel 4 MB is not that large. I mean, even I could download that in time from where I am... I guess this argument is one of degree... and of the mechanisms used to distribute the data...