Another factor is that it's generally dangerous to set bars for participation based on any fraction of current participants.
Imagine we all decide that 90% of nodes can handle capacity X. So we run at X, and the weakest 10% drop out. Then we look again, apply the same logic (after all, it was a good enough reason before), and move to Y, knocking out another 10%... and so on. The end result of that particular process is a loss of decentralization.
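To make the compounding concrete, here's a toy calculation (the node count and the exact drop fraction are invented for illustration, not claims about the real network): if each capacity bump prices out the weakest 10% of whoever is left, the network shrinks geometrically even though every individual step looked "safe".

```python
# Toy illustration: repeatedly raise capacity so that the weakest 10%
# of *remaining* nodes drop out each round. Numbers are made up.
nodes = 6000            # hypothetical starting node count
drop_fraction = 0.10    # fraction knocked out per capacity bump

for round_num in range(1, 11):
    nodes *= (1 - drop_fraction)
    print(f"after bump {round_num:2d}: ~{nodes:,.0f} nodes remain")

# After 10 bumps only 0.9**10 ≈ 35% of the original nodes are left;
# after 22 bumps it's under 10%. Each step looked harmless in isolation.
```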
Some months back someone was crowing about the mean bandwidth of listening nodes having gone up. But if you broke it down into nodes on big VPS providers (Amazon and DigitalOcean) versus everyone else, what you found was that neither group's bandwidth changed; the share of nodes on centralized 'cloud' providers had simply gone way up (probably for a dozen different reasons: loss of UPnP, increased resource usage, more spy nodes, which tend to be on VPSes...).
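A quick sketch of how that kind of mix shift moves the mean without either group changing (all the bandwidth figures and shares here are invented for illustration, not measurements of the actual node population):

```python
# Illustrative only: neither group's bandwidth changes, but the mean rises
# purely because the share of datacenter-hosted nodes grows.
def mean_bandwidth(cloud_share, cloud_mbit=250.0, home_mbit=20.0):
    """Population mean when two fixed-bandwidth groups are mixed."""
    return cloud_share * cloud_mbit + (1 - cloud_share) * home_mbit

before = mean_bandwidth(cloud_share=0.20)   # say 20% of listeners on VPSes
after  = mean_bandwidth(cloud_share=0.50)   # say 50% on VPSes later on

print(f"mean before: {before:.0f} Mbit/s, mean after: {after:.0f} Mbit/s")
# mean before: 66 Mbit/s, mean after: 135 Mbit/s -- a "healthier" looking
# average, even though home nodes and cloud nodes individually didn't change.
```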
Then we have the fact that technology improvements are not necessarily being applied where we need them most: a lot of effort goes into making hardware more portable, lower power, and less costly rather than faster or higher bandwidth. Similarly, a lot of network capacity growth happens in dense, easily covered city areas rather than for everyone. In a major US city you can often get bidirectional gigabit internet at personally affordable prices, yet 30 miles out you can spend even more money and get ADSL that barely does 750 kbit/sec up.

The most common broadband provider in the US usually has plenty of speed, but it has monthly usage caps that a listening node can consume most of. Bitcoin's bandwidth usage doesn't sound like much, but once you add in overheads and new peers syncing, and multiply that usage out 24/7, it adds up to more bandwidth than people typically use... and once Bitcoin is using most of a user's resources, the costs of using it become a real consideration for some people. This isn't good for the goal of decentralization.
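To give a feel for how "doesn't sound like much" compounds over a month, here is a rough back-of-envelope; the block size, peer count, sync rate, and chain size are assumptions chosen for illustration, not measurements of any particular node:

```python
# Rough back-of-envelope for a listening node's monthly upload.
# All inputs are illustrative assumptions, not measured values.
block_mb          = 1.0    # assumed average block size in MB
blocks_per_day    = 144
relay_peers       = 40     # peers the node relays blocks and txns to
relay_overhead    = 1.5    # inv/tx relay, headers, addr messages, retransmits
syncing_peers_day = 1      # fresh peers doing an initial sync from us per day
chain_gb          = 50     # assumed chain size served to each syncing peer

relay_gb_month = block_mb * blocks_per_day * relay_peers * relay_overhead * 30 / 1000
sync_gb_month  = syncing_peers_day * chain_gb * 30

print(f"relay upload: ~{relay_gb_month:.0f} GB/month")
print(f"sync upload:  ~{sync_gb_month:.0f} GB/month")
print(f"total:        ~{relay_gb_month + sync_gb_month:.0f} GB/month")
# With these made-up but plausible-looking inputs the total lands in the
# high hundreds of GB to a couple of TB per month -- most or all of a
# capped residential plan, before the user does anything else online.
```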