The question is absolutely not "can more than 2MB be uploaded?" The question is not "is it possible to run a node?" The question is: at what point do bandwidth limitations disincentivize the operation of full nodes?
The answer to that is 'at any block size whatsoever, including as little as one transaction per 10 minutes'. There is absolutely no realizable direct positive incentive for anyone other than a miner, merchant, or exchange to operate a node. None. There is no remuneration for doing so. None. Nada. Bupkis.
Incentives to run a node -- there's an elusive concept. Yes, there is no tangible economic benefit, as I've stated myself, hence the importance of curbing the primary disincentive to running a node -- bandwidth requirements.
You seem to be suggesting that miners, merchants, and exchanges are the only entities (or people...) running nodes. Do you have any proof of that? Are you suggesting that the number of miners, merchants, and exchanges was cut in half over time and has been downtrending for years? Because that describes the number of operating nodes.
More importantly, what kind of p2p network is this where miners, merchants, and exchanges are the only entities we expect to run nodes? Did I misread the whitepaper, or are we talking about two different coins? SPV has its place, but I don't agree that virtually the entire userbase of bitcoin should trust "miners, merchants, and exchanges" -- no matter how centralized they become -- to validate their transactions. In a sentence, that defeats the purpose of bitcoin.
That canard has no place in a 'core vs classic' comparison.
It does, as one team has just released node software to increase the block size limit. Block size directly impacts the bandwidth needs of nodes -- the only real disincentive to run one. Would you agree that the entire economy depending on a single node for validation endangers security and fungibility? How about 100 nodes? 1000? Where is the limit, and what is your evidence that typical users are safe from, say, Sybil attacks?
Some cap (which conceivably will not be 1MB forever) that guarantees that "some" home nodes can survive keeps the network secure. On a global basis, even the 1MB cap already rules out typical home-grade connections -- 3G/4G, dialup, satellite, DSL. Capped cable has you choosing between running a node and using your internet connection as you'd otherwise intend. Bitcoin shouldn't be limited to the minority with uncapped cable and fiber connections. And we have some reason to believe that many nodes are already running on VPSes, so the numbers are already inflated. If we continue with this rhetoric that a block size limit is unnecessary, what guarantee is there that anyone will run home nodes? If none, what will protect users from regional feather forks? Sybil attacks? Are we just supposed to trust the Coinbases of the world to take care of us? When did that become acceptable for bitcoin -- a p2p network?
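To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The relay amplification factor and the data-cap comparison are illustrative assumptions, not measurements; real usage depends heavily on peer count and relay policy.

```python
# Back-of-the-envelope sketch (not a measurement): monthly data usage of a
# full node as a function of block size. relay_factor approximates how many
# times a node re-uploads each block to its peers; it is an assumed figure.

def monthly_node_traffic_gb(block_size_mb, relay_factor=5):
    """Rough monthly traffic in GB for downloading and relaying blocks."""
    blocks_per_month = 6 * 24 * 30          # one block per ~10 minutes
    download = block_size_mb * blocks_per_month
    upload = download * relay_factor
    return (download + upload) / 1024       # MB -> GB (approx.)

for size in (1, 2, 8):
    print(f"{size} MB blocks: ~{monthly_node_traffic_gb(size):.0f} GB/month")

# Against, say, a 250 GB/month capped cable or mobile plan (assumed figure),
# even 1 MB blocks consume a noticeable share of the allowance, and larger
# blocks scale that share up proportionally.
```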
Fact is, any fully validating core node with the SegWit Omnibus Changeset implemented will still need the signatures in order to validate. The fact that they have been repartitioned into a separate chain means nothing to such a node -- it still needs the signatures in order to validate. Accordingly, the SegWit Omnibus Changeset is as big a disincentive to operating a node as Classic is.
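To illustrate the data volumes involved, here is a hedged sketch using the BIP 141 weight rule (weight = 3 * base_size + total_size, capped at 4,000,000). The witness-share figures are assumptions chosen for illustration, not observed averages.

```python
# Hedged sketch: how much data a fully validating segwit node still downloads,
# since witness data must be fetched to validate. Witness fractions below are
# assumed, illustrative values.

MAX_WEIGHT = 4_000_000

def max_total_bytes(witness_fraction):
    """Largest total block size (base + witness) that fits the weight limit,
    for a given fraction of the block occupied by witness data.
    Derivation: base = (1 - w) * total, so weight = (3*(1 - w) + 1) * total."""
    return MAX_WEIGHT / (3 * (1 - witness_fraction) + 1)

for frac in (0.0, 0.5, 0.6):
    mb = max_total_bytes(frac) / 1_000_000
    print(f"witness share {frac:.0%}: node fetches up to ~{mb:.2f} MB per block")
```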
In a nutshell, and IMO, there are two ways to approach the increased cost (upload bandwidth) of increased throughput, all else equal. We can externalize the cost to all nodes -- but then we can expect a drop in node count. Alternatively, we can distribute the cost to those who are using it (and who can pay for it).
How, specifically, is this as big of a disincentive to operating a node as Classic? You've also stated before that this is a trust issue. Can you provide an estimate of the cost for an attacker to exploit the signature chain, if this is a real threat? If you are suggesting that segwit is a security threat, could you provide examples?
But it's really kind of irrelevant -- as has been pointed out, an insignificant proportion of current nodes will stop node-ing simply because the disk space required goes up from $3.09 worth to $6.18 worth, or because they need to go from 10 MB of upload every 10 minutes to 20 MB every 10 minutes.
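For reference, a sketch of roughly where figures like those come from; the per-GB disk price is an assumed value for illustration, not a quote.

```python
# Sketch of the disk-space arithmetic (assumed price, not a quote):
# yearly chain growth at a given block size, priced per GB.

def yearly_chain_growth_gb(block_size_mb):
    blocks_per_year = 6 * 24 * 365           # one block per ~10 minutes
    return block_size_mb * blocks_per_year / 1024

PRICE_PER_GB = 0.03                          # assumed bulk HDD price, USD/GB

for size in (1, 2):
    gb = yearly_chain_growth_gb(size)
    print(f"{size} MB blocks: ~{gb:.0f} GB/year of growth, "
          f"~${gb * PRICE_PER_GB:.2f}/year of disk at ${PRICE_PER_GB}/GB")
```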
I don't think anyone here is really arguing about disk space. But could you provide some evidence that "an insignificant proportion of current nodes will stop node-ing" if block size doubles? No data I've seen appears to support that, but maybe you could provide some.
Your numbers also assume a very low maxconnections -- the absolute minimum, verging on harming the network more than helping it. If such "leechers" are taking up connections that other nodes with higher maxconnections could put to better use, they are better off being shut down.
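A rough sketch of why maxconnections matters to that upload figure; the peer counts are illustrative assumptions, and it ignores transaction relay and the possibility that peers already have most of a block's contents.

```python
# Sketch: per-block upload scales with how many peers a node actually serves,
# so a "10 MB per 10 minutes" figure assumes a node relaying to almost no one.

def upload_per_block_mb(block_size_mb, peers_served):
    """Rough upload per block if the node relays the full block to
    peers_served peers (assumed counts; no relay optimizations modeled)."""
    return block_size_mb * peers_served

for peers in (1, 8, 25):
    print(f"2 MB blocks, relaying to {peers} peers: "
          f"~{upload_per_block_mb(2, peers)} MB of upload per ~10 minutes")
```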
But it is still really kind of irrelevant -- the dirty little secret is that independent nodes provide essentially zero marginal utility. Sure, every node validates transactions. Guess what -- so does every intelligent miner. They would not risk building a block that includes a transaction that would be rejected by the rest of the network.
Intelligent miner =/= honest miner. Non-mining nodes reflect the interests of non-mining users, serving as a check on the power of miners. Non-mining nodes, by not trusting mining nodes and enforcing the protocol themselves, are integral to the integrity of the p2p system. Without them, miners have simply replaced central banks -- they may be competing against each other, but they can also collude against the userbase. And if nodes are sufficiently centralized, miners are not the only concern; it becomes much easier to coordinate Sybil attacks.
Your argument places trust in the "intelligence" of miners. Of course an intelligent miner, or a group of colluding miners, would risk publishing potentially invalid blocks, if it were profitable to do so. Miners have carried out attacks before; the only thing they care about is profit. Non-mining nodes are essential to keeping miners honest.
Summary:
1) 'Node Centralization' is no reason to choose between 1MB Core and 2MB other
2) Doubling block size will have negligible impact upon node count
3) In the end, nodes provide negligible marginal utility anyhow.
With IBLTs and weak blocks, and after segwit, I'd be more confident in #1. For now, the trend in node health gives cause to be wary of putting any unnecessary pressure on bandwidth. In reality, the biggest reason to choose between 1MB Core and 2MB Classic (the only choice available based on released software) is that a hard fork implemented without miner consensus is very likely to permanently break bitcoin into multiple ledgers. I don't think you've provided a modicum of evidence for #2. Regarding #3, the utility of, and incentive for, running a node are elusive but not non-existent. Non-mining nodes are essential to maintaining the integrity and security of the protocol, and many on this forum, including myself and David Rabahy above me, can attest to running nodes because we want the system to succeed and/or are invested in its success. But the more you squeeze node operators, the fewer of us there will be.