For those that didn’t read the paper (including those spreading misinformation about its contents), here are some important points to note (with relevant quotes from the paper accompanying them):
1) The paper does not suggest that a 4MB block size is optimal. Rather, it explicitly states that the block size should not exceed 4MB; that is to say, anything above 4MB is unsafe. We already know this based on previous testing, which is why Segwit's capacity is limited to 4MB at a 1MB base block size. Greg Maxwell and Mark Friedenbach have spoken about this specifically many times.
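For context on where that 4MB figure comes from: Segwit measures blocks in "weight" (three times the base size plus the total size, capped at 4,000,000 weight units), so a 1MB base block already implies a worst case approaching 4MB of data. A minimal sketch of that arithmetic, with made-up byte counts:

```python
MAX_BLOCK_WEIGHT = 4_000_000  # Segwit consensus limit, in BIP 141 weight units

def block_weight(base_bytes, witness_bytes):
    """BIP 141: weight = 3 * base_size + total_size."""
    total_bytes = base_bytes + witness_bytes
    return 3 * base_bytes + total_bytes

# A block whose non-witness (base) data fills the old 1MB limit:
print(block_weight(1_000_000, 0))        # 4,000,000 -> exactly at the weight cap
# An adversarial, witness-heavy block approaches ~4MB of actual data:
print(block_weight(100_000, 3_600_000))  # 4,000,000 -> 3.7MB on the wire
```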
2) The results are based on a limited number of possible metrics, and the authors suggest they should be viewed as "upper bounds" (a best-case scenario) rather than as the real limits of the network, which could be lower. A proper engineering plan should be based on "lower bounds", never best-case scenarios, to ensure maximum inclusivity, connectivity, and robustness among peers.
3) In the same vein, the authors only consider the capacity for fully validating nodes to operate at scale. Whether operators of fully validating nodes will actually keep running them under increased bandwidth requirements is not clear, and requires a thorough game-theoretic analysis. This is a matter of human incentives, not theoretical limits: decentralization requires humans to make a conscious decision to run a node, and we cannot presume they will do so if our throughput target consumes all of the bandwidth a node operator has access to. Personally, I have a fiber-optic home connection with a 250GB monthly cap that costs roughly $100/month (as a side note, I also think expecting all full node operators to pay $100/month for internet access is excessive). Here is a breakdown of how I currently need to throttle upload bandwidth to handle 1MB blocks, and why 4MB blocks would force me off the network.
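As a rough back-of-the-envelope version of that breakdown (the peer count and relay-overhead multiplier below are assumptions for illustration, not measurements from my node):

```python
# Rough, illustrative budget only: the peer count and relay-overhead
# multiplier are assumptions, not measurements.
BLOCKS_PER_MONTH = 144 * 30        # ~6 blocks/hour * 24h * 30 days
MONTHLY_CAP_GB = 250               # the connection described above

def monthly_block_traffic_gb(block_mb, upload_peers=8, relay_overhead=2.0):
    """Download each block once, upload it to `upload_peers` peers, and apply
    a multiplier for transaction relay, INVs, and protocol overhead."""
    per_block_mb = block_mb * (1 + upload_peers) * relay_overhead
    return per_block_mb * BLOCKS_PER_MONTH / 1024

for size_mb in (1, 4):
    used = monthly_block_traffic_gb(size_mb)
    print(f"{size_mb}MB blocks: ~{used:.0f} GB/month "
          f"({used / MONTHLY_CAP_GB:.0%} of the {MONTHLY_CAP_GB} GB cap)")
```

At 1MB blocks this already eats a meaningful slice of the cap alongside everything else the connection is used for; at 4MB, under these assumptions, it exceeds the cap outright.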
And yes, that means that in the adversarial 4MB Segwit case, I should not be running a full-time node. I likely won't until further bandwidth-saving measures are implemented, or I may schedule certain parts of the month to run in blocksonly mode; either way, I keep a strict maxuploadtarget set at all times.
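For reference, translating a monthly budget into those settings looks roughly like this (the 60GB node budget is an assumed share of the cap, purely for illustration):

```python
# Illustrative only: turning a monthly data budget into Bitcoin Core's
# -maxuploadtarget setting (MiB of upload allowed per 24h). The 60 GB node
# budget is an assumed share of the 250 GB cap, not an actual allocation.
NODE_UPLOAD_BUDGET_GB = 60
DAYS_PER_MONTH = 30

max_upload_mib_per_day = NODE_UPLOAD_BUDGET_GB * 1024 // DAYS_PER_MONTH
print(f"maxuploadtarget={max_upload_mib_per_day}")  # -> maxuploadtarget=2048 in bitcoin.conf
# Adding blocksonly=1 during the lean parts of the month stops relaying
# unconfirmed transactions, which cuts bandwidth use further.
```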
As a node operator, I can tell you: we will never, ever devote all of our bandwidth resources to running a node. We will shut down our node instead. I believe this is a dangerous assumption for the authors to make.
4) The authors concede that they do not analyze metrics that affect, for example, mining distribution; missing from their analysis is the effect on mining centralization, particularly in China. It is a common misconception that the Great Firewall (GFW) restricts Chinese miners. In reality, having the majority of hash power inside the GFW puts Chinese miners at a great advantage: the p2p network outside the GFW receives blocks from Chinese miners with a delay, while the majority of hash power inside the GFW does not, so Chinese miners are better insulated from orphan risk than the rest of the world. This problem is exacerbated by larger blocks, as propagation times inevitably increase across the network. Pieter Wuille's simulator, even without such regional considerations, comes to similar conclusions: larger block sizes mean decreased profitability for smaller miners, adding centralization pressure.
So two questions that the paper does not address: 1) Are we comfortable handing even more mining power to China by increasing relay times in the context of the GFW? And 2) Are we interested in seeing smaller miners/pools remaining viable (miner decentralization), or are we okay with the network being secured by ever-larger cartels of predominantly Chinese miners?
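To make the orphan-risk argument above concrete, here is a deliberately simplified model: block arrivals are Poisson, and a miner's block can be orphaned if a competitor finds one before the first block finishes propagating. The delay figures and hash power split are assumptions, and real-world relay optimizations (compact blocks, relay networks) are ignored:

```python
import math

def orphan_probability(extra_delay_s, competing_hashrate=0.5, block_interval_s=600):
    """Chance a competing block appears while ours is still propagating.
    Assumes Poisson block arrivals; competing_hashrate is the share of hash
    power that has not yet seen our block during the delay."""
    rate = competing_hashrate / block_interval_s
    return 1 - math.exp(-rate * extra_delay_s)

# Assuming propagation delay across the GFW grows roughly with block size
# (the delay figures are illustrative, not measured):
for block_mb, delay_s in [(1, 10), (4, 40)]:
    print(f"{block_mb}MB block, {delay_s}s extra delay: "
          f"~{orphan_probability(delay_s):.1%} orphan risk")
```

Even in this toy model, the extra delay imposed by crossing the GFW translates into measurably higher orphan risk for the miners on the slow side, and the gap grows with block size.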
5) The authors conclude that the bitcoin protocol must fundamentally be redesigned for its blockchain to achieve significant scale while retaining decentralization.
6) One of the metrics the authors used to gauge relay limitations was the point at which some proportion of the network would be effectively denied service. They set this threshold at 10%, meaning that implementing a 4MB block size based on these results means consenting to 10% of the network being vulnerable to attack. If we want a stricter effective-throughput threshold (i.e., a lower proportion of vulnerable nodes), we would need to reduce the block size.
- "The block size should not exceed 4MB"
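A toy illustration of that threshold metric, with made-up per-node capacities: the "90% effective throughput" is simply the largest load that all but the slowest 10% of nodes can still process.

```python
# Toy version of the paper's "X% effective throughput" idea, with made-up
# per-node capacities (MB of block data each node can process per 10 minutes).
node_capacities_mb = [0.8, 1.5, 2.0, 3.0, 3.5, 4.0, 5.0, 6.0, 8.0, 12.0]

def effective_throughput(capacities, keep_fraction=0.90):
    """Largest block size such that at least `keep_fraction` of nodes keep up;
    the remaining nodes are effectively denied service."""
    caps = sorted(capacities)
    allowed_to_fall_behind = round(len(caps) * (1 - keep_fraction))
    return caps[allowed_to_fall_behind]

print(effective_throughput(node_capacities_mb))        # 1.5 -> slowest 10% fall behind
print(effective_throughput(node_capacities_mb, 0.99))  # 0.8 -> essentially every node keeps up
```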
7) The authors concluded that the Cost per Confirmed Transaction (CPCT) is generally $1.40 to $2.90, and as high as $6.20. Compare this to the expectation that transactions ought to be free, or at most cost a few cents.
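For intuition, CPCT is just the total resource cost of operating the network divided by the number of confirmed transactions; every number in the sketch below is a placeholder, not an input taken from the paper:

```python
# Back-of-the-envelope CPCT: total resource cost of operating the network
# divided by confirmed transactions. All figures below are placeholders.
daily_mining_cost_usd = 450_000    # assumed electricity + hardware amortization
daily_node_cost_usd = 20_000       # assumed full-node bandwidth/storage/CPU
daily_confirmed_txs = 220_000      # assumed transaction volume

cpct = (daily_mining_cost_usd + daily_node_cost_usd) / daily_confirmed_txs
print(f"~${cpct:.2f} per confirmed transaction")  # ~$2.14 with these placeholders
```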
There are other, lesser issues of note, but the point here is that the conclusions drawn should be viewed in the context of the limits of the study, which openly admits to leaving important metrics outside of its scope. I think the reiteration of the following is a testament to the work that Core is actively doing towards scalability: