Twenty Megabytes testing results
http://gavintech.blogspot.com/2015/01/twenty-megabytes-testing-results.html
Executive summary: if we increased the maximum block size to 20 megabytes tomorrow, and every single miner decided to start creating 20MB blocks and there was a sudden increase in the number of transactions on the network to fill up those blocks....
... the 0.10.0 version of the reference implementation would run just fine.
You can check my work and get the detailed blow-by-blow in my for-testing-only megablocks branch (see megablocks_notes.txt).
CPU and memory usage scaled up nicely; there were no surprises. Both CPU and memory usage for the 20MB blockchain were within my criterion that "somebody running a decent personal computer on a pretty good home network connection should be able to run a full node."
I did have one surprise syncing a 20MB chain to a VPS (virtual private server): syncing the bigger blocks was four times faster than syncing the small, main-chain blocks. I don't know why; it is possible something else running on the VPS machine affected the results, or maybe disk I/O is more efficient with larger blocks.
So what's next?
Next we need a soft fork to deal with some longstanding technical debt related to the recent OpenSSL-was-willing-to-validate-too-much-stuff problem. Pieter Wuille and Gregory Maxwell have been working through that.
But then we need a concrete proposal for exactly how to increase the size. Here's what I will propose:
Current rules stay in place if there is no consensus, as measured by a block.nVersion supermajority.
Supermajority defined as: 800 of last 1000 blocks have block.nVersion == 4
Once supermajority attained, block.nVersion < 4 blocks rejected.
After consensus is reached: replace the fixed MAX_BLOCK_SIZE with a size that starts at 2^24 bytes (~16.7MB) as of 1 Jan 2015 (block 336,861) and doubles every 6*24*365*2 blocks -- about 40% year-on-year growth -- stopping after 10 doublings.
The perfect exponential function:
size = 2^24 * 2^((blocknumber - 336861) / (6*24*365*2))
... is approximated using 64-bit-integer math as follows:
double_epoch = 6*24*365*2 = 105120
(doublings, remainder) = divmod(blocknumber - 336861, double_epoch)
if doublings >= 10: (doublings, remainder) = (10, 0)
interpolate = floor((2^24 << doublings) * remainder / double_epoch)
max_block_size = (2^24 << doublings) + interpolate
This is a piecewise linear interpolation between doublings, with maximum allowed size increasing a little bit every block.
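To make those rules concrete, here is a minimal Python sketch of the supermajority test and the size schedule described above. It is illustration only, not the actual patch; the function and constant names are mine, only the numbers come from the proposal.

DOUBLE_EPOCH = 6 * 24 * 365 * 2    # 105120 blocks, roughly two years at 6 blocks per hour
FORK_HEIGHT = 336861               # block mined around 1 Jan 2015
BASE_SIZE = 1 << 24                # 2^24 bytes, about 16.7MB
MAX_DOUBLINGS = 10

def supermajority_reached(last_1000_versions):
    # True once 800 of the last 1000 blocks carry block.nVersion == 4
    return sum(1 for v in last_1000_versions if v == 4) >= 800

def max_block_size(height):
    # Piecewise linear interpolation between doublings, 64-bit integer math only
    # (heights before FORK_HEIGHT are not handled in this sketch)
    doublings, remainder = divmod(height - FORK_HEIGHT, DOUBLE_EPOCH)
    if doublings >= MAX_DOUBLINGS:
        doublings, remainder = MAX_DOUBLINGS, 0
    base = BASE_SIZE << doublings
    return base + (base * remainder) // DOUBLE_EPOCH

assert max_block_size(FORK_HEIGHT) == 1 << 24                      # ~16.7MB at the fork height
assert max_block_size(FORK_HEIGHT + DOUBLE_EPOCH) == 1 << 25       # doubled after about two years
assert max_block_size(FORK_HEIGHT + 20 * DOUBLE_EPOCH) == 1 << 34  # capped at 2^34 bytes (~17GB)

Those asserts pass: the limit starts at 2^24 bytes and stops growing at 2^34 bytes after the tenth doubling.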
I created a spreadsheet and graph of how the maximum size would grow over time.
But... but... WRECK! RUIN! MADNESS!
I'm confident that there are no technical barriers to scaling up-- I've shown that our current code can handle much larger blocks, and, assuming progress in electronics and networking doesn't come to a sudden screeching stop, that current code running on tomorrow's hardware would handle the growth I'm proposing.
Of course, we won't be running current code on tomorrow's hardware; we'll be running better code. CPU usage should go down by a factor of about eight in the next release, when we switch to Pieter's libsecp256k1 library for validating transactions. Network usage should get cut roughly in half as soon as we stop doing the simplest possible thing and broadcasting every transaction twice -- once when it is relayed and again inside the block that includes it. And I'm sure all the smart engineers working on Bitcoin and Bitcoin-related projects will find all sorts of ways to optimize the software.
And yes, that includes making the initial block downloading process take minutes instead of days.
So that leaves economic arguments-- most of which I think I addressed in my Blocksize Economics post.
I'll try to restate a point from that post that it seems some people are missing: you can't maximize the total price paid for something by simply limiting the supply of that something, especially if there are substitute goods available to which people can switch.
People want to maximize the price paid to miners as fees when the block reward drops to zero-- or, at least, have some assurance that there is enough diverse mining to protect the chain against potential attackers.
And people believe the way to accomplish that is to artificially limit the number of transactions below the technical capabilities of the network.
But production quotas don't work. Limit the number of transactions that can happen on the Bitcoin blockchain, and instead of paying higher fees people will perform their transactions somewhere else. I have no idea whether that would be Western Union, an alt-coin, a sidechain, or good old-fashioned SWIFT wire transfers, but I do know that nobody besides a central government can force people to use a higher-cost product when a lower-cost option is available.
So how will blockchain security get paid for in the future?
I honestly don't know. I think it is possible that blocks containing tens of thousands of transactions, each paying a few millibits in fees (maybe because wallets round up change amounts to avoid creating dust and to improve privacy), will be enough to secure the chain.
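As a rough back-of-envelope check (the numbers below are my own assumptions, not measurements):

txs_per_block = 40000        # "tens of thousands" of transactions in a big block
fee_per_tx = 0.002           # a couple of millibits; 1 millibit = 0.001 BTC
print(txs_per_block * fee_per_tx, "BTC in fees per block")   # prints 80.0

Under those assumptions the fees in a single block would be larger than the current 25 BTC block subsidy.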
It is also possible big merchants and exchanges, who have a collective interest in a secure, well-functioning blockchain, will get together and establish assurance contracts to reward honest miners.
I'm confident that if the Bitcoin system is valuable, then the participants in that market will make sure it keeps functioning securely and reliably.
And I'm very confident that the best way to make Bitcoin more valuable is to make it work well for both large and small value transactions.
Posted 20th January by Gavin Andresen