gmaxwell makes clear that this subject has been debated in many threads. It keeps getting raised, and the reason is that 18 months have passed since
Jeweller's thread, and there is probably less than 18 months until the block size limit starts crippling transaction flows, just as the 250KB soft-limit did early in March 2013.
Sadly there is still no proposal I've seen which really closes the loop between user capabilities and the limit (especially bandwidth handling, as bandwidth appears to be the slowest of the relevant technologies to improve). At best I've seen proposals to apply an additional local exponential cap based on historical bandwidth numbers, which seems very shaky, since small parameter changes can easily make the difference between too constrained and completely unconstrained. The best proxy I've seen for user choice is protocol rule limits, but those are overly brittle and hard to change.
Satoshi put the 1MB limit into place nearly 4 years ago, mainly as an anti-spam measure. Now that the block limit exists, it should, at the very minimum, be increasing at the same rate as the average global internet broadband speed.
[Chart: UK consumer broadband (download) average speed, by year]
It is a reasonable assumption that all major countries which host bitcoin nodes have seen a similar growth pattern, and that upload speeds also follow the pattern.
So, since a 1MB max block size was acceptable in 2010, within the goal of maintaining decentralization, 3MB must be acceptable today.
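As a sanity check on that scaling claim, a 1MB-to-3MB increase over roughly four years corresponds to an annual growth rate of about 32%, which is broadly in line with long-observed consumer bandwidth trends. This is an illustrative calculation, not a measurement:

```python
# Illustrative: the implied annual growth rate if the block size limit
# had tracked a 3x increase over 4 years (1 MB in 2010 -> 3 MB in 2014).
years = 4
growth_factor = 3  # 1 MB -> 3 MB

annual_rate = growth_factor ** (1 / years) - 1
print(f"Implied annual growth: {annual_rate:.1%}")  # roughly 31.6% per year

# Projected limit (in MB) for each year under that constant rate:
for year in range(years + 1):
    print(2010 + year, round((1 + annual_rate) ** year, 2))
```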
Large blocks are already being created, as a matter of course, by different miners:
Height   Txs    Total Output    Relayed By                      Size (kB)
313377    298   3,178.83 BTC    5.9.24.81                        731.56
313376   1230   3,322.72 BTC    Eligius                          877.88
313375   1447   2,434.47 BTC    Unknown with 1AcAj9p Address     731.47
313374   1897   5,733.43 BTC    GHash.IO                         731.61

The bare minimum which needs doing is something like:
if (block_height > 330000)
    max_block_size = 3 MB  [and recalculate dependent variables]
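That rule can be sketched as follows. The constants and function names here are illustrative, not Bitcoin Core's actual API; the sigops ratio is the MAX_BLOCK_SIZE/50 relationship Bitcoin Core uses:

```python
# Illustrative sketch of a height-gated block size limit.
# Names and fork height follow the post; nothing here is real Bitcoin Core code.
ONE_MEGABYTE = 1_000_000

FORK_HEIGHT = 330_000                 # height at which the new limit activates
OLD_MAX_BLOCK_SIZE = 1 * ONE_MEGABYTE
NEW_MAX_BLOCK_SIZE = 3 * ONE_MEGABYTE

def get_max_block_size(block_height: int) -> int:
    """Return the consensus block size limit for a given height."""
    if block_height > FORK_HEIGHT:
        return NEW_MAX_BLOCK_SIZE
    return OLD_MAX_BLOCK_SIZE

def get_max_sigops(block_height: int) -> int:
    """Dependent limit recalculated from the same base (size / 50)."""
    return get_max_block_size(block_height) // 50

# Blocks at or below the fork height keep the 1 MB limit:
print(get_max_block_size(330_000))  # 1000000
print(get_max_block_size(330_001))  # 3000000
```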
Or, better still, a flexible limit based upon demand. Remember, people are paying for their transactions to be processed:
The median size of a set of the previous blocks.
A set of 2016 blocks is a large sample, representative of real bitcoin usage, so a flexible limit determined at each difficulty change makes sense.
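Such a flexible limit could be sketched like this. The 2x headroom multiplier and the 1 MB floor are assumptions for illustration; only the 2016-block window comes from the post:

```python
# Illustrative: recompute the block size limit at each difficulty
# retarget, as a multiple of the median size of the last 2016 blocks.
from statistics import median

RETARGET_INTERVAL = 2016   # blocks per difficulty period
FLOOR = 1_000_000          # never drop below the current 1 MB limit (assumed)
MULTIPLIER = 2             # headroom above typical demand (assumed)

def next_max_block_size(recent_block_sizes: list[int]) -> int:
    """Flexible limit derived from the median of the last retarget period."""
    window = recent_block_sizes[-RETARGET_INTERVAL:]
    return max(FLOOR, MULTIPLIER * int(median(window)))

# Example: if the median block over the last period was 700 kB,
# the next period's limit would be 1.4 MB.
print(next_max_block_size([700_000] * 2016))  # 1400000
```

A median (rather than a mean) keeps a handful of deliberately stuffed or empty blocks from dragging the limit around, which is why it is the statistic quoted above.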
The fees market (which is still dysfunctional) is a lesser concern at the present time.
Bitcoin Core version 0.8 focused on LevelDB, and 0.9 on the payment protocol. Version 0.10 really needs to address the block size.
It is crazy to allow the scenario (below) to happen over the 1MB constant when all nodes, not just miners, would be affected:
By default Bitcoin will not create blocks larger than 250kb even though it could do so
without a hard fork. We have now reached this limit. Transactions are stacking up in the memory pool and not getting cleared fast enough.
What this means is, you need to take a decision and do one of these things:
- Start your node with the -blockmaxsize flag set to something higher than 250kb, for example -blockmaxsize=1023000. This will mean you create larger blocks that confirm more transactions. You can also adjust the size of the area in your blocks that is reserved for free transactions with the -blockprioritysize flag.
- Change your node's code to de-prioritize or ignore transactions you don't care about. For example, Luke-Jr excludes SatoshiDice transactions, which makes way for other users.
- Do nothing.
If everyone does nothing, then people will start having to attach higher and higher fees to get into blocks until Bitcoin fees end up being uncompetitive with competing services like PayPal.
If you mine on a pool, ask your pool operator what their policy will be on this, and if you don't like it, switch to a different pool.