Out of curiosity, would you be willing to state your personal opinion on under what conditions it would be appropriate and/or beneficial to raise the blocksize limit?
I'd be glad to, and have many times in the past (also on the more general subject of hardforks), though I don't know that my particular views matter that much. I hope they do not.
What particular concerns, technical or otherwise, do you consider most important when weighing alterations to this particular aspect of the system? Of course, blocksize is only one aspect of capacity (and a horribly inefficient method of scaling capacity), but I'm curious whether there are any particular conditions under which you feel raising the blocksize would be the appropriate solution. Is it only a last-resort method of increasing capacity? And when it comes to capacity, to what extent should it be increased? Is "infinite" capacity even something we should be targeting?
Since we live in a physical world, with computers made of matter and run on energy, there will be limits. We should make the system as _efficient_ as possible because, in a physical world, one of the concerns is that there is some inherent trade-off between blockchain load and decentralization. (Note that blockchain load != Bitcoin transactional load, because there are _many_ ways to transact that have reduced blockchain impact.) ... regardless of the capacity level we're at, more efficiency means a better point on that trade-off.
Not screwing up decentralization early in the system's life is a high priority for me: it is utterly integral to the system's value proposition in a way that few other properties are, and every loss of decentralization we've suffered over the system's life has been used to argue that the next loss isn't a big deal. Making progress backwards is hard: the system actually works _better_ in a conventional sense, under typical usage, as it becomes more centralized: upgrades are faster and easier, total resource costs are lower, behavior is more regular and consistent, some kinds of attacks are less commonly attempted, properties which are socially "desirable" but not rational for participants don't get broken as much, etc. Decentralization is also central to other elements that we must improve to preserve Bitcoin's competitiveness as a worldwide, open, and politically neutral money-- such as fungibility.
But even if we imagine that we got near infinite efficiency, would we want near infinite capacity?
Bitcoin's creator proposed that security would be paid for in the future by users bidding for transaction fees. If there were no limit to the supply of capacity, then even a single miner could always make more money by undercutting the market and clearing it*. Difficulty can adapt down, so low fee income can result in security melting away. This could potentially be avoided by cartel behavior among miners, but having miners collude to censor transactions would be quite harmful... and in a physical world with non-infinite efficiency, keeping miners from driving the nodes off the network is already necessary. Does that have to work as a static limit? No-- and I think some of the flexcap proposals have promise for the future for addressing this point.
Some have proposed that instead of fees, security in the future would be provided by altruistic companies donating to miners. The specific mechanisms proposed didn't make much sense-- e.g. they'd pay attackers just as much as honest miners, but that could be fixed... but the whole model of using altruism to pay for a commons has significant limitations, especially in a global anonymous network. We haven't shown altruism of this type to be successful at funding development or at helping to preserve the decentralization of mining (e.g. p2pool). I currently see little reason to believe that this could be a workable alternative to the idea initially laid out of using fees... of course, if you switch it around from altruism to a mandatory tax, you end up with the inflationary model of some altcoins-- and that probably does "work", but it's not the economic policy we desire (and, in particular, without a trusted external input to control the inflation rate, it's not clear that it would really work for anyone).
So in the long term, part of my concern is avoiding the system drifting into a state where we're all forced to choose between inflation and failure (in which case a bitcoin that works is better than one that doesn't...).
As far as when, I think we should exercise the most extreme caution with incompatible changes in general. If a change is really safe and needed we can expect to see broad support... and it becomes easier to get there when efforts are made to address the risks, e.g. segwit was a much easier sell because it improved scalability while at the same time increasing capacity. Likewise, I expect successful future increases to come with or after other risk mitigations.
(* we can ignore orphaning effects for four reasons: orphaning increases as a function of transaction load can be ~completely eliminated with relay technology improvements, and if not that, by miners centralizing... and if all a miner's income is going to pay for orphaning losses there will be no excess paying for competition, thus security, and-- finally-- if transaction fees are mostly uniform, the only disincentive from orphaning comes from the loss of subsidy, which will quickly become inconsequential unless bitcoin is changed to be inflationary.)
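As a rough illustration of the first point (my own toy numbers, not anything from the discussion above): if blocks arrive as a Poisson process with a 600-second average interval, a block that takes d seconds to propagate to most of the hashpower is orphaned with probability roughly 1 - e^(-d/600), so shrinking propagation delay with relay improvements shrinks orphan risk nearly proportionally.

```python
# Toy estimate (illustration only): orphan risk as a function of propagation
# delay, assuming Poisson block arrivals every 600 s on average.
import math

def orphan_probability(propagation_delay_s, block_interval_s=600.0):
    """Probability a competing block is found before ours finishes propagating."""
    return 1.0 - math.exp(-propagation_delay_s / block_interval_s)

for delay in (30.0, 10.0, 2.0, 0.5):  # seconds to reach most of the hashpower
    print(f"{delay:5.1f} s delay -> ~{orphan_probability(delay):.2%} orphan risk")
```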
It is my understanding that Core's implementation of segwit will also include an overall "blocksize" increase (counting both the witness and transaction data), though with a few scalability improvements that should make the increase less demanding on the system (linear verification scaling with segwit and Compactblocks come to mind). Do you personally support this particular instance of increasing the overall "blocksize"?
I think the capacity increase is risky. The risks are compensated for by improvements (both recent ones already done and ones immediately coming, e.g. libsecp256k1, compactblocks, tens-of-fold validation speed improvements, smarter relay, better pruning support) along with those in segwit.
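For readers wondering how segwit gives an overall size increase without touching the 1 MB base-block rule: BIP141 counts blocks in "weight", with witness bytes discounted 75%. Here's a minimal sketch of that accounting; the example transaction sizes are made up for illustration.

```python
# Minimal sketch of BIP141 block "weight" accounting: witness bytes are
# discounted 75%, so blocks carrying segwit spends can exceed 1 MB of
# total data while the weight limit still holds. Example sizes are made up.
MAX_BLOCK_WEIGHT = 4_000_000  # consensus limit under BIP141

def tx_weight(non_witness_bytes, witness_bytes=0):
    # weight = base_size * 3 + total_size
    base_size = non_witness_bytes
    total_size = non_witness_bytes + witness_bytes
    return base_size * 3 + total_size

legacy_tx = tx_weight(non_witness_bytes=250)                     # weight 1000
segwit_tx = tx_weight(non_witness_bytes=150, witness_bytes=100)  # weight 700

# Roughly how many of each fit in a block (ignoring header/coinbase):
print(MAX_BLOCK_WEIGHT // legacy_tx)  # ~4000 legacy spends
print(MAX_BLOCK_WEIGHT // segwit_tx)  # ~5714 similar segwit spends
```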
I worry a lot that there is a widespread misunderstanding that blocks being "full" is bad-- block access is a priority queue based on feerate-- and at a feerate of ~0 there is effectively infinite demand (for highly replicated perpetual storage). I believe that (absent radical new tech that we don't have yet) the system cannot survive as a usefully decentralized system if the response to "full" is to continually increase capacity (such a system would have almost no nodes, and also potentially no way to pay for security). One of the biggest problems with the hardfork proposals was that they directly fed this path-to-failure, and I worry that the segwit capacity increase may contribute to that too... e.g. that we'll temporarily not be "full" and then we'll be hit with piles of constructed "urgent! crash landing!" pressure to increase again to prevent "full", regardless of the costs. In other words, a constant cycle of short-term panic about an artificial condition pushing the system away from long-term survivability.
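To make the "priority queue" point concrete, here's a toy sketch (illustrative only, not Bitcoin Core's actual block-building code): transactions are filled highest-feerate first, and "full" simply means the lowest feerates don't make the cut this block.

```python
# Toy illustration of feerate-based block building (not Core's actual
# algorithm): sort candidate transactions by fee per byte and fill the
# block greedily until the size limit is reached.
def build_block(mempool, max_block_bytes=1_000_000):
    used = 0
    selected = []
    # mempool: list of dicts with 'fee' (satoshis) and 'size' (bytes)
    for tx in sorted(mempool, key=lambda t: t['fee'] / t['size'], reverse=True):
        if used + tx['size'] <= max_block_bytes:
            selected.append(tx)
            used += tx['size']
    return selected

mempool = [
    {'txid': 'a', 'fee': 50_000, 'size': 250},  # 200 sat/B
    {'txid': 'b', 'fee': 2_500,  'size': 250},  # 10 sat/B
    {'txid': 'c', 'fee': 250,    'size': 250},  # 1 sat/B -- squeezed out first
]
print([tx['txid'] for tx in build_block(mempool, max_block_bytes=500)])  # ['a', 'b']
```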
But on the balance, I think the risks can be handled and the capacity increase will be useful, and the rest of segwit is a fantastic improvement that will set the stage for more improvements to come. Taking some increase now will also allow us to experience the effects and observe the impacts which might help improve confidence (or direct remediation) needed for future increases.
To be clear, I'm just asking as someone who'd like to hear your informed opinion. In the short-term I'm not exactly worried about transaction capacity--certainly not hardfork worried--as with segwit on the way and the potential for LN or similar mechanisms to follow, short-term capacity issues could well be on their way out (really all I want short-term is an easier way to flag and use RBF so I can fix my fees during unexpected volume spikes). What I'm more curious about at this point are your views on the long-term--the "133MB infinite transactions" stage.
I think we don't know exactly how lightning usage patterns will play out, so the resource gains are hard to estimate with any real precision. Right now bidirectional payment channels are the only way we know of to get to really high total capacity without custodial entities (which I also think should have a place in the Bitcoin future).
Some of the early resource estimates from lightning have already been made potentially obsolete by new inventions. For example, the lightning paper originally needed the ability to have a high peak blocksize in order to get all the world's transactions into it (though such blocks could be made 'costly' for miners to build, like flexcap systems)-- this was to handle the corner case where huge numbers of channels were uncooperatively closed all at once and all had to get their closures in before their timeouts expired. In response to this, I proposed the concept of a sequence lock that stops counting when the network is beyond capacity ("timestop"); it looks like this should greatly reduce the need for big blocks, at the cost of potentially delaying closure when the network is unusually loaded, though further design and analysis is needed. I think we've only started exploring the potential design space with channels.
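Very roughly, and purely as my own toy illustration of the "timestop" idea (not a real consensus rule, and the actual design is not settled): the relative timeout on a channel-close transaction would only tick down on blocks that aren't congested, so a flood of forced closures can't be timed out just because blocks are saturated.

```python
# Toy illustration of the "timestop" concept (not a finished design): a
# relative lock of N blocks only counts down on blocks below a congestion
# threshold, so timeouts pause while the chain is saturated with backlog.
def blocks_until_spendable(lock_blocks, block_fullness, full_threshold=0.99):
    """Return the height at which the lock expires.

    block_fullness: iterable of per-block utilization in [0, 1].
    """
    remaining = lock_blocks
    for height, fullness in enumerate(block_fullness, start=1):
        if fullness < full_threshold:  # only "non-full" blocks count down
            remaining -= 1
        if remaining == 0:
            return height
    return None  # lock still active after the observed blocks

# A 3-block lock, with two saturated blocks in the middle, expires at height 5:
print(blocks_until_spendable(3, [0.5, 1.0, 1.0, 0.7, 0.6]))  # 5
```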
Besides capacity, payment channels (and other tools) provide other features that will be important in bringing our currency to more places-- in particular, "instant payment".
As much as I personally dislike it, other services like credit are very common and highly desired by some markets-- and that is a service that can be offered by other layers as well.
I'm sorry to say that an easy-to-use fee-bump replacement feature just missed the merge window for Bitcoin Core 0.13. I'm pretty confident that it'll make it into 0.14. I believe Green Address has a fee-bump feature available already (but I haven't tried it). 0.13 will have ancestor-feerate mining ("child pays for parent"), so that is another mechanism that should help unwedge low-fee transactions, though it's not as useful as replacement.
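To illustrate why ancestor-feerate ("child pays for parent") mining helps unwedge a stuck low-fee transaction, here's a toy calculation (illustrative numbers only): a CPFP-aware miner evaluates the fee and size of the whole unconfirmed package, so a high-fee child lifts the parent's effective feerate.

```python
# Toy illustration of ancestor ("package") feerate, the quantity a CPFP-
# aware miner sorts by: a stuck 1 sat/B parent plus a 100 sat/B child is
# worth ~50.5 sat/B taken together, so both get mined as a package.
def package_feerate(txs):
    """Feerate of a set of transactions taken together (satoshis per byte)."""
    total_fee = sum(tx['fee'] for tx in txs)
    total_size = sum(tx['size'] for tx in txs)
    return total_fee / total_size

parent = {'fee': 200,    'size': 200}  # 1 sat/B -- stuck on its own
child  = {'fee': 20_000, 'size': 200}  # 100 sat/B, spends the parent's output

print(package_feerate([parent]))         # 1.0
print(package_feerate([parent, child]))  # 50.5
```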
Of course, you're free to keep your opinions to yourself or only answer as much as you're comfortable divulging; it's not my intent to have you say anything that would encourage people to sling even more shit at you than they already have been lately.
Ha, that's unavoidable. I just do what I can, and try to remind myself that if no one at all is mad then what I'm doing probably doesn't matter.