
Topic: Funding of network security with infinite block sizes - page 4.

legendary
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
This argument doesn't work; you're working backwards from the premise that miners need to maintain their current income to be profitable.

The argument is attempting to determine when the fees market becomes functional.
There is an argument that this is worrying about nothing, because the block reward will maintain the network for many years without significant fees. Based upon the high fx rate, this argument looks better by the day! Trying to force fees to match the block reward might be a task for the next decade, and counterproductive today. The risk remains from dead-puppy transaction sources, which would need to be throttled directly somehow, as the fees market won't do it.
full member
Activity: 154
Merit: 100
Satoshi also intended the subsidy-free, fee-only future to support bitcoin. He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block.

(side-stepping justusranvier's points, right now, but accepting the above statement)
So what block size, realistically, allows a fees-market to function?

It can be approximated fairly easily if we assume that it happens once the fee revenue per block matches or exceeds the block reward. We have another 3.5 years at 25 BTC, so this needs to be the starting point.

How many transactions fit in a block? It varies, because transactions have a variable number of inputs and outputs. Using blockchain.info data, an average of 600 transactions populate an average 250KB block, so roughly 2,400 will fit in a 1MB block. Perhaps most are "vanilla" Bitcoin transactions having a few inputs and one output.

What is a sensible fee for a vanilla transaction in the market-place? I have considered for a while that it is closer to the BTC equivalent of 5c than to 0.5c or 50c. So 2,400 transactions will accrue $120 in fees.

With the Bitcoin fx rate at $150, the block reward is $3,750, which implies a (rounded) 30MB block size before the fees market functions properly. A few weeks ago this was 10MB. Perhaps by the end of the year it will be anywhere between 10 and 100MB.

These are quite large blocks so perhaps a realistic fee would be more like 20c per transaction, reducing the required block size to the range 2.5MB to 25MB. The market will find the optimum if it is given a chance.

Under no scenario will a 1MB limit give a chance for the fees market to become established, unless transaction fees are forced up to average $1.50, or we wait 11 years until the block reward is 6.25 BTC, or the fx rate collapses back to something like $5 per BTC. None are desirable options.



This argument doesn't work; you're working backwards from the premise that miners need to maintain their current income to be profitable.
legendary
Activity: 1400
Merit: 1009
Not to imply that Satoshi's original intentions are binding on the network forever, but before anyone allows themselves to be misled about his intentions they should read what he actually said:

http://www.mail-archive.com/[email protected]/msg09964.html
legendary
Activity: 960
Merit: 1028
Spurn wild goose chases. Seek that which endures.
I completely disagree. Think how easily this issue could have been solved if in 2009 Satoshi implemented a rule such as Jeff suggests here:

Quote
My off-the-cuff guess (may be wrong) for a solution was:  if (todays_date > SOME_FUTURE_DATE) { MAX_BLOCK_SIZE *= 2, every 1 years }  [Other devs comment: too fast!]  That might be too fast, but the point is, not feedback based nor directly miner controlled.
Satoshi also intended the subsidy-free, fee-only future to support bitcoin. He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block.

Unlimited block sizes are also a radical position quite outside whatever was envisioned by the system's creator -- who clearly did think that far ahead.
It's my impression that the 2009 Satoshi implementation didn't have a block size limit - that it was a later addition to the reference client as a temporary anti-spam measure, which was left in until it became the norm.

Is this impression incorrect?
legendary
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
Satoshi also intended the subsidy-free, fee-only future to support bitcoin. He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block.

(side-stepping justusranvier's points, right now, but accepting the above statement)
So what block size, realistically, allows a fees-market to function?

It can be approximated fairly easily if we assume that it happens once the fee revenue per block matches or exceeds the block reward. We have another 3.5 years at 25 BTC, so this needs to be the starting point.

How many transactions fit in a block? It varies, because transactions have a variable number of inputs and outputs. Using blockchain.info data, an average of 600 transactions populate an average 250KB block, so roughly 2,400 will fit in a 1MB block. Perhaps most are "vanilla" Bitcoin transactions having a few inputs and one output.

What is a sensible fee for a vanilla transaction in the market-place? I have considered for a while that it is closer to the BTC equivalent of 5c than to 0.5c or 50c. So 2,400 transactions will accrue $120 in fees.

With the Bitcoin fx rate at $150, the block reward is $3,750, which implies a (rounded) 30MB block size before the fees market functions properly. A few weeks ago this was 10MB. Perhaps by the end of the year it will be anywhere between 10 and 100MB.

These are quite large blocks so perhaps a realistic fee would be more like 20c per transaction, reducing the required block size to the range 2.5MB to 25MB. The market will find the optimum if it is given a chance.

Under no scenario will a 1MB limit give a chance for the fees market to become established, unless transaction fees are forced up to average $1.50, or we wait 11 years until the block reward is 6.25 BTC, or the fx rate collapses back to something like $5 per BTC. None are desirable options.
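
For concreteness, here is a minimal sketch of the break-even arithmetic above. The inputs (roughly 2,400 transactions per MB, 5c or 20c per transaction, a 25 BTC reward and a $150 fx rate) are the post's own assumptions, not protocol constants.

Code:
# Break-even block size: the size at which total fees per block would
# match the block reward, using the post's assumed figures.

def breakeven_block_size_mb(fee_per_tx_usd, btc_usd,
                            block_reward_btc=25, tx_per_mb=2400):
    reward_usd = block_reward_btc * btc_usd        # e.g. 25 * 150 = $3,750
    fees_per_mb_usd = fee_per_tx_usd * tx_per_mb   # e.g. 0.05 * 2400 = $120/MB
    return reward_usd / fees_per_mb_usd

print(breakeven_block_size_mb(0.05, 150))   # ~31 MB (the post rounds to 30MB)
print(breakeven_block_size_mb(0.20, 150))   # ~7.8 MB at a 20c fee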

legendary
Activity: 1400
Merit: 1009
Satoshi also intended the subsidy-free, fee-only future to support bitcoin. He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block.

Unlimited block sizes are also a radical position quite outside whatever was envisioned by the system's creator -- who clearly did think that far ahead.
Appeal to authority: Satoshi didn't mention assurance contracts, therefore they cannot be part of the economics of the network.
Strawman argument: The absence of a specific protocol-defined limit implies infinite block sizes.
False premise: A specific protocol-defined block size limit is required to generate fee revenue.
legendary
Activity: 1596
Merit: 1091
I don't agree with the idea that keeping the 1mb limit is conservative. It's actually a highly radical position. The original vision for Bitcoin was that it supports everyone who wants to use it; that's why the very first discussion of it ever was about scalability, and Satoshi answered back with calculations based on VISA traffic levels. The only reason the limit is there at all was to avoid anyone mining huge blocks "before the community was ready for it", in his words, so it was only meant to be temporary.

You continue to repeat this -- but it is only half the story.

Satoshi also intended the subsidy-free, fee-only future to support bitcoin. He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block.

Unlimited block sizes are also a radical position quite outside whatever was envisioned by the system's creator -- who clearly did think that far ahead.

sr. member
Activity: 310
Merit: 250

EDIT: to be clear no-one, including myself, thinks the blocksize must never change. Rather, achieve scalability first through off-chain transactions, and only then consider increasing the limit. I made a rough guess myself that it may make sense to raise the blocksize at a market cap of around $1 trillion - still far off in the future. Fees in this scenario would be something like $5 per transaction, or $1 billion/year of proof-of-work security (not including the inflation subsidy). That's low enough to be affordable for things like payroll, and is still a lot cheaper than international wire transfers. Hopefully at that point Bitcoin will need less security against takedowns by authority, and/or technological improvements will make it easier to run nodes.


We won't get to $10 billion, let alone $1 trillion, if it costs $5 to make a bitcoin transaction. I refuse to believe that you truly do not understand this. Bitcoin will not capture enough of the economy if it is expensive to use, because it won't be useful.

Nobody is going to convert their fiat currencies twice in order to use bitcoin as a wire transfer service with marginal (at best) savings. And they will have to convert back to their fiat currencies, because bitcoin is too expensive to spend or to move to an off-chain transaction service (to spend there). Expensive is not decentralized.

Go fucking make PeterCoin, give it a 10KB block size, and see how far you get with that horseshit.

legendary
Activity: 1050
Merit: 1002
Anyway, ultimately this will be decided by Gavin and so far he's been saying he wants to raise the block size limit.

That gives us pretty much zero information. I'm sure 99% of us "want to raise the block size limit". The question is how. Do we raise it to 2MB or 10MB or infinite? Do we raise it now? If not now, when? Do we raise it once? What about dynamically? Dynamically using data or preset parameters? Do we consider hard fork risks in the decision?

There are many ways to raise the limit and all have different ramifications. No matter the precise course of action someone will be dissatisfied.

Actually, what Gavin said, quoting directly, is this:

A hard fork won't happen unless the vast super-majority of miners support it.

E.g. from my "how to handle upgrades" gist https://gist.github.com/gavinandresen/2355445

Quote
Example: increasing MAX_BLOCK_SIZE (a 'hard' blockchain split change)

Increasing the maximum block size beyond the current 1MB per block (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) is a likely future change to accommodate more transactions per block. A new maximum block size rule might be rolled out by:

1. New software creates blocks with a new block.version.
2. Allow greater-than-MAX_BLOCK_SIZE blocks if their version is the new block.version or greater and 100% of the last 1000 blocks are new blocks (51% of the last 100 blocks if on testnet).
100% of the last 1000 blocks is a straw-man; the actual criteria would probably be different (maybe something like block.timestamp is after 1-Jan-2015 and 99% of the last 2000 blocks are new-version), since this change means the first valid greater-than-MAX_BLOCK_SIZE-block immediately kicks anybody running old software off the main block chain.
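
As an aside, here is a minimal sketch of the supermajority trigger described in the quoted gist. The helper names, the new version number and the block interface are assumptions for illustration; the 100%-of-1000-blocks figure is the gist's own straw-man.

Code:
# Sketch of the rollout trigger quoted above (straw-man parameters).
NEW_VERSION = 3            # hypothetical new block.version
WINDOW = 1000              # look-back window of blocks
THRESHOLD = 1.0            # 100% of the window must signal (straw-man)
MAX_BLOCK_SIZE = 1_000_000

def larger_blocks_allowed(recent_blocks):
    """True once enough recent blocks carry the new version."""
    window = recent_blocks[-WINDOW:]
    if len(window) < WINDOW:
        return False
    signalling = sum(1 for b in window if b.version >= NEW_VERSION)
    return signalling / WINDOW >= THRESHOLD

def block_size_ok(block, block_size_bytes, recent_blocks):
    if block_size_bytes <= MAX_BLOCK_SIZE:
        return True
    # Over-size blocks are only valid under the new rule.
    return block.version >= NEW_VERSION and larger_blocks_allowed(recent_blocks)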


I think this shows great consideration and judgement because I note and emphasize the following:

I think Jeff Garzik's post on the issue is apropos, particularly his last point:

Thanks for that link. I hadn't seen that post and I think it's brilliant. It probably aligns with my views 99.999%. Ironically it's his last point I disagree with most:

Quote
Just The Thing people are talking about right now, and largely much ado about nothing.

I completely disagree. Think how easily this issue could have been solved if in 2009 Satoshi implemented a rule such as Jeff suggests here:

Quote
My off-the-cuff guess (may be wrong) for a solution was:  if (todays_date > SOME_FUTURE_DATE) { MAX_BLOCK_SIZE *= 2, every 1 years }  [Other devs comment: too fast!]  That might be too fast, but the point is, not feedback based nor directly miner controlled.

I think the above could be a great solution (though I tend to agree it might be too fast). However, implementing it now will meet resistance from anyone who feels it ignores their views. If Satoshi had implemented it then, it wouldn't be an issue now; we would simply be dealing with it, with the market working around it. Now, however, there is a lot of money riding on protocol changes and many more views about what should or shouldn't be done. That will only increase, meaning the potential economic/financial damage from ungraceful changes increases as well.
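
A minimal sketch of how the quoted off-the-cuff rule might read if written out; the start date and the 1MB base are illustrative assumptions, not anything Jeff specified.

Code:
from datetime import date

BASE_LIMIT = 1_000_000              # 1MB starting point (assumed)
SCHEDULE_START = date(2015, 1, 1)   # hypothetical SOME_FUTURE_DATE

def max_block_size(today):
    """Limit doubles once the start date passes, then again every year."""
    if today <= SCHEDULE_START:
        return BASE_LIMIT
    years_elapsed = (today - SCHEDULE_START).days // 365
    return BASE_LIMIT * 2 ** (1 + years_elapsed)

print(max_block_size(date(2014, 6, 1)))   # 1,000,000
print(max_block_size(date(2015, 6, 1)))   # 2,000,000
print(max_block_size(date(2016, 6, 1)))   # 4,000,000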

I also note that early in Jeff's post he says he reversed his earlier stance; my point being that people are not infallible. I actually agree with his updated views, but what if they too are wrong? Who is to say? The same could apply to Gavin. That's why I think it's wise that he appears to require a response from the market for any change, with no change as the default.
sr. member
Activity: 310
Merit: 250

The idea that Bitcoin must be crippled to 1 MB blocksizes forever is absurd and perverse and I'm extremely thankful that people smarter than I am agree with that stance. 

If people like you and Gavin weren't working on Bitcoin (for example, if it was run by Peter), I would be getting as far away from it as I could.
legendary
Activity: 1232
Merit: 1084
Anyway, ultimately this will be decided by Gavin and so far he's been saying he wants to raise the block size limit.

I'd say ultimately it's the main services (MtGox, Bitpay, BitInstant, BitcoinStore, WalletBit, Silk Road etc) that will decide. If they all stay on the same side of the fork, that will likely be the side that "wins", regardless of Gavin's or even the miners' will (most miners would just follow the money).

Doubtful, in practice the miners will decide.  However, a "suggestion" by Gavin (and more to the point an update to the reference client) would be a very strong push, as would a suggestion by large users of bitcoin.

In fact, have any pool owners stated what their opinion is?
legendary
Activity: 1120
Merit: 1149
Of course those services have a strong interest in staying on the branch that's more professionally supported by developers, so yeah, if most of the core team goes to one side, we could predict most of these services would too.

FWIW, currently the majority of the core team members, Gregory Maxwell, Jeff Garzik and Pieter Wuille, have all stated they are against increasing the blocksize as the solution to the scalability problem. Each of course holds that position with different reasoning and to a different degree, but ultimately all of them believe off-chain transactions need to be the primary way to make Bitcoin scale.

EDIT: to be clear no-one, including myself, thinks the blocksize must never change. Rather, achieve scalability first through off-chain transactions, and only then consider increasing the limit. I made a rough guess myself that it may make sense to raise the blocksize at a market cap of around $1 trillion - still far off in the future. Fees in this scenario would be something like $5 per transaction, or $1 billion/year of proof-of-work security (not including the inflation subsidy). That's low enough to be affordable for things like payroll, and is still a lot cheaper than international wire transfers. Hopefully at that point Bitcoin will need less security against takedowns by authority, and/or technological improvements will make it easier to run nodes.
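
For what it's worth, a rough cross-check of those round numbers, treating the ~2,400 transactions per MB density quoted earlier in the thread as an additional assumption:

Code:
# Back-of-the-envelope check of the EDIT's figures ($5/tx, ~$1 billion/year
# of fee-funded proof of work). All inputs are rough assumptions.
BLOCKS_PER_YEAR = 144 * 365        # ~52,560
FEE_PER_TX = 5.0                   # USD
SECURITY_BUDGET = 1_000_000_000    # USD per year, excluding the subsidy
TX_PER_MB = 2400

tx_per_year = SECURITY_BUDGET / FEE_PER_TX       # ~200 million
tx_per_block = tx_per_year / BLOCKS_PER_YEAR     # ~3,800
implied_block_mb = tx_per_block / TX_PER_MB      # ~1.6 MB

print(round(tx_per_block), round(implied_block_mb, 1))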


As far as I know Wladimir J. van der Laan and Nils Schneider haven't stated an opinion, leaving Gavin Andresen.

I think Jeff Garzik's post on the issue is apropos, particularly his last point:

Quote
That was more than I intended to type, about block size. It seems more like The Question Of The Moment on the web, than a real engineering need. Just The Thing people are talking about right now, and largely much ado about nothing.

The worst that can happen if the 1MB limit stays is that growth gets slowed for a while. In the grand scheme of things that's a manageable problem.
legendary
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
Thanks Peter for the detailed explanation of your position. I do understand the thrust of your arguments but disagree over a number of areas...

There isn't going to be a single service that does this, that's my whole point: if you achieve scalability by just raising the blocksize, you wind up with all your trust in a tiny number of validating nodes and mining pools. If you achieve scalability through off-chain transactions, you will have many choices and options.
...
On the other hand, if the blocksize is raised and it leads to centralization, Bitcoin as a decentralized currency will be destroyed forever.

I am already concerned about the centralization seen in mining. Only a handful of pools are mining most of the blocks, so decentralization is already being lost there. Work is needed in two areas before the argument for off-chain solutions becomes strong: first, blockchain pruning; second, initial propagation of headers (presumably with the associated UTXO set) so that hashing can begin immediately while the last block is still being propagated and verified in parallel. These would help greatly to preserve decentralization.
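
As a rough illustration of the second point, a conceptual sketch of mining on a newly received header while the full block is still being fetched and verified; every name here is hypothetical, and this is not how any existing client is structured.

Code:
import threading

def on_new_header(header, fetch_block, verify_block, start_mining, abort_mining):
    # Start hashing on top of the new header immediately.
    start_mining(prev_hash=header.hash)

    def validate_in_background():
        block = fetch_block(header.hash)    # full block still propagating
        if not verify_block(block):
            # Parent turned out to be invalid: stop building on it.
            abort_mining(prev_hash=header.hash)

    threading.Thread(target=validate_in_background, daemon=True).start()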

MtGox and other sites are not a good place for people to leave their holdings permanently. As has been pointed out, most people will not run nodes to support the blockchain if their own transactions are forced or priced away from it. Bitcoin cannot be a store of value without being a payment system as well. The two are inseparable.

It might not be sexy and exciting, but like it or not leaving the 1MB limit in place for the foreseeable future is the sane, sober and conservative approach.

Unfortunately, this is the riskiest approach at the present time. The conservative approach is to steadily increase it ahead of demand, which maintains the status quo as much as market forces permit. The dead puppy transaction sources have forced this issue much earlier than would otherwise be the case.

You mention your background in passing, so I will just mention mine. I spent many years at the heart of one of the largest TBTF banks, working on its equities proprietary trading system. For a while, 1% of the shares traded globally (by value) passed through our execution flow. On average, every three months we encountered limitations of one sort or another (software, hardware, network, satellite systems), yet every one of them was solved by scaling, rewriting or upgrading. We could not stand still, as the never-ending arms race for market share meant that to accept a limitation was to throw in the towel.

The block limit here is typical of default/preset software limits that have to be frequently reviewed, revised or even changed automatically.
The plane that temporarily choked on 700 passengers may now be able to carry 20,000. Bitcoin's capacity, while maintaining a desired level of decentralization, may be far higher than we think, especially if a lot of companies start to run nodes. It just needs the chance to demonstrate this.

legendary
Activity: 1106
Merit: 1004
Anyway, ultimately this will be decided by Gavin and so far he's been saying he wants to raise the block size limit.

I'd say ultimately it's the main services (MtGox, Bitpay, BitInstant, BitcoinStore, WalletBit, Silk Road etc) that will decide. If they all stay on the same side of the fork, that will likely be the side that "wins", regardless of Gavin's or even the miners' will (most miners would just follow the money).
Of course those services have a strong interest in staying on the branch that's more professionally supported by developers, so yeah, if most of the core team goes to one side, we could predict most of these services would too.
legendary
Activity: 1526
Merit: 1129
Most full nodes do not need to store the entire chain. Though it's not implemented yet, block pruning will mean that your disk usage will eventually stabilize at some multiple of transaction traffic. Only a small number of nodes really need to store the entire thing stretching all the way back to 2009.
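
A rough illustration of how that stabilization works: a pruned node keeps the unspent-output set plus a window of recent blocks, so disk use tracks traffic rather than total history. Both figures below are assumptions for illustration only.

Code:
UTXO_SET_MB = 250        # assumed size of the unspent-output set
KEEP_LAST_BLOCKS = 288   # assumed retention window (~2 days of blocks)

def pruned_storage_mb(avg_block_mb):
    return UTXO_SET_MB + KEEP_LAST_BLOCKS * avg_block_mb

print(pruned_storage_mb(0.25))   # ~322 MB at today's ~250KB average blocks
print(pruned_storage_mb(10.0))   # ~3,130 MB at hypothetical 10MB blocks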

Anyway, ultimately this will be decided by Gavin and so far he's been saying he wants to raise the block size limit.
legendary
Activity: 1232
Merit: 1084
Increased storage requirements caused by increased transaction demand may reduce the number of home users willing to run full nodes, but at the same time this growth can only happen if the number of people using bitcoins is increasing. Perhaps the fraction of home users willing to run a full node decreases by a certain percentage, but it's not at all obvious that this decrease will be larger than the percentage increase of the user base as a whole.

Some kind of distributed verification of the block chain would be a potential way to get around the size problems.  When you connect a node, you could say how much you are willing to verify and how much hard disk space you will allocate to bitcoin.  You would then only check some of the information.

This requires that the protocol be modified slightly so that a node can broadcast proof that a block is invalid and then all nodes that receive that proof will discard the block.

There could also be distribution of the storage.  Making sure no info is lost is a potential weakness, but as long as there is enough overlap that should be unlikely.  Also, there would still likely be full nodes which would store everything.

Also, proving a block is invalid sometimes can't be done if info is withheld. You can't prove a block is invalid if you don't have some of the transactions referenced by the block. There would also need to be some system for broadcasting something like "the tx with this hash does not exist", which would be proof that the block in question is not valid. It isn't clear how to prevent spamming of such messages.
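
A minimal sketch of the invalidity-proof idea above; the message layout and hooks are hypothetical, and the spam resistance comes from each node re-checking the claimed violation before acting on it.

Code:
from dataclasses import dataclass

@dataclass
class InvalidityProof:
    block_hash: str
    reason: str       # e.g. "double-spend", "bad-signature"
    evidence: bytes   # data needed to re-check the violation locally

def handle_invalidity_proof(proof, get_block, verify_evidence, mark_invalid):
    block = get_block(proof.block_hash)
    if block is None:
        return                            # never accepted it; nothing to do
    if verify_evidence(block, proof):     # re-check the claim ourselves
        mark_invalid(proof.block_hash)    # discard the block (and descendants)
    # If the evidence doesn't check out, the message is simply ignored.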
legendary
Activity: 1120
Merit: 1149
Well, one thing seems clear to me by now. No amount of continued arguments will change entrenched views here. So, on to the next question. What do we actually do?

In negotiations where parties are far apart and demonstrably unwilling to move the only solution AFAIK is one where nobody gets what they want entirely, but instead uses something all can live with.

In my opinion that would be a change that appears most "safe". That seems like raising the limit by some safe-appearing amount and seeing how things go. I estimate that would be raising the limit to something like 5-10MB.

Raising the limit as the first response to scalability problems sets a precedent that the limit will be simply raised again and again as Bitcoin grows. Why spend the effort solving the problem, when you can simply accept less security and punt the issue another year into the future? Fast growing internet startups aren't exactly known for their long-term planning.

I think it's pretty clear who would win this battle if it came down to a fork.  The one with cheap payments.

Bitcoin as a payment system is interesting in that as it becomes easier and faster to complete the fiat->Bitcoin->fiat loop required to make a payment, the economic influence of that use becomes less and less important, all things being equal. The reason is simple: the faster you can complete that loop, the fewer Bitcoins are tied up making payments, and thus the demand for Bitcoins for that application goes down. Similarly those users care less about what the value of Bitcoin is at any given moment.

Conversely, your investors, the people holding Bitcoins who believe they will maintain their value in the long run, perform far fewer transactions, yet constitute the economic majority and for now are the people paying for the security of Bitcoin (via the still-large inflation subsidy). This group has every reason to oppose changes that would sacrifice the security of their investment just so people can make cheap transactions.
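
A back-of-the-envelope version of the first point: the float tied up in payments is roughly the payment volume per day times the time coins spend in the fiat->Bitcoin->fiat loop. The numbers are purely illustrative.

Code:
def btc_tied_up(volume_btc_per_day, loop_time_hours):
    return volume_btc_per_day * (loop_time_hours / 24.0)

print(btc_tied_up(100_000, 24))   # 100,000 BTC of float if the loop takes a day
print(btc_tied_up(100_000, 1))    # ~4,167 BTC of float if the loop takes an hour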
sr. member
Activity: 434
Merit: 250
I think it's pretty clear who would win this battle if it came down to a fork.  The one with cheap payments.

I'm not so convinced... What if someone introduced inflation instead of fees? Transactions would be "free", but I'm pretty darn sure the resulting cryptocurrency would be immediately discarded.

Rather, the one which would win the battle is the one that could preserve the value. So yeah, I hear you Mike when you say that Bitcoin has value because it's useful... but it also has value because it has the potential to maintain it. Bitcoin became money because it had all the required characteristics to be so, not just 'some' of them.

The biggest selling point of Bitcoin for everyone is the capped 21 million. If that's not telling you something, I don't know what will. The only ones for whom the BTC price is irrelevant are those who immediately transfer to another asset, like... oh I don't know... a payment processing service? *Wink to bit-pay*

Anyhow...
I'm still trying to wrap my head around the assurance contracts, but I don't really see it. Need more IQ points perhaps?

Assuming there's no need to limit the block size or the bandwidth, I don't really see why/how one could pay for TH/s. You pay to be included in a block, miners compete to get the fees, a hashing war follows... and that's it.

Isn't it always more profitable for a potential attacker to just mine with the rest, instead of waiting in the shadows with an idle supercomputer to occasionally reverse a transaction 1-2 blocks ago?
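
For what it's worth, a minimal sketch of the general assurance-contract mechanism being asked about (as discussed earlier in the thread): pledges only take effect if the combined total reaches a target, otherwise nobody pays. This is purely illustrative and is not a Bitcoin transaction script.

Code:
def settle_assurance_contract(pledges, target):
    """pledges: contributor -> amount. Returns the payout if funded, else None."""
    total = sum(pledges.values())
    if total >= target:
        return total     # funded: the beneficiary (e.g. a miner) is paid
    return None          # target missed: all pledges are returned

print(settle_assurance_contract({"a": 10, "b": 15}, target=20))  # 25 (funded)
print(settle_assurance_contract({"a": 10, "b": 5}, target=20))   # None (refunded)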
legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
Is the size of the network the number of nodes or the number of transactions? What is the stated goal here, maximizing transactions or maximizing network nodes?

Repeating what was already said once again: you can safely transact without being a full node. On the other hand, what's the point in being a full node if you can't even transact since there's no more room in the blockchain for your transactions?

Well, that's disingenuous: there is always room in the blockchain (up to 1MB per block at present); it is just the price to get into the blockchain that is at issue here. As I've tried to make clear on numerous occasions, including above in this thread, you cannot divorce the discussion of block size limits from fees as simply as you are wont to do here.

The whole discussion is about who is going to pay for the N * max_block_size * 365 * 144 MBytes of annual global storage required for the blockchain (N being the number of full network nodes); blocking one's ears to discussions of fees is ignoring half the argument. Shall we have some quantification of the optimal size of N from those who seem to be saying it is a number that can be discounted?
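
Writing that figure out, with N left as an open assumption:

Code:
# Annual storage implied by the formula above, assuming worst-case full blocks.
BLOCKS_PER_YEAR = 144 * 365   # ~52,560

def global_storage_gb_per_year(n_full_nodes, max_block_size_mb):
    per_node_gb = max_block_size_mb * BLOCKS_PER_YEAR / 1024
    return n_full_nodes * per_node_gb

print(round(global_storage_gb_per_year(1, 1)))        # ~51 GB/yr per node at 1MB blocks
print(round(global_storage_gb_per_year(10_000, 1)))   # ~513,000 GB/yr across 10,000 nodes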
legendary
Activity: 1106
Merit: 1004
Is the size of the network the number of nodes or the number of transactions? What is the stated goal here, maximizing transactions or maximizing network nodes?

Repeating what was already said once again: you can safely transact without being a full node. On the other hand, what's the point in being a full node if you can't even transact since there's no more room in the blockchain for your transactions?