
Topic: Funding of network security with infinite block sizes - page 2. (Read 24568 times)

legendary
Activity: 1120
Merit: 1152
It seems that you can get the same effect without OP_RETURN by having a transaction that pays to OP_TRUE OP_VERIFY.

Actually you just need OP_TRUE as the scriptPubKey, or leave it empty and have the miner provide OP_TRUE in the scriptSig. I mentioned the trade-offs of the approach on IRC a few weeks ago.
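To be concrete, the two equivalent anyone-can-spend fee-output forms are just these (illustration only, not consensus code; OP_TRUE/OP_1 is opcode 0x51):

Code:
OP_TRUE = bytes([0x51])

# Form 1: scriptPubKey is a single OP_TRUE; the claiming scriptSig can be empty.
script_pubkey_1, script_sig_1 = OP_TRUE, b""

# Form 2: scriptPubKey is empty; the miner supplies OP_TRUE in the scriptSig.
script_pubkey_2, script_sig_2 = b"", OP_TRUE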

Also, what is to stop a miner adding the extra 8 BTC to get it over the limit?  In that case, the block is funded anyway.

Ouch... not much of an assurance contract then, is it? It's basically just a donation to whichever miner happens to solve the block - that's very clever of you. There is of course the risk that your block gets orphaned and another miner takes the fees instead, but that happens something like 1% of the time now, and potentially a lot less in the future. Miners can deliberately orphan the block too, but we really want to implement mechanisms to discourage that behavior for a lot of reasons.

You could use nLockTime: basically you would all participate in authoring an nLockTime'd assurance contract transaction. Shortly before the nLockTime deadline, if the assurance contract is *not* fully funded - i.e. a miner might still pull the "self-assure" trick - you double-spend the input that would have gone to the contract, so your contribution to it is no longer valid. On the other hand, that means anyone can DoS-attack the assurance contract process itself by doing exactly that, and if they time it right they can still pull off the "self-assure" trick. Risky though - it's hard to reason about what the correct strategy is there, although all of those attacks are obviously easier to pull off if you control a large share of the overall hashing power rather than a small one.
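Roughly, the check each contributor would run looks like this (a sketch with made-up helper names, not a real wallet API):

Code:
DEADLINE_HEIGHT = 250000   # the contract tx's nLockTime (example value)
SAFETY_MARGIN = 6          # act a few blocks before the deadline

def maybe_revoke_pledge(height, total_pledged, target, my_input, double_spend):
    # double_spend() stands in for broadcasting a conflicting transaction that
    # sends my_input back to me, invalidating my pledge to the contract.
    if height >= DEADLINE_HEIGHT - SAFETY_MARGIN and total_pledged < target:
        double_spend(my_input)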

With a hardfork we can fix the issue though, by making it possible to write a scriptPubKey that can only be spent by a transaction following a certain template. For instance it could say that prior to block n, the template can only be a mining fee donation of value > x, and after, the spending transaction can be anything at all. It'll be a long time before that feature is implemented though.

In any case all these tricks would benefit an attacker trying to depress the overall network hash rate, either to launch double-spend attacks, attack Bitcoin in general, or just keep funds from flowing to the competition if they are a miner.
full member
Activity: 182
Merit: 100

Larger blocks will potentially cause bandwidth issues when you try to propagate them though, since you have to send all the transactions.

When you propagate them, do you send them to all other miners or only a subset of miners?
legendary
Activity: 1232
Merit: 1094
The wiki page is basically a copy of what I wrote with a few edits to make it sound more wiki-ish.

So, the procedure is basically, create a transaction:

Inputs:
  2 BTC: <signature> + SIGHASH_ANYONECANPAY

Outputs:
  10 BTC: OP_RETURN

This transaction is not yet valid on its own, since its 2 BTC of inputs don't cover the 10 BTC output (and the OP_RETURN output itself can never be spent).  However, because the input is signed with SIGHASH_ANYONECANPAY, signature verification ignores all other inputs.  This means that if someone modified it to

Inputs:
  2 BTC: <signature> + SIGHASH_ANYONECANPAY
  8 BTC: <signature> + SIGHASH_ANYONECANPAY

Outputs:
  10 BTC: OP_RETURN

it becomes valid.  The signatures are now checked once per input, and every signature has to be valid.
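In other words pledges combine additively, and the transaction only becomes mineable once the inputs cover the output. A toy model of that rule (signatures aren't modelled; this is just to illustrate the accounting):

Code:
SIGHASH_ALL = 0x01
SIGHASH_ANYONECANPAY = 0x80

def pledge(value_btc):
    # Each contributor signs only their own input (plus all outputs), so more
    # inputs can be attached later without invalidating existing signatures.
    return {"value": value_btc, "sighash": SIGHASH_ALL | SIGHASH_ANYONECANPAY}

def is_fundable(inputs, output_value=10):
    # The tx can only be mined once total input value covers the output value.
    return sum(i["value"] for i in inputs) >= output_value

tx_inputs = [pledge(2)]
print(is_fundable(tx_inputs))   # False - 2 BTC doesn't cover the 10 BTC output
tx_inputs.append(pledge(8))     # anyone, including a miner, can add this input
print(is_fundable(tx_inputs))   # True - the contract is now complete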

It seems that you can get the same effect without OP_RETURN by having a transaction that pays to OP_TRUE OP_VERIFY.

Also, what is to stop a miner adding the extra 8 BTC to get it over the limit?  In that case, the block is funded anyway.

The disadvantage is that the miner must add a 2nd transaction to direct the OP_TRUE output to his actual address.  However, no hard fork is required.
legendary
Activity: 1232
Merit: 1094
Can someone explain why limited blocksizes limit the number of transactions that can be placed? Wouldn't ten 100 kB blocks get solved in as much time as one 1 MB block?

No, a block takes the same time to solve, no matter how many transactions are in it.  You only have to solve the block header, which is 80 bytes.  You run the header through the SHA-256 hash function twice to see if you "win".  If not, you increment an integer (the nonce) in the header and try again.  Mining is repeating the hash function until you find a nonce that "wins".

Basically, once you have collected all the transactions you want to include, you calculate the merkle root.  This is a single hash that commits to all the transactions.  You include that number in the header, and it is always the same size (32 bytes) no matter how many transactions there are.
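In rough Python, the loop looks like this (placeholder field values, not a real block):

Code:
import hashlib, struct

def double_sha256(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(version, prev_hash, merkle_root, timestamp, bits, target):
    for nonce in range(2**32):
        # The header is always 80 bytes, however many transactions the
        # 32-byte merkle_root commits to.
        header = struct.pack("<I32s32sIII", version, prev_hash, merkle_root,
                             timestamp, bits, nonce)
        if int.from_bytes(double_sha256(header), "little") < target:
            return nonce        # this header "wins"
    return None                 # nonce range exhausted; tweak the header and retry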

Larger blocks will potentially cause bandwidth issues when you try to propagate them though, since you have to send all the transactions.
full member
Activity: 182
Merit: 100
Can someone explain why limited blocksizes limit the number of transactions that can be placed? Wouldn't ten 100 kB blocks get solved in as much time as one 1 MB block?
legendary
Activity: 1358
Merit: 1003
Ron Gross
The wiki page is basically a copy of what I wrote with a few edits to make it sound more wiki-ish.

Coolness - keep up the good work Mike!
legendary
Activity: 1526
Merit: 1134
The wiki page is basically a copy of what I wrote with a few edits to make it sound more wiki-ish.
legendary
Activity: 1358
Merit: 1003
Ron Gross
Posted the wiki article to reddit.

I haven't read the thread in a while. Can I assume the wiki is relatively up to date, or does it need updating?
legendary
Activity: 1652
Merit: 2301
Chief Scientist
What about directly targeting fees, similar to Gavin's suggestion that blocks must have fee > 50 * (size in MB).
Nah, I now think that's a dumb idea.

Responding to gmaxwell:

RE: burden of unpaid full nodes: for the immediately foreseeable future, that burden is on the order of several hundred dollars a year to buy a moderately beefy VPS somewhere.

I understand the "let's engineer for the far future" ... but, frankly, I think too much of that is dumb. Successful projects and products engineer for the next year or two, and re-engineer when they run into issues.

Maybe the answer will be "validation pools" like we have mining pools today, where people cooperate to validate part of the chain (bloom filters, DHTs, mumble mumble....). Maybe hardware will just keep up.

RE: race to the bottom on fees and PoW:

sigh.  Mike explained how that is likely to be avoided. I'm 100% convinced that if users of the network want secure transactions they will find a way to pay for them, whether that is assurance contracts or becoming miners themselves.
hero member
Activity: 991
Merit: 1011
It also guarantees that the miner fees stay between 35 and 65 BTC per block, so it keeps the network secure.

it also guarantees that bitcoin always costs somewhere between 8.7% and 16.2% of its current market cap in fees every year. sounds like a solid plan. at a 100 billion dollar market cap we need to start thinking about invading some small countries for mining rig storage space though.
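(for reference, that range follows directly from the 35-65 BTC per block target, assuming the full 21 million coin supply:)

Code:
blocks_per_year = 6 * 24 * 365                # ~52,560
for fee_per_block in (35, 65):
    yearly_fees = fee_per_block * blocks_per_year
    print(yearly_fees / 21_000_000)           # prints ~0.0876 and ~0.1627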
legendary
Activity: 1232
Merit: 1094
What about directly targeting fees, similar to Gavin's suggestion that blocks must have fee > 50 * (size in MB).

If the average (or median) block fee (including minting) for the last 2016 blocks is more than 65 BTC, then increase the block size by 5%.  If it is less than 35 BTC, then drop it by 5%.  This gives a potential change in block size of a factor of about 3.5 per year.  It also guarantees that the miner fees stay between 35 and 65 BTC per block, so it keeps the network secure.

Of course, if a decreased block size causes decreased usage, which in turn causes lower tx fees, then you could get a downward spiral, but the change is limited to a factor of 3.5 per year in either direction.
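Something like this, in sketch form (fee values in BTC, averaged over the retarget period; the constants are the ones above):

Code:
FEE_CEILING = 65     # BTC per block
FEE_FLOOR = 35
STEP = 0.05          # 5% per 2016-block period

def adjust_max_block_size(max_size, fees_last_2016_blocks):
    avg_fee = sum(fees_last_2016_blocks) / len(fees_last_2016_blocks)
    if avg_fee > FEE_CEILING:
        return max_size * (1 + STEP)   # fees high -> make room for more transactions
    if avg_fee < FEE_FLOOR:
        return max_size * (1 - STEP)   # fees low -> tighten block space
    return max_size

# ~26 retarget periods per year at 5% each: 1.05**26 ~= 3.5x per year at most.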
staff
Activity: 4284
Merit: 8808
Okay, I think this will be my final word on this (if anyone cares or is listening).

Meh.

Any amount of algorithms will just be guesswork that could be wildly wrong - a small error in the exponent makes a big difference: if computers' ability to handle the blocks doubles every 18 months instead of every 12, then after ten years the allowed size will have grown 1024x while capacity has only grown about 101x - blocks 10x too big to handle.
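(The arithmetic behind that 1024 vs 101:)

Code:
years = 10
assumed = 2 ** (years / 1.0)   # capacity assumed to double every 12 months
actual = 2 ** (years / 1.5)    # capacity actually doubling every 18 months
print(int(assumed), int(actual), round(assumed / actual))   # 1024 101 10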

And "median of blocks" basically amounts to "miners choose" - if you're going to have miners choose, I'd rather have them insert a value in the coinbase and take the median of that... then at least they're not incentivized to bloat blocks just to raise the cap, or forced to forgo fees to their competition just to speak their mind on lowering it. Smiley  But miners-choose is ... not great, because miners' alignment with everyone else is imperfect - and right now the majority of the mining decision is basically controlled by just two people.

There are two major areas of concern from my perspective: the burden on unpaid full nodes making Bitcoin not practically decentralized if it grows faster than technology, and a race to the bottom on fees and thus PoW difficulty making Bitcoin insecure (or dependent on centralized measures like bank-signed blocks).  (And then there are some secondary ones, like big pools using large blocks to force small miners out of business.)

If all of the cost of being a miner is in the handling of terabyte blocks and none is in the PoW, an attacker who simply skips block validation can easily reorg the chain. We need robust fees - not just to cover the non-adaptive "true" costs of blocks, but also to cover the PoW security, which can adapt as low as miners allow it.

The criteria you have speak to the first of these only indirectly - we HOPE computing will follow some exponential curve, but we don't know the exponent.  They don't speak to the second at all unless you assume some pretty ugly mining-cartel behavior to fix the fee, with a majority orphaning valid blocks mined by a minority (which, even if it's not too ugly for you on its face, would make people have to wait many blocks to be sure the consensus was settled).

If you were to use machine criteria, I'd add a third: it shouldn't grow faster than (some factor of) the difficulty change.  This directly measures the combination of computing efficiency and miner profitability.  Right now the difficulty change looks silly, but once we're comfortably settled with ASICs on the latest silicon process it should actually reflect changes in computing... and should at least prevent the worst of the difficulty-tends-to-1 problem (though it might not keep up with computing advances and we could still lose security).  Also, difficulty as a limit means that miners actually have to _spend_ something to increase it - the median-of-size thing means that miners have to spend something (lost fees) to _decrease_ it...

Since one of my concerns is that miners might not make enough fee income and security will drop, making it so miners have to give up more income to 'vote' for the size to go down... is very much not good. Making it so that they have to burn more computing cycles to make it go up would, on the other hand, be good.

Code:
every 2016 blocks:

    new_max_size = min( old_size * 2^(time_difference/year),                                # time-based growth cap
                        max( max(100k, old_size/2),
                             median(miner_coinbase_preferences[last 2016 blocks]) ),        # miners' vote, floored
                        max( 1MB, 1MB * difficulty/100million ),                            # difficulty-based cap
                        periodic_hard_limit (e.g. 10MB) )                                   # periodically reset hard cap
(The 100m number there is completely random; we'll have to see what things look like in 8 months, once the recent difficulty surge settles some - otherwise I could propose something more complicated than a constant there.)
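The same rule in rough Python (just a translation of the formula above, with the placeholder constants unchanged):

Code:
from statistics import median

YEAR = 365.25 * 24 * 3600           # seconds
PERIODIC_HARD_LIMIT = 10_000_000    # e.g. 10 MB, reset periodically by human consensus

def new_max_size(old_size, time_difference, coinbase_prefs, difficulty):
    # coinbase_prefs: size preferences miners embedded in the last 2016 coinbases
    # time_difference: wall-clock span of those 2016 blocks, in seconds
    by_time = old_size * 2 ** (time_difference / YEAR)
    by_vote = max(max(100_000, old_size / 2), median(coinbase_prefs))
    by_difficulty = max(1_000_000, 1_000_000 * difficulty / 100_000_000)
    return min(by_time, by_vote, by_difficulty, PERIODIC_HARD_LIMIT)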

Mind you - I don't think this is good by itself. But the difficulty-based check improves it a lot. If you were to _then_ augment these three rules with a ConsentOfTheGoverned hard upper limit that could be reset periodically if people thought the rules were doing the right thing for the system and decentralization was being upheld...  well, I'd run out of things to complain about, I think. Having those other limits might make agreeing on a particular hard limit easier - e.g. I might be more comfortable with a higher one knowing that there are those other limits keeping it in check. And an upper boundary gives you something to test software against.

I'm generally more of a fan of rule by consent - public decisions suck and they're painful, but they actually measure what we need to measure, not some dumb proxies that might create bad outcomes or weird motivations. There is some value which is uncontroversial, and it should go to that level; above that might be technically okay but controversial, and it shouldn't go there.  If you say that a publicly set limit can't work, then you're basically saying you want the machine to set behavior against the will and without the consent of the system's users, and I don't think that's right.
 
sr. member
Activity: 294
Merit: 250
Oh goodie, more Google conspiracy theories. Actually I never had to ask for approval to use 20% time on Bitcoin. That's the whole point of the policy - as long as there's some justifiable connection to the business, you can do more or less whatever you want with it and managers can't tell you not to unless it's clearly abusive. That's how we ensure it's usable for radical (i.e. unexpected) innovation.

But even if I was being paid to work on Bitcoin full time by Google, the idea that I'd want Bitcoin to grow and scale up as part of some diabolical corporate master plan is stupid. Occam's Razor, people! The simplest explanation for why I have worked so hard on Bitcoin scalability is that I want it to succeed, according to the original vision laid down by Satoshi. Which did not include arbitrary and pointless limits on its traffic levels.

The idea that Bitcoin can be a store of value with a 1 MB block size limit seems like nonsense to me. That's reversing cause and effect. Bitcoin gained value because it was useful; it didn't gain use because it had value - that can't be the case, because it started out with a value of zero. So if Bitcoin is deliberately crippled so that most people can't use it, it will also cease to have much (if any) value. You can't have one without the other. The best way to ensure Bitcoin is a solid store of value is to ensure it's widely accepted and used on an everyday basis.

If Bitcoin was banned in a country then I think it's obvious its value would be close to zero. This is one of the most widely held misconceptions about Bitcoin, that it's somehow immune to state action. A currency is a classic example of network effects, the more people that use it, the more useful it becomes but it goes without saying that you have to actually know other people are using it to be able to use it yourself. If there was immediate and swift punishment of anyone who advertised acceptance of coins or interacted with an exchange, you would find it very hard to trade and coins would be useless/valueless in that jurisdiction.

The reason I'm getting tired of these debates is that I've come to agree with Gavin - there's an agenda at work, and the arguments are the result of people working backwards from the conclusion they want and trying to find rationales to support it.

Every single serious point made has been dealt with by now. Let's recap:

  • Scalability leads to "centralization". It's impossible to engage in meaningful debate with people like Peter on this because they refuse to get concrete and talk specific numbers for what they'd deem acceptable. But we now know that with simple optimisations that have been prototyped or implemented today, Bitcoin nodes can handle far more traffic than the world's largest card networks on one single computer - what's more, a computer so ordinary that our very own gmaxwell has several of them in his house. This is amazing: all kinds of individuals can, on their own, afford to run full nodes without any kind of business subsidisation at all, including bandwidth. And it'll be even cheaper tomorrow.
  • Mining can't be anonymous if blocks are large. Firstly, as I already pointed out, if mining is illegal in one place then it'll just migrate to other parts of the world, and if it's illegal everywhere then it's game over and Bitcoin is valueless anyway, so at that point nobody cares anymore. But secondly, this argument is again impossible to really grapple with because it's based on an unsupported axiom: that onion networks can't scale. Nobody has shown this. Nobody has even attempted to show this. Once again, it's an argument reached by working backwards from a desired conclusion.
  • Mining is a public good and without artificial scarcity it won't get funded. This is a good argument but I've shown how alternative funding can be arranged via assurance contracts, with a concrete proposal and examples in the real world of public goods that get funded this way. It'll be years before we get to try this out (unless the value of Bitcoin falls a lot), but so far I haven't seen any serious rebuttals to this argument. The only ones that exist are of the form, "we don't have absolute certainty this will work, so let's not try". But it's not a good point because we have no certainty the proposed alternatives will work either, so they aren't better than what I've proposed.

Are there any others? The amount of time spent addressing all these arguments has been astronomical and at some point, it's got to be enough. If you want to continue to argue for artificial scaling limits, you need to get concrete and provide real numbers and real calculations supporting that position. Otherwise you're just peddling vague fears, uncertainties and doubts.

+1
legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
Okay, I think this will be my final word on this (if anyone cares or is listening).

Basically it comes down to a philosophical call, since the number of variables and unknowns going into the decision makes the analysis incalculable; at least, I don't see any good way to reduce the variable set to get a meaningful result without over-simplifying the problem with speculative, weak assumptions.

In that case, as Gavin is saying, we should just leave it up to the market, our intuitions, our future ingenuity, and trust in our ability to solve anything that comes up - and basically hope for the best.

So now I'm thinking: let the max_block_size float up without bound, but put a practical limit on how fast it can rise - a rate limiter of some sort, so it can grow as fast as the network grows but not so fast that an attacker could game it over the short/medium term to force smaller users off the network with unrealistic hardware upgrade requirements. Ultimately, though, the max block size would be unlimited in the long term, as Mike Hearn says.

As a first blush in pseudo-code;

Code:
# Weekly (1008-block) check.
# Gavin A.'s rule: let MAX_BLOCK_SIZE float up with a multiple of the recent median block size.
# Jeff G.'s rule: rate-limit growth to a yearly doubling, i.e. 2^(1/52) per weekly check (Moore's law).

NEW_MAX_BLOCK_SIZE = MULTIPLE * median(size_of_last_1008_blocks)
WEEKLY_CAP = 2 ** (1 / 52) * MAX_BLOCK_SIZE

if MAX_BLOCK_SIZE < NEW_MAX_BLOCK_SIZE:
    if NEW_MAX_BLOCK_SIZE < WEEKLY_CAP:
        MAX_BLOCK_SIZE = NEW_MAX_BLOCK_SIZE
    else:
        MAX_BLOCK_SIZE = WEEKLY_CAP

Infinite max block size growth is possible, but rate-limited to a yearly doubling.

donator
Activity: 668
Merit: 500
You're working under the assumptions that everyone mining coins wants what's best for Bitcoin / cares about Bitcoin's future, and that all miners are equal - they are not; some will realize that producing large blocks that others can't handle is very much in their interest. Others may have the aim of just causing as many problems as possible because Bitcoin is a threat to them in some way - who knows? All you're doing is creating a vulnerability and expecting no one to take advantage of it.

We aren't all hippies living on a fucking rainbow - problems don't just fix themselves because in your one-dimensional view of the world you can't imagine anyone not acting rationally and for the greater good.
You are assuming, with no evidence, that these things are problems.  It's quite easy to ignore troublemakers - see e.g. the anti-spam rules.  Think outside the box a bit more.  Central planners almost never get anything right; stop wanting others to solve your problems.

Despite all the claims that large blocks cause trouble, the only block over 500k we really had was recent, and the reason it caused a problem was entirely unrelated, owing to limitations of untested parts of the software - untested so late only because of all these special-case rules.  Let miners be free and figure it out for themselves.  I suspect a common core of rules would be generally agreed on (anti-spam etc.) and everyone would do their own thing on the fringes, which is actually the case now.
full member
Activity: 154
Merit: 100
people don't want one piece of software to occupy whole hard disks
Users have neither the ability nor the desire to let the blockchain grow without bound. That's why no hard-coded limit on the block size is necessary.
People tend not to want to murder others or be murdered themselves, yet it still happens and we have laws to prevent / discourage it.

Add my vote too.  Other stances start from the premise that block space is valuable and can't be allowed to get too big too fast, but with the contradictory fear that it would be given away for free - and that miners would allow blocks so large that they themselves couldn't handle them, which is also a self-correcting "problem".

You're working under the assumptions that everyone mining coins wants what's best for Bitcoin / cares about Bitcoin's future, and that all miners are equal - they are not; some will realize that producing large blocks that others can't handle is very much in their interest. Others may have the aim of just causing as many problems as possible because Bitcoin is a threat to them in some way - who knows? All you're doing is creating a vulnerability and expecting no one to take advantage of it.

We aren't all hippies living on a fucking rainbow - problems don't just fix themselves because in your one-dimensional view of the world you can't imagine anyone not acting rationally and for the greater good.
legendary
Activity: 1596
Merit: 1100
Satoshi also intended the subsidy-free, fee-only future to support bitcoin.  He did not describe fancy assurance contracts and infinite block sizes; he clearly indicated that fees would be driven in part by competition for space in the next block.

Unlimited block sizes are also a radical position quite outside whatever was envisioned by the system's creator -- who clearly did think that far ahead.
Appeal to authority: Satoshi didn't mention assurance contracts therefore they can not be part of the economics of the network
Strawman argument: The absence of a specific protocol-defined limit implies infinite block sizes.
False premise: A specific protocol-defined block size limit is required to generate fee revenue.

Gathering data does not imply blindly following the data's source.

Fully understanding the intent of the system's designer WRT fees is a very valuable data point in making a decision on block size limits.

donator
Activity: 668
Merit: 500

I support it too. Letting the market decide the fees and the maximum blocksize is the right answer.

I happen to believe that a limited blocksize will lead to centralization, but for those who think the opposite (that larger blocksizes lead to centralization), I truly wish they would put their time and effort into helping BITCOIN scale. If you are concerned about computing requirements going up for the average user, you can certainly work towards optimizations that will lighten that load, without trying to replace Bitcoin for the majority of what constitutes its present-day usage. I think that's probably why Gavin is working on the payment protocol, so that dead puppy "communication" isn't needed.

All the effort and time spent discussing this issue... imagine if it was actually used productively.

Add my vote too.  Other stances start from the premise that block space is valuable and can't be allowed to get too big too fast, but with the contradictory fear that it would be given away for free - and that miners would allow blocks so large that they themselves couldn't handle them, which is also a self-correcting "problem".

Block space will always come with a price; it has for two years now, despite 1 MB not being a limiting factor. Miners worry about their block propagation and will always have an incentive to be conservative and get their blocks known. See Eligius, for example, which kept a 32-transaction limit for a long time.

Let the miners determine what works in the free market through trial and error.  This will change with time elastically as computing power and resources change. And that's the way it should be.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
All the effort and time spent discussing this issue... imagine if it was actually used productively.

I know people are tired of the 20+ threads on block size and thousands of posts in them, but this exhaustive debate is very good.

The minutiae of every point of view are being thrashed out in the open; smart ideas and not-so-smart ideas are put forward and critiqued. What is the alternative? Important decisions taken behind closed doors. Isn't that the default model of existing central banks?

I think that this debate has been productive because a lot of information has been shared between people who care about Bitcoin and want to contribute to help it become successful.

sr. member
Activity: 310
Merit: 250
So the longer I think about the block size issue, the more I'm reminded of this Hayek quote:

Quote
The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.

F.A. Hayek, The Fatal Conceit

We can speculate all we want about what is going to happen in the future, but we don't really know.

So, what should we do if we don't know? My default answer is "do the simplest thing that could possibly work, but make sure there is a Plan B just in case it doesn't work."

In the case of the block size debate, what is the simplest thing that just might possibly work?

That's easy!  Eliminate the block size limit as a network rule entirely, and trust that miners and merchants and users will reject blocks that are "obviously too big." Where what is "obviously too big" will change over time as technology changes.

I support this.

Let's try the simple solution and if it doesn't work, we can try something else.
Bitcoin is all about the free market, right? So let's allow the market to decide for us.

A configuration setting can be added to both client and miner applications, so users will be able to choose.

I support it too. Letting the market decide the fees and the maximum blocksize is the right answer.

I happen to believe that a limited blocksize will lead to centralization, but for those who think the opposite (that larger blocksizes lead to centralization), I truly wish they would put their time and effort into helping BITCOIN scale. If you are concerned about computing requirements going up for the average user, you can certainly work towards optimizations that will lighten that load, without trying to replace Bitcoin for the majority of what constitutes its present-day usage. I think that's probably why Gavin is working on the payment protocol, so that dead puppy "communication" isn't needed.

All the effort and time spent discussing this issue... imagine if it was actually used productively.