Topic: The assumption that mining can be funded via a block size limit (Read 1010 times)

legendary
Activity: 2940
Merit: 1090
If use increases and node propagation speed suffers, even a little bit, then the max block size is obviously too high, and there should be mechanisms to adjust the max block size down in those circumstances. This would provide incentives for more full nodes to appear in order to improve network propagation health, THEN allow the network to increase the max block size when the network demonstrates it can handle larger blocks. An arbitrary raising cannot anticipate future block propagation health on the network. It has to be dynamic, and it needs a regular metric to adjust the max block size, just like difficulty; otherwise the network may fail in an intractable circumstance. E.g. nodes come and go: for a while there may be powerful nodes that make propagation of blocks easy, the limit is raised and re-hardcoded as 5 MB/block, but then, some months later, after the network switches to larger blocks, those nodes switch off and a 5 MB block becomes a real strain on the network. Having mechanisms to adjust the max block size both up and down would be critical for the health of the network.

I honestly think that this can be achieved in concert with difficulty adjustments, by conservatively adjusting the max block size only when the majority of nodes are found to be using all the currently available blockspace.

I still think no going back down. It is a technical limit, not an economic one. If economic failure makes it uneconomical to provision nodes technically capable of operating the protocol, that in itself should provide the incentive for more full nodes to appear - more proper full nodes, ones that actually are able to operate within the protocol limits. Their appearance should be an alternative to adjusting the max block size down, not a consequence of having turned it down.

They aren't stupid, so it should not be "turn it down and they will come" but, rather, "they will come if they are needed".

That is kind of the flip side of turning it up not being "turn it up and they will come" but, rather, "they will come if they are needed, whereupon turning it up will be safe/secure to do" or "they have come, time to turn it up".

That last one though - "they have come, time to turn it up" - is worrying in two ways. One, the risk of over-estimating how large a max size the installed base can actually already handle. Two, the bait-and-switch, fly-by-night nodes you mentioned, which lure us into a limit higher than we can handle without them.

I do not want someone to [be able to] spend a few billion on super-datacentres that convince us to let them drive all median/norm/average nodes out of business, even if they do not then fly by night. Whether they drive the previous median/norm/average nodes to upgrade, or to go out of business, or whatever, I do not want them so big that their flying by night even matters.

Thus the idea of setting the max block size not by what the max ubermultinational or max ubernation does or could deploy, nor by what outliers on the uber side of the mode/median/mean can handle, but, rather, by what the vast swarms of independent ordinary typical full nodes already have in place.

If there are not vast swarms of independent ordinary typical full nodes, something probably is already going wrong.

Uptake (adoption) might even be best measured by rate of increase in sheer number of independent ordinary typical nodes.
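As a toy illustration of sizing by the typical node rather than the outliers, one could imagine surveying what full nodes report they can comfortably handle and taking a low percentile, so that a handful of uber-datacentre nodes cannot drag the figure up. The survey, numbers, and function below are entirely hypothetical, just a sketch of the idea:

Code:
import statistics

# Hypothetical survey (made-up numbers): bytes per block each full node
# says it can comfortably process and relay.
node_capacities = [1_000_000] * 800 + [2_000_000] * 150 + [50_000_000] * 50

def typical_node_limit(capacities: list[int]) -> int:
    # Use a low quantile (the 25th percentile here) so a few huge
    # datacentre nodes cannot pull the limit beyond what the swarm handles.
    quartiles = statistics.quantiles(capacities, n=4)
    return int(quartiles[0])

print(typical_node_limit(node_capacities))  # 1000000: stays at 1 MB despite the outliers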

-MarkM-
member
Activity: 113
Merit: 11
I find the argument that scarcity of blockspace is the only way of keeping miners to be a straw man, directly transferred from the idea that sound money must be scarce in order to retain its value.

Ensuring the long term security of the blockchain can, and should, be based on fee discovery in a free market. Scarcity of blockspace should have nothing to do with it; blockspace size is a network health issue, NOT an economic driver. Miners will determine the minimum level of fees they need to include in a block to make mining worthwhile, and users should still have the flexibility to test how competitive the market is for fees below the miner's price point, without being thumbscrewed by an arbitrary hard limit on blockspace, which miners can exploit to force users to adopt ever higher fees as use of Bitcoin increases.

A free market on fees where both miners and users negotiate price is far more desirable than miners essentially auctioning space to the highest bidder.

The whole idea that the max blocksize should have some kind of economic drive to it is a bit silly. The max blocksize should never be raised for economic reasons; it should only be raised (and lowered) because the network of nodes that supports and maintains the blockchain has the capacity to do so. If use increases and node propagation speed suffers, even a little bit, then the max block size is obviously too high, and there should be mechanisms to adjust the max block size down in those circumstances. This would provide incentives for more full nodes to appear in order to improve network propagation health, THEN allow the network to increase the max block size when the network demonstrates it can handle larger blocks. An arbitrary raising cannot anticipate future block propagation health on the network. It has to be dynamic, and it needs a regular metric to adjust the max block size, just like difficulty; otherwise the network may fail in an intractable circumstance. E.g. nodes come and go: for a while there may be powerful nodes that make propagation of blocks easy, the limit is raised and re-hardcoded as 5 MB/block, but then, some months later, after the network switches to larger blocks, those nodes switch off and a 5 MB block becomes a real strain on the network. Having mechanisms to adjust the max block size both up and down would be critical for the health of the network.

I honestly think that this can be achieved in concert with difficulty adjustments, by conservatively adjusting the max block size only when the majority of nodes are found to be using all the currently available blockspace.
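As a rough sketch of what such a retarget-style rule might look like, with an orphan-rate signal standing in for "propagation health" (every name, threshold, and step size below is made up, not a concrete proposal):

Code:
def retarget_max_block_size(current_max: int,
                            recent_block_sizes: list[int],
                            orphan_rate: float) -> int:
    # Hypothetical rule evaluated at each difficulty retarget.
    avg_fullness = sum(recent_block_sizes) / (len(recent_block_sizes) * current_max)

    if orphan_rate > 0.05:
        # Propagation is suffering: adjust the max block size down.
        return int(current_max * 0.9)
    if avg_fullness > 0.9:
        # Blocks are consistently near-full and propagation looks healthy:
        # allow a small, conservative increase.
        return int(current_max * 1.1)
    return current_max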

legendary
Activity: 2940
Merit: 1090
It is also, *maybe*, backwards-compatible with the subsidy halving we already had.

That is, the existing blocks already in the chain have not violated it yet, even though back last December (we belatedly realise, upon looking at the new code) the block size limit already doubled according to the (new) protocol - we just hadn't gotten that fact into the code back then.

(The *maybe* is a caveat that we don't know yet if we can actually handle one megabyte blocks so for all we know two megabyte blocks might not be feasible yet.)
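A minimal sketch of the halving-coupled rule under discussion (double the max block size at each subsidy halving), with illustrative constants only and assuming the usual 210,000-block halving interval:

Code:
HALVING_INTERVAL = 210_000          # blocks between subsidy halvings
BASE_MAX_BLOCK_SIZE = 1_000_000     # bytes: the existing 1 MB limit

def max_block_size(height: int) -> int:
    # Hypothetical limit at a given height: 1 MB, then 2 MB, 4 MB, ...
    halvings = height // HALVING_INTERVAL
    return BASE_MAX_BLOCK_SIZE * (2 ** halvings)

# The "backwards-compatible" point above: under this rule the limit is
# already 2 MB for blocks after the first halving at height 210,000,
# and no existing block has violated it.
assert max_block_size(209_999) == 1_000_000
assert max_block_size(210_000) == 2_000_000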

-MarkM-
sr. member
Activity: 310
Merit: 250
Multiplying by two every time we halve the block subsidy would be pretty bulletproof, as we'd have four years to prepare for each upcoming change.

We'd also have four years to figure out whether that was far too optimistic (increasing too fast) and would have to be slowed down.

So I think merely a fixed constant is not likely at all.

The main thing might be sheer predictability, so people can do their five-year, ten-year, and twenty-year plans, or something like that.

Thing is, people don't think that specific rate I cited above would be fast enough.

They hope for exponential uptake, but are not even doing tests like, say, picking one branch/city store in which to do a market study, seeing how much that affects the system so as to judge how many more of their branches to expand the test into, and coming back when the current limit is full to say: hey, we brought in all the traffic that is making blocks nicely full now, and that was only 1% of our branches worldwide, so we now know that accepting bitcoins at the rest of our branches will cause X amount more traffic on your blockchain; how soon will you be ready for me to bring that new traffic?

We have not seen anything like that, just wild fantasies about the future from people who do not even seem to have enough of a customer base to bring in to fill the blocks we already have.

-MarkM-


This at least would solve all of the problems of losing the decentralized nature of Bitcoin, of fighting exponential growth (and the attack vectors that come with it), and of forever relegating Bitcoin to a very limited use. And as you say, it's entirely predictable.

This approach is preferable to a single constant increase (requiring additional ones in the future), or to a potentially gameable automatically adjusting algorithm, and I would be fully in favor of it (doubling the max block size whenever the subsidy gets cut in half, even if that is still "not fast enough").
legendary
Activity: 2940
Merit: 1090
Multiplying by two every time we halve the block subsidy would be pretty bulletproof, as we'd have four years to prepare for each upcoming change.

We'd also have four years to figure out whether that was far too optimistic (increasing too fast) and would have to be slowed down.

So I think merely a fixed constant is not likely at all.

The main thing might be sheer predictability, so people can do their five-year, ten-year, and twenty-year plans, or something like that.

Thing is, people don't think that specific rate I cited above would be fast enough.

They hope for exponential uptake, but are not even doing tests like, say, picking one branch/city store in which to do a market study, seeing how much that affects the system so as to judge how many more of their branches to expand the test into, and coming back when the current limit is full to say: hey, we brought in all the traffic that is making blocks nicely full now, and that was only 1% of our branches worldwide, so we now know that accepting bitcoins at the rest of our branches will cause X amount more traffic on your blockchain; how soon will you be ready for me to bring that new traffic?

We have not seen anything like that, just wild fantasies about the future from people who do not even seem to have enough of a customer base to bring in to fill the blocks we already have.

-MarkM-
hero member
Activity: 588
Merit: 500
I do not think the hard coded max block size is about deliberately creating a limited resource at all.

To me it is about things like array overflows, stack overflows, heap overflows, bandwidth flooding, requiring more than ten minutes to process a block, stuff like that.

Basically, no matter what hardware and bandwidth is up and running ready to process blocks, it has some limit, and the network fails if that limit is exceeded. So, to prevent hackers/attackers from using massively larger blocks to exploit the machines running the nodes, or from blowing up every node other than their special Beowulf cluster designed specifically for the purpose of generating and sending terabyte blocks or petabyte blocks or whatever, there is a HARD limit.

I think that limit should be what the existing 24/7 nodes can currently handle, and that should already be the case, because if they could not handle it they would have been knocked off the net by now - assuming there is in fact a need to increase that HARD limit.

It is basically a protocol limit; if the protocol calls for 8 bits per byte, then manufacturers need to build CPUs that can handle 8-bit bytes. If the protocol calls for 1 megabyte blocks, then... Except we did it backwards: first the manufacturers manufactured machines that could handle one megabyte blocks, all the nodes deployed such machines, and then we set the limit based on the already-paid-for installations actually out there that the protocol had to be designed to fit.

If the limit is going to be two megabytes, all nodes need to be upgraded ready to handle that new protocol limit.

It is maybe not quite as nontrivial a thing to change as upgrading from 64-bit systems to 128-bit systems, but doubling the max block size and doubling the number of bits systems need to process is maybe not too outrageous an analogy. Twice as many bits will need to be processed. You can process twice as many bits without doubling the size of the registers in the CPUs - without going from 64-bit machines to 128-bit machines - but no matter how you slice it, doubling the block size means doubling the number of bits that have to be processed per unit of time.

So to talk about multiplying by ten, think about if we decided our next release was going to run on 640-bit machines instead of 64-bit machines.

It's not as bad as that, fortunately. Maybe it is more like: to multiply it by two means using sixteen-core machines instead of eight-core machines.

We ideally want to use commodity machines though, right? The type that sells the most tends to achieve the lowest price, because it is not specialty gear that the mass market consumes little of. Are 16-core machines what Walmart is now pushing, or are consumers still pretty much stuck at 8 cores if they don't want to pay a premium for specialty gear?

It has nothing to do with trying to limit the number of transactions available to buy; it has to do with ensuring the entire network of already-paid-for and running nodes can handle the largest possible block any hacker or attacker could possibly construct.

-MarkM-


markm, thank you for the quantity and insightfulness of your posts across many threads on this blocksize topic.

And yet, despite the best efforts of yourself and other sensible people like misterbigg, a nontrivial number of folks still don't seem to grasp that higher transactional capacity will come as a tradeoff against potential new attack vectors.  A tradeoff that should be very carefully researched, tuned, and monitored, and which might be too complex to codify as an automated set-and-forget heuristic. 

The final choice might, unfortunately, turn out to be between leaving the constant fixed forever and periodic human intervention to raise the value.

If we go down the latter route, which seems likely based on sentiment, I just hope that the process never degenerates into a committee of crony appointees meeting every month behind closed doors to decide the optimal interest rate MAX_BLOCK_SIZE Wink.
legendary
Activity: 2940
Merit: 1090
The size should be determined by the size of the economy and grow or shrink as demand does. Similar to how the transaction fee cost was reduced when bitcoin price grew.

No, it should never shrink, unless maybe some global catastrophe forced civilisation / the internet backward, crippling it to the point it had to backtrack to some ancient lower-tech era. It would be like being forced back to 8-bit machines, or forced back to 16-bit machines, or not being able to use 64-bit machines anymore.

You seem to be thinking this is about what size blocks miners should actually make.

The hard limit on block size is about what size blocks an attacker could use to attack the system with.

If the system can handle an attacker pouring out one-megabyte blocks 24/7 back to back, then okay, one megabyte limit is not too large.

But we have not yet actually proven we really can handle back to back one-megabyte blocks!
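For concreteness, some back-of-the-envelope arithmetic on what sustained, completely full one-megabyte blocks would mean for a node; the 8-peer relay factor is an assumption for illustration, not a measured figure:

Code:
MAX_BLOCK_SIZE_MB = 1
BLOCKS_PER_DAY = 24 * 6             # one block per ~10 minutes on average

chain_growth_per_day_mb = MAX_BLOCK_SIZE_MB * BLOCKS_PER_DAY        # 144 MB/day
chain_growth_per_year_gb = chain_growth_per_day_mb * 365 / 1000     # ~52.6 GB/year

# Each full block must also be verified and relayed well within the
# ten-minute interval; relaying to, say, 8 peers multiplies outbound
# bandwidth by roughly that factor.
outbound_per_day_mb = chain_growth_per_day_mb * 8                   # ~1.15 GB/day

print(chain_growth_per_day_mb, chain_growth_per_year_gb, outbound_per_day_mb)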

-MarkM-
legendary
Activity: 2940
Merit: 1090
Sure, the size should be what the nodes can currently handle. But if in the future an increase in the size is needed, it means a hard fork... that's a problem because that creates inflexibility, right?

It creates invulnerability. There is currently no size of block an attacker can spew out that harms us, other than through the ordinary DDoS limits - that is, the sheer amount of data that would serve as a viable attack on any internet-connected system without even having to pretend it was a block.

(That is, it would work better to have a million zombie machines in a botnet just do normal DDoS attack right now than to try to get the network to accept and process a billion-petabyte block.)

Hard forks, in and of themselves, are not inflexible; people are.

We need the network to be inflexible against attack. We don't want "sometimes we will let an attack succeed, sometimes we won't, we'll see how it goes, who knows, maybe there won't even be any attacks". So we have an absolute rule: sorry, no attacks of that kind. Sheer block size, no matter how huge, is something we are totally invulnerable to. Try some other attack maybe, but that one we are invulnerable to.

Hard forks are not really all that difficult, in some ways the hard part is preventing hard forks from being too darn easy!

-MarkM-
full member
Activity: 238
Merit: 100
The size should be determined by the size of the economy and grow or shrink as demand does. Similar to how the transaction fee cost was reduced when bitcoin price grew.
sr. member
Activity: 294
Merit: 250
Sure, the size should be what the nodes can currently handle. But if in the future an increase in the size is needed, it means a hard fork... that's a problem because that creates inflexibility, right?
legendary
Activity: 2940
Merit: 1090
I do not think the hard coded max block size is about deliberately creating a limited resource at all.

To me it is about things like array overflows, stack overflows, heap overflows, bandwidth flooding, requiring more than ten minutes to process a block, stuff like that.

Basically, no matter what hardware and bandwidth is up and running ready to process blocks, it has some limit, and the network fails if that limit is exceeded. So, to prevent hackers/attackers from using massively larger blocks to exploit the machines running the nodes, or from blowing up every node other than their special Beowulf cluster designed specifically for the purpose of generating and sending terabyte blocks or petabyte blocks or whatever, there is a HARD limit.
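A minimal sketch of the kind of guard this describes: reject any block over the hard limit before doing any further work. The function names and the 1 MB constant are illustrative only, not taken from any particular client:

Code:
MAX_BLOCK_SIZE = 1_000_000  # bytes: the hard protocol limit

def accept_block(raw_block: bytes) -> bool:
    # Reject oversized blocks up front, before parsing or validating them,
    # so a terabyte "block" never gets the chance to exhaust memory or CPU.
    if len(raw_block) > MAX_BLOCK_SIZE:
        return False
    return validate_block(raw_block)

def validate_block(raw_block: bytes) -> bool:
    # Placeholder for the real deserialization and consensus checks.
    return True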

I think that limit should be what the existing 24/7 nodes can currently handle, and that should already be the case, because if they could not handle it they would have been knocked off the net by now - assuming there is in fact a need to increase that HARD limit.

It is basically a protocol limit; if the protocol calls for 8 bits per byte, then manufacturers need to build CPUs that can handle 8-bit bytes. If the protocol calls for 1 megabyte blocks, then... Except we did it backwards: first the manufacturers manufactured machines that could handle one megabyte blocks, all the nodes deployed such machines, and then we set the limit based on the already-paid-for installations actually out there that the protocol had to be designed to fit.

If the limit is going to be two megabytes, all nodes need to be upgraded ready to handle that new protocol limit.

It is maybe not quite as nontrivial a thing to change as upgrading from 64-bit systems to 128-bit systems, but doubling the max block size and doubling the number of bits systems need to process is maybe not too outrageous an analogy. Twice as many bits will need to be processed. You can process twice as many bits without doubling the size of the registers in the CPUs - without going from 64-bit machines to 128-bit machines - but no matter how you slice it, doubling the block size means doubling the number of bits that have to be processed per unit of time.

So to talk about multiplying by ten, think about if we decided our next release was going to run on 640-bit machines instead of 64-bit machines.

It's not as bad as that, fortunately. Maybe it is more like: to multiply it by two means using sixteen-core machines instead of eight-core machines.

We ideally want to use commodity machines though, right? The type that sells the most tends to achieve the lowest price, because it is not specialty gear that the mass market consumes little of. Are 16-core machines what Walmart is now pushing, or are consumers still pretty much stuck at 8 cores if they don't want to pay a premium for specialty gear?

It has nothing to do with trying to limit the number of transactions available to buy; it has to do with ensuring the entire network of already-paid-for and running nodes can handle the largest possible block any hacker or attacker could possibly construct.

-MarkM-
sr. member
Activity: 294
Merit: 250
I may be understanding things wrong but I see it like this....

Let's say the "limited resource" of a fixed block size drives prices up. Sounds good for miners because of huge fees, right? But... in any financial system, isn't the greatest number of transactions in the lower range? In other words, if all of a sudden you had this high transaction cost, it would only correspond with moving high amounts of money. But if smaller amounts make up the bulk of a financial system, you could be pushing the vast majority of transactions off the network. If that happens, then all those transactions, even though they don't involve high amounts of money, won't be adding to the total amount gained from transaction fees. So doesn't that mean that the total hashing power is reduced? If you could prove that miners would lose revenue in total from having the majority of lower-amount transactions done off the network, wouldn't you also prove that it would reduce hashing power, since hashing power is related to reward?
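A toy worked example of that worry, with made-up fee numbers only, just to show how total fee revenue (and hence the hashing power it can pay for) could shrink if most small transactions are priced off the chain:

Code:
# Scenario A: blockspace is cheap, so the bulk of (small) payments stay on-chain.
small_tx_count, small_tx_fee = 2_000, 0.0005   # BTC per transaction (made up)
fees_a = small_tx_count * small_tx_fee          # 1.0 BTC per block

# Scenario B: fees are bid up, most small payments move off the network,
# and only a few hundred high-value transactions remain.
large_tx_count, large_tx_fee = 200, 0.004       # BTC per transaction (made up)
fees_b = large_tx_count * large_tx_fee          # 0.8 BTC per block

print(fees_a, fees_b)  # 1.0 vs 0.8: total miner revenue, and so the hash
                       # power it can sustain, is lower in scenario B.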

legendary
Activity: 2618
Merit: 1007
My argument that even with an unlimited block size there would be an equilibrium (= a point where even transactions with fees might get rejected by miners because it costs them more to include these than to ignore them) goes like this:

The longer it takes you to get out a block to at least 50% in hash rate of the other miners, the higher the risk of stale blocks.

Probably miners will need a certain total fee in their blocks to operate (e.g. 1 BTC). With the lowest fee possible (1 satoshi flat) and an average transaction size of 0.5 kB, this means blocks would be around 50 GB in size. However, it might make more sense to ensure that blocks don't get much bigger than 100 MB, because otherwise you risk an orphaned block and have to take that into consideration (e.g. with 10% orphaned blocks you need about 1.1 BTC in fees per block). The chance of orphaned blocks probably does not increase linearly (Oh Meni, where art thou? Wink), so there will be an ideal block size for any desired total fee amount, even if the size is not limited at all.
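A toy model of that trade-off, with made-up parameters: bigger blocks spread the miner's required total fee over more space, but take longer to propagate and so get orphaned more often, which pushes the required fee back up, giving an interior optimum even with no hard limit at all:

Code:
import math

TARGET_NET_FEE_BTC = 1.0      # what the miner needs to net per block, on average
BLOCK_INTERVAL_S = 600        # average time between blocks
PROPAGATION_S_PER_MB = 5.0    # assumed extra propagation delay per MB (made up)

def orphan_probability(size_mb: float) -> float:
    # Chance a competing block appears while ours is still propagating,
    # using a simple exponential inter-block time model.
    delay = PROPAGATION_S_PER_MB * size_mb
    return 1.0 - math.exp(-delay / BLOCK_INTERVAL_S)

def required_fee_per_mb(size_mb: float) -> float:
    # Total fee needed per block, inflated for orphan risk, per MB of space.
    total = TARGET_NET_FEE_BTC / (1.0 - orphan_probability(size_mb))
    return total / size_mb

# Scanning block sizes shows the per-MB fee falls, bottoms out, then rises
# again (around 120 MB with these made-up numbers).
best = min(range(1, 2001), key=required_fee_per_mb)
print(best, required_fee_per_mb(best))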

The main factors that play into this are eventual optimizations in announcing new blocks (e.g. sending transaction hashes first and then only the transactions some other miner didn't know about) and general bandwidth + network latency between miners.
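For the first of those factors, here is a minimal sketch of the kind of announcement optimization meant: send the header plus transaction hashes, and let peers fetch only what they are missing from their mempools. The structures and names are illustrative, not an actual wire format:

Code:
from dataclasses import dataclass

@dataclass
class BlockAnnouncement:
    header: bytes          # the block header
    tx_hashes: list[str]   # hashes of every transaction in the block

def reconstruct_block(announcement: BlockAnnouncement,
                      mempool: dict[str, bytes],
                      request_missing) -> list[bytes]:
    # Rebuild the block body, fetching only the transactions we don't
    # already know about; a mostly-known block costs a small fraction of
    # its full size to relay.
    missing = [h for h in announcement.tx_hashes if h not in mempool]
    fetched = request_missing(missing)     # one extra round trip for the gaps
    pool = {**mempool, **fetched}
    return [pool[h] for h in announcement.tx_hashes]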

In another thread (https://bitcointalksearch.org/topic/how-a-floating-blocksize-limit-inevitably-leads-towards-centralization-144895) it was already feared that this can create situations where it makes more sense to send your blocks only to other miners that have a high hash rate and are quickly reachable, which means you encourage more and more centralization (--> lower latency and much higher bandwidth if you send between two pools in the same data center) and could drive out small, distant miners.
sr. member
Activity: 461
Merit: 251
I posted the following bit in a comment in one of the many recent block size limit threads, but it's really a separate topic.

It also strikes me as unlikely that a block size limit would actually achieve an optimal amount of hashing power. Even in the case where most users have been driven off the blockchain - and some off of Bitcoin entirely - why should it? Why shouldn't we just expect Ripple-like trust networks to form between the Chaum banks, and blockchain clearing to happen infrequently enough to provide an inconsequential amount of fees to miners? What if, no matter what kind of fake scarcity is built into the blockchain, transaction fees are driven down to somewhere around the marginal transaction fee of all possible alternatives?

This assumption is at the crux of the argument to keep a block size limit, and everyone seems to have just assumed it is correct, or at least left it unchallenged (sorry if I missed something).