
Topic: review of proposals for adaptive maximum block size - page 4. (Read 5250 times)

legendary
Activity: 2940
Merit: 1090
(*) up to some reasonable limit that shouldn't ever be reached, which will depend on how widespread bitcoin adoption is - visa handles a peak of about 10000 tx per second, assume normal use is 3000 tx/sec, assume 250 bytes per transaction and you'd need on average 0.5GB sized blocks every 10 minutes. For the immediate future, I think there's no reason the blocksize should increase beyond 10MB at the outside.

Your (*) is what in the code is the hard-coded max block size that all this fuss is about.

-MarkM-
legendary
Activity: 1708
Merit: 1010

So essentially fees have a ceiling; once it's reached, miners get more breathing room and fees will drop. Once that ceiling is reached again, as indicated by the fees collected, miners again get more breathing room. If ever there's too much room and fees start getting lower, the space is made scarce again in order to encourage higher fees.

What do you think?


While fees have a ceiling, they also have a floor, which I think is too high.  12.5% of the monetary base per year?  For a mature economy that would be way too high.  I would predict that out-of-band methods would undercut the main blockchain for just about everything, functionally reducing the blocksize to under 1MB while users of every size and class desperately attempt to avoid those fees.  On the flip side, this would also make institutional mining (like my example in another thread of Wal-Mart sponsoring mining at a loss for the purpose of processing their own free transactions from customers) the dominant form of security.  I'm not sure whether that is good or bad, overall, but I would consider anything over 3% of the monetary base per year to be excessive for any mature economy. Anything higher opens up an opportunity for a cryptocurrency competitor to undercut Bitcoin outright and eat its lunch.  Keep in mind that the cost overhead of the network functions like a tax, in nearly the same way that inflation of a fiat currency functions like a hidden tax upon the economic base that uses & saves in it.  While that's not a perfect comparison in the long run for Bitcoin, it should be evident that our network costs should never exceed 3%, and that a better target would be 1.5% or 2%.  Of course, that is a metric that is relative to both the size of the economy (which we cannot know in advance) and the actual block subsidy (which we can know in advance).

So to modify your proposal, I'd say that until the block subsidy drops down into that 2% range, the range for doubling or halving the blocksize limit should be between the actual subsidy plus 5% and double the subsidy.
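
(A rough sketch of that modified band, purely illustrative; I'm reading "subsidy plus 5%" as subsidy times 1.05, which is my assumption, not something stated above:)
Code:
# Illustrative only: the doubling/halving band suggested above, as a function of
# the current block subsidy (all values in BTC per block). "Plus 5%" is read here
# as subsidy * 1.05, which is an assumption.
def fee_band(subsidy):
    lower = subsidy * 1.05   # average subsidy + fees below this: halve the limit
    upper = subsidy * 2.0    # average subsidy + fees above this: double the limit
    return lower, upper

print(fee_band(25.0))        # today's 25 BTC subsidy -> (26.25, 50.0)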
sr. member
Activity: 310
Merit: 250

Yeah I know, but I at least wanted to see if there's a possible middle ground, as opposed to just lifting that limit entirely, which I'm absolutely against.


But is anyone actually advocating for that?  Nobody wants a miner to be able to make a 100GB block today full of free transactions. People just want confidence that Bitcoin can be more than a crappy replacement for wire services with a stupidly small maximum number of transactions (at costs that eliminate many potential use cases).
legendary
Activity: 1078
Merit: 1003
That was great Hazek. The only problem I see (as raised by others) is that there is nothing there to "protect decentralization", because as long as the number of transactions keeps rising, even at an increasing cost per block, so too can the block size grow.

Yeah I know, but I at least wanted to see if there's a possible middle ground, as opposed to just lifting that limit entirely, which I'm absolutely against.

And as you say, if enough fees are collected, this means that all Bitcoin users are that much richer and can afford more hardware to handle the extra storage costs, which with this model shouldn't get out of hand at all, because it connects what users are willing to pay with how big the blocks can get.
sr. member
Activity: 310
Merit: 250
That was great Hazek. The only problem I see (as raised by others) is that there is nothing there to "protect decentralization", because as long as the number of transactions keeps rising, even at an increasing cost per block, so too can the block size grow. The biggest objection I've seen to increasing the block size using an adaptive algorithm like this is the possibility of resource needs increasing to the point of disenfranchising some portion of Bitcoin users. Personally I don't feel this is a concern, because

A) I don't believe it is in Bitcoin's interests for the majority of its users to be running full nodes, which is why I point newly interested friends and family members to online wallets
B) I do believe that even for HUGE numbers of transactions (several orders of magnitude larger than now), interested parties with minimal resources could always have either direct or pooled access to full nodes, protecting their interests
C) I believe that Bitcoin will remain as decentralized as it "needs to be", always. This is because those concerned with it becoming too centralized can expend resources (individually or as a group) to make it less centralized again.


The best part of your solution is an implicit agreement with users, which is that if the blocksize and therefore "resource needs" ever increase in the future, so too has the value of Bitcoins. If Bitcoins are worth several thousand USD each, I'm more than happy to purchase many terabytes of storage to continue acting as a full node regardless of transaction volume, and I doubt I'm alone there.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
hazek, I like this very much. As you say, it is impossible to determine the perfect algorithm in advance, but the market will adapt to it to some degree. I think this approach does a lot to secure bitcoin's long-term future.
legendary
Activity: 1078
Merit: 1003
The name of the game is keeping the block space as scarce as possible, encouraging fees that will eventually have to cover the costs of securing the network, but not making it too scarce, so that Bitcoin can still scale better.

It's impossible to know how much in fees must be collected on average in a block, because of the fluctuating value of bitcoins. It's impossible to know what block space is scarce enough but not too scarce. The thing is.. it was also impossible to know that 50BTC would pay for adequate security, let alone, now after the halving, that 25BTC would.. These are simply rules that the market took and is now reacting to.

And from this I can conclude that no matter which adaptive algorithm is picked, it will be entirely arbitrary and the market will simply have to adjust to it and make it work. As long as the relationships inside the algo produce the right incentives, the market will find an equilibrium, and that's the best we can hope for.

Ok so with this out of the way, there are a few predictions that can be made and incorporated into such an algorithm.

1) when the limit is reached, it's highly likely fees will go up, but only to a point (an equilibrium will likely be found between the number of transactions per block and how much the market is willing to pay in fees per transaction), so the block size limit must be increased before we reach that point
2) when the limit is increased, fees will go down to a point until it is reached again then again 1)
3) when the limit is increased in combination with more fees being collected it's highly likely the value of bitcoins has also risen
4) when the value of bitcoins rises, less of them per transaction are required to secure the network
5) users will always try to pay as little in fees as possible


With this in mind we can now build an algo with the right connections that the market can then use and adequately adjust to. Remember this is all arbitrary just like the 50BTC block reward.

The first rule of my proposal is that the block size limit must induce enough fees to cover the security costs before an increase is allowed. Second, after an increase it should take less in fees per transaction to secure the network, since an increase likely means higher-valued bitcoins. Third, if this reverses and fees fall under a certain threshold, the size limit must be reduced and the fee-per-transaction requirement increased until fees are back above the bottom threshold. Fourth, this is adjusted in sync with the mining difficulty retarget schedule.

Arbitrarily I'll pick 50BTC as sufficient to ensure the security of the network forever. However much the subsidy decreases, that much more must be collected in fees before the block size limit can be increased. Eventually fees will have to amount to the entire 50BTC.

Now for the juicy part, how to relate the block size increases with fees.

Let's say, again arbitrarily, that if subsidy + fees on average over the last evaluation period of 2016 blocks exceeds 50BTC, the block size limit is doubled; because more transactions then fit into a block, the fees per transaction needed to reach the threshold are lower, in line with the theory that more activity means a rising value of bitcoins. And every time subsidy + fees per block on average over the last 2016 blocks falls under 25BTC, the block size limit is halved, down to a minimum of 1MB.


So in practice this works out to at most 12.5% of the entire monetary base being spent every year on network security, regardless of the value of a single bitcoin. It's perfectly reasonable for this to be a constant, since the more bitcoins are worth, the more lucrative it becomes to perform an attack and the more should be spent on security in terms of value. If we reach the limit right now, while the subsidy is still 25BTC, with a maximum of 4200 transactions per block this works out to 0.00595238 BTC in fees per transaction before the limit is doubled the first time, which is perfectly reasonable at the current exchange rate. When the limit has been doubled 5 times it will allow a maximum of 134400 transactions per block, which, if we reach the required 25BTC on average in fees per previous 2016 blocks, amounts to 0.00018601 BTC per transaction; if 1 BTC is $1000 by then, this is still just 20 cents per transaction. If at any time fees per block on average start falling below 50% of (50BTC - subsidy), the block size limit is reduced by half.
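
(Purely as an illustration, here is the rule above as a rough sketch, not client code; I use the simpler of the two halving conditions, average subsidy + fees below 25BTC, and the ~4200 transactions per 1MB block assumed above:)
Code:
# Rough sketch of the rule described above, not actual client code. All the
# constants (50 BTC ceiling, 25 BTC floor, 2016-block window, 1 MB minimum)
# are the arbitrary choices from this post.
SECURITY_TARGET = 50.0       # BTC per block: ceiling for average subsidy + fees
LOWER_THRESHOLD = 25.0       # BTC per block: floor before the limit shrinks
MIN_BLOCK_SIZE  = 1_000_000  # bytes

def retarget_block_size(current_limit, avg_reward_last_2016):
    """avg_reward_last_2016 = average subsidy + fees per block over the window."""
    if avg_reward_last_2016 > SECURITY_TARGET:
        return current_limit * 2
    if avg_reward_last_2016 < LOWER_THRESHOLD:
        return max(current_limit // 2, MIN_BLOCK_SIZE)
    return current_limit

# The worked numbers, assuming ~4200 transactions fit in a 1 MB block:
print(25.0 / 4200)           # ~0.00595238 BTC in fees per tx before the first doubling
print(25.0 / (4200 * 32))    # ~0.00018601 BTC per tx after five doublings (32x capacity)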


So essentially fees have a ceiling; once it's reached, miners get more breathing room and fees will drop. Once that ceiling is reached again, as indicated by the fees collected, miners again get more breathing room. If ever there's too much room and fees start getting lower, the space is made scarce again in order to encourage higher fees.

What do you think?

p.s.: I have no clue if the numbers I used for max transactions in a block given a 1MB size limit are correct, so please let me know if that is wrong, and whether doubling the size limit really means doubling the max number of transactions
sr. member
Activity: 440
Merit: 250
You'll all have to excuse my stupidity, but what's wrong with unlimited (*) blocks? Let each miner set his own transaction fees and free market competition will ensure that transaction fees are kept low while still keeping the network secure. Surely putting some artificial limit on blockchain size in order to drive up fees is little different to central bankers imposing QE or inflation targets on a currency.

Of course, bitcoin mining will migrate to where there is cheap energy, but this might have a beneficial side effect - more effort will be put into building cheap energy sources (read: renewable (**)) so any given geographical region can contribute to mining and so benefit from incoming tx fees ("free money" for want of a better description).

(*) up to some reasonable limit that shouldn't ever be reached, which will depend on how widespread bitcoin adoption is - visa handles a peak of about 10000 tx per second, assume normal use is 3000 tx/sec, assume 250 bytes per transaction and you'd need on average 0.5GB sized blocks every 10 minutes. For the immediate future, I think there's no reason the blocksize should increase beyond 10MB at the outside.

(**) hopefully renewable but admittedly, not necessarily so.
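
(For reference, the arithmetic in footnote (*) works out roughly like this:)
Code:
# Back-of-envelope check of footnote (*), using the assumptions stated there.
tx_per_second     = 3000    # assumed "normal" load (Visa peak quoted as ~10000)
bytes_per_tx      = 250
seconds_per_block = 600     # ten minutes
block_bytes = tx_per_second * bytes_per_tx * seconds_per_block
print(block_bytes / 1e9)    # ~0.45 GB per block, i.e. roughly the 0.5 GB figure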
legendary
Activity: 2940
Merit: 1090
bullioner, have you seen this proposal?

https://bitcointalksearch.org/topic/m.1541892

What do you think..?

Same objection as above; it's even right there below it where you pointed to.

However I am starting to get a sense that maybe part of why it is not blatant to everyone could be an artifact of scale.

It might be that the sheer size/power/longevity/pocket-depth of the "offender" one imagines it would take, compared to the scale one might be contemplating as an organic progression of "growth of our existing network" or of "adoption rates", is very large.

If you persist in thinking of new players entering the game as newborns from a tiny startup / granny's basement, it might not seem oh so very likely to be a problem; after all, who are these basement-dwellers compared to the likes of Deepbit and Eligius and other "massive" pools?

But in reality, in the larger scheme of things, our vaunted "most difficult proof of work on the planet", our entire bitcoin network, is puny, tiny, trivially so.

How many puny little ASIC-manufacturing startups are there and how many of their products are deployed so far?

How much "smart money" delayed getting into bitcoins for a year due to there being no point in investing in "to be obsolete any moment now, wait for it, any moment,,, coming up... wait for it...." new hardware? Have you seen any indication yet that such gear could impact difficulty significantly? How many hundreds of millions of dollars, really, does all their currently in production product really add up to so far?

Once you blow a few hundred million on a few regional datacentres, doesn't it just make sense to balloon/skyrocket blocksize hard and fast to clear all the obsolete players out of the game? What sense is there in blowing hundreds of millions of dollars on securing a network that cannot even handle your own hundreds of millions of users? (You are, like, facebook or google or yahoo or microsoft scale of userbase, or even something really out of left field like a retirement fund with that kind of number of shareholders, considering monetising your "user" (aka shareholder) base by controlling the "pipe" through which others might be willing to pay to get exposure to them, gosh knows. Left field is a vast, vast field, even without whatever parts of it might also be "outside the box".) Imagining there are no "big boys" out there is maybe rather naive.

Every player, all players, in this current puny prototype prevision of what this kind of software could potentially accomplish, even all of us combined, add up to trivial, tiny, puny, still, even if every chip of every wafer of every ASIC in the pipelines that we know of turns out to work perfectly with no error regions etc (100% yield).

Pretending we are oh so capable of swimming with big fish, oh so tough and resilient, that we should throw away our shark cage seems insanely naive, reckless, foolhardy, stupid, exactly what the sharks hope we will do.

-MarkM-
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
bullioner, have you seen this proposal?

https://bitcointalksearch.org/topic/m.1541892

What do you think..?
legendary
Activity: 2940
Merit: 1090
I like Gavin's proposal.  (I mean his actual proposal, not the "half-baked thought" quoted above.)

No hard limit, but nodes ignore or refuse to relay blocks that take too long to verify.  This discourages blocks that are too large, and "spam" blocks containing lots of transactions not seen on the network before.

I do not agree that it necessarily has any effect at all on blocks that are "too large", depending on who mines them and who they are directly connected to without intermediation of any of the proposed prejudiced nodes.

The top 51% of hash power can pump out blocks as huge as they choose to, everyone else is disenfranchised. You might as well try to stop a 51% attack by ignoring or refusing any block that contains a payment to a known major manufacturer of ASICs so the 51% attacker won't be able to buy enough ASICs to reach 51%. Oops, too late, they already are there. They but lack an opportunity for "spontaneous order" to hook them up into a "conspiracy" that is simply "emergent", not at all pre-meditated - in particular not premeditated-as-in-foreseen* by whoever got rid of the cap on block size, since they would seem to have apparently imagined some completely different "spontaneous order" than that in which whoever has the most [brute, in this case] force wins?

51% attackers can already do plenty of nasty things, now we're gonna hand them carte blanche to spamflood the whole network into oblivion too?

* No, wait, it has been foreseen, so surely if they implement it anyway it is, literally, pre-meditated, isn't it?

-MarkM-
Ari
member
Activity: 75
Merit: 10
I like Gavin's proposal.  (I mean his actual proposal, not the "half-baked thought" quoted above.)

No hard limit, but nodes ignore or refuse to relay blocks that take too long to verify.  This discourages blocks that are too large, and "spam" blocks containing lots of transactions not seen on the network before.

This might create an incentive to mine empty blocks.  To discourage this, in the case of competing blocks, nodes should favor the block that contains transactions they recognize, and ignore (or delay relaying) the empty block.
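
(A minimal sketch of such a tie-break, purely illustrative; the structure and names here are made up, not anything from the reference client:)
Code:
# Illustrative only: when two blocks compete at the same height, prefer the one
# whose (non-coinbase) transactions the node has already seen in its own memory
# pool, so an empty or unfamiliar block loses the tie-break.
def preferred_block(block_a, block_b, mempool_txids):
    def familiarity(block):
        txids = block["txids"]   # hypothetical structure: list of txids, coinbase excluded
        if not txids:
            return 0.0           # an empty block is never favoured
        return sum(1 for t in txids if t in mempool_txids) / len(txids)
    return block_a if familiarity(block_a) >= familiarity(block_b) else block_b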
staff
Activity: 4284
Merit: 8808
Any "sum of transaction fees" reduces to miners picking whatever they want, since miners can stuff blocks with 'fake' transaction fees, with only the moderate risk of them getting sniped in the next block.

In any case, this thread fails on the ground that the starting criteria don't mention keeping Bitcoin a _decentralized_ system. If you don't have that as a primary goal, then you can just drop this block-chain consensus stuff and use open transactions and get _vastly_ better scaling.

All those fixed parameters in the 'proposals' are stinking handwaving.  If you only care about preventing the fee race to the bottom, you make the change in maximum block size be half the change in difficulty on the upside, 100% of the change on the downside, clamped to be at least some minimum.  Doing so eliminates the fee collapse entirely by forcing miners to spend real resources (more hashpower) to drive the size up.  ... but it doesn't do anything to prevent the loss of decentralization, so I don't know that it solves anything.
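
(A rough sketch of my reading of that adjustment, with the unspecified minimum assumed to be 1MB; illustrative only, not a spec:)
Code:
# Sketch only: the max block size follows difficulty, but only half as fast on
# the way up, fully on the way down, and never below some floor (1 MB assumed).
MIN_BLOCK_SIZE = 1_000_000   # bytes

def retarget_block_size(current_limit, old_difficulty, new_difficulty):
    change = new_difficulty / old_difficulty
    if change >= 1.0:
        factor = 1.0 + (change - 1.0) * 0.5   # half of the change on the upside
    else:
        factor = change                        # 100% of the change on the downside
    return max(int(current_limit * factor), MIN_BLOCK_SIZE)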
legendary
Activity: 1064
Merit: 1001
...
3) The block size is increased if more than 50% of the blocks in the previous interval have a sum of transaction fees greater than 50BTC minus the block subsidy. The 50BTC constant and the threshold percentage are baked in.

I'm not in favor of this anymore. I explain why and provide an alternative, here:

https://bitcointalksearch.org/topic/m.1547661
legendary
Activity: 2282
Merit: 1050
Monero Core Team
legendary
Activity: 2940
Merit: 1090
[...]
How about
*To increase max block size by n%, more than 50% of fee-paying transactions (must meet a minimum fee threshold to be counted) during the last difficulty window were not included in the next X blocks. Likewise we reduce the max block size by n% (down to a minimum of 1MB) whenever 100% of all paying transactions are included in the next X blocks.
[...]

This one is a no-go on technical grounds, because nothing in the blockchain tells people verifying it what transactions were in whose memory pool, when or where, nor what their transaction fees were. Only miners whose memory pools were exactly synchronised, or who by sheer fluke happened to arrive at the same value despite having non-identical collections of pending/queued transactions in their pool, could arrive at the same value(s).

Thus they can make up any values they like, but, also, they aren't writing down in the blockchain what they thought the value was when they made their candidate block, so not only can we not know whether they are lying about that value, but we also cannot tell whether they applied the rule correctly, since we won't even know what untrue value they would have / should have plugged into the rule.

Alternatively, possibly what is meant is that by looking at the timestamps of those ancient transactions and comparing them to the timestamps of the blocks they got into, we are to guess how many blocks they must have waited before finally getting into a block; if so, then that maybe just adds a whole new reason to put strange timestamps into transactions; specifically, to timestamp a bunch of transactions with hours-ago timestamps and place them in blocks so that this archaeological rule will be fooled?

full member
Activity: 166
Merit: 101
Having spent a long time reading all of "How a floating blocksize limit inevitably leads towards centralization" and related threads, I want to summarise the proposals for introducing adaptive maximum block size to the protocol.

For focus's sake, please only join in with this thread if you are willing to operate under these assumptions for the scope of the thread:

  • in the long term, incentive to secure bitcoin's transaction log
       must come from transaction fees
  • Bitcoin should have potential scalability for everyone on the planet
       to make regular use of it, but equilibria are required to
       ensure scaling happens at a manageable rate

I appreciate that not everyone shares those assumptions, but please keep this thread for a discussion that locally accepts them for the sake of the discussion!

The idea of an adaptive algorithm has to be to make use of a negative feedback loop to achieve a desired equilibrium. The canonical example of this in bitcoin is the way the difficulty of solving a block is related to the frequency of recently-found blocks, producing an equilibrium around the interval of one block every ten minutes.
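
(Roughly, that feedback loop looks like this; the factor-of-four clamp is part of the actual retarget rule, while the function itself is just an illustrative sketch:)
Code:
# Bitcoin's difficulty retarget, roughly: every 2016 blocks the target is rescaled
# by how long the window actually took versus the two weeks it should have taken,
# clamped to a factor of 4 in either direction. A larger target means lower difficulty.
def retarget(old_target, actual_timespan_seconds):
    expected = 2016 * 600                              # two weeks, in seconds
    clamped  = min(max(actual_timespan_seconds, expected // 4), expected * 4)
    return old_target * clamped // expected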

In the rest of this post I evaluate several proposals based on the equilibrium they are attempting to establish, making my own thoughts on the matter clearer towards the end.

First up we have:

How about tying the maximum block size to mining difficulty?
[...]

This provoked a fair bit of discussion.  The idea seems to be that miners will quit if it's no longer profitable for them to maintain the full transaction log and mine.  It is unclear what equilibrium objectives this approach has, and I find it difficult to say intuitively what equilibria, if any, would be achieved by this adaptation.

Next up we have:

[...]
Second half-baked thought:

One reasonable concern is that if there is no "block size pressure" transaction fees will not be high enough to pay for sufficient mining.

Here's an idea: Reject blocks larger than 1 megabyte that do not include a total reward (subsidy+fees) of at least 50 BTC per megabyte.

[...]

It should be clear that this does not scale very far.  The cost is linear, and is predetermined per unit of data.  By the time you've reached blocks of 8 MB, transactors are spending 100% of the monetary base per annum in order to have the transaction log maintained. The equilibrium is that block size is simply limited by the high cost of each transaction, but the equilibrium is not precisely statable in terms of desirable properties of the system as a whole.
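
(Checking the 8 MB figure, in rough numbers:)
Code:
# At a required reward of 50 BTC per megabyte, an 8 MB block needs 400 BTC, and
# a year's worth of such blocks is roughly the entire 21 million BTC monetary base.
btc_per_mb      = 50
block_mb        = 8
blocks_per_year = 6 * 24 * 365                    # ~52,560
print(btc_per_mb * block_mb * blocks_per_year)    # 21,024,000 BTC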

We also have:

[...]
How about
*To increase max block size by n%, more than 50% of fee-paying transactions (must meet a minimum fee threshold to be counted) during the last difficulty window were not included in the next X blocks. Likewise we reduce the max block size by n% (down to a minimum of 1MB) whenever 100% of all paying transactions are included in the next X blocks.
[...]

The issue with this is how to set the minimum fee threshold.  You could set it to a value that makes sense now, but if it turned out that bitcoin could scale really really high, the minimum fee threshold would turn out to be too high itself, and it is not itself adaptive. This approach is going along the right lines, but it doesn't seem to stem from a quantitative, fundamental objective to do with the bitcoin network.

Also:

[...]
Here's yet another alternative scheme:

1) Block size adjustments happen at the same time that network difficulty adjusts

2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if more than 50% of the blocks in the previous interval have a size greater than or equal to 90% of the max block size. Both of the percentage thresholds are baked in.
[...]

This doesn't take the opportunity to link the size to transaction fees in any way, and seems vulnerable to spamming.
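
(For reference, here is the quoted rule as a rough sketch, using the constants given in the quote; note that fees play no part in it, which is the spamming concern:)
Code:
# Sketch of the quoted rule (10% growth, more than 50% of blocks at least 90% full).
# It never shrinks, and nothing in it depends on fees.
def retarget_block_size(current_limit, block_sizes_last_interval):
    nearly_full = sum(1 for s in block_sizes_last_interval if s >= 0.9 * current_limit)
    if nearly_full > len(block_sizes_last_interval) / 2:
        return int(current_limit * 1.10)
    return current_limit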

Then there's:


Since this locks in a minimum fee per transaction MB, what about scaling it with the square of the fees.

For example, every 2016 blocks, it is updated to

sqrt(MAX_BLOCK_SIZE in MB) = median(fees + minting) / 50
[...]

Same objection as to the 50 BTC per MiB proposal.  It just doesn't scale very far before all the value is being eaten by fees.
This time we get to around 64 MiB per block before 100% of the monetary base is spent per annum on securing the transaction log. Again, it is unclear what the objectives are for any equilibrium created.
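
(Checking the 64 MiB figure, in rough numbers:)
Code:
# Under the quoted rule, sqrt(size in MB) = (fees + minting) / 50, so a 64 MB
# block needs sqrt(64) * 50 = 400 BTC per block, and a year of such blocks
# consumes roughly the entire monetary base.
blocks_per_year = 6 * 24 * 365
reward_needed   = (64 ** 0.5) * 50       # 400 BTC per block
print(reward_needed * blocks_per_year)   # ~21 million BTC per year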

Nearly finally, we have people suggesting that max block size doubles whenever block reward halves.  No dynamic equilibrium is created by doing this, and it's pure hope that the resulting numbers might produce the right incentives for network participants.

It seems to me that the starting point should be "what percentage of the total monetary base should transactors pay each year, in order to secure the transaction log".  This is a quantity about the system that has economic meaning and technical meaning within the protocol.  It basically keeps its meaning as the system grows. Why transactors?  Because in the long term holders pay nothing (the initial-distribution schedule winds down), and thus transactors pay everything.  That seems immutable.  Why per year?  This makes it easy for humans to reason about.  Annual amounts can be converted to per-block amounts once we're done setting the value.

Thus, this:


1) Block size adjustments happen at the same time that network difficulty adjusts (every 210,000 tx?)

2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.

3) The block size is increased if more than 50% of the blocks in the previous interval have a sum of transaction fees greater than 50BTC minus the block subsidy. The 50BTC constant and the threshold percentage are baked in.

is a pretty decent starting point.  It allows unlimited growth of the maximum block size, but as soon as transaction fees, which are what secure the transaction log, dwindle below the threshold, the maximum block size shrinks again.  Equilibrium around a desirable property of the system as a whole!  Easily expressed as a precise quantitative statement ("long term, transactors should pay n % of the total monetary base per annum to those securing the transaction log, so long as the max block size is above its floor value").

The exact proposal quoted above sets the amount at 6.25%-12.5% p.a., whereas I intuitively think it should be more like 3%.  I would probably also just state it in terms of the mean transaction fee over the last N blocks, as that is more directly linked to the objective than whether half of the blocks are above a threshold.  3% p.a. would work out at a mean of 12 BTC per block, so would be 0.0029 BTC per transaction to lift the block size off its 1 MiB floor.  Seems about right.
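
(Spelling out that arithmetic, with the same ~4200 transactions per full 1 MiB block assumed as earlier in the thread:)
Code:
# 3% of the eventual 21 million BTC monetary base per year, spread over a year's
# blocks, then over the ~4200 transactions of a full 1 MiB block.
monetary_base   = 21_000_000
blocks_per_year = 6 * 24 * 365
fee_per_block   = 0.03 * monetary_base / blocks_per_year
print(fee_per_block)            # ~12 BTC per block
print(fee_per_block / 4200)     # ~0.0029 BTC per transaction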

My full reply and subsequent discussion with misterbigg is at
https://bitcointalksearch.org/topic/m.1546674 .

Hope that's a useful summary of all the adaptive algorithms proposed so far, even if you don't agree with the assumptions or my conclusions.