
Topic: BIP 106: Dynamically Controlled Bitcoin Block Size Max Cap - page 3. (Read 9404 times)

member
Activity: 133
Merit: 26
IMHO, the first proposal is good if we target, for example, x% of average block capacity.


For example, if blocks are on average 50% full and we target 66%, then reduce the block size; if blocks are 70% full, then increase block capacity. Let it run and see how it affects the fee market.

The best thing about this is that we can now target an average fee per block:

If we are targeting 1 BTC per block in fees and fees rise too much, lower the %-full target; if fees decline, raise the target.

There you go! Now people can vote for block increase by simply including higher fees!
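A minimal sketch of what such a controller could look like, purely to illustrate the idea above; the inputs avg_fullness and avg_fees (measured over the last difficulty period), the function name and every threshold here are assumptions, not part of any BIP:

Code:
# Rough illustration of the fullness/fee targeting idea above (not from any BIP).
# avg_fullness and avg_fees are assumed to be measured over the last difficulty period.

def retarget(max_block_size, avg_fullness, avg_fees,
             fullness_target=0.66, fee_target_btc=1.0, step=0.05):
    """Return (new_max_block_size, new_fullness_target)."""
    # Vote with fees: if fees per block run hot, aim for emptier blocks;
    # if fees fall, aim for fuller blocks.
    if avg_fees > fee_target_btc:
        fullness_target = max(0.10, fullness_target - step)
    elif avg_fees < fee_target_btc:
        fullness_target = min(0.95, fullness_target + step)

    # Steer capacity so that average fullness drifts toward the target.
    if avg_fullness > fullness_target:
        max_block_size = int(max_block_size * 1.10)   # blocks too full -> more room
    elif avg_fullness < fullness_target:
        max_block_size = int(max_block_size * 0.90)   # blocks too empty -> shrink
    return max_block_size, fullness_target

The fee target steers the fullness target, and the fullness target steers capacity, so users effectively vote for more space by paying higher fees.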
sr. member
Activity: 452
Merit: 252
from democracy to self-rule.
The problem is that blocks larger than the network's baseline resource will magnify centralization. This is a bad thing, but the question is not "how to make Bitcoin scalable". My understanding of scalability (please share yours if it differs from mine) is that it describes software that attempts to consume as many resources as are made available. An example of a scalable system is Amazon's EC2: the more physical machines support it, the more powerful it gets. Another is BitTorrent, where the more leechers show up, the more bandwidth the torrent totals (i.e. bandwidth is not defined by seed boxes alone).

Your understanding is correct, but Bitcoin is unlike anything in history. In the traditional sense, as in the examples you cite, if a resource is getting fully utilized you add more of it, and the key resources are the ones that make that technology possible.

Although disk space is a resource, it is not a key resource in enabling torrenting technology, in the sense that disk space existed before the invention of the internet, but that alone does not allow torrents to exist.
Inter-networking is the key resource that allows torrenting to exist. Now, what do you do if the network is fully occupied? You add more of it; problem solved.

With bitcoin, the network is a resource, but it is not the key resource, in the sense that networks existed before Bitcoin.
The blockchain is the key resource that allows bitcoin to exist. Now what do you do if blocks are full? You add more blocks. Ding! Not allowed, mate!

A block is essentially a list of transactions per unit time. So when we say we need to increase blocks, we mean we need to increase the rate of transaction throughput.

There are only 3 things in this equation that we can tweak:
1. Increase the number of blocks. Not allowed, by definition of bitcoin: 1 per 10 minutes.
2. Decrease the time. Not allowed, by definition of bitcoin: each block comes out in 10 minutes.
3. Increase the block size. Allowed, but the practical limits of technology come in. With each kB added, the download time for a block increases by milliseconds, and the miner who found that block now has a head start of that many milliseconds. The bigger the miner, the more head starts he gets; thus smaller miners leave, and this cycle continues until only big miners are left. Complete centralization! Not an option. (A back-of-the-envelope sketch of this head start follows below.)
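To put rough numbers on that head start, here is a back-of-the-envelope sketch; the block size and bandwidth figures are made up for illustration, not measurements:

Code:
# Back-of-the-envelope head-start estimate (illustrative numbers only).
block_size_mb = 25          # hypothetical large block
bandwidth_mbps = 10         # assumed peer bandwidth, in megabits per second
propagation_s = block_size_mb * 8 / bandwidth_mbps   # ~20 seconds per hop to transfer
block_interval_s = 600

# Fraction of the block interval the winning miner mines unopposed:
head_start = propagation_s / block_interval_s         # ~3.3% per hop
print(f"head start ~ {head_start:.1%} of the block interval")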

To people looking at it in the traditional way, miners might look like resources, so that more miners ought to mean more transaction throughput. But it does not, for the same reason that more hard disks do not mean better torrent speeds.

I would say the current issue with Bitcoin and big blocks isn't scalability but rather efficiency. We don't want to use more resources, we want to use the same amount of resources in a more efficient manner. Block size is like a barrier to entry: the bigger the blocks, the higher the barrier. Increasing efficiency in block propagation and verification would reduce that barrier in return, allowing for an increase in size while keeping the network healthy. I am not familiar with the Core source but I believe there are a few low hanging fruits we can go after when it comes to block propagation.

Also, I believe the issue isn't truly efficiency, but rather centralization. Reducing the barrier to entry increases participants and thus decentralization but the real issue is that there are no true incentives to run nodes nor to spread mining to smaller clusters. I understand these are non trivial problems, but that's what the September workshop should be about, rather than scalability.

If there is an incentive to run full nodes and an incentive to spread mining, then block size will no longer be a metric that affects centralization on its own. Keep in mind that it currently is the case partly because it is one of the last few metrics set to a magic number. If it were controlled by a dynamic algorithm keeping track of economic factors, we wouldn't be wasting sweat and blood on this issue today and would instead be looking at how to make the system more robust and decentralized.

Bitcoin got to where it is today, I mean so much publicity and usage, because everything was taken care of, even the incentive of running full nodes.
What is that incentive, you ask? That incentive is bitcoin's survival.
The way to get something done is not always reward; sometimes it is punishment.
Here the punishment is bitcoin's death.

The reason for running the nodes is the same as the reason for feeding a goose that lays golden eggs.
But the problem here is that the people feeding the goose (the people running nodes) are not the same as the people collecting the eggs (the miners).
People have difficulty understanding indirect influences, but they need to realize that it is they who are consuming the gold, not the collector.
The miners don't even necessarily use bitcoin; they might only be doing it for fiat money.
So the people feeding the goose must realize they need to keep doing so, because although it directly looks like the collector is getting rich, indirectly it is the feeders who want the gold.

I would go a step further and say anybody not running a full node is not a bitcoin user in the true sense. Why?
Because the fact that your coins got transferred is only guaranteed by the history of those coins. And you don't have a copy of that history!
You are depending on someone else to supply a copy of history.
If it's your brother in the family who runs the full node for you to access, then it's fine. But for everything else, you are better off with banks.

There are counterpoints to this. And these counterpoints are only validly made by people who are happy using banks and trusting them but find other benefits in bitcoin, namely three:
1. Bitcoin is pseudonymous
2. Bitcoin has no geographical limit. Bitcoin has no monetary limit.
3. Bitcoin is 24x7, that is more than 3 times the bank opening time.

Now these use cases are huge & bring with them a lot of these people who trust others with their money, because the fire in the jungle hasn't reached their home yet.
They will keep running light wallets and enjoy these benefits, until the banks simply tidy up and make themselves 24x7 & without limits.
Then all of these users would leave happily, because banks have always kept free candy on the counter.
So I don't care about people who don't run full nodes, and neither should anyone who cares about bitcoin.
full member
Activity: 165
Merit: 102
Thanks to everyone for providing good arguments for improving the proposal. I have derived a second proposal and updated the OP accordingly. If you have any counter-argument to this proposal, feel free to put it here or in the comment section of the article - http://upalc.com/maxblocksize.php
KNK
hero member
Activity: 692
Merit: 502
Sorry for the long post ...
TLDR: +1 for dynamic block size. I hope it is not too late for the right change

A dynamic algorithm can not magically instantiate the needed resources.
It doesn't need to! If properly implemented it will be the other way around (see below {1})

The reason I feel the OP's proposal is beautiful is that it requires users to fill up nodes with high Tx volumes and then miners to fill up blocks from the mempool.
Exactly, what should be used here:
 {1}
  • Hard limit size - calculated by some algorithm for the entire network (see below {2})
  • Client limit size - configured from the client (miner full node) based on its hardware and bandwidth limitations or other preferences

Each node may set its own limit on how big a block it will send to the network, but should accept blocks up to the Hard limit

I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
This is probably where consensus will be hard to achieve if it is hard-coded and not dynamic - cheaper transactions or bigger fees? Some want the first, others the second, and the truth lies somewhere in the middle once both sides compromise, so this should also be kept in mind when planning the dynamic algorithm.

What I may suggest for the calculation of the Hard limit is:
{2}
 When calculating the new target difficulty, do the same for the block size.
 
  • Get the average size of the last 4000 nonempty blocks = AvgSize
  • Set the new block size to 150% of AvgSize, but never more than twice as large or twice as small as the previous block size limit

    How it is expected to work:
      The Hard limit is set so that average utilization sits around 66%, based on roughly a one-month moving average (4000 blocks), recalculated at each difficulty change.
     BUT it depends on the Soft limit chosen from the miners, so:
     
    • If the bandwidth is an issue (as it is for the most private pools and those in China) - they will send smaller blocks and thus Vote for the preferred size with their work
    • If there is a need for much bigger blocks, but the current status of the hardware (CPU or HDD) does not allow that - no increase will take place, because the clients won't send bigger blocks than configured
    • If there are not enough transactions to make bigger blocks - the size will be reduced

    EDIT: An option in the mining software to ignore blocks above the Soft limit puts a control switch in each individual miner's hands, in addition to the pools'.
EDIT 2: If you take a look at the average block size chart you will see that the current average size is far from the 1MB limit if you ignore the stupid stress tests during the last month or two, and even then the average is around 80%, so a 2/3 (66% full) block size is a good target IMHO.
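A minimal sketch of the Hard limit calculation in {2}, under the assumptions stated above (4000-block window, 150% of the average, at most a 2x move per retarget); the function name and inputs are placeholders for illustration, not an implementation:

Code:
# Sketch of the Hard limit recalculation described in {2} (illustrative only).

def new_hard_limit(prev_limit, recent_block_sizes):
    """recent_block_sizes: sizes in bytes of recent blocks, oldest to newest."""
    nonempty = [s for s in recent_block_sizes if s > 0][-4000:]   # last 4000 non-empty blocks
    avg_size = sum(nonempty) / len(nonempty)
    target = avg_size * 1.5                      # aim for ~66% average utilization
    # Clamp the move to at most 2x up or down per retarget.
    return int(max(prev_limit / 2, min(prev_limit * 2, target)))

Since miners' Soft limits cap what they actually produce, the average (and therefore the Hard limit) can only grow as fast as miners are willing to let it.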
legendary
Activity: 3766
Merit: 1364
Armory Developer
How does a difficulty change affect the size of blocks found today? Is there any correlation between difficulty and block size? If not, then IMO it won't be wise to make difficulty a parameter for changing the max block size cap.

It does not, but I would return the question to you: how is a doubling/halving of the block cap representative of the actual market growth/contraction that triggered the change? Difficulty variations are built into the blockchain and provide a very realistic perspective on the economic progression of the network, as they are a marker of profitability.

Keep in mind that my proposal evaluates total fee progression as well over difficulty periods, so in the case a new chip is released that largely outperforms previous generations and the market quickly invests into it, that event on its own would not be enough to trigger a block size increase, as there is no indication fees would also climb in the same fashion.

The idea is to keep the block size limit high enough to support organic market growth, while progressing in small enough increments that each increment won't undermine the fee market. I think difficulty progression is an appropriate metric to achieve that goal.

I've always thought fees should be somehow inversely pegged to difficulty to define the baseline of a healthy fee market. This is a way to achieve it.

Quote
People, the problem is not 'what' the limit should be & 'how' to reach it. The problem is that large blocks will kill bitcoin, so large blocks are not an option; what to do, then, is the question. How do we make bitcoin scalable?

The problem is that blocks larger than the network's baseline resource will magnify centralization. This is a bad thing, but the question is not "how to make Bitcoin scalable". My understanding of scalability (please share yours if it differs from mine) is that it describes software that attempts to consume as many resources as are made available. An example of a scalable system is Amazon's EC2: the more physical machines support it, the more powerful it gets. Another is BitTorrent, where the more leechers show up, the more bandwidth the torrent totals (i.e. bandwidth is not defined by seed boxes alone).

I would say the current issue with Bitcoin and big blocks isn't scalability but rather efficiency. We don't want to use more resources, we want to use the same amount of resources in a more efficient manner. Block size is like a barrier to entry: the bigger the blocks, the higher the barrier. Increasing efficiency in block propagation and verification would reduce that barrier in return, allowing for an increase in size while keeping the network healthy. I am not familiar with the Core source but I believe there are a few low hanging fruits we can go after when it comes to block propagation.

Also, I believe the issue isn't truly efficiency, but rather centralization. Reducing the barrier to entry increases participants and thus decentralization but the real issue is that there are no true incentives to run nodes nor to spread mining to smaller clusters. I understand these are non trivial problems, but that's what the September workshop should be about, rather than scalability.

If there is an incentive to run full nodes and an incentive to spread mining, then block size will no longer be a metric that affects centralization on its own. Keep in mind that it currently is the case partly because it is one of the last few metrics set to a magic number. If it were controlled by a dynamic algorithm keeping track of economic factors, we wouldn't be wasting sweat and blood on this issue today and would instead be looking at how to make the system more robust and decentralized.
sr. member
Activity: 263
Merit: 280
I had this same idea today and created a new thread proposing it, and then I found that you had already created a thread and developed the idea some days ago.

I give it my 100% support, because the max block size should be dynamically calculated (based on the previous 2016 block sizes), just as difficulty is dynamically recalculated every 2016 blocks.

Go on with it!!!
sr. member
Activity: 452
Merit: 252
from democracy to self-rule.
I don't know why people are calling this a good proposal. Either they don't understand the problem at hand or it's me. I'd thankfully accept that it's me if you can explain, please.

Let us try to simulate this proposal.
Let us say the number of transactions is rising and blocks are regularly around 1 MB. This algo will increase the cap accordingly.
Now, with global adoption, let us say the number of transactions rises further. This algo raises the cap further.
Let us say there are 25 MB worth of transactions now. This algo raises the cap to 25 MB, but does that work?

Due to the practical limits, a big block will take a lot of time to propagate through the network. During this time another miner may also successfully solve the block, only to realize after a while that he isn't the first one, thus producing orphans. The second effect, and a very important one, is that the winner gets a head start. He starts working on the next block while the rest of the world is still working on the previous one while waiting to download the successful solution. As mining has now gone big, the effect of this head start is huge and increases with the mining power a miner or a pool of miners has.

tl121 made this exact point.

There has to be a maximum block size limit for bitcoin nodes to work.  The limit is not just a program variable needed for block chain consensus, it has real world implications in terms of storage, processing and bandwidth resources.  If a node doesn't have sufficient resources it will not be able to work as a properly functioning node. These resources have to be provisioned and managed by node operators who have to plan in advance to acquire the needed resources.  That is the reason for BIP 101 having a schedule for changes to the limits. A dynamic algorithm can not magically instantiate the needed resources.

The counter to this position, given by Gavin, is that his simulations show that this head start does not have any effect.
But the counter's counter, from the other side, is that his simulations do not take internet latency into account.

People, the problem is not 'what' the limit should be & 'how' to reach it. The problem is that large blocks will kill bitcoin, so large blocks are not an option; what to do, then, is the question. How do we make bitcoin scalable?
legendary
Activity: 3430
Merit: 3080
While the last 2 make sense, the first one is out of proportion imo. The increment step could be debated over and over, but I think a more straightforward solution is to peg it to difficulty, i.e. if an increase is triggered, the block size limit should be readjusted in the same proportion that the difficulty changed:

- If the difficulty increased 20% and a block size limit increase is triggered, the limit would be increased by 20%.
- If the difficulty only increased by 5%, so would the block size limit.
- If the difficulty increased but the block limit increase was not triggered, stay as is.
- If the difficulty was reduced, in every case reduce the block limit by that same proportion.

How does a difficulty change affect the size of blocks found today? Is there any correlation between difficulty and block size? If not, then IMO it won't be wise to make difficulty a parameter for changing the max block size cap.

Why would the fact that difficulty and block size are not related today preclude that relationship from helping to solve a network problem? Explain why.
full member
Activity: 214
Merit: 278
While the last 2 make sense, the first one is out of proportion imo. The increment step could be debated over and over, but I think a more straightforward solution is to peg it to difficulty, i.e. if an increase is triggered, the block size limit should be readjusted in the same proportion that the difficulty changed:

- If the difficulty increased 20% and a block size limit increase is triggered, the limit would be increased by 20%.
- If the difficulty only increased by 5%, so would the block size limit.
- If the difficulty increased but the block limit increase was not triggered, stay as is.
- If the difficulty was reduced, in every case reduce the block limit by that same proportion.

How does a difficulty change affect the size of blocks found today? Is there any correlation between difficulty and block size? If not, then IMO it won't be wise to make difficulty a parameter for changing the max block size cap.
sr. member
Activity: 433
Merit: 267
I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
This proposal does not negate full blocks. It takes a demand-driven approach. It raises the max cap only when more than 50% of blocks are 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the provision to collect Tx fees for each miner is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When that demand is not there, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
The absolute best case scenario in this algorithm, from the perspective of fees, is that slightly less than 50% of the blocks are 100% full, and people are so impatient to get their transactions into those blocks that they bid the transaction fees up to about 50 BTC in total. That way the network would be funded at about the same rate it is today when inflation (the subsidy) stops, ceteris paribus.
There is no prerequisite that coinbase + mining fees need to equal 50 BTC. I understand that you are trying not to disturb the miners' subsidy. But you are wrong in assuming ceteris paribus. Other things will not remain the same. When the subsidy stops, the transaction volume will be far higher than it is today. So, with an increased block size, a miner will be able to fill a block with many more Tx than now and thereby collect much more in Tx fees. Moreover, you are also assuming the value of BTC will remain the same. With increased adoption, that is going to change towards the higher side as well. Hence, even if the total collection of Tx fees is the same as or lower than today (which most likely won't be the case), the increased price of BTC will compensate the miners.

So, forcing end users into a bidding war to save miners is most likely not a solution we need to adopt.

The reason philosophers use "ceteris paribus" is not because they literally know with absolute certainty all of the variables that they want to hold static; it's because they are trying to get at a specific subset of the problem. It's especially useful where testing is impossible, like here, where we're trying to design a product that will be robust going into the future. Otherwise we'll get into a Gish gallop.

So! The problem I'm pointing out is that we know, in the best case scenario, that just less than half of transactions will have any bidding pressure to keep transaction fees above the equilibrium price of running roughly one node, because by design we know the remaining half are less than 90% full. There is no reason to believe that this second half of transactions will be bid so high as to fund the entire network to the same, or better, rate as today. How does the protocol keep the network funded as inflation diminishes?

One could get close to this problem by suggesting that there is a time between checks (2000 blocks) which would allow greater than half of blocks to remain full, but if one is seriously suggesting that this should fund the network then one is simultaneously proposing that the block size limit should be doubled every 2000 blocks in perpetuity, otherwise this funding mechanism doesn't exist, and so hasn't adequately addressed the problem. If that is a reasonable assumption to you, then the protocol can be simplified to read, "double the block size every 2000 blocks".

You state that there are larger blocks and therefore more transaction fees with this protocol. There is more quantity of transaction fees, but there is not necessarily a higher value of transaction fees. So again; How does this protocol keep the network funded as inflation diminishes? There is no reason to believe, even being optimistic, that those fees would be anything but marginally higher than enough to fund roughly one node at equilibrium.
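A quick arithmetic illustration of that distinction, with made-up numbers:

Code:
# More fee-paying transactions is not automatically more fee value (toy numbers).
today        = 2_000 * 0.0005   # 2,000 txs at 0.0005 BTC each = 1.0 BTC of fees per block
bigger_block = 8_000 * 0.0001   # 8,000 txs at 0.0001 BTC each = 0.8 BTC of fees per block
print(today, bigger_block)      # quantity of fee-paying txs up 4x, total fee value down 20%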

That is not the only problem with this protocol, but it is the one I'm focusing on at the moment.
legendary
Activity: 3766
Merit: 1364
Armory Developer
I like this initiative; it is by far the best I've seen, for the following reasons: it allows for both increase and reduction (this is critical) of the block size, it doesn't require complicated context and, mainly, it doesn't rely on a hardcoded magic number to rule it all. However I'm not comfortable with either the doubling or the thresholds, and I would propose to refine them as follows:

1) Roughly translating your metrics gives something like (correct me if I misinterpreted):

- If the network is operating above half capacity, double the ceiling.
- If the network is operating below half capacity, halve the ceiling.
- If the network is operating around half capacity, leave it as is.

While the last 2 make sense, the first one is out of proportion imo. The increment step could be debated over and over, but I think a more straightforward solution is to peg it to difficulty, i.e. if an increase is triggered, the block size limit should be readjusted in the same proportion that the difficulty changed:

- If the difficulty increased 20% and a block size limit increase is triggered, the limit would be increased by 20%.
- If the difficulty only increased by 5%, so would the block size limit.
- If the difficulty increased but the block limit increase was not triggered, stay as is.
- If the difficulty was reduced, in every case reduce the block limit by that same proportion.

As for the increase threshold, I don't think your condition covers the most common use case. A situation where 100% of blocks are filled at 85% would not trigger an increase, but a network where 50% of blocks are filled at 10% and the other 50% are full would trigger the increase, which is a behavior more representative of a spam attack than of organic growth in transaction demand.

I would suggest evaluating the total size used by the last 2000 blocks as a whole; if it exceeds 2/3 or 3/4 (or whatever value is most sensible) of the maximum capacity, then trigger an increase.

Maybe that is your intended condition, but from the wording I can't help thinking that your condition is to evaluate size consumption per block, rather than as a whole over the difficulty period.

2) The current situation with the Bitcoin network is that it is trivial and relatively cheap to spam transactions, and thus trigger a block ceiling increase. At the same time, the conditions for a block size decrease are rather hard to sustain. An attacker needs to fill half the blocks for a difficulty period to trigger an increase, and only needs to keep 11% of blocks half full to prevent a decrease.

Quoting from your proposal:

Quote
Those who want to stop decrease, need to have more than 10% hash power, but must mine more than 50% of MaxBlockSize in all blocks.

I don't see how that would prevent anyone with that much hashing power from preventing a block size decrease. As you said, there is an economic incentive for a miner to include fee paying transactions, which reduces the possibility a large pool could prevent a block size increase by mining empty blocks, as it would bleed hash power pretty quickly.

However, this also implies there is no incentive to mine empty blocks. While a large miner can attempt to prevent a block size increase (at his own cost), a large group of miners would be hard pressed to trigger a block size reduction, as a single large pool could send transactions to itself, paying the fees to its own miners, to keep 11% of blocks half filled.

I would advocate that the block size decrease should also be triggered by used block space vs max available space as a whole over the difficulty period. I would also advocate for a second condition to trigger any block size change: total fee paid over the difficulty period:

- If blocks are filling up and the total sum of fees paid has increased by at least some fraction of the difficulty increase (say 1/10th, again up for discussion) over a single period, then a block size increase is triggered.
- The same goes for the decrease mechanism: if block usage and fees have both decreased accordingly, trigger a block size decrease.

One or the other condition is not enough. Simply filling blocks without an increase in fees paid is not a sufficient condition to increase the network's capacity. As blocks keep on filling, fees go up and eventually the conditions are met. On the other hand, if block size usage goes down but fees remain high, or fees go down but block size usage goes up (say after a block size increase), there is no reason to reduce the block size either.

3) Lastly, I believe in case of a stalemate, a decay function should take over. Something simple, say 0.5~1% decay every difficulty period that didn't trigger an increase or a decrease. Block size increase is not hard to achieve as it relies on difficulty increase, blocks filling up and fees climbing, which takes place concurrently during organic growth. If the block limit naturally decays in a stable market, it will in return put a pressure on fees and naturally increase block fill rate. The increase in fee will in return increase miner profitability, creating opportunities. Fees are high, blocks are filling up and difficulty is going up and the ceiling will be bumped up once more to slowly decay again until organic growth resumes.

However in case of a spam attack, it forces the attacker to keep up with the climbing cost of triggering the next increase rather than simply maintaining the size increase he triggered at a low cost.

I believe with these changes to your proposal, it would turn exponentially expensive for an attacker to push the ceiling up, while allowing for an organic fee market to form and preventing fees from climbing sky high, as higher fees would eventually bump up the size cap.
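A condensed sketch of how points 1), 2) and 3) could fit together, evaluated once per difficulty period; the function, its inputs and every threshold are placeholders for discussion, not a definitive implementation:

Code:
# Sketch of the refined rules above, run once per difficulty period (illustrative only).

def retarget_cap(cap, diff_change, fill_ratio, fee_change,
                 fill_threshold=0.66, fee_factor=0.10, decay=0.01):
    """
    cap         : current max block size in bytes
    diff_change : relative difficulty change this period, e.g. +0.20 for +20%
    fill_ratio  : bytes used by the period's blocks / their total allowed capacity
    fee_change  : relative change in total fees paid vs the previous period
    """
    if diff_change < 0:
        # Point 1: a difficulty drop always pulls the cap down in the same proportion.
        return cap * (1 + diff_change)
    if fill_ratio >= fill_threshold and fee_change >= fee_factor * diff_change:
        # Points 1 & 2: increase only if blocks are filling AND fees climbed along with
        # difficulty, and then only in the same proportion as the difficulty change.
        return cap * (1 + diff_change)
    # Point 3: a stalemate slowly decays the cap, pushing fees and fill rate back up.
    return cap * (1 - decay)

Under rules like these a spammer has to keep lifting fees and difficulty together to move the ceiling, while a stable market lets the cap drift gently down.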

full member
Activity: 214
Merit: 278
I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
This proposal does not negate full blocks. It takes a demand-driven approach. It raises the max cap only when more than 50% of blocks are 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the provision to collect Tx fees for each miner is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When that demand is not there, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
The absolute best case scenario in this algorithm, from the perspective of fees, is that slightly less than 50% of the blocks are 100% full, and people are so impatient to get their transactions into those blocks that they bid the transaction fees up to about 50 BTC in total. That way the network would be funded at about the same rate it is today when inflation (the subsidy) stops, ceteris paribus.
There is no prerequisite that coinbase + mining fees need to equal 50 BTC. I understand that you are trying not to disturb the miners' subsidy. But you are wrong in assuming ceteris paribus. Other things will not remain the same. When the subsidy stops, the transaction volume will be far higher than it is today. So, with an increased block size, a miner will be able to fill a block with many more Tx than now and thereby collect much more in Tx fees. Moreover, you are also assuming the value of BTC will remain the same. With increased adoption, that is going to change towards the higher side as well. Hence, even if the total collection of Tx fees is the same as or lower than today (which most likely won't be the case), the increased price of BTC will compensate the miners.

So, forcing end users into a bidding war to save miners is most likely not a solution we need to adopt.
sr. member
Activity: 433
Merit: 267
I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
This proposal does not negate full blocks. It takes a demand-driven approach. It raises the max cap only when more than 50% of blocks are 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the provision to collect Tx fees for each miner is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When that demand is not there, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
The absolute best case scenario in this algorithm, from the perspective of fees, is that slightly less than 50% of the blocks are 100% full, and people are so impatient to get their transactions into those blocks that they bid the transaction fees up to about 50 BTC in total. That way the network would be funded at about the same rate it is today when inflation (the subsidy) stops, ceteris paribus.
full member
Activity: 214
Merit: 278
I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
This proposal does not negate full blocks. It takes a demand-driven approach. It raises the max cap only when more than 50% of blocks are 90% full, and it decreases the max cap if more than 90% of blocks are less than 50% full. Hence the provision to collect Tx fees for each miner is always there. When the max cap increases because of full blocks, it means there are enough Tx in the mempool to be cleared. When that demand is not there, we will see small blocks and the max cap will automatically come down. Hence miners will never be starved of Tx fees.
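As a reading aid, the rule described above can be sketched roughly as follows; the input (per-block fill ratios over the last difficulty period) and the function name are placeholders, and this is not the BIP's reference code:

Code:
# Rough sketch of the demand-driven rule described above (illustrative only).

def next_max_cap(max_cap, block_fullness):
    """block_fullness: per-block fill ratios (size / max_cap) over the last difficulty period."""
    n = len(block_fullness)
    share_full  = sum(f > 0.90 for f in block_fullness) / n   # blocks more than 90% full
    share_empty = sum(f < 0.50 for f in block_fullness) / n   # blocks less than 50% full
    if share_full > 0.50:
        return max_cap * 2      # demand is pressing against the cap: double it
    if share_empty > 0.90:
        return max_cap // 2     # cap is far above demand: halve it
    return max_cap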
sr. member
Activity: 433
Merit: 267
I hate to rain on the parade, but full blocks are an essential feature going into the future. Any proposal that tries to avoid ever having full blocks also must address how transaction fees are going to replace inflation as it diminishes.
If not, then there will be no funding for the highly redundant network that exists now, and it will necessarily atrophy to a handful of nodes; Being hardly less subject to coercion, malpractice, and discrimination than our financial system today.
legendary
Activity: 1662
Merit: 1050
Very good suggestion!

I would additionally suggest back-testing the algorithm on the current blockchain, from day 1, starting with the smallest possible max size. Then see how it evolves, and fine-tune the parameters if anything bad happens or obvious possibilities for improvement are spotted. It could even be possible to auto-tune the parameters for the smallest possible max size by setting up a proper experiment. If it works well there, there is a good chance it will continue to work well for the next 100 years.



True. It would be great if someone does this back-testing and shares the result. I think, at the Genesis block, the max cap can be considered 1 MB. As the proposal has a decreasing max cap feature, the outcome might come out lower than 1 MB as well.
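A bare-bones back-test loop along those lines might look like the sketch below; it assumes historical per-block sizes are already loaded into a list and reuses the next_max_cap rule sketched a few posts above, so treat it as an outline rather than a finished harness:

Code:
# Skeleton of a back-test of the dynamic cap against historical block sizes (sketch only).

PERIOD = 2016  # blocks per difficulty period

def backtest(historical_sizes, start_cap=1_000_000):
    """historical_sizes: actual block sizes in bytes, from the Genesis block onward."""
    cap, caps = start_cap, []
    for i in range(0, len(historical_sizes) - PERIOD + 1, PERIOD):
        window = historical_sizes[i:i + PERIOD]
        fullness = [min(s, cap) / cap for s in window]   # clamp: blocks could not exceed the simulated cap
        cap = next_max_cap(cap, fullness)                # the doubling/halving rule sketched earlier
        caps.append(cap)
    return caps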
hero member
Activity: 966
Merit: 500
Very good suggestion!

I would additionally suggest back-testing the algorithm on the current blockchain, from day 1, starting with the smallest possible max size. Then see how it evolves, and fine-tune the parameters if anything bad happens or obvious possibilities for improvement are spotted. It could even be possible to auto-tune the parameters for the smallest possible max size by setting up a proper experiment. If it works well there, there is a good chance it will continue to work well for the next 100 years.

legendary
Activity: 1662
Merit: 1050
There has to be a maximum block size limit for bitcoin nodes to work.  The limit is not just a program variable needed for block chain consensus, it has real world implications in terms of storage, processing and bandwidth resources.  If a node doesn't have sufficient resources it will not be able to work as a properly functioning node. These resources have to be provisioned and managed by node operators who have to plan in advance to acquire the needed resources.  That is the reason for BIP 101 having a schedule for changes to the limits. A dynamic algorithm can not magically instantiate the needed resources.
As I can see, the advantage of this algo proposed by the OP is machine learning. It dynamically determines the next max cap depending on how full the current blocks are. Only if more than 50% of blocks are more than 90% full will the max cap double. This means more than 50% of the blocks stored by the nodes in the last difficulty period are already 90% filled and the market is pushing for more. In this situation, a node has two options: either increase its resources and stay in the network, or close down. Keeping a node in the network is not the network's responsibility. The network did not show any responsibility to keep CPU mining either. Miners who wanted to be in the network upgraded their miners to GPUs, FPGAs and ASICs for their own benefit. Similarly, nodes will be run by interested parties who benefit from running them, e.g. miners, online wallet providers, exchanges and individuals with large bitcoin holdings and thereby a need to secure the network. All of them will have to upgrade resources to stay in the game, because the push is coming from free-market need.

The schedule in BIP 101 is based on technology forecasting.  Like all forecasting, technology forecasting is inaccurate.  If this schedule proves to be grossly in error then a new BIP can always be generated some years downstream,  allowing for any needed "mid-course" corrections.
BIP 101 is a pre-scheduled increment proposal, where the laid-out path has not been derived from market demand. It has no way to decrease the block size, and there is no sound basis for its technology forecasting over the long run. And another hard fork will be next to impossible after widespread adoption. Neither BIP 101 (Gavin Andresen) nor BIP 103 (Pieter Wuille) takes the actual network condition into account. Both are speculative technology forecasting.
sr. member
Activity: 278
Merit: 254
There has to be a maximum block size limit for bitcoin nodes to work.  The limit is not just a program variable needed for block chain consensus, it has real world implications in terms of storage, processing and bandwidth resources.  If a node doesn't have sufficient resources it will not be able to work as a properly functioning node. These resources have to be provisioned and managed by node operators who have to plan in advance to acquire the needed resources.  That is the reason for BIP 101 having a schedule for changes to the limits. A dynamic algorithm can not magically instantiate the needed resources.

The schedule in BIP 101 is based on technology forecasting.  Like all forecasting, technology forecasting is inaccurate.  If this schedule proves to be grossly in error then a new BIP can always be generated some years downstream,  allowing for any needed "mid-course" corrections.

legendary
Activity: 784
Merit: 1000
This is similar to what I was thinking, but perhaps better.

My idea was once a week or every two weeks:

avg(last week's blocksize)*2 = new maxBlocksize
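In code form, that weekly rule is just the following one-liner; the input list of last week's block sizes is assumed, purely as a sketch:

Code:
# The weekly rule above as code (sketch).
def new_max_blocksize(last_weeks_block_sizes):
    return 2 * sum(last_weeks_block_sizes) / len(last_weeks_block_sizes)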