
Topic: Proposal: dynamic max blocksize according to difficulty - page 2

full member
Activity: 402
Merit: 100
I like great ideas, but great ideas do not fare well on the forums.
thanks
Quote

Dynamic block size with a minimum change to 2MB, no doubt.

Whether or not the change is proportionate to mining difficulty is not really what I would call relevant. The main concern is latency when blocks begin to get into double-digit megabytes, but people overlook the added incentive that comes with increased transaction volumes.
As I said, the latency problem is resolved when blocks get orphaned by the miners. This affects difficulty directly because the effective hash rate is reduced by the wasted block: the hashes that went into generating the discarded block are lost.
Quote
We could keep 1MB until it starts slowing down the network and then move to a Fibonacci sequence that increases dynamically based on whether the block size is deemed too small by Core, say if mempools start crashing nodes, etc.

1MB, 2MB, 3MB, 5MB, 8MB, 13MB, 21MB, etc.
That seems to be part of a different hard-fork suggestion. You would need to elaborate.
Quote
I really appreciate your time and contribution, but I believe the Core developers have already been bought out by Blockstream so they can be the first Lightning Network hub, take fees from miners, and keep 1MB forever, because they want to capitalize on the community's inability to reach consensus without arguing.
I really detest any suggestion of politics or of Core developers being bought out in this discussion. FWIW, I fully support the Lightning Network; I think it's a fantastic idea. The Lightning Network also needs an increase in the max blocksize, btw.
Quote
People like you are the ones who keep Bitcoin going. Thank you. Remember, We Are Satoshi.

thanks
Was
member
Activity: 75
Merit: 10
We are Satoshi.
I like great ideas, but great ideas do not fare well on the forums.

Dynamic block size with a minimum change to 2MB, no doubt.

Whether or not the change is proportionate to mining difficulty is not really what I would call relevant. The main concern is latency when blocks begin to get into double-digit megabytes, but people overlook the added incentive that comes with increased transaction volumes.

We could keep 1MB until it starts slowing down the network and then move to a Fibonacci sequence that increases dynamically based on whether the block size is deemed too small by Core, say if mempools start crashing nodes, etc.

1MB, 2MB, 3MB, 5MB, 8MB, 13MB, 21MB, etc.

I really appreciate your time and contribution, but I believe the Core developers have already been bought out by Blockstream so they can be the first Lightning Network hub, take fees from miners, and keep 1MB forever, because they want to capitalize on the community's inability to reach consensus without arguing.

People like you are the ones who keep Bitcoin going. Thank you. Remember, We Are Satoshi.

full member
Activity: 402
Merit: 100
Either my idea is so bad it's not worth commenting on, or it's so good nobody has been able to find an objection?

What am I missing?

Let's explore this proposal a bit by examining different scenarios.

1. The hash rate shoots through the roof and difficulty increases at a disproportionate rate: The max blocksize also increases and the risk of spamming the blockchain becomes more plausible. But this risk is negated by the fact that the blockchain itself is a lot stronger because of the increased hash rate. The network might not be able to sustain such mega blocks. However, if blocks become orphaned, the hashing that went into building them disappears from the aggregate that determines difficulty, so it contributes to a lower difficulty and consequently a lower max blocksize.

2. The demand for blockchain space increases at a faster rate than the difficulty: In this case there will be upward pressure on fees, so miners will have more profit and consequently more incentive to invest in mining hardware. This will then lead to higher hash rates and consequently a larger max blocksize, which will in turn satisfy the demand for blockchain space.

3. The amount of transaction fees falls, leading to lower miner income, thus a lower hash rate and thus smaller blocksizes.

In summary, the chain of causation looks like this:

Increasing:

More demand for blockchain space -> more transaction fees -> increased miner profits -> increase in mining investment -> more hashing power -> increased difficulty -> larger blocks

Decreasing:

Fewer transactions -> lower transaction fees -> decreasing miner profits -> fewer miners hashing -> less hashing power -> lower difficulty -> smaller blocksizes

Another way to explore this proposal is to assume it had been adopted earlier:

Let's suppose this BIP had been incorporated when Satoshi established the 1MB max blocksize, back in July of 2010. The average blocksize was about 1kB then, so suppose Satoshi had instead set the max blocksize at 2kB. The difficulty was about 1,379,000 and the current difficulty is 54,256,630,328. If this proposal had been implemented then, the current max blocksize would be: 2,000 * 54,256,630,328 / 1,379,000 ≈ 78,690,000 bytes (roughly 79MB). Clearly this is not right, and the reason for such a large disparity is that bitcoin mining hardware has had a lot of catching up to do (CPU > GPU > FPGA > ASIC > ?). However, I believe mining hardware is now nearing its maximum efficiency and closing in on Moore's Law.

Let's instead suppose this BIP had been incorporated exactly one year ago.

The max blocksize was 1MB and the difficulty was 27,428,630,902. This is roughly half of what it is now, so the max blocksize today would be about 2MB. A much more reasonable increase; in fact, it would be a very healthy limit right now. There is an 80% consensus that this should be the minimum max blocksize increase.
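
For anyone who wants to check the arithmetic, here is a quick sketch of the two scenarios above in Python, using only the simple proportional rule and the difficulty figures quoted in this post (an illustration, not consensus code):

Code:
# Back-of-the-envelope check of the two scenarios above, using the
# difficulty figures quoted in this post (illustrative sketch only).

CURRENT_DIFFICULTY = 54_256_630_328

def scaled_max_blocksize(base_size_bytes, base_difficulty, current_difficulty):
    """Scale the max blocksize by the same ratio the difficulty has changed."""
    return int(base_size_bytes * current_difficulty / base_difficulty)

# Adopted in July 2010: 2kB cap at a difficulty of ~1,379,000.
print(scaled_max_blocksize(2_000, 1_379_000, CURRENT_DIFFICULTY))
# -> ~78,700,000 bytes, i.e. roughly 79MB

# Adopted one year ago: 1MB cap at a difficulty of 27,428,630,902.
print(scaled_max_blocksize(1_000_000, 27_428_630_902, CURRENT_DIFFICULTY))
# -> ~1,978,000 bytes, i.e. roughly 2MB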

Also, these economic dynamics will become much more relevant once the block reward subsidy is reduced.

Other advantages of this proposal:

1. Very clear and simple. Everyone can see what the impact of such a hard fork will be.
2. Changes in the max blocksize will be gradual and occur at a predictable rate, much like how difficulty changes affect the profitability of miners.
3. Trivial to implement and test in testnet.
4. Ties changes in the max blocksize to the strength of the Bitcoin network itself. The more mining power there is, the bigger blocks can be.
5. Allows wallets to more easily determine the right transaction fee per kilobyte.
6. Gives us steady adjustments of the max blocksize for the long term, without requiring another hard fork for a long while at least.

A further way to refine this, which I think is even better (at the cost of a bit more complexity), is to factor in the block reward subsidy. I would propose scaling the relative difficulty change by a factor of (1 - coinbase/50) before applying it, i.e. new maxblocksize = old maxblocksize * (1 + difficulty change * (1 - coinbase/50)).

So when the block reward is 25 BTC, only 50% of the difficulty change is applied (the increase/decrease is reduced by 50%). When the block reward is 12.5 BTC, 75% of the change is applied (a 25% reduction), and so on. When we reach 0 BTC block rewards, the max blocksize tracks the difficulty changes exactly.
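
To make that concrete, here is a minimal Python sketch of the damped variant, assuming 50 BTC (the original subsidy) as the reference point; the function and variable names are mine, for illustration only:

Code:
# Sketch of the subsidy-damped adjustment described above: only a fraction
# (1 - coinbase/50) of the relative difficulty change is applied to the
# max blocksize. Illustrative only, not consensus code.

def damped_max_blocksize(old_max_size, old_difficulty, new_difficulty, coinbase_btc):
    difficulty_change = new_difficulty / old_difficulty - 1.0  # e.g. +0.0972 for a 9.72% increase
    damping = 1.0 - coinbase_btc / 50.0                        # 0.5 at 25 BTC, 0.75 at 12.5 BTC, 1.0 at 0 BTC
    return int(old_max_size * (1.0 + difficulty_change * damping))

# With a 25 BTC subsidy, only half of a +9.72% difficulty change is applied,
# so a 1MB limit grows by about 4.86% instead of 9.72%.
print(damped_max_blocksize(1_000_000, 49_446_390_688, 54_256_630_328, 25))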

I would really like some input on my proposal. Even though there is a question of whether the hash rate can grow to accommodate the increasing demand for transactions, and therefore for more space in the blockchain (transaction fees should apply that pressure in the end), I think this solution is vastly superior to the other proposals so far.
full member
Activity: 402
Merit: 100
I propose a very simple solution to determine the new maximum blocksize:

Increase or decrease the maximum blocksize at the same rate as the difficulty increases or decreases. This can be done every 2016 blocks, or a multiple thereof.

For example: the max blocksize is currently 1,000,000B and the difficulty over the last 20,160 blocks has increased by 9.72% (from 49,446,390,688 to 54,256,630,328), so the new max blocksize = 1,097,277B.
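
A minimal sketch of how this retarget rule could look, assuming the adjustment happens alongside the normal difficulty retarget (illustrative Python, not Core code):

Code:
# The proposed rule: every retarget, rescale the max blocksize by the same
# ratio the difficulty changed. Illustrative only.

RETARGET_INTERVAL = 2016  # blocks; the proposal allows a multiple of this

def retarget_max_blocksize(current_max_size, old_difficulty, new_difficulty):
    return int(current_max_size * new_difficulty / old_difficulty)

# The worked example above: a ~9.72% difficulty increase over the last
# 20,160 blocks raises a 1,000,000-byte limit by the same ~9.72%.
print(retarget_max_blocksize(1_000_000, 49_446_390_688, 54_256_630_328))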

We could do an initial jump to 2MB, since I believe there is consensus that this is what is needed to begin with.

Rationale: Even though hardware advances in different computing areas (for example, network speed vs. computation speed) do not match exactly, they are roughly similar (perhaps some research could be done here), to the degree that difficulty gives us at least some guidance as to what the network can support in terms of block relay latency.

A very big advantage is that implementation would be trivial.
