
Topic: Do you think BIP 106 would have solved the Mempool congestion due to BRC20?

legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
Quote
That doesn't make sense, unless such a miner/pool has the goal of bloating the Bitcoin blockchain.
We should assume that there will be some bad actors, always producing the largest possible blocks. We have had times with a congested mempool; including everything "as is" will not solve anything. It will only make Initial Blockchain Download longer, and people will still produce bigger and bigger blocks.

Bad actors always exist, but even they have to consider propagation and verification time, especially if their block contains many transactions which were never broadcast and they have limited funding. That means they need to share the full block (rather than a compact block[1], since there are many missing transactions), which makes verification and propagation take longer.
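To put rough numbers on it (the tx count, average size, and per-tx short ID size below are illustrative assumptions based on BIP 152, not measurements):

Code:
# Rough bandwidth comparison: compact blocks (BIP 152) relay ~6-byte
# short transaction IDs for txs that peers already have in their mempool.
# A block full of never-broadcast transactions loses the whole saving.
n_txs, avg_tx_bytes, short_id_bytes = 3000, 500, 6

compact = n_txs * short_id_bytes   # peers rebuild the block from mempool
full = n_txs * avg_tx_bytes        # every missing tx must be sent in full

print(f"compact block IDs: ~{compact / 1000:.0f} kB")  # ~18 kB
print(f"full block relay:  ~{full / 1e6:.1f} MB")      # ~1.5 MB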

Quote
With such a long verification time, propagation would be hindered and another miner could beat you, since their block is far faster to verify and propagate even if they mined it a few seconds to minutes after you did.
Even if a big block doesn't end up in the main chain, many nodes will still waste a lot of time trying to verify it. So, one way or another, there will still be some "guessing time", unless you introduce protections like "after spending one minute on verification, stop and try another block". But any such protection puts an artificial limit on block size, just measured in seconds rather than bytes (and then different machines will accept different blocks, based on their computational speed, so you will never know how many machines accepted a given block).

Aside from the maximum block size, a sigops (signature operations) limit also exists, which is checked before the script is executed[2].

[1] https://bitcoincore.org/en/2016/06/07/compact-blocks-faq/
[2] https://bitcoin.stackexchange.com/a/117359
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
Can't we have a flexible block size? For example, each block confirming 4% of total transactions (numbers are only for example).
Apart from what ETFbitcoin wrote - it would simply be impossible to verify whether the max block size is correct, because "the number of transactions in the mempool" is a different number on every node, and also grows over time - there is another problem with your idea: it would lead to a situation where a big part of the transactions are never confirmed. If you set 4% of mempool transactions as the maximum per block, then on average only 4% would be confirmed in any given period if demand stays constant, so 96% would eventually be discarded. This would lead to people trying again and again, demand growing all the time, and thus larger and larger blocks. It's even likely people would create spam transactions just to increase the chance their legit transactions go through ...
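To illustrate with a toy simulation (Python; the demand figure is my own illustrative number):

Code:
# Toy simulation: each block confirms a fixed 4% of the current mempool
# while demand stays constant at 5000 new transactions per block.
def simulate(blocks=1000, demand=5000, pct=0.04):
    mempool, confirmed = 0, 0
    for _ in range(blocks):
        mempool += demand               # new transactions arrive
        confirmed = int(mempool * pct)  # the block takes only 4% of them
        mempool -= confirmed
    return mempool, confirmed

backlog, block_txs = simulate()
# The backlog stabilizes near demand*(1-pct)/pct (here ~120,000 txs):
# blocks eventually match demand, but only on top of a permanent queue,
# and with real mempool expiry much of that queue would be dropped.
print(backlog, block_txs)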

You guys here underestimate how bad high transaction fees are for bitcoin adoption. Rich guys can pay high fees, but poor or middle-class guys can't. That's why they move on to altcoins.
That's why sidechains (and similar technologies, like rollups) are so interesting, but those that exist all have flaws. There's no completely decentralized design in the production stage; Stacks is maybe the most advanced, but it still has centralized elements and relies on a premined altcoin to work. And of course there's LN.
copper member
Activity: 906
Merit: 2258
Quote
4% of total transactions
You don't know "the" total transactions. You know "a" total transactions, because there is no "the" mempool; there are many mempools, and each node has its own. I can generate a 1 TB mempool using any deterministic algorithm, and then say "we need 40 GB blocks now, because of the 4% rule".
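The arithmetic of that trick is trivial:

Code:
# A node that deterministically stuffs its own mempool can claim an
# enormous block size under any percentage rule.
local_mempool = 1 * 1024**4                 # 1 TB of self-generated txs
claimed_block = 0.04 * local_mempool        # the "4% rule"
print(f"{claimed_block / 1024**3:.0f} GB")  # ~41 GB block "allowed"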

Quote
numbers are only for example
It doesn't matter, because whatever the percentage, it can always be bypassed by locally generating a lot of transactions.

Quote
Overall, I don't see anything wrong with increasing the current block size.
What about Initial Block Download? You can always decrease the block size, but you cannot forget the history: it is always needed to create new nodes, unless you start making backward-incompatible changes. Note that if everyone switches to pruned mode, then nobody can create a new node, because there is nobody to connect to. You cannot sync trustlessly from a pruned node.

Quote
It's not expensive to run a node today even if we double or triple the block size, and demand isn't similar to the demand of the 2010s.
Total blockchain size only gets worse. Improvements in technology haven't made verification much faster; creating a new full node has often taken me around a week. If you have around 500 GB of history to verify, it will still take a lot of time, even if some soft fork were now to force blocks to include nothing but the coinbase transaction.
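As a rough back-of-the-envelope (the throughput figure is an assumption, not a benchmark; real rates vary widely with hardware):

Code:
# Rough IBD arithmetic: verifying ~500 GB of history at an assumed
# sustained 2 MB/s (download + script checks + UTXO updates combined).
history_gb = 500
rate_mb_s = 2
days = history_gb * 1000 / rate_mb_s / 86400
print(f"~{days:.1f} days")  # ~2.9 days; a slower CPU or disk easily makes it a week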

Quote
That's why they move on to altcoins.
Ideally, we should have sidechains for such things. Then it would be possible to peg in, create a lot of history, then peg out and finalize it on-chain, and drop everything that happened on the second layer, because it is not needed for Initial Blockchain Download, and sidechain users don't need sidechain history from before their coins were pegged in (in exactly the same way that LN users only store data from since their channel was opened, and don't need all the in-between transactions from all other LN channels).

Quote
That doesn't make sense, unless such a miner/pool has the goal of bloating the Bitcoin blockchain.
We should assume that there will be some bad actors, always producing the largest possible blocks. We have had times with a congested mempool; including everything "as is" will not solve anything. It will only make Initial Blockchain Download longer, and people will still produce bigger and bigger blocks.

Quote
With such a long verification time, propagation would be hindered and another miner could beat you, since their block is far faster to verify and propagate even if they mined it a few seconds to minutes after you did.
Even if a big block doesn't end up in the main chain, many nodes will still waste a lot of time trying to verify it. So, one way or another, there will still be some "guessing time", unless you introduce protections like "after spending one minute on verification, stop and try another block". But any such protection puts an artificial limit on block size, just measured in seconds rather than bytes (and then different machines will accept different blocks, based on their computational speed, so you will never know how many machines accepted a given block).

Quote
expending so much effort on producing an illegal block is stupid and exceedingly rare
Sometimes it is more common than you may think. For example, some mining pools have problems with counting sigops. And then it is just another attack vector: if you know that such pools exist, you can create transactions with a lot of sigops and broadcast them. Those transactions alone will be valid, and will even be included in other blocks, but some pool may produce a block that violates this rule.

And then the question is: how can you check whether the sigops rule is violated without checking the whole block? If you start mining on top of a not-yet-validated block, then there is a risk that one of those tricky rules will finally mark it as invalid, and then you will have wasted your coinbase-only block.
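For reference, the legacy sigop counting rule is simple enough to sketch. This is a simplified rendition of what Bitcoin Core's GetLegacySigOpCount does (OP_PUSHDATA1/2/4 handling omitted for brevity):

Code:
# Simplified legacy sigop counting: CHECKSIG counts as 1, CHECKMULTISIG
# as the worst-case 20 keys in "inaccurate" mode.
OP_CHECKSIG, OP_CHECKSIGVERIFY = 0xac, 0xad
OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY = 0xae, 0xaf

def legacy_sigop_count(script: bytes) -> int:
    count, i = 0, 0
    while i < len(script):
        op = script[i]
        if 0x01 <= op <= 0x4b:      # direct push: skip the pushed bytes
            i += op
        elif op in (OP_CHECKSIG, OP_CHECKSIGVERIFY):
            count += 1
        elif op in (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY):
            count += 20             # worst-case key count is assumed
        i += 1
    return count

The block-level check sums this over all scripts and compares the total against the consensus limit; a pool that gets the sum wrong can be baited with high-sigop (but individually valid) transactions, exactly as described above.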
legendary
Activity: 990
Merit: 1108
If you are a block creator, then you have an incentive to create the biggest possible block. Why? Because then you can pick some simple, deterministic algorithm, generate terabytes of always-valid transactions on the fly, let your miners work on that header, and send the mined block to other nodes. Then you can start producing the next block, on top of what you created, while other block creators are still trying to validate what you submitted.
You appear to assume that other miners will wait to finish validating your new block before they start mining on top of it. That's not a safe assumption.

Many (most?) miners will instead just assume that the new block is valid (expending so much effort on producing an illegal block is stupid and exceedingly rare) and immediately start mining on top with an empty block, while validating in parallel. Once validation completes they can then continue mining with a non-empty block on top.
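A toy sketch of that strategy (my own illustrative Python, not any pool's actual software):

Code:
# Head-first mining: mine a coinbase-only template on the new header
# immediately, validate the parent in parallel, then switch templates.
import threading, time

def head_first_mine(new_block_hash, validate, mempool_txs):
    template = {"prev": new_block_hash, "txs": []}  # empty: nothing to invalidate
    print("mining empty block on", template["prev"])

    def background():
        if validate():                    # may take minutes for a huge block
            template["txs"] = mempool_txs()
            print("parent valid, mining full block")
        else:
            print("parent invalid, reverting to previous tip")

    threading.Thread(target=background).start()

# Usage: simulate a parent block that takes one second to validate.
head_first_mine("00000000...abcd",
                validate=lambda: time.sleep(1) is None,
                mempool_txs=lambda: ["tx1", "tx2"])
time.sleep(1.5)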
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
If you are a block creator, then you have an incentive to create the biggest possible block. Why? Because then you can pick some simple, deterministic algorithm, generate terabytes of always-valid transactions on the fly, let your miners work on that header, and send the mined block to other nodes. Then you can start producing the next block, on top of what you created, while other block creators are still trying to validate what you submitted.

That doesn't make sense, unless such a miner/pool has the goal of bloating the Bitcoin blockchain. With such a long verification time, propagation would be hindered and another miner could beat you, since their block is far faster to verify and propagate even if they mined it a few seconds to minutes after you did.

In order for Bitcoin to succeed, blocks must be full.

If blocks are never full, then there is never a reason for a transaction to pay more than 1 satoshi in fees. As the subsidy is reduced, fees become more important. So at some point, full blocks will be necessary in order to ensure that the revenue is high enough to discourage a 51% attack.

Changes that attempt to prevent full blocks would directly impact the security of Bitcoin.
Can't we have a flexible block size? For example, each block confirming 4% of total transactions (numbers are only for example). There will always be some unconfirmed transactions left, because a block only takes a percentage. If 100K transactions are unconfirmed at the moment, 4K will get confirmed in the next block; if demand rises significantly and we have 300K unconfirmed transactions, then 12K will get confirmed in the next block. By doing this, the fewer total transactions we have, the fewer transactions will be picked up, which won't let transaction fees fall; but at the same time, if demand rises significantly, it won't let a big transaction volume clog the mempool.
Blocks can always be full if the block size is flexible and changes with each new block discovery, according to the volume that 4% of transactions creates at the moment. I don't know how technically possible that is, but I don't think it sounds dumb in theory.

Each node has a unique mempool (or can even disable its mempool), so your idea can't be achieved, since each node would set a different maximum block size limit.
hero member
Activity: 882
Merit: 792
Watch Bitcoin Documentary - https://t.ly/v0Nim
In order for Bitcoin to succeed, blocks must be full.

If blocks are never full, then there is never a reason for a transaction to pay more than 1 satoshi in fees. As the subsidy is reduced, fees become more important. So at some point, full blocks will be necessary in order to ensure that the revenue is high enough to discourage a 51% attack.

Changes that attempt to prevent full blocks would directly impact the security of Bitcoin.
Can't we have a flexible block size? For example, each block confirming 4% of total transactions (numbers are only for example). There will always be some unconfirmed transactions left, because a block only takes a percentage. If 100K transactions are unconfirmed at the moment, 4K will get confirmed in the next block; if demand rises significantly and we have 300K unconfirmed transactions, then 12K will get confirmed in the next block. By doing this, the fewer total transactions we have, the fewer transactions will be picked up, which won't let transaction fees fall; but at the same time, if demand rises significantly, it won't let a big transaction volume clog the mempool.
Blocks can always be full if the block size is flexible and changes with each new block discovery, according to the volume that 4% of transactions creates at the moment. I don't know how technically possible that is, but I don't think it sounds dumb in theory.

Overall, I don't see anything wrong with increasing the current block size. Times change, technology improves, demand increases. It's not expensive to run a node today even if we double or triple the block size, and demand isn't similar to the demand of the 2010s. You can't have 2010's block size in 2017 and 2017's block size in 2023.
You guys here underestimate how bad high transaction fees are for bitcoin adoption. Rich guys can pay high fees, but poor or middle-class guys can't. That's why they move on to altcoins.
legendary
Activity: 990
Merit: 1108
In order for Bitcoin to succeed, blocks must be full.
Indeed; when you combine a negligible block subsidy with unlimited throughput, you beget a lack of security. We thus have the following

Emission/Throughput/Security trilemma: no coin can have all of the following 3 properties:

1) Capped Supply            2) Unlimited Throughput            3) Long-term Security

While BCH gave up 3, Bitcoin decided to give up 2, making it Congested-by-Design.
copper member
Activity: 906
Merit: 2258
Quote
without spam being an issue
Spam will always be an issue. No matter what limit is set, block creators will always reach it. Set 1 MB as Satoshi did, and it will be reached. Set 4 MB as the Segwit creators did, and you will also see fully filled blocks.

If you are a block creator, then you have an incentive to create the biggest possible block. Why? Because then you can pick some simple, deterministic algorithm, generate terabytes of always-valid transactions on the fly, let your miners work on that header, and send the mined block to other nodes. Then you can start producing the next block, on top of what you created, while other block creators are still trying to validate what you submitted.

Instead of thinking about block size alone, think about verification time. If blocks are produced every 10 minutes, but your blocks are so complex that it takes 5 minutes to verify them, then guess what: other block creators have 5 minutes of "guessing time". You already know whether your block is valid or not, but others don't, and they have to decide whether to create the next block on top of a not-yet-checked block.
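One way to quantify that "guessing time" (standard Poisson block-arrival math):

Code:
# With Poisson block arrivals at a 10-minute mean, the probability that
# some competing block is found during a 5-minute verification window.
import math
verify_min, target_min = 5, 10
p = 1 - math.exp(-verify_min / target_min)
print(f"{p:.0%}")  # ~39% chance a rival block appears while others still verify yours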
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
Of course, those who write that blocks must often be full for fees to become a substantial part of miners' income are right. Nevertheless, I could imagine a variant of the dynamic block size concept which could even guarantee a higher degree of security than the "flat" scheme.

Some considerations:

1) First, it's useful to define a "goal" of the algorithm. What do we want? Do we want to reduce fees? Or do we also want to guarantee an income from fees to miners and make spam expensive? I think the goal should be to find values where both goals can be satisfied to a certain degree.

2) So we could approximate our goal: we want blocks to be full enough to make spam expensive, but we also don't want to reduce demand (harming adoption and thus the price and market cap) so much that Bitcoin's competitive position with respect to altcoins becomes worse, because in that case Bitcoin's position as "the most secure of all PoW coins" could be endangered and miners could switch to other coins. (As for the altcoin competition, I'm not thinking about coins like ETH or BNB, which are clearly another kind of asset targeted at another public. But we could imagine a coin as decentralized as Bitcoin, maybe even a 1:1 copy, which could threaten to compete for that position.)

3) As a consequence of 1) and 2), 100% full blocks would not be the ideal situation; better, for example, an average of 98%, so fees can rise for some time in a demand spike, but if demand grows sustainably, fees do not increase so much that we risk losing users (the well-known "only banks use Bitcoin" scenario, which was brought up to death by big blockers in the block size war). This 98% value is only an example and should of course be investigated empirically. But the 90% mentioned in BIP 106 may be too low.

4) We can thus fine-tune the algorithm so that its "goal" becomes blocks that are always, on average, 98% full. If the average is > 98% over a whole difficulty period, then in the next difficulty period of 2016 blocks the block size limit could be increased a minuscule bit (e.g. by 0.1%), but not too much - otherwise we would risk blocks not being "full enough". And if it's < 98%, the maximum block size would shrink.

As a result, we would get an algorithm which makes blocks grow or shrink with demand, but slowly enough that the fee market is never in danger. Fees would probably become quite predictable, much more so than with a static block size.
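In rough Python pseudocode, the rule I have in mind would look like this (the 98% target and 0.1% step are just my example values):

Code:
# Once per 2016-block difficulty period, nudge the maximum block size
# toward the target average fullness.
TARGET_FULLNESS = 0.98
STEP = 0.001  # 0.1% change per period

def next_limit(current_limit, block_sizes):
    assert len(block_sizes) == 2016
    avg_fullness = sum(block_sizes) / (2016 * current_limit)
    if avg_fullness > TARGET_FULLNESS:
        return int(current_limit * (1 + STEP))  # demand high: grow a tiny bit
    return int(current_limit * (1 - STEP))      # demand low: shrink a tiny bit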

However, I don't know if such an algorithm really "is worth it". It would bring much more complexity into the incentive model. But in a future situation where demand for block space grows strongly and sustainably (without spam being an issue), I think it should be permissible to think about such a model.
legendary
Activity: 2394
Merit: 1216
The revolution will be digital
In order for Bitcoin to succeed, blocks must be full.

If blocks are never full, then there is never a reason for a transaction to pay more than 1 satoshi in fees. As the subsidy is reduced, fees become more important. So at some point, full blocks will be necessary in order to ensure that the revenue is high enough to discourage a 51% attack.

Changes that attempt to prevent full blocks would directly impact the security of Bitcoin.

Very few people do understand the importance of this property of Bitcoin.

In the coming days, layer 1 will only settle large-value transactions, and with the shrinking coinbase reward, incentivizing miners through transaction fees will become of prime importance to keep the network secure. Full blocks are inevitable to keep the competition for block space high. I agree that Satoshi's initial design did not have this cap. But Satoshi is no God either.
legendary
Activity: 2968
Merit: 3684
Join the world-leading crypto sportsbook NOW!
segwit has not caused leaner transactions in actual byte counting.. they miscount the bytes and allowed longer scripts, and even opcodes for scripts that are unrelated to a signature proving a utxo spend
in fact, compared to pre-2017, the average byte length of a transaction has gone up MASSIVELY

But isn't that how all "compression" happens anyway with data? Even considering pruning, the way I understand it, it's just discounting data - and thank you for the further explanations. I doubt I can truly understand how it works; all I know is my txs are smaller in size from the way the wallet sees them. And that's how I see the theory of "leaner or more efficient" progress, rather than bandwidth progress. It might require better cleaning up of clutter, as you say, but I don't know enough to know why it wasn't done. I might need to see a visual example (but the way I understand it, the longer scripts count toward tx size anyway - otherwise, why did all this ordinals/BRC20 nonsense cost a lot?).

Pretty much out of my depth to be able to provide a meaningful response sadly...
legendary
Activity: 4424
Merit: 4794
^ 13000 tx every 10 minutes would be really cool, and Segwit making leaner txs still resulted in full blocks, I always expect to see this as the gradual way forward -- upgrades that result in leaner, more efficient txs, rather than just widening the bandwidth.

Then again, when I was in the 1990s thinking everyone would focus development on making better compressions for leaner data formats... spent hours on websites making sure they were as small as possible (in bytes). It went the other way (in my view) -- bandwidth just exploded, and people didn't care about efficiency anymore.

segwit has not caused leaner transactions in actual byte counting.. they miscount the bytes and allowed longer scripts, and even opcodes for scripts that are unrelated to a signature proving a utxo spend
in fact, compared to pre-2017, the average byte length of a transaction has gone up MASSIVELY

a 13.5k-tx block would occur by removing the miscount kludge code and actually allowing full txdata utility of the entire 4mb space, rather than the 1mb txdata + 3mb witness.. which currently hinders genuine transactions to a "full block" average of about 1.5mb due to the miscount, or lets junk data unrelated to signature proof of utxo spend fill the witness area and take up that 3mb excess.

a full clean standard 4mb blockspace for proper txdata utility + clean efficient signature proof would result in a 13.5k-tx block of 4mb.. (without the kludgy byte miscounts of segregation and crap)

its also worth noting..
core's kludgy math miscount of bytes for segwit (to make txdata more expensive than witness data) is not incentivizing lean tx data. it is incentivizing cheap bloaty witness data by 4x.
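for reference, the exact accounting rule being criticized here (BIP 141 block weight) works like this:

Code:
# BIP 141: weight = base_size*3 + total_size (equivalently base*4 +
# witness*1), capped at 4,000,000 weight units, so witness bytes cost
# a quarter of what non-witness bytes cost.
MAX_BLOCK_WEIGHT = 4_000_000

def tx_weight(base_bytes, witness_bytes):
    return base_bytes * 4 + witness_bytes * 1

print(MAX_BLOCK_WEIGHT // tx_weight(300, 0))    # ~3333 all-base 300-byte txs per block
print(MAX_BLOCK_WEIGHT // tx_weight(100, 200))  # ~6666 txs when 2/3 of each is witness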

funny part is, asics dont choose transactions for a block, so there is no extra computational cost for asics for X or Y types of transactions, nor for how many transactions appear in a block. so core's economic model of reasoning makes no sense in reality
especially when the computations of a pool manager validating a 2-in-2-out tx are:
utxoset -2 entry
utxoset +2 entry
for the txdata

yet require more computational power for the scripts:
read utxo entries to obtain spend keys twice
sha complete transactions to get txid twice
ecdsa using key to check signature twice
(and other stuff)

so its actually the scripts that cost more computational power for the pool manager (not asics), which proves that the witness (scripts) has more of a power cost than the tx data

yet they want to make normal-use lean tx data more costly, by a 4x factor, than the bloaty witness.

i do laugh when the plan of core roadmap is
'' oh economics economics, blah blah economics.. everyone should be rejected for wanting to spend 10,000sat for genuine real world purchases daily, using a lean signature scheme.. they should do that on a different network"
vs
'' oh economics economics, blah blah economics.. everyone should be allowed to spend 1sat for monkey memes of excessive script lengths of 3.96mb, monkey images should be on the bitcoin network"
legendary
Activity: 2968
Merit: 3684
Join the world-leading crypto sportsbook NOW!
^ 13000 tx every 10 minutes would be really cool, and Segwit making leaner txs still resulted in full blocks, I always expect to see this as the gradual way forward -- upgrades that result in leaner, more efficient txs, rather than just widening the bandwidth.

Then again, when I was in the 1990s thinking everyone would focus development on making better compressions for leaner data formats... spent hours on websites making sure they were as small as possible (in bytes). It went the other way (in my view) -- bandwidth just exploded, and people didn't care about efficiency anymore.

 
In order for Bitcoin to succeed, blocks must be full.

If blocks are never full, then there is never a reason for a transaction to pay more than 1 satoshi in fees. As the subsidy is reduced, fees become more important. So at some point, full blocks will be necessary in order to ensure that the revenue is high enough to discourage a 51% attack.

Changes that attempt to prevent full blocks would directly impact the security of Bitcoin.

I may not see this as a hard rule, but I can certainly find some space to agree that the entire structure of returns for those securing the network (miners) was designed to ensure there was always an incentive to mine. Coin generation at first (the subsidy, as many say), then later the fees.

People give a lot of credit to adoption, commercial interest, and recognition for the price. And credit is due, but I still feel that the actual financial cost of securing the network, and the necessity of profit for the miner, still form that hard economic backbone.

By design, isn't it?
legendary
Activity: 4424
Merit: 4794
A. 1 tx using 3.96mb paying 0.396btc
B. 2500 tx using 1.5mb paying 0.00015840 (~$4.11) each - each tx averaging 600 bytes

C. 6500 tx using 4mb paying 0.00006092 (~$1.58) each - each tx averaging 600 bytes
D. 13000 tx using 4mb paying 0.00003046 (~$0.79) each - each tx averaging 300 bytes

all four options give the pool managers the exact same fee total (0.396btc), but offer different economic routes to that total, with individuals paying different amounts.. and none of the options exceed the 4mb block limit that core deems acceptable
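quick sanity check of those totals:

Code:
# All four options pay the pool the same total fee (to rounding).
options = {
    "A": (1, 0.396),
    "B": (2500, 0.00015840),
    "C": (6500, 0.00006092),
    "D": (13000, 0.00003046),
}
for name, (txs, fee) in options.items():
    print(name, f"{txs * fee:.6f} btc")  # A/B: 0.396000, C/D: 0.395980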

those thinking the economics need to be A and B are ignoring the possibility of C or D

if you dont want C or D because you think A and B should be allowed more than C and D, then you are one of those who dont care about more bitcoiners using bitcoin, and you only want high-fee annoyances and bloat to push people into using other networks

try to think about options C or D, especially when the logical option is D (lean transactions = more transactions, a smaller individual fee but the same economic total)
..

its also worth noting
ASIC miners dont always get fees.. many pools have "pay-per-share" systems where the pool manager keeps the fees and gives the main coin reward to the asic workers

and lastly, those receiving sats from block rewards base their economics on the market rate they can sell at, by not selling until the market price of bitcoin is satisfactory..
however
trying to force users to pay more than a reasonable fee just so a mining pool can sell at a loss on the market is bad economics
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
In order for Bitcoin to succeed, blocks must be full.

If blocks are never full, then there is never a reason for a transaction to pay more than 1 satoshi in fees. As the subsidy is reduced, fees become more important. So at some point, full blocks will be necessary in order to ensure that the revenue is high enough to discourage a 51% attack.

Changes that attempt to prevent full blocks would directly impact the security of Bitcoin.

This needs to be reposted more often:

Keeping blocks from being full sounds like a naive way to solve the scalability issue, but it directly harms the fee economy for the reasons mentioned. Now, when spam enters the network, block space - and thus fees, and eventually the overall Bitcoin price - becomes a premium.

The same argument goes for increasing the rate at which transactions are processed - it would bring the fee-wipeout problem with it too, only with more blocks.

This is why we have layer 2 networks.
hero member
Activity: 714
Merit: 521
The proposal is quite interesting because it aims to make the fixed block size dynamic, depending on previous transaction volume. If the implementation were perfect, then I think it would solve the problem of BRC20 congesting the mempool, but I believe the developers had some issues integrating BIP 106 into the network, which is why it is not used. It is probably because of possible flaws that could affect the Bitcoin network even worse.

Well, I would say that not all of these proposals may be applicable as a solution to these BRC20 tokens, even though fees have been fair for a couple of days now and transactions have been cheaper, unlike when fees got inflated by the token. As I was saying, if only one of these proposals turns out to be suitable to solve the whole issue, then it should be considered; obviously we cannot expect all of them to be in use at the same time. I also want to believe that, aside from these, there are still many other possible ways we can completely deal with these Bitcoin token inscriptions.
legendary
Activity: 4424
Merit: 4794
"emergent/dynamic" blocksize idea's all got REKT due to the core power house wanting to stay on their roadmap of offramping users to other networks (stop using the bitcoin network is their goal)

anyone speaking of wanting to allow more transactions on the blockchain always gets demolished in social drama that tries to moderate out any such topics or suggestions

however "emergent/dynamic" blocksize would not have solved these meme/json junk. becasue spammers would spam more.. however
bringing back the byte limit of a transactions script length would solve it and make transactions more lean and efficient..

with that said, as a different feature, "emergent/dynamic" blocksize would allow more users to transact (once the spam is solved by other methods) without needing to pay a premium..
but if the meme/json junk was still able to occur, then they would just spam even more.. so the meme/json junk's abuse of the missing transaction script length limits needs to be handled first.. and the core devs are declining to fix the flaw they caused that allowed the meme/json junk

the problem of the last 8 years is this:
the majority of bitcoiners want more transactions so that more users can be bitcoiners without being priced out of utility. but core want the opposite: they allow bloaty transactions where even just 1 tx can fill a block due to lack of rules (they relaxed the tx lean/efficiency checking rules), and then they have refused to expand the block for the last 6 years

the solution is lean transactions with no useless deadweight excess bytes, and a blocksize that allows more users to use bitcoin without experiencing the annoyances core employ to push people out of using bitcoin
hero member
Activity: 504
Merit: 1065
Crypto Swap Exchange
Indeed, BIP 106 is an extremely interesting proposition. I discovered it thanks to your post, OP - thank you for sharing.

I'm surprised that it didn't attract more interest in 2015, and by how little feedback can be found about the realism of its actual application.

In order for Bitcoin to succeed, blocks must be full.

If blocks are never full, then there is never a reason for a transaction to pay more than 1 satoshi in fees. As the subsidy is reduced, fees become more important. So at some point, full blocks will be necessary in order to ensure that the revenue is high enough to discourage a 51% attack.

Changes that attempt to prevent full blocks would directly impact the security of Bitcoin.

This is a very valid point from a long-term perspective. It's true that once the block reward is low, if fees were also low we could see a decrease in the global network's hashrate - and even if a 51% attack is difficult to imagine today (because of 358.2 EH/s, and the biggest pool, Foundry, "only" owning 35% of the hashrate), that doesn't mean it won't be possible in the medium/long term.

So yeah, as I understand it, with BIP 106 you remove the mempool's congestion problems, but replace them with a potential problem linked to fees that are too low, and therefore to a mining income that is potentially too low to ensure the network's security. Are there any other points that would make this proposal negative for the network?
legendary
Activity: 4466
Merit: 3391
In order for Bitcoin to succeed, blocks must be full.

If blocks are never full, then there is never a reason for a transaction to pay more than 1 satoshi in fees. As the subsidy is reduced, fees become more important. So at some point, full blocks will be necessary in order to ensure that the revenue is high enough to discourage a 51% attack.

Changes that attempt to prevent full blocks would directly impact the security of Bitcoin.
hero member
Activity: 1554
Merit: 880
pxzone.online
It is probably because of possible flaws that could affect the Bitcoin network even worse.
So what are the possible flaws? Those "possible" flaws are not something that can't be spelled out. There should be a specific technical reason why it was not approved; I hope there is a discussion somewhere of why they didn't adopt such a scalable proposal.
legendary
Activity: 3052
Merit: 1281
Get $2100 deposit bonuses & 60 FS
The proposal is quite interesting because it aims to make the fixed block size dynamic, depending on previous transaction volume. If the implementation were perfect, then I think it would solve the problem of BRC20 congesting the mempool, but I believe the developers had some issues integrating BIP 106 into the network, which is why it is not used. It is probably because of possible flaws that could affect the Bitcoin network even worse.
full member
Activity: 214
Merit: 278
Because of the recent mempool congestion due to BRC20, some past discussions crossed my mind. The Bitcoin Scalability Problem is not new. The Block Size Limit Controversy was a byproduct of this, and it gave us multiple BIPs (Bitcoin Improvement Proposals) from multiple experts. Below is a consolidated list of them all...


Out of all of these, I have always voiced support for BIP 106 on BitcoinTalk. The second part of the proposal, i.e. Proposal 2: Depending on previous block size calculation and previous Tx fee collected by miners, has always seemed the most practical & effective to me.

Now, as almost 8 years have passed and the scaling issue is still daunting us, what do you think about this proposal?