
Topic: Segmenting/Reserving Block Space - page 2. (Read 511 times)

legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
April 25, 2024, 12:33:39 AM
#15
I also don't like this idea, unfortunately.

Quote
Purpose is [...] to help smaller transactions and transactions that don't embed enormous amounts of metadata get processed in a timely manner and without insane fees.
HeRetiK and vjudeu have already written that you can't really tell whether a transaction contains metadata or not. And if there is a significant fee increase for a group of transactions, that group will try to escape its "bin".

I'll give you a practical example so you can see what could happen if such a proposal were implemented: Stampchain SRC-20, a protocol created to "improve" BRC-20 (an Ordinals-based token protocol which clogged the blockchain last year), but which is in fact even worse.

SRC-20 is an insanely inefficient and dangerous protocol: it encodes metadata inside a regular multisig output, i.e. it creates "fake" public keys containing the data of a JSON(!) text. While these transactions may have some structural elements in common, in reality nobody can tell whether you are transacting coins with such a transaction or whether it's encoded metadata. If there were a heuristic detecting them reliably, the protocol could simply be changed slightly and it would no longer be detected.
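To illustrate why this is undetectable, here is a deliberately simplified sketch of the fake-pubkey trick (this is not the actual Stampchain encoding; chunk sizes and the JSON payload are invented for illustration):

```python
import json

# Simplified illustration of the SRC-20-style trick: arbitrary JSON is
# chunked into 33-byte strings shaped like compressed public keys, so a
# bare multisig output "containing" them looks no different from one
# holding real keys. Not the actual Stampchain encoding.
def encode_as_fake_pubkeys(token_json: dict) -> list[bytes]:
    data = json.dumps(token_json).encode()
    keys = []
    for i in range(0, len(data), 32):
        chunk = data[i:i + 32].ljust(32, b'\x00')  # pad the last chunk
        keys.append(b'\x02' + chunk)  # 0x02 prefix mimics a compressed pubkey
    return keys

fake_keys = encode_as_fake_pubkeys({"p": "src-20", "op": "mint", "tick": "TEST"})
# Each "key" is 33 bytes; a validator sees only plausible-looking pubkeys.
```

Any heuristic keyed on the payload shape breaks as soon as the chunking or prefix is tweaked, which is the point made above.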

I said "dangerous" because this kind of protocol creates a ton of UTXOs which will never be spent, and all validating nodes must keep track of them and waste resources. Similar protocols were already around in 2013/14 and motivated the "legalization" of OP_RETURN for arbitrary data storage of up to 80 bytes in 2014 (the opcode is the basis for token mechanisms like Runes, Counterparty and Omni; it was already added by Satoshi but was non-standard until v0.9).
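For comparison, a minimal sketch of an OP_RETURN data carrier with the 80-byte standardness limit mentioned above (constant names are mine, not Bitcoin Core's):

```python
# Minimal sketch of an OP_RETURN data-carrier script with the 80-byte
# standardness limit for the payload. Constant names are illustrative.
OP_RETURN = 0x6a
MAX_OP_RETURN_RELAY = 80  # standard relay limit for the data payload

def op_return_script(payload: bytes) -> bytes:
    if len(payload) > MAX_OP_RETURN_RELAY:
        raise ValueError("payload exceeds standard OP_RETURN size")
    if len(payload) <= 75:
        push = bytes([len(payload)])         # direct push opcode
    else:
        push = bytes([0x4c, len(payload)])   # OP_PUSHDATA1 for longer pushes
    return bytes([OP_RETURN]) + push + payload

script = op_return_script(b"hello metadata")
```

Unlike the fake-multisig trick, outputs carrying such a script are provably unspendable, so nodes can safely keep them out of the UTXO set.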

You will never be able to keep all versions of all those protocols under control. You would have to adjust the "rules" for the "bins" constantly and even then those wanting to store useless metadata would still be able to bypass your rules. Protocols could even try to offer several transaction mechanisms for the same type of token to fit in different bins, so the users could use the cheapest bin.

Quote
But from my perspective, it's likely something needs to be done to keep the network usable.

We have discussed some related ideas extensively in several Ordinals-related threads for about a year. The only idea which could really help is changing the protocol to be more similar to Monero, or even better, Grin. Maybe a pre-reservation of block space (a link was already provided by ABCbits above) could at least help achieve more "even" fee behaviour too, but I'm not sure about that; it brings a lot of additional complexity. Even some Bitcoin devs proposed "solutions" which simply didn't work (Luke-Jr's heuristic code, Ordisrespector ...). In my opinion, the best way is to improve L2s (LN, sidechains, statechains etc.) to move as much transaction activity as possible off the main chain.
newbie
Activity: 18
Merit: 30
April 22, 2024, 08:09:27 PM
#14
Quote
I don't think that determining the type of the transaction by looking at its contents is feasible, mostly because of P2TR.

Perhaps P2TR could be its own transaction bin.

Quote
Requiring a process for allocating bins makes the solution extremely difficult, not just because process of consensus would be complex, but also because it could potentially be manipulated by miners.

Miners could, but they'd risk having their blocks rejected by nodes.

I like the idea of quadratic weight assignment; but as you mentioned, it has its own set of issues - though perhaps these issues are more straightforward, and consensus could be easier to reach.


Quote
The purpose here is to raise fees for certain types of transactions, and I don't think the average network fee is relevant. When I want to send bitcoins, I care about my fee and not the average. My fee only depends on my bin.

The purpose is not necessarily to raise fees (though it may in some bins); it's more to help smaller transactions and transactions that don't embed enormous amounts of metadata get processed in a timely manner and without insane fees. Perhaps the quadratic weighting solution is all that's needed. But from my perspective, it's likely something needs to be done to keep the network usable.

Quote
If desperate users increase their fees in order to get space in a different bin, then it doesn't affect me.

Nail on the head
legendary
Activity: 4466
Merit: 3391
April 22, 2024, 05:56:53 PM
#13
I don't think that determining the type of the transaction by looking at its contents is feasible, mostly because of P2TR. Requiring a process for allocating bins makes the solution extremely difficult, not just because process of consensus would be complex, but also because it could potentially be manipulated by miners.

Perhaps there could be a different approach. A possible solution might be to modify the transaction weight calculation and determine a transaction's weight based on quadratic weighting. Such a weighting would make a large transaction disproportionately expensive and would discourage inefficient use of block space.
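A tiny sketch of what such a quadratic weighting could look like (the base size and formula are invented for illustration; this is not an actual Bitcoin Core rule):

```python
# Hypothetical quadratic weighting: weight grows with the square of a
# transaction's size, normalized to some assumed "typical" size. The
# constant and formula are invented for illustration only.
BASE_VSIZE = 250  # assumed typical tx size in vbytes

def quadratic_weight(vsize: int) -> float:
    return vsize * (vsize / BASE_VSIZE)

small, large = quadratic_weight(250), quadratic_weight(1000)
# A 4x larger transaction carries 16x the weight, so bulky metadata-stuffed
# transactions are penalized without classifying them by type.
```

The appeal is that the rule is purely size-based, so no leaky transaction-type heuristics are needed.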

I think that abandoning the bin concept would simplify the solution tremendously. On the other hand, quadratic weighting might not solve the specific problem you are addressing, and it would certainly open up its own can of worms.


Quote
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

I don't believe that a solution to the problem of keeping fees low even exists. The purpose here is to raise fees for certain types of transactions, and I don't think the average network fee is relevant. When I want to send bitcoins, I care about my fee and not the average. My fee only depends on my bin. If desperate users increase their fees in order to get space in a different bin, then it doesn't affect me.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
April 22, 2024, 11:43:15 AM
#12
Quote
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

Quote
The average fee doesn't matter nearly as much as the median fee IMO. This should help bring down the latter.

Technically, the median is what I was referring to when I wrote "average", not the mean.

It would be like taking the mean of 100 people's income including Bill Gates, versus taking the median.
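That comparison in miniature, with made-up numbers:

```python
import statistics

# One outlier (a "Bill Gates" income) drags the mean far away from what a
# typical person earns, while the median barely notices it.
incomes = [50_000] * 99 + [100_000_000_000]  # 99 ordinary earners + 1 outlier

mean = statistics.mean(incomes)      # pulled above one billion by the outlier
median = statistics.median(incomes)  # stays at 50,000
```

The same logic applies to fees: one bin full of fee-bidding wars can inflate the mean while the median fee a normal user pays stays put.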
legendary
Activity: 3108
Merit: 2177
Playgram - The Telegram Casino
April 22, 2024, 11:42:18 AM
#11
Quote
This presumes two things are possible:
1.) Transactions can be accurately classified/segmented into meaningful bins (perhaps even simply binning by absolute memory size is sufficient).

2.) Consensus rules can be applied and verified at a network level

That's the thing though, neither is trivially solvable - if it can even be solved at all.

1) Reliable transaction classification would require leaky abstractions as I mentioned above. That's bad in regular software development, worse when it comes to the Bitcoin base layer. What exactly do you mean by "absolute memory size"? Are you referring to the size a transaction takes up in the mempool?

2) How exactly would you achieve dynamic consensus? Basing it on node count is prone to Sybil attacks. Basing it on hashrate would lead to chain splits.
newbie
Activity: 18
Merit: 30
April 22, 2024, 11:32:28 AM
#10
Quote
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

The average fee doesn't matter nearly as much as the median fee IMO. This should help bring down the latter.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
April 22, 2024, 11:20:05 AM
#9
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.
newbie
Activity: 18
Merit: 30
April 22, 2024, 10:14:23 AM
#8
Thanks for all the thoughts.

Several things I'd like to point out:


Quote
Segmenting blocks into pre-allocated spaces per transaction-type would require a hard fork whenever you want to change allocations and/or add/remove transaction types. Apart from general concerns of network stability, hard forks come with a lot of drama even with basic things such as the blocksize in general (as seen in the fork wars of 2017). I don't want to imagine what this would look like if you had to get everyone to agree on allocations per transaction type.


I don't believe a hard-fork should be required whenever allocations need to be altered. Allocations should be determined dynamically based on consensus rules.

This presumes two things are possible:
1.) Transactions can be accurately classified/segmented into meaningful bins (perhaps even simply binning by absolute memory size is sufficient).

2.) Consensus rules can be applied and verified at a network level
     -Since tx mempools may differ between nodes (as pointed out by ABCbits), dynamic determination of consensus rules may be challenging. But perhaps nodes could intermittently broadcast and maintain records of summary statistics of their tx mempools, from which soft rules can be determined (soft tx type distribution boundaries allow minor discrepancies between nodes to be mitigated by letting miners pick out transactions near the mean, away from the requirement bounds of the tx inclusion criteria).

     -Once a block is broadcast with the transactions included, other nodes should be able to verify that the included transactions meet the dynamically agreed upon tx type distribution requirements.

The way I envision this might occur is similar to how block difficulty is set and recognized across the network.

Changes to the algorithm/heuristics used to determine tx type/size distribution requirements would require forking, but once set, no forks are required.
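A hedged sketch of the "soft bounds from broadcast mempool statistics" idea above. Everything here (bin names, the median aggregation, the tolerance band) is invented to illustrate the mechanism, not a worked-out proposal:

```python
import statistics

# Each node reports the fraction of its mempool (by vbytes) per bin; the
# network-wide target is the median of those reports, and a block is
# accepted if each bin's share lies within a tolerance band around it.
def soft_bounds(node_reports: list[dict], tolerance: float = 0.10) -> dict:
    bounds = {}
    for b in node_reports[0]:
        target = statistics.median(r[b] for r in node_reports)
        bounds[b] = (max(0.0, target - tolerance), min(1.0, target + tolerance))
    return bounds

def block_ok(block_shares: dict, bounds: dict) -> bool:
    return all(lo <= block_shares[b] <= hi for b, (lo, hi) in bounds.items())

reports = [{"small": 0.6, "large": 0.4}, {"small": 0.7, "large": 0.3},
           {"small": 0.65, "large": 0.35}]
bounds = soft_bounds(reports)
```

The soft band is what absorbs minor mempool discrepancies between nodes; a 62/38 block would pass here while a 90/10 block would not. How the reports themselves would be made Sybil-resistant is left open, per HeRetiK's objection.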

Quote
Honestly with the amount of projects in the space coming and going I'm not sure where we'd even begin. Worse still, any new project would get pretty much locked out of the blockchain, unless they somehow manage to get the devs to "approve" their transaction type and everyone else to accept their proposal for blockspace re-allocation (and thus a hard fork).

A large "other" category allocation partially resolves this. Another option is to base bins/categories on project-agnostic metrics like tx size or a "UTXO consolidation ratio" (something like: if a tx has 3 UTXO inputs and 2 UTXO outputs (1.5 consolidation ratio), the network bins it (and prioritizes it) differently than a 1-UTXO-input, 2-UTXO-output (0.5 consolidation ratio) transaction). The metrics would effectively be designed to bin transactions in a way that enables use of the network for any purpose, but keeps network traffic jams isolated to a fraction of the block.
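The consolidation-ratio metric sketched above, as code (threshold and bin names are illustrative, not part of any actual proposal):

```python
# Project-agnostic "UTXO consolidation ratio": inputs divided by outputs.
# Ratios >= 1 shrink or maintain the UTXO set; ratios < 1 net-create UTXOs
# and could be binned (and prioritized) differently.
def consolidation_ratio(n_inputs: int, n_outputs: int) -> float:
    return n_inputs / n_outputs

def bin_for(tx_inputs: int, tx_outputs: int) -> str:
    if consolidation_ratio(tx_inputs, tx_outputs) >= 1.0:
        return "consolidating"
    return "expanding"

assert bin_for(3, 2) == "consolidating"  # ratio 1.5, as in the example above
assert bin_for(1, 2) == "expanding"      # ratio 0.5
```

Such a metric avoids type classification entirely, which also sidesteps vjudeu's point about disguising one transaction type as another.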



As for switching transaction types/sizes to fill gaps in block allocation requirements as mentioned:

Quote
Guess what: if you strictly required that, then people could switch from single-key addresses to 2-of-2 multisig, where both keys would be owned by the same person, just to bypass your limits.


Quote
No, because people will switch their other transaction types into whatever is cheaper. Which means that if single-key addresses become more expensive than 2-of-2 multisig, then people will apply 2-of-2 multisig to their single-key transactions.

I don't think this is necessarily a bad thing, especially if tx type conversions/alterations can only go from "more efficient/desirable" to "less efficient/desirable". I.e., you can't possibly convert an ordinal inscription to meet the criteria of the bin reserved for smaller/alternative transactions, despite that block partition having high fees/long wait times. But you could alter your efficient transaction to fit within the parameters of the less efficient portions of the block partition. This ensures block capacity remains highly utilized.



Admittedly, I'm an SWE but don't have much hands-on experience with the Bitcoin source code. I may be missing something.


 
hero member
Activity: 813
Merit: 1944
April 22, 2024, 06:58:33 AM
#7
Quote
Miners including transactions outside of the prescribed memory boundaries limitations (hard or soft to account for fluctuations in mempool tx type distributions), would have such blocks rejected by the network.
This is a bad idea, for many reasons. It should be applied as a local node policy, in the same way as, for example, minimal transaction fees were picked. Then it would be possible to change it without forking the network.

Quote
For example, 20% of block reserved specifically for Lightning transactions
Guess what: if you strictly required that, then people could switch from single-key addresses to 2-of-2 multisig, where both keys would be owned by the same person, just to bypass your limits.

Quote
It's a win for the miners (massive transaction fees within this part of the block - possibly even higher sat/byte due to decreased available block space)
This is not the case. If it were, then miners could shrink the maximum block size to 100 kB. And guess what: any mining pool can introduce such a rule without even recompiling the source code, because there is an option in the configuration file:
Code:
Block creation options:

  -blockmaxweight=
       Set maximum BIP141 block weight (default: 3996000)
And there are also related fields in the getblocktemplate command:
Code:
help getblocktemplate
getblocktemplate ( "template_request" )

...

  "sigoplimit" : n,                        (numeric) limit of sigops in blocks
  "sizelimit" : n,                         (numeric) limit of block size
  "weightlimit" : n,                       (numeric, optional) limit of block weight
So, if smaller blocks are so good for miners, then why haven't the biggest mining pools introduced any such rules yet?

Quote
and a win for the users (LN/general transactions can be included in blocks without exorbitant fees)
No, because people will switch their other transaction types into whatever is cheaper. Which means that if single-key addresses become more expensive than 2-of-2 multisig, then people will apply 2-of-2 multisig to their single-key transactions.

Quote
but I don't think it's against the Bitcoin ethos to enforce some structure around transaction priority.
It is acceptable if you enforce it locally, on your node. But I think it is a bad idea to enforce it at the consensus level.

Quote
Segmenting blocks into pre-allocated spaces per transaction-type would require a hard fork whenever you want to change allocations and/or add/remove transaction types.
Why? The only requirement is to keep the coinbase transaction; everything else can be empty if needed (or artificially filled, if you mess up the rules), and everything you want to add could be done in "v2 blocks", pointed to by the new coinbase transaction. So it could be a soft fork, but obviously it would be more complicated than it should be: https://petertodd.org/2016/forced-soft-forks#radical-changes

Quote
how do you handle the fact that each node has a slightly different TX set in its mempool?
That's why making local rules per node is much easier than including such things in consensus rules.
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
April 22, 2024, 05:10:49 AM
#6
FYI, a few months ago we discussed a somewhat similar idea in A Proposal for easy-to-close Lightning Channels (and other uses).

Quote
Agreed on the requirements, neither of which appear insurmountable
(1. Will likely be some sort of heuristic system, factoring in things like tx memsize, inscribed data, etc
2. Such rules would be built in directly to Bitcoin core).

Bitcoin Core is just one of many Bitcoin full node implementations. Besides the fact that your idea probably requires a soft/hard fork, how do you handle the fact that each node has a slightly different TX set in its mempool?
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
April 22, 2024, 03:45:44 AM
#5
Quote
For example, 20% of block reserved specifically for Lightning transactions, 20% reserved for ordinals, 60% reserved for general use.
Who's going to decide on those percentages? As much as I'd like the spam to stop, I don't think some "central authority in power" is the right way to do that. I also see no reason to reserve 20% for the spammers.
legendary
Activity: 3108
Merit: 2177
Playgram - The Telegram Casino
April 21, 2024, 06:38:38 PM
#4
At the base level Bitcoin doesn't know anything about LN-related transactions, Sidechain-related transactions, Ordinals, Colored Coins, etc. It arguably also shouldn't know anything about these things, neither explicitly (like the flag that indicates SegWit transactions) nor implicitly (via heuristics, as suggested). That's why these things are on a separate layer to begin with. The alternative is a brittle base layer that becomes more unreliable as new features and transaction types are added.

Which leads to the next problem: Segmenting blocks into pre-allocated spaces per transaction-type would require a hard fork whenever you want to change allocations and/or add/remove transaction types. Apart from general concerns of network stability, hard forks come with a lot of drama even with basic things such as the blocksize in general (as seen in the fork wars of 2017). I don't want to imagine what this would look like if you had to get everyone to agree on allocations per transaction type. Honestly, with the amount of projects in the space coming and going, I'm not sure where we'd even begin. Worse still, any new project would get pretty much locked out of the blockchain, unless they somehow manage to get the devs to "approve" their transaction type and everyone else to accept their proposal for blockspace re-allocation (and thus a hard fork).

TL;DR this would likely cause a lot of problems both on a technical and a political/social level.
newbie
Activity: 18
Merit: 30
April 21, 2024, 03:39:25 PM
#3
Agreed on the requirements, neither of which appear insurmountable
(1. Will likely be some sort of heuristic system, factoring in things like tx memsize, inscribed data, etc
2. Such rules would be built in directly to Bitcoin core).

But I'm curious about the sentiment towards such an approach? Has the Bitcoin community considered such a solution before? If so, why hasn't it been implemented?
(Simply time/dev investment required or is there consensus on counter-arguments to this approach?)
legendary
Activity: 4466
Merit: 3391
April 21, 2024, 03:26:44 PM
#2
Quote
Perhaps reserving some fixed (or slowly varying) portions of each block to specific transaction types could help resolve issues regarding the tradeoff between high fees and blockchain freedom.

I see two requirements:
  • 1. An unambiguous system for classifying transactions that matches your intent.
  • 2. A decentralized and verifiable method for allocating the partitions.
newbie
Activity: 18
Merit: 30
April 21, 2024, 11:40:08 AM
#1
Perhaps reserving some fixed (or slowly varying) portions of each block to specific transaction types could help resolve issues regarding the tradeoff between high fees and blockchain freedom.

For example: 20% of the block reserved specifically for Lightning transactions, 20% reserved for ordinals, 60% reserved for general use.
Miners including transactions outside of the prescribed memory boundaries (hard or soft, to account for fluctuations in mempool tx type distributions) would have such blocks rejected by the network.
This would help isolate the fee explosion caused by certain transaction types. It's a win for the miners (massive transaction fees within this part of the block - possibly even higher sat/byte due to decreased available block space) and a win for the users (LN/general transactions can be included in blocks without exorbitant fees).
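A rough sketch of the validation rule being proposed, given per-type byte totals in a candidate block. The type labels, sizes, and the classification step itself are all assumed to exist (which, as replies below note, is the hard part):

```python
# Hypothetical fixed-allocation check: no transaction type may exceed its
# reserved share of the block. All constants are illustrative.
ALLOCATIONS = {"lightning": 0.20, "ordinals": 0.20, "general": 0.60}
MAX_BLOCK_BYTES = 1_000_000  # simplified; real blocks use weight units

def block_valid(bytes_by_type: dict) -> bool:
    for tx_type, used in bytes_by_type.items():
        if used > ALLOCATIONS.get(tx_type, 0.0) * MAX_BLOCK_BYTES:
            return False  # nodes would reject this block
    return True

assert block_valid({"lightning": 150_000, "ordinals": 200_000, "general": 500_000})
assert not block_valid({"ordinals": 350_000})  # overshoots its 20% reservation
```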

I think Bitcoiners tend to agree that we shouldn't limit the utility of Bitcoin by disallowing any sort of transaction, but I don't think it's against the Bitcoin ethos to enforce some structure around transaction priority.

Seems feasible: different transaction types are already easily identifiable, or could be made even more so.

Of course, I understand this *may* result in some blocks having empty space depending on implementation, but it seems to me the tradeoff between enabling scalability (by further facilitating LN) and keeping the L1 chain reasonably open for large, finality-requiring transactions is well worth it.

Just looking to gather thoughts/sentiment towards an approach like this.