
Topic: Segmenting/Reserving Block Space (Read 519 times)

sr. member
Activity: 1190
Merit: 469
May 03, 2024, 09:46:06 PM
#35

Then you have the same problems again as if you "ban" OP_RETURN and Taproot. People will create, mint, and transfer tokens with fake P2(W)PKH transactions.

You have to think again about such authoritarian measures: they simply don't work in a decentralized network if there is an alternative which is allowed.

Even if vjudeu's method to identify spendable public keys works, these keys could still be faked; you would only have some more restrictions in the "number space". Imagine a sort of "VanityGen" for public keys. You could even store the information in a private key and compute the public key from it; this would make the transaction technically "spendable". Nobody stops you from publishing private keys if they are only used for information storage.

you seem to think people sit around figuring out ways to spend their money inefficiently to store data on the blockchain. they don't. they never did, and it never caught on.

but when you come along and introduce a 75% off sale (aka segwit) and then tell them they can store unlimited data in their 75%-off transaction fee, well, why WOULDN'T that cause a problem?

so go right ahead. store the entire king james version of the bible onto the bitcoin blockchain using OP_RETURN. see if anyone cares or complains. the only one complaining will be YOU, because it cost you so much money.
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
May 03, 2024, 07:28:17 PM
#34
it can't be something people can opt out of or have to opt in to. It should be something that happens automatically based on their transaction type.
Then you have the same problems again as if you "ban" OP_RETURN and Taproot. People will create, mint, and transfer tokens with fake P2(W)PKH transactions.

You have to think again about such authoritarian measures: they simply don't work in a decentralized network if there is an alternative which is allowed.

Even if vjudeu's method to identify spendable public keys works, these keys could still be faked; you would only have some more restrictions in the "number space". Imagine a sort of "VanityGen" for public keys. You could even store the information in a private key and compute the public key from it; this would make the transaction technically "spendable". Nobody stops you from publishing private keys if they are only used for information storage.
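The "information in a private key" trick is easy to illustrate: nearly every 32-byte string is a valid secp256k1 scalar, so arbitrary data can itself serve as a private key. A minimal sketch (the message bytes here are made up):

```python
# Sketch: arbitrary 32-byte data doubles as a valid secp256k1 private key.
# Anyone who learns the data can spend the corresponding output, which is
# what makes such a transaction technically "spendable".

# secp256k1 group order; any integer in [1, N-1] is a valid private key
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

message = b"this text doubles as a privkey!!"   # exactly 32 bytes of "data"
assert len(message) == 32

d = int.from_bytes(message, "big")
assert 1 <= d < N          # in range, hence a usable private key
```

Almost all 32-byte strings fall below N (the order is extremely close to 2^256), so nearly any payload works without modification.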
sr. member
Activity: 1190
Merit: 469
May 03, 2024, 12:28:27 AM
#33
Yep, that's "in theory" the way I'd approve of taking NFTs out of the Bitcoin blockchain. There was a quite elaborate proposal called L2O (website: https://l2o.io/) describing even a method to take BRC-20 tokens out to a sidechain (a zk rollup), where they would continue to "live".

I wrote "in theory" however because the problem is the stubbornness of the Ordinals community, which insists on its NFTs being available on "OG Bitcoin".
it can't be something people can opt out of or have to opt in to. It should be something that happens automatically based on their transaction type.

It is not needed. You can just point to any 256-bit value, for example the R-value of some signature, and then reveal your commitment on a separate chain. Because if you create "OP_RETURN <data>", then everyone will know: "hey, that transaction may have a monkey or something". But if you point to the R-value of your signature, nobody knows that; it is then just a regular payment, and you only know that anything is committed to it if you connect to the separate network.
these people buying ordinals need the illusion that their data is stored ON the bitcoin blockchain though. a sidechain probably wouldn't really matter to them. most of them wouldn't even be technical enough to understand what was really going on: that there was a main chain and a side chain, that the side chain is where all the bloat transactions got put, and that not everyone running a node stored the sidechain.

Quote
This is another reason why tweaking public keys and signatures is better: in that case, the size of your transaction is left unchanged, so you can keep all amounts, inputs, and outputs the same. This is important, because then you don't have to calculate both cases, "with the monkey" and "without the monkey". You only prepare one transaction, and just tweak your signatures accordingly.
i don't know how that would work where you could tweak public keys and signatures to store monkeys. i'm talking about legacy-type transactions. surely people could try, but it's going to be expensive i would imagine! because they would have to do a lot of transactions. that's why it never caught on in the first place, i would imagine.

but anyhow i think you know more about some of these technical details than i do
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
May 02, 2024, 08:58:27 AM
#32
"segregating the monkeys" method.
Yep, that's "in theory" the way I'd approve of taking NFTs out of the Bitcoin blockchain. There was a quite elaborate proposal called L2O (website: https://l2o.io/) describing even a method to take BRC-20 tokens out to a sidechain (a zk rollup), where they would continue to "live".

I wrote "in theory" however because the problem is the stubbornness of the Ordinals community, which insists on its NFTs being available on "OG Bitcoin". The value proposition of these tokens and NFT collections was very much tied to the fact that they were stored on the Bitcoin blockchain. There are lots of cheaper chains, and I believe Litecoin or Dogecoin are nearly as secure as BTC when it comes to long-term availability, but the LTC and DOGE Ordinals spinoffs never really took off.

Ordinals NFTs are however much less of a problem now than in 2023, and I believe at least Ordinals is about to die (see particularly the post-halving stats: total Ordinals size on the Bitcoin chain since then is less than 10 vMB per day, equivalent to a little more than two blocks). Runes may be around a little longer, but they seem to have been a very short-lived hype (most Runes are already making losses only days after the halving).
copper member
Activity: 906
Merit: 2258
May 02, 2024, 02:59:11 AM
#31
Quote
BLOCK 456 would contain a hash of M BLOCK 456 and so on.
It is not needed. You can just point to any 256-bit value, for example the R-value of some signature, and then reveal your commitment on a separate chain. Because if you create "OP_RETURN <data>", then everyone will know: "hey, that transaction may have a monkey or something". But if you point to the R-value of your signature, nobody knows that; it is then just a regular payment, and you only know that anything is committed to it if you connect to the separate network.

Quote
and the only increase in disk space they would see is one single extra hash from the M BLOCK
This is another reason why tweaking public keys and signatures is better: in that case, the size of your transaction is left unchanged, so you can keep all amounts, inputs, and outputs the same. This is important, because then you don't have to calculate both cases, "with the monkey" and "without the monkey". You only prepare one transaction, and just tweak your signatures accordingly.
sr. member
Activity: 1190
Merit: 469
May 01, 2024, 11:50:02 PM
#30
The fee problem seems to be an enduring one, especially since fees fluctuate just as Bitcoin's price does.

Space reservation, as presented, might seem like a good idea to solve fee problems, but Bitcoin is just Bitcoin in the end. It's the network and what it charges that we are really looking at here; at the miners' end, when a transaction is initiated, is there really a segmentation? I don't think so. This would force a separation over an issue that shouldn't even exist, as Bitcoin is still only Bitcoin.

Perhaps further developments toward quicker transaction broadcast would require yet another separation of the block space too; Bitcoin doesn't need that, I think.

you could do something like this:


MAIN CHAIN: BLOCK 456...BLOCK 457....BLOCK 458....

SIDE-BLOCK WITH MONKEYS: M BLOCK 456...M BLOCK 457...M BLOCK 458

BLOCK 456 would contain a hash of M BLOCK 456, and so on. But people running nodes could elect not to download the M BLOCKS, and the only increase in disk space they would see is one single extra hash from each M BLOCK. don't care about the monkeys and don't want to download them? then don't, and don't verify their hash. because you don't care about them, and that's ok!

i call this the "segregating the monkeys" method.
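A minimal sketch of that commitment scheme, assuming SHA-256 and made-up block contents: the main chain carries only the 32-byte hash of each side-block, so an opted-out node pays 32 bytes per block:

```python
# Sketch of the "segregating the monkeys" idea: each main-chain block
# carries only a 32-byte hash of its side-block, so nodes that skip the
# side-blocks store one extra hash per block. Names are illustrative.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

side_block_456 = b"...big monkey jpegs and inscriptions..."
m_commitment   = h(side_block_456)          # the only data on the main chain

main_block_456 = b"header||txs||" + m_commitment

# a "full + monkeys" node verifies the side-block against the commitment:
assert h(side_block_456) == m_commitment

# a pruned node just stores main_block_456; its overhead is 32 bytes:
assert len(m_commitment) == 32
```

This is essentially the same commitment structure used by merge-mining and by SegWit's witness commitment: consensus on the hash, optional download of the committed data.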

full member
Activity: 203
Merit: 106
May 01, 2024, 02:29:41 AM
#29
The fee problem seems to be an enduring one, especially since fees fluctuate just as Bitcoin's price does.

Space reservation, as presented, might seem like a good idea to solve fee problems, but Bitcoin is just Bitcoin in the end. It's the network and what it charges that we are really looking at here; at the miners' end, when a transaction is initiated, is there really a segmentation? I don't think so. This would force a separation over an issue that shouldn't even exist, as Bitcoin is still only Bitcoin.

Perhaps further developments toward quicker transaction broadcast would require yet another separation of the block space too; Bitcoin doesn't need that, I think.
sr. member
Activity: 1190
Merit: 469
April 29, 2024, 12:45:21 AM
#28
Quote
just to solve a small problem

Susceptibility of what has the potential to be the backbone of the global monetary system to what are effectively DDOS attacks is a small problem in your opinion?

potential doesn't mean it will definitely do that. so you're working off of a possibly faulty premise, which could lead to all kinds of questionable judgements and conclusions.

but do you really think that segregating transactions into categories and trying to enforce quotas on each category within every block is a reasonable thing? apparently it's a genius idea and it's all we've been needing this whole time, but no one thought of it until now. now all you have to do is get it done.

imagine being willing to destroy your whole cryptocurrency just because of some monkeys popping up here and there.
newbie
Activity: 18
Merit: 30
April 28, 2024, 10:33:12 AM
#27
Quote
just to solve a small problem

Susceptibility of what has the potential to be the backbone of the global monetary system to what are effectively DDOS attacks is a small problem in your opinion?

I'm guessing you're probably pro-ossification lol.
sr. member
Activity: 1190
Merit: 469
April 26, 2024, 10:56:47 PM
#26

Merit doesn't mean "like" or "agree". In this case, it means it's worth reading, and not spam. And I'm loaded in sMerit.

if you disagreed with it then what made it worth reading exactly? to me it seemed like going way overboard: hard-forking bitcoin just to solve a small problem. the "cure" is worse than the disease, if that makes any sense. you really want to change the entire structure of bitcoin blocks and require a hard fork just because of some monkeys? how is that even worth discussing?

i'm not against throwing a few merits to a new user trying to make contributions through their ideas. just in this case, he's way off. try again, i would say.
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
April 26, 2024, 01:11:23 PM
#25
Are there any promising solutions in development/consideration? Or is furthering development of LN the primary approach?
I think LN is still the "primary" approach in the Bitcoin community itself. It has a few problems though, for example the replacement cycling attack, which still makes it unsuitable for larger payments.

Sidechain projects and concepts I know of which are not federated (i.e. not under the centralized management of a static multisig "federation" of users) are:

- Drivechain (problem: needs new opcode, is not well liked by some Bitcoin Core devs)
- Stacks (if everything works well it will be rolled out this year, problem: has a premined token for consensus)
- Nomic (already live, but the peg bridge is limited because it's in an audit process, has also a premined token for consensus)
- Forum user @vjudeu seems to be developing some sidechain too, but I think it is still not public.

Federated sidechains are primarily Elements and RSK, which are live already for years.

Then there is the rollup concept, where the data is stored on a sidechain and on mainchain in a compressed form. It already works well on Ethereum, but the popular projects (for example, Optimism) also have premined tokens. On Bitcoin it would probably also need new opcodes. There's an info website for rollups on Bitcoin. I read recently that the Avail project is about to launch on Bitcoin.

There is also the extension block concept, which afaik was implemented in Litecoin for the Mimblewimble privacy technology. It's also a kind of sidechain. It was rejected however in 2017 when it was proposed for Bitcoin as an alternative to a block size increase.

Premined tokens are a problem because they make a project centralized in some way: there will always be a founder group which is able to extract profits. This goes a bit against Bitcoin's ethos, and thus sidechain concepts (except Drivechain) are mostly viewed as "altcoins" by many Bitcoiners. What I could imagine, however, as most projects are open source, is that you could fork these projects once they work and build a version without a premined token.

I think the rollup concept and Stacks/Nomic are the closest to being implemented, and I expect some to be fully working this year.
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
April 26, 2024, 08:18:58 AM
#24
but you merited OP's original posting, which contained the above statement in bold. and now you're saying you don't like the idea? that seems like a contradiction.
Merit doesn't mean "like" or "agree". In this case, it means it's worth reading, and not spam. And I'm loaded in sMerit.
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
April 26, 2024, 04:18:24 AM
#23
Are there any promising solutions in development/consideration? Or is furthering development of LN the primary approach?

There are a few things that come to mind:
  • PTLCs as a replacement for the HTLCs commonly used by LN; PTLCs have a smaller on-chain TX size/weight.
  • Taproot Assets and the RGB protocol, which enable tokens, NFTs and more on LN.
  • Various Bitcoin sidechains and L2s.

For example, 20% of the block reserved specifically for lightning transactions, 20% reserved for ordinals, 60% reserved for general use.
Who's going to decide on those percentages? As much as I'd like the spam to stop, I don't think some "central authority in power" is the right way to do that. I also see no reason to reserve 20% for the spammers.
but you merited OP's original posting, which contained the above statement in bold. and now you're saying you don't like the idea? that seems like a contradiction.

It's probably merit for OP's effort; i also do that on occasion.
sr. member
Activity: 1190
Merit: 469
April 25, 2024, 10:10:37 PM
#22
For example, 20% of the block reserved specifically for lightning transactions, 20% reserved for ordinals, 60% reserved for general use.
Who's going to decide on those percentages? As much as I'd like the spam to stop, I don't think some "central authority in power" is the right way to do that. I also see no reason to reserve 20% for the spammers.

but you merited OP's original posting, which contained the above statement in bold. and now you're saying you don't like the idea? that seems like a contradiction.

even for people that dislike ordinals, i don't think they would agree with the OP's idea. and neither would people who use ordinals, because their fees would go up. but that may be giving them too much credit, since i don't even know if many of them follow bitcoin too closely, even while spamming the blockchain... with their monkeys.

just as a simple example of things that could go wrong: say a particular block didn't have any ordinals. or not enough to fill up the 20% quota. that's wasted block space which could have been used to lower transaction fees for "ordinary" transactions.
newbie
Activity: 18
Merit: 30
April 25, 2024, 09:23:28 PM
#21
Thanks for the in-depth example.
Seems like this is a more difficult problem than it appears at the surface.

Are there any promising solutions in development/consideration? Or is furthering development of LN the primary approach?
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
April 25, 2024, 08:30:16 PM
#20
Consider the most simple case; say we wish to implement binary binning - one bin with a very precise and very common transaction op code signature, let's just say P2PKH: [OP_DUP, OP_HASH160, Hash160(seckey.pub), OP_EQUALVERIFY, OP_CHECKSIG] (I know this is pretty much a legacy op-code, just using it for simplicity).[...]
I can't imagine that any inscription could occur via this particular op sequence (could it?).
Yes, it can. You can encode the necessary metadata in the nSequence field, like EPOBC did, or create fake public key hashes aka addresses (P2PKH/P2WPKH) or fake public keys (P2PK). P2(W)PKH offers fewer bytes.

Basically, to explain it in simple terms, what you would do is create a fake address containing the data of the tokens. Let's say you represent something like P:DRC20:t:PEPE:v:2000 (P for Protocol, t for Token [symbol] and v for Value) first as hexadecimal numbers (503a44524332303a743a504550453a763a32303030) and then encode it into bech32, and this becomes an address (bc1qqqqqqqqqqqqqqqqqqpgr53zjgverqwn58fgy25z98fmr5v3sxqcqqyvute, you can try it here). Nobody would be able to spend this UTXO, however, so it would clutter the UTXO set forever.

You now create an additional output with 1 satoshi to the address you want to mint/transfer the token to, optionally an output for change coins, and that's all that's needed: two or three P2(W)PKH outputs. While the bech32 address in this example looks a bit strange, this is only because I had to pad the hex value with zeroes, as it was too short (it has to be either 20 or 32 bytes).

Even if you created a completely new transaction type with even less data available, the "fake address" method would still work. You could try to separate out transactions with more than one output, as it's difficult to encode everything in one P2(W)PKH scriptPubKey, but it would perhaps still be possible. More importantly, you would then make all transactions with even one satoshi of change more expensive, so this would be unfeasible.
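The encoding described above can be sketched with the bech32 encoder from the BIP173 reference implementation (the payload string is the one from this post; the zero-padding choice and helper names are mine):

```python
# Sketch of the "fake address" trick: arbitrary bytes, zero-padded to 32,
# encoded as a v0 witness-program bech32 address. Encoder follows BIP173.
CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def polymod(values):
    GEN = [0x3B6A57B2, 0x26508E6D, 0x1EA119FA, 0x3D4233DD, 0x2A1462B3]
    chk = 1
    for v in values:
        b = chk >> 25
        chk = (chk & 0x1FFFFFF) << 5 ^ v
        for i in range(5):
            chk ^= GEN[i] if ((b >> i) & 1) else 0
    return chk

def hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def checksum(hrp, data):
    pm = polymod(hrp_expand(hrp) + data + [0] * 6) ^ 1
    return [(pm >> 5 * (5 - i)) & 31 for i in range(6)]

def convertbits(data, frombits, tobits):
    acc = bits = 0
    out = []
    for value in data:
        acc = (acc << frombits) | value
        bits += frombits
        while bits >= tobits:
            bits -= tobits
            out.append((acc >> bits) & ((1 << tobits) - 1))
    if bits:
        out.append((acc << (tobits - bits)) & ((1 << tobits) - 1))
    return out

def encode_v0(hrp, program):
    data = [0] + convertbits(list(program), 8, 5)     # witness version 0
    return hrp + "1" + "".join(CHARSET[d] for d in data + checksum(hrp, data))

payload = b"P:DRC20:t:PEPE:v:2000".rjust(32, b"\x00")  # pad to 32 bytes
addr = encode_v0("bc", payload)
print(addr)   # a syntactically valid, almost surely unspendable address
```

The long run of leading q characters in the resulting address is exactly the zero-padding mentioned above; a node cannot distinguish this output from a normal 32-byte witness program.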
newbie
Activity: 18
Merit: 30
April 25, 2024, 05:58:56 PM
#19
Quote
I think you should really re-read my, HeRetiK's, odolvlobos' and vjudeu's posts to understand what's wrong with your proposal.

I believe I understand the point you all are making.
In summary, you feel as though there's no way to design bins in a manner that would prevent people from simply circumventing them by cleverly designing their transactions to fit into low-fee bins (which may in turn actually make the issue worse, because those transactions might become even less efficient).

 
Still, is there really no broad based way to accomplish something like this?

Consider the most simple case; say we wish to implement binary binning - one bin with a very precise and very common transaction op code signature, let's just say P2PKH: [OP_DUP, OP_HASH160, Hash160(seckey.pub), OP_EQUALVERIFY, OP_CHECKSIG] (I know this is pretty much a legacy op-code, just using it for simplicity).
Couldn't we help such transactions occur without issue by ensuring some fraction of each block is available for  such transactions?
I can't imagine that any inscription could occur via this particular op sequence (could it?).

If we can achieve this, could we not expand scope to parse out other common and precise transaction types?
The majority of the block could remain "all transactions" (including those which are specifically reserved).

Perhaps it's not possible with LN atm (I'm not sure of the op codes used to open/close LN channels), but it appears possible at a basic level.
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
April 25, 2024, 12:54:12 PM
#18
Are there any op codes that are integral to ordinals/other inscription that aren't critical for facilitating true L1/L2 BTC monetary txs? OP_RETURN, OP_PUSHBYTES, OP_PUSHDATA come to mind, but I haven't studied Lightning / other L2s enough to be sure these aren't required.
But that's exactly the point! Mechanisms like Stampchain's SRC-20 use only opcodes common in "normal" transactions (in this case OP_CHECKMULTISIG). Yes, OP_RETURN is (afaik) only used by "data transactions" (the other ones you mentioned, to my understanding, have other use cases), but it was made standard in Bitcoin 0.9+ to lower the impact of token systems and data transactions on validating nodes.

Now imagine you "ban" OP_RETURN from the main bin and fees for OP_RETURN txes rise because their "bin" becomes congested - everybody wanting to use tokens on BTC would then use Stampchain or similar mechanisms, and the "main bin" becomes congested again (with worse consequences due to increased resource usage).

If you also ban multisig transactions from the main bin you affect Lightning, and multisig is not even necessary for such a protocol. There are older protocols that use the sequence number for metadata, e.g. the first version of EPOBC.
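As a hypothetical illustration of the sequence-number approach (the tag constant and bit layout here are invented for the example, not EPOBC's actual encoding):

```python
# Illustrative only: how a protocol in the EPOBC family can hide a few
# bytes of metadata in the 32-bit nSequence field of an input. The tag
# value and layout are made up, not EPOBC's real constants.
PROTO_TAG = 0x25          # hypothetical 6-bit protocol marker

def make_nsequence(payload: int) -> int:
    # low 6 bits: tag; remaining 26 bits: protocol payload
    assert 0 <= payload < 2**26
    return (payload << 6) | PROTO_TAG

def parse_nsequence(nseq: int):
    if nseq & 0x3F != PROTO_TAG:
        return None            # looks like an ordinary sequence number
    return nseq >> 6

nseq = make_nsequence(12345)
assert parse_nsequence(nseq) == 12345
assert nseq < 2**32            # fits the consensus field unchanged
```

The point for the "bin" debate: such a transaction is byte-for-byte a normal payment; no filter on opcodes or outputs can see the metadata.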

By the way, regarding Lightning: there may be situations where it would be an advantage for "LN transactions" to be able to get all the necessary block space. If you restrict LN transactions (in whatever way) to a 20% "bin", then you may for example delay the closure of channels if a massive node tries to attack.

I think you should really re-read my, HeRetiK's, odolvlobos' and vjudeu's posts to understand what's wrong with your proposal.
newbie
Activity: 18
Merit: 30
April 25, 2024, 12:18:11 PM
#17
Quote
I believe a partitioning of block space to increase the fees of data transactions is even less likely to get merged.

The intent of such partitioning *is not* to increase the fees of data transactions (though that may be a byproduct); the intent is to ensure that there's space available in blocks for BTC's intended use case as a currency (this includes facilitating L2 transactions).


Quote
SRC-20 is an insanely inefficient and dangerous protocol: it encodes the metadata inside a regular multisig output, i.e. creates a "fake" public key with the data of a JSON(!) text. While these transactions may have some structural elements in common, in reality nobody can tell whether you are transacting coins with such a transaction or whether it's encoded metadata. If there were some heuristic detecting them reliably, they could simply change the protocol slightly and it wouldn't be detected anymore.

I said "dangerous" because this kind of protocol creates a ton of UTXOs which will never be spent, and all validating nodes must take them into account and waste resources. Similar protocols were already around in 2013/14 and motivated the "legalization" of OP_RETURN for arbitrary data storage of up to 80 bytes in 2014 (the opcode is the base for token mechanisms like Runes, Counterparty and Omni; it was already added by Satoshi but was non-standard until v0.9).

You will never be able to keep all versions of all those protocols under control. You would have to adjust the "rules" for the "bins" constantly, and even then those wanting to store useless metadata would still be able to bypass your rules. Protocols could even offer several transaction mechanisms for the same type of token to fit into different bins, so users could always use the cheapest bin.

Are there any op codes that are integral to ordinals/other inscription that aren't critical for facilitating true L1/L2 BTC monetary txs? OP_RETURN, OP_PUSHBYTES, OP_PUSHDATA come to mind, but I haven't studied Lightning / other L2s enough to be sure these aren't required.
Perhaps partitioning blocks based on tx op codes could be broad-based enough to allow fairly static rules?



I'm continuing to push this both for educational and brainstorming purposes.
I'd really love to see a solution that mitigates the potential for what are effectively DDOS attacks on BTC, while preserving the ability for BTC to be multi-functional and uncensored.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
April 25, 2024, 01:11:48 AM
#16
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

I don't believe that a solution to the problem of keeping fees low even exists. The purpose here is to raise fees for certain types of transactions, and I don't think the average network fee is relevant.

This is basically what Luke-Jr was trying to do a couple of weeks ago by patching the datacarrier checks to interpret TapScripts (well, not exactly increasing fees, but making large data transactions infeasible). It never reached consensus, obviously, so it was never merged. It would've enforced a limit on the size of TapScripts. A similar failed pull request for enforcing the witness size limit is here.


I believe a partitioning of block space to increase the fees of data transactions is even less likely to get merged.
legendary
Activity: 3906
Merit: 6249
Decentralization Maximalist
April 25, 2024, 12:33:39 AM
#15
I also don't like this idea, unfortunately.

Purpose is [...] to help smaller transactions and transactions excluding enormous amounts of metadata get processed in a timely manner and without insane fees.
HeRetiK and vjudeu have already written that you can't really tell if a transaction contains metadata or not. And if there is a significant fee increase for a group of transactions, it will try to escape its "bin".

I'll give you a practical example so you see what could happen if such a proposal were implemented: Stampchain's SRC-20 (a protocol created to "improve" BRC-20, an Ordinals-based token protocol which clogged the blockchain last year - but in fact SRC-20 is even worse).

SRC-20 is an insanely inefficient and dangerous protocol: it encodes the metadata inside a regular multisig output, i.e. creates a "fake" public key with the data of a JSON(!) text. While these transactions may have some structural elements in common, in reality nobody can tell whether you are transacting coins with such a transaction or whether it's encoded metadata. If there were some heuristic detecting them reliably, they could simply change the protocol slightly and it wouldn't be detected anymore.

I said "dangerous" because this kind of protocol creates a ton of UTXOs which will never be spent, and all validating nodes must take them into account and waste resources. Similar protocols were already around in 2013/14 and motivated the "legalization" of OP_RETURN for arbitrary data storage of up to 80 bytes in 2014 (the opcode is the base for token mechanisms like Runes, Counterparty and Omni; it was already added by Satoshi but was non-standard until v0.9).

You will never be able to keep all versions of all those protocols under control. You would have to adjust the "rules" for the "bins" constantly, and even then those wanting to store useless metadata would still be able to bypass your rules. Protocols could even offer several transaction mechanisms for the same type of token to fit into different bins, so users could always use the cheapest bin.

Quote
But from my perspective, it's likely something needs to be done to keep the network usable.

We have discussed some related ideas extensively in several Ordinals-related threads for about a year now. The only idea which could really help is to change the protocol to be more similar to Monero, or even better Grin. Maybe a pre-reservation of block space (link was already provided by ABCbits above) could help at least to achieve more "even" fee behaviour too, but I'm not sure about that; it brings a lot of additional complexity. Even some Bitcoin devs proposed "solutions" which simply didn't work (Luke-Jr's heuristic code, Ordisrespector ...). In my opinion, the best way is to improve L2s (LN, sidechains, statechains etc.) to move as much transaction activity as possible off the main chain.
newbie
Activity: 18
Merit: 30
April 22, 2024, 08:09:27 PM
#14
I don't think that determining the type of the transaction by looking at its contents is feasible, mostly because of P2TR.

Perhaps P2TR could be its own transaction bin.

Quote
Requiring a process for allocating bins makes the solution extremely difficult, not just because the process of reaching consensus would be complex, but also because it could potentially be manipulated by miners.

Miners could, but they'd risk having their blocks rejected by nodes.

I like the idea of quadratic weight assignment, but as you mentioned, it has its own set of issues - though perhaps these issues are more straightforward, and a consensus could be easier to reach.


Quote
The purpose here is to raise fees for certain types of transactions, and I don't think the average network fee is relevant. When I want to send bitcoins, I care about my fee and not the average. My fee only depends on my bin.

The purpose is not necessarily to raise fees (though it may in some bins); it's more to help smaller transactions, and transactions without enormous amounts of metadata, get processed in a timely manner and without insane fees. Perhaps the quadratic weighting solution is all that's needed. But from my perspective, it's likely something needs to be done to keep the network usable.

Quote
If desperate users increase their fees in order to get space in a different bin, then it doesn't affect me.

Nail on the head
legendary
Activity: 4466
Merit: 3391
April 22, 2024, 05:56:53 PM
#13
I don't think that determining the type of a transaction by looking at its contents is feasible, mostly because of P2TR. Requiring a process for allocating bins makes the solution extremely difficult, not just because the process of reaching consensus would be complex, but also because it could potentially be manipulated by miners.

Perhaps there could be a different approach. A possible solution might be to modify the transaction weight calculation and determine a transaction's size based on quadratic weighting. Such a weighting would make a large transaction extra expensive and would discourage inefficient use of block space.

I think that abandoning the bin concept would simplify the solution tremendously. On the other hand, quadratic weighting might not solve the specific problem you are addressing, and it would certainly open up its own can of worms.
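The quadratic idea can be sketched in a few lines (the pivot constant is an arbitrary illustration, not part of any actual proposal):

```python
# Sketch of quadratic weighting: charge weight proportional to vsize^2
# instead of vsize, so oversized data transactions pay disproportionately.
PIVOT = 250   # vbytes; roughly a simple payment - a hypothetical constant

def quadratic_weight(vsize: int) -> int:
    # equal to vsize at the pivot, growing quadratically beyond it
    return vsize * vsize // PIVOT

payment     = quadratic_weight(250)    # ordinary spend
inscription = quadratic_weight(4000)   # large data-carrying transaction

assert payment == 250                  # unchanged for a typical payment
assert inscription == 64000            # 16x the size -> 256x the weight
```

Note the can of worms mentioned above: this also penalizes legitimately large transactions such as batched payouts and coinjoins, not just data storage.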


I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

I don't believe that a solution to the problem of keeping fees low even exists. The purpose here is to raise fees for certain types of transactions, and I don't think the average network fee is relevant. When I want to send bitcoins, I care about my fee and not the average. My fee only depends on my bin. If desperate users increase their fees in order to get space in a different bin, then it doesn't affect me.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
April 22, 2024, 11:43:15 AM
#12
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

The average fee doesn't matter nearly as much as the median fee IMO. This should help bring down the latter.

Technically, the median is what I was referring to when I wrote "average", not the mean.

It would be like taking the mean of 100 people's income, including Bill Gates, versus taking the median.
legendary
Activity: 3122
Merit: 2178
Playgram - The Telegram Casino
April 22, 2024, 11:42:18 AM
#11
This presumes two things are possible:
1.) Transactions can be accurately classified/segmented into meaningful bins (perhaps even simply binning by absolute memory size is sufficient).

2.) Consensus rules can be applied and verified at a network level

That's the thing though: neither is trivially solvable, if it can be solved at all.

1) Reliable transaction classification would require leaky abstractions as I mentioned above. That's bad in regular software development, worse when it comes to the Bitcoin base layer. What exactly do you mean by "absolute memory size"? Are you referring to the size a transaction takes up in the mempool?

2) How exactly would you achieve dynamic consensus? Basing it on node count is prone to Sybil attacks; basing it on hashrate would lead to chain splits.
newbie
Activity: 18
Merit: 30
April 22, 2024, 11:32:28 AM
#10
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.

The average fee doesn't matter nearly as much as the median fee IMO. This should help bring down the latter.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
April 22, 2024, 11:20:05 AM
#9
I don't like this idea. It doesn't solve the problem of keeping fees low and the mempool normal, because when you artificially limit the amount of bytes one kind of transaction gets in a block, desperate users will still increase the average network fee anyway.
newbie
Activity: 18
Merit: 30
April 22, 2024, 10:14:23 AM
#8
Thanks for all the thoughts.

Several things I'd like to point out:


Segmenting blocks into pre-allocated spaces per transaction type would require a hard fork whenever you want to change allocations and/or add/remove transaction types. Apart from general concerns about network stability, hard forks come with a lot of drama even over things as basic as the blocksize (as seen in the fork wars of 2017). I don't want to imagine what this would look like if you had to get everyone to agree on allocations per transaction type.


I don't believe a hard-fork should be required whenever allocations need to be altered. Allocations should be determined dynamically based on consensus rules.

This presumes two things are possible:
1.) Transactions can be accurately classified/segmented into meaningful bins (perhaps even simply binning by absolute memory size is sufficient).

2.) Consensus rules can be applied and verified at a network level
     -Since the tx mempool may differ between nodes (as pointed out by ABCbits), dynamically determining consensus rules may be challenging. But perhaps nodes could intermittently broadcast and maintain records of summary statistics of their tx mempools, from which soft rules can be derived (soft tx-type distribution boundaries would let miners mitigate minor discrepancies between nodes by picking transactions near the mean, away from the bounds of the tx inclusion criteria).

     -Once a block is broadcast with the transactions included, other nodes should be able to verify that the included transactions meet the dynamically agreed-upon tx type distribution requirements.

The way I envision this might work is similar to how block difficulty is set and recognized across the network.

Changes to the algorithm/heuristics used to determine tx type/size distribution requirements would require a fork, but once set, no further forks are required.
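As a sketch of that difficulty-style mechanism: every node runs the same deterministic algorithm over the same chain data, so all nodes arrive at the same allocations without extra coordination. All names, the window size, and the minimum-share floor below are assumptions for illustration, not anything in Bitcoin Core:

```python
# Hypothetical sketch: derive the next window's bin allocations
# deterministically from the observed transaction-type shares in the
# previous window, the way difficulty is recomputed from observed block
# times. Deterministic inputs -> identical allocations on every node.

RETARGET_WINDOW = 2016  # assumed: reuse the difficulty retarget interval
MIN_SHARE = 0.05        # assumed floor so no bin is ever starved entirely

def next_allocations(observed_shares: dict) -> dict:
    """observed_shares: fraction of block space each tx class used last window.
    Returns normalized per-bin allocations for the next window."""
    clamped = {k: max(v, MIN_SHARE) for k, v in observed_shares.items()}
    total = sum(clamped.values())
    return {k: v / total for k, v in clamped.items()}

print(next_allocations({"general": 0.70, "lightning": 0.28, "other": 0.02}))
```

Here the "other" bin is lifted to the floor and everything is renormalized, analogous to how difficulty retargeting clamps extreme adjustments.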

Quote
Honestly, with the number of projects in the space coming and going, I'm not sure where we'd even begin. Worse still, any new project would be pretty much locked out of the blockchain unless it somehow managed to get the devs to "approve" its transaction type and everyone else to accept its proposal for blockspace re-allocation (and thus a hard fork).

A large "other" category allocation partially resolves this. Another option is to base bins/categories on project-agnostic metrics like tx size or a "UTXO consolidation ratio": for example, a tx with 3 UTXO inputs and 2 UTXO outputs (1.5 consolidation ratio) would be binned (and prioritized) differently than a tx with 1 UTXO input and 2 UTXO outputs (0.5 consolidation ratio). The metrics would effectively be designed to bin transactions in a way that enables use of the network for any purpose, but keeps network traffic jams isolated to a fraction of the block.
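A minimal sketch of that consolidation-ratio metric; the bin names and the 1.0 threshold are illustrative assumptions, not an existing rule:

```python
# Sketch of the project-agnostic "UTXO consolidation ratio" metric
# described above: inputs consumed divided by outputs created.
# Threshold and bin names are hypothetical.

def consolidation_ratio(num_inputs: int, num_outputs: int) -> float:
    """Ratio > 1 means the tx shrinks the UTXO set; < 1 means it grows it."""
    return num_inputs / num_outputs

def bin_for(num_inputs: int, num_outputs: int) -> str:
    ratio = consolidation_ratio(num_inputs, num_outputs)
    if ratio >= 1.0:
        return "consolidating"  # shrinks the UTXO set; could be prioritized
    return "expanding"          # grows the UTXO set

print(bin_for(3, 2))  # ratio 1.5 -> "consolidating"
print(bin_for(1, 2))  # ratio 0.5 -> "expanding"
```

Because the metric looks only at input/output counts, it needs no knowledge of what protocol (LN, ordinals, etc.) produced the transaction.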



As for switching transaction types/sizes to fill gaps in block allocation requirements as mentioned:

Quote
Guess what: if you would strictly require that, then people could switch from single-key addresses into 2-of-2 multisig, where both keys would be owned by the same person, just to bypass your limits.


Quote
No, because people will switch their other transaction types into what will be cheaper. Which means, that if single-key address will be more expensive than 2-of-2 multisig, then people will apply 2-of-2 multisig on their single-key transactions.

I don't think this is necessarily a bad thing, especially if tx type conversions/alterations can only go from "more efficient/desirable" to "less efficient/desirable". I.e., you couldn't convert an ordinal inscription to meet the criteria of the bin reserved for smaller/alternative transactions, despite that block partition having high fees/long wait times, but you could alter your efficient transaction to fit within the parameters of less efficient portions of the block partition. This ensures block capacity remains highly utilized.



Admittedly, I'm an SWE but don't have much hands-on experience with the Bitcoin source code. I may be missing something.


 
copper member
Activity: 906
Merit: 2258
April 22, 2024, 06:58:33 AM
#7
Quote
Miners including transactions outside of the prescribed memory boundaries limitations (hard or soft to account for fluctuations in mempool tx type distributions), would have such blocks rejected by the network.
This is a bad idea, for many reasons. It should be applied as a local node policy, in the same way that, for example, minimal transaction fees were picked. Then it would be possible to change it without forking the network.

Quote
For example, 20% of block reserved specifically for lightning transactions
Guess what: if you would strictly require that, then people could switch from single-key addresses into 2-of-2 multisig, where both keys would be owned by the same person, just to bypass your limits.

Quote
It's a win for the miners (massive transaction fees within this part of the block - possibly even higher sat/byte due to decreased available block space)
This is not the case. If it were, then miners could shrink the maximum size of the block to 100 kB. And guess what: any mining pool can introduce such a rule without even recompiling the source code, because there is an option in the configuration file:
Code:
Block creation options:

  -blockmaxweight=
       Set maximum BIP141 block weight (default: 3996000)
And also, there are options in getblocktemplate command:
Code:
help getblocktemplate
getblocktemplate ( "template_request" )

...

  "sigoplimit" : n,                        (numeric) limit of sigops in blocks
  "sizelimit" : n,                         (numeric) limit of block size
  "weightlimit" : n,                       (numeric, optional) limit of block weight
So, if smaller blocks are so good for the miners, then why haven't the biggest mining pools introduced any such rules yet?

Quote
and a win for the users (LN/general transactions can be included in blocks without exorbitant fees)
No, because people will switch their other transaction types into whatever is cheaper. Which means that if a single-key address becomes more expensive than 2-of-2 multisig, then people will apply 2-of-2 multisig to their single-key transactions.

Quote
but I don't think it's against the Bitcoin ethos to enforce some structure around transaction priority.
It is acceptable if you enforce it locally, on your node. But I think it is a bad idea to enforce it at the consensus level.

Quote
Segmenting blocks into pre-allocated spaces per transaction-type would require a hard fork whenever you want to change allocations and/or add/remove transaction types.
Why? The only requirement is to keep the coinbase transaction; everything else can be empty if needed (or artificially filled, if you mess with the rules), and everything you want to add could be done in "v2 blocks", pointed to by the new coinbase transaction. So it could be a soft fork, but obviously it would be more complicated than it should be: https://petertodd.org/2016/forced-soft-forks#radical-changes

Quote
how do you handle the fact that each node has a slightly different TX set in its mempool?
That's why making local rules for each node is much easier than including such things in the consensus rules.
legendary
Activity: 2870
Merit: 7490
Crypto Swap Exchange
April 22, 2024, 05:10:49 AM
#6
FYI, a few months ago we discussed a somewhat similar idea in A Proposal for easy-to-close Lightning Channels (and other uses).

Agreed on the requirements, neither of which appear insurmountable
(1. Will likely be some sort of heuristic system, factoring in things like tx memsize, inscribed data, etc
2. Such rules would be built directly into Bitcoin Core).

Bitcoin Core is just one of many Bitcoin full node implementations. Besides the fact that your idea probably requires a soft/hard fork, how do you handle the fact that each node has a slightly different TX set in its mempool?
legendary
Activity: 3290
Merit: 16489
Thick-Skinned Gang Leader and Golden Feather 2021
April 22, 2024, 03:45:44 AM
#5
For example, 20% of block reserved specifically for lightning transactions, 20% reserved for ordinals, 60% reserved for general use.
Who's going to decide on those percentages? As much as I'd like the spam to stop, I don't think some "central authority in power" is the right way to do that. I also see no reason to reserve 20% for the spammers.
legendary
Activity: 3122
Merit: 2178
Playgram - The Telegram Casino
April 21, 2024, 06:38:38 PM
#4
At the base level Bitcoin doesn't know anything about LN-related transactions, Sidechain-related transactions, Ordinals, Colored Coins, etc. It arguably also shouldn't know anything about these things, neither explicitly (like the flag that indicates SegWit transactions) nor implicitly (via heuristics, as suggested). That's why these things are on a separate layer to begin with. The alternative is a brittle base layer that becomes more unreliable as new features and transaction types are added.

Which leads to the next problem: segmenting blocks into pre-allocated spaces per transaction type would require a hard fork whenever you want to change allocations and/or add/remove transaction types. Apart from general concerns about network stability, hard forks come with a lot of drama even over things as basic as the blocksize (as seen in the fork wars of 2017). I don't want to imagine what this would look like if you had to get everyone to agree on allocations per transaction type. Honestly, with the number of projects in the space coming and going, I'm not sure where we'd even begin. Worse still, any new project would be pretty much locked out of the blockchain unless it somehow managed to get the devs to "approve" its transaction type and everyone else to accept its proposal for blockspace re-allocation (and thus a hard fork).

TL;DR this would likely cause a lot of problems both on a technical and a political/social level.
newbie
Activity: 18
Merit: 30
April 21, 2024, 03:39:25 PM
#3
Agreed on the requirements, neither of which appear insurmountable
(1. Will likely be some sort of heuristic system, factoring in things like tx memsize, inscribed data, etc
2. Such rules would be built directly into Bitcoin Core).

But I'm curious about the sentiment towards such an approach? Has the Bitcoin community considered such a solution before? If so, why hasn't it been implemented?
(Simply time/dev investment required or is there consensus on counter-arguments to this approach?)
legendary
Activity: 4466
Merit: 3391
April 21, 2024, 03:26:44 PM
#2
Perhaps reserving some fixed (or slowly varying) portions of each block to specific transaction types could help resolve issues regarding the tradeoff between high fees and blockchain freedom.

I see two requirements:
  • 1. An unambiguous system for classifying transactions that matches your intent.
  • 2. A decentralized and verifiable method for allocating the partitions.
newbie
Activity: 18
Merit: 30
April 21, 2024, 11:40:08 AM
#1
Perhaps reserving some fixed (or slowly varying) portions of each block to specific transaction types could help resolve issues regarding the tradeoff between high fees and blockchain freedom.

For example, 20% of the block reserved specifically for lightning transactions, 20% reserved for ordinals, and 60% reserved for general use.
Miners including transactions outside of the prescribed memory boundary limitations (hard or soft, to account for fluctuations in mempool tx type distributions) would have their blocks rejected by the network.
This would help isolate the fee explosion caused by certain transaction types. It's a win for the miners (massive transaction fees within this part of the block, possibly even a higher sat/byte rate due to decreased available block space) and a win for the users (LN/general transactions can be included in blocks without exorbitant fees)

I think Bitcoiners tend to agree that we shouldn't limit the utility of Bitcoin by disallowing any sort of transaction, but I don't think it's against the Bitcoin ethos to enforce some structure around transaction priority.

Seems feasible; different transaction types are already easily identifiable, or could be made even more so.

Of course, I understand this *may* result in some blocks having empty space depending on the implementation, but it seems to me the tradeoff between enabling scalability (by further facilitating LN) and keeping the L1 chain reasonably open for large, finality-requiring transactions is well worth it.

Just looking to gather thoughts/sentiment towards an approach like this.