Topic: EXTREMELY Rough Concept: Expandable UTXO space

member
Activity: 126
Merit: 30
It's possible what follows contains logical mistakes.

I was toying with a similar idea where each output would be its own Utreexo. Since that's a forest of trees, an output would need to keep the roots, and each root would carry the amount sum of the elements in its tree. This way, we'd know the amount the Utreexo UTXO holds and can do the inflation check.
Much like Utreexo, a transaction comes with a list of inclusion proofs [proof1, proof2, ...] which gives us the inputs. A transaction also defines the outputs that are created. We check the signatures and that the transaction is well balanced, and then delete the inputs from the Utreexo tree and add the outputs as new elements.
I'm not sure I remember correctly, but I believe anyone can delete an element if they have the forest roots and the inclusion proof, and anyone can add an element if they have the element and the roots. Since we have both as part of transaction validation, anyone can update the Utreexo accumulator.

This obviously isn't compatible with Bitcoin today, but it may be an interesting direction to think in. Those interested in a specific Utreexo may have the tree saved locally and could share it with others in the tree if someone lost their inclusion proofs.
It may even be permissionless to put your UTXO into any Utreexo: simply spend a regular UTXO and add it as an element, which should be possible because we have the forest roots for all of them.

I admit I have not studied Utreexo as much as I would like; I only heard about it a month or so ago. My idea was slightly inspired by the basic idea of Utreexo, which is turning outputs into commitments to outputs for scaling purposes. But as far as I can tell, the Utreexo concept is more of a lite-client scaling technique, and in my opinion it probably needs to be reinforced with EC cryptography in order to become something that can be presented to a regular full node.

I may be incorrect, but I think the issue is that the Merkle tree being committed to is not some kind of tagged-branch commitment scheme like taproot, but more like a direct hash tree. My theory is that something with tagged commitments could be reproduced without needing to store inclusion proofs, similar to how taproot doesn't require you to hold all the involved public keys the way bare multisig does.
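To illustrate what I mean by tagged commitments, here is a rough Python sketch of taproot-style tagged hashing (the "TapLeaf"/"TapBranch" tag names are the real BIP340/341 tags, but the leaf contents and the two-leaf tree are made up, and real taproot leaves also commit to a leaf version):
Code:
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP340 tagged hash: sha256(sha256(tag) || sha256(tag) || msg)
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

def tap_branch(a: bytes, b: bytes) -> bytes:
    # Taproot hashes the two children in sorted order, so a spender
    # doesn't have to communicate left/right positions.
    lo, hi = sorted((a, b))
    return tagged_hash("TapBranch", lo + hi)

leaf_a = tagged_hash("TapLeaf", b"<script A>")
leaf_b = tagged_hash("TapLeaf", b"<script B>")
root = tap_branch(leaf_a, leaf_b)
# Spending via script A reveals only leaf_b's hash, not script B itself.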


Edit: However, your idea does seem like it could improve the user experience of Utreexo by allowing users to share a network of known roots.
member
Activity: 60
Merit: 89
It's possible what follows contains logical mistakes.

I was toying with a similar idea where each output would be its own Utreexo. Since that's a forest of trees, an output would need to keep the roots, and each root would carry the amount sum of the elements in its tree. This way, we'd know the amount the Utreexo UTXO holds and can do the inflation check.
Much like Utreexo, a transaction comes with a list of inclusion proofs [proof1, proof2, ...] which gives us the inputs. A transaction also defines the outputs that are created. We check the signatures and that the transaction is well balanced, and then delete the inputs from the Utreexo tree and add the outputs as new elements.
I'm not sure I remember correctly, but I believe anyone can delete an element if they have the forest roots and the inclusion proof, and anyone can add an element if they have the element and the roots. Since we have both as part of transaction validation, anyone can update the Utreexo accumulator.

This obviously isn't compatible with Bitcoin today, but it may be an interesting direction to think in. Those interested in a specific Utreexo may have the tree saved locally and could share it with others in the tree if someone lost their inclusion proofs.
It may even be permissionless to put your UTXO into any Utreexo: simply spend a regular UTXO and add it as an element, which should be possible because we have the forest roots for all of them.
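A very naive Python sketch of the amount-annotated part (this is NOT real Utreexo, which keeps only the forest roots and updates them in O(log n); this toy rebuilds one tree from scratch, and all names are made up):
Code:
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def node(left, right):
    # Each node is (hash, amount_sum); the hash commits to both
    # children including their sums.
    lh, lsum = left
    rh, rsum = right
    return (h(lh + lsum.to_bytes(8, "big") + rh + rsum.to_bytes(8, "big")),
            lsum + rsum)

def root(leaves):
    # leaves: list of (element_hash, amount). Pad to a power of two
    # with zero-amount filler so the tree is complete.
    level = list(leaves)
    while len(level) & (len(level) - 1):
        level.append((h(b"pad"), 0))
    while len(level) > 1:
        level = [node(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A transaction spends elements and creates new ones; the inflation
# check is that the root's amount sum never grows on its own.
utxos = [(h(b"alice"), 50), (h(b"bob"), 30)]
r_before = root(utxos)
utxos = [u for u in utxos if u[0] != h(b"alice")]       # delete input
utxos += [(h(b"carol"), 20), (h(b"alice-change"), 30)]  # add outputs
r_after = root(utxos)
assert r_before[1] == r_after[1] == 80  # total amount conserved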
member
Activity: 126
Merit: 30
So an idea like CoinPool?

https://coinpool.dev/v0.1.pdf
I never heard of it before. According to them, the goal is:
Quote
CoinPool allows many users to share a UTXO and make instant off-chain transfers inside the UTXO while allowing withdrawals at any time without permission from other users.

My idea is basically the first part of this, where many users share a UTXO; the off-chain transfer idea is good, but it was not part of my plan. I am curious to see whether this is DOA or still being developed.

So aggregated UTXOs that do not require lots of interactivity.

newbie
Activity: 2
Merit: 14
So an idea like CoinPool?

https://coinpool.dev/v0.1.pdf
member
Activity: 126
Merit: 30
Note: merge all of your consecutive replies into one post.

By creating a UTXO that aggregates other UTXOs, you would be able to greatly reduce the size of the UTXO set on the blockchain. This could potentially lead to faster and more efficient transactions.
However, this approach may lead to increased complexity in the verification process. In order to verify that a UTXO is valid and can be spent, the blockchain would need to verify not only the inclusion proof of the aggregate UTXO, but also the inclusion proofs of all of the individual UTXOs that make up the aggregate UTXO.
Can you come up with a solution to this problem?

All a UTXO is, is a pair of a hex string (in this case, a 32-byte transaction hash) and an integer denoting the output number (which can usually be represented in just 1 byte with a uint8_t, and even if that overflows, a 2-byte uint16_t will definitely be enough). Either way, it's a numbers problem for other areas.

Suppose you have billions of UTXOs; this equates to several GB of UTXO data. You can't simply compress the UTXO set, as that will only delay the inevitable.

So perhaps a shredding algorithm could be implemented, where the oldest UTXOs past a particular age threshold are ruthlessly shredded from the UTXO set (even if they represent thousands of bitcoins), and spending such a UTXO would require a node to scan for it from the beginning of the blockchain. In other words, we don't cache extremely old UTXOs.

And in case blockchain culling is also implemented - where, instead of the Genesis block and the first X thousand blocks, you have a "coinbase transaction" containing the unspent outputs from those X blocks - there still isn't any fear of UTXOs being wiped out of the blockchain, because they would still be there.

I think what you are suggesting is a commitment-based pruning of very old UTXOs, which might be efficient for things that are not considered full nodes, but which I think would be controversial to push into a full node. My idea may be controversial too, but not because it would require old UTXO owners to maintain their chainstate; rather, because it implies a new convolution in the way that an output can be spent. However, I want to be clear that my idea does not involve altering the way old chainstate is processed; it would only affect blocks that adopt it.

Edit: Although, on second thought, I think there could be a way to use this model to scale old chainstate at the discretion of the node operator.

Pushing it to Bitcoin Core? Oh yeah, most features proposed for it are definitely controversial. That's why a new full node client with these features could be created instead, bypassing the Core dev status quo. And, if the full node is any good, new people who want to get into the network will run this new pruning full node, while the old guard continues to run Bitcoin Core to avoid permanently losing old blocks.

Even a 10:1 ratio of these new nodes against Bitcoin Core nodes would be good for the network, as it would have adoption equivalent to L2 layers, and even these L2 features could be incorporated directly inside the node.

I would argue that if it is sound and an improvement, it would be merged into Core eventually.




Generally speaking, though, a one-way compression of the UTXO set would likely be considered a lite client rather than a full node, regardless of whether it was in Core or not. My feature would ideally be full-node compatible.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
Note: merge all of your consecutive replies into one post.

By creating a UTXO that aggregates other UTXOs, you would be able to greatly reduce the size of the UTXO set on the blockchain. This could potentially lead to faster and more efficient transactions.
However, this approach may lead to increased complexity in the verification process. In order to verify that a UTXO is valid and can be spent, the blockchain would need to verify not only the inclusion proof of the aggregate UTXO, but also the inclusion proofs of all of the individual UTXOs that make up the aggregate UTXO.
Can you come up with a solution to this problem?

All a UTXO is, is a pair of a hex string (in this case, a 32-byte transaction hash) and an integer denoting the output number (which can usually be represented in just 1 byte with a uint8_t, and even if that overflows, a 2-byte uint16_t will definitely be enough). Either way, it's a numbers problem for other areas.

Suppose you have billions of UTXOs; this equates to several GB of UTXO data. You can't simply compress the UTXO set, as that will only delay the inevitable.

So perhaps a shredding algorithm could be implemented, where the oldest UTXOs past a particular age threshold are ruthlessly shredded from the UTXO set (even if they represent thousands of bitcoins), and spending such a UTXO would require a node to scan for it from the beginning of the blockchain. In other words, we don't cache extremely old UTXOs.

And in case blockchain culling is also implemented - where, instead of the Genesis block and the first X thousand blocks, you have a "coinbase transaction" containing the unspent outputs from those X blocks - there still isn't any fear of UTXOs being wiped out of the blockchain, because they would still be there.

I think what you are suggesting is a commitment-based pruning of very old UTXOs, which might be efficient for things that are not considered full nodes, but which I think would be controversial to push into a full node. My idea may be controversial too, but not because it would require old UTXO owners to maintain their chainstate; rather, because it implies a new convolution in the way that an output can be spent. However, I want to be clear that my idea does not involve altering the way old chainstate is processed; it would only affect blocks that adopt it.

Edit: Although, on second thought, I think there could be a way to use this model to scale old chainstate at the discretion of the node operator.

Pushing it to Bitcoin Core? Oh yeah, most features proposed for it are definitely controversial. That's why a new full node client with these features could be created instead, bypassing the Core dev status quo. And, if the full node is any good, new people who want to get into the network will run this new pruning full node, while the old guard continues to run Bitcoin Core to avoid permanently losing old blocks.

Even a 10:1 ratio of these new nodes against Bitcoin Core nodes would be good for the network, as it would have adoption equivalent to L2 layers, and even these L2 features could be incorporated directly inside the node.
member
Activity: 126
Merit: 30
I think there is a problem with the fact that ultimately UTXOs must be addressed individually, so somewhere there must be a set of individual UTXOs. And while there is a potential optimization of the node software in classifying UTXOs according to their likelihood of being spent in the next few blocks, that would make the node software more efficient, but it wouldn't affect the protocol.

Also, I suggest that you check out this thread that discusses adding UTXO commitments to the blockchain: Ultimate blockchain compression w/ trust-free lite nodes


I agree with what you are saying; however, I believe this is not what my proposal intends to do. Your comment is more in line with the comment that NotATether wrote, which, as I pointed out above, has different implications than what I am proposing, since my proposal would not affect the current chainstate.
legendary
Activity: 4522
Merit: 3426
I think there is a problem with the fact that ultimately UTXOs must be addressed individually, so somewhere there must be a set of individual UTXOs. And while there is a potential optimization of the node software in classifying UTXOs according to their likelihood of being spent in the next few blocks, that would make the node software more efficient, but it wouldn't affect the protocol.

Also, I suggest that you check out this thread that discusses adding UTXO commitments to the blockchain: Ultimate blockchain compression w/ trust-free lite nodes
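A toy Python sketch of that kind of node-side classification (the two-tier split and the promotion rule are made up for illustration; the closest real analogue is Bitcoin Core's in-memory coins cache over the on-disk chainstate):
Code:
class TieredUtxoView:
    def __init__(self):
        self.hot = {}   # outpoint -> coin, kept in RAM
        self.cold = {}  # stand-in for the slower on-disk store

    def add(self, outpoint, coin, likely_spent_soon: bool):
        (self.hot if likely_spent_soon else self.cold)[outpoint] = coin

    def lookup(self, outpoint):
        if outpoint in self.hot:
            return self.hot[outpoint]
        coin = self.cold.get(outpoint)
        if coin is not None:
            self.hot[outpoint] = coin  # promote on access
        return coin

view = TieredUtxoView()
view.add(("tx1", 0), {"amount": 10}, likely_spent_soon=True)
view.add(("tx2", 3), {"amount": 99}, likely_spent_soon=False)
assert view.lookup(("tx2", 3)) == {"amount": 99}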
member
Activity: 126
Merit: 30
What if we can create a UTXO which is an aggregate of other UTXOs, then when one wants to spend they can commit to an inclusion proof similarly to how one would in Schnorr?

If I understood your statement correctly, why don't we just perform UTXO consolidation? And on a technical level, I fail to see how it's possible when each UTXO has its own unlocking/spend conditions and maybe also a different script type.

So the basic idea is that we could consolidate/aggregate multiple UTXOs into one without needing to represent on chain that the aggregate UTXO is an aggregate (so it looks like a normal consolidation, but it is actually paying multiple parties who are (probably?) not sharing a Schnorr commitment). I am not currently confident enough to give a reasonable account of how it might work within Bitcoin Script; I just want to float the idea around to get some opinions.

I think one technical constraint is how many UTXOs could be aggregated at once; I suspect this will have a computational limit that should be considered, as should cryptographically avoiding collisions within the aggregate.
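As a back-of-the-envelope sketch of that limit (assuming, purely hypothetically, that the aggregate is Merkle-tree shaped with 32-byte hashes):
Code:
import math

# One 32-byte sibling hash per tree level, so proofs grow with log2(n).
for n in (16, 1024, 2**20):
    depth = math.ceil(math.log2(n))
    print(f"{n:>7} aggregated outputs -> {depth:>2} hashes -> {32 * depth} byte proof")
# Collisions inside the aggregate would be a 256-bit-hash birthday
# problem (~2^128 work), so proof size, not collisions, is the
# binding constraint under this assumption.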



By creating a UTXO that aggregates other UTXOs, you would be able to greatly reduce the size of the UTXO set on the blockchain. This could potentially lead to faster and more efficient transactions.
However, this approach may lead to increased complexity in the verification process. In order to verify that a UTXO is valid and can be spent, the blockchain would need to verify not only the inclusion proof of the aggregate UTXO, but also the inclusion proofs of all of the individual UTXOs that make up the aggregate UTXO.
Can you come up with a solution to this problem?

Keeping in mind these are all just pie-in-the-sky ideas at the moment, my theory is that, similarly to how the prover of a taproot script only has to reveal tagged branches of the commitment in order to spend, the prover of an aggregate UTXO would reveal tagged branches of the aggregate UTXO, such that they don't need to prove anything for keys or outputs other than the ones that are explicitly theirs. So in some sense this means the UTXO itself needs to be cryptographically encoded to retain these commitments.

Edit: Furthermore, Schnorr signatures are batch-verifiable, which as far as I have heard is good for efficiency; ideally these proofs would have the same property, which would preserve the scaling properties of Taproot/Schnorr.
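A minimal Python sketch of the reveal-only-your-branch idea (the encoding is entirely hypothetical, and I'm using a plain Merkle tree rather than tagged hashes to keep it short):
Code:
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def verify_branch(leaf: bytes, path, root: bytes) -> bool:
    # path: list of (sibling_hash, sibling_is_right) pairs from the
    # leaf level up to the root.
    acc = h(leaf)
    for sibling, sibling_is_right in path:
        acc = h(acc + sibling) if sibling_is_right else h(sibling + acc)
    return acc == root

# Four outputs; Dave proves only his own, revealing 2 hashes rather
# than the other 3 leaves.
leaves = [h(x) for x in (b"alice", b"bob", b"carol", b"dave")]
n01, n23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(n01 + n23)
dave_path = [(leaves[2], False), (n01, False)]
assert verify_branch(b"dave", dave_path, root)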



By creating a UTXO that aggregates other UTXOs, you would be able to greatly reduce the size of the UTXO set on the blockchain. This could potentially lead to faster and more efficient transactions.
However, this approach may lead to increased complexity in the verification process. In order to verify that a UTXO is valid and can be spent, the blockchain would need to verify not only the inclusion proof of the aggregate UTXO, but also the inclusion proofs of all of the individual UTXOs that make up the aggregate UTXO.
Can you come up with a solution to this problem?

All a UTXO is, is a pair of a hex string (in this case, a 32-byte transaction hash) and an integer denoting the output number (which can usually be represented in just 1 byte with a uint8_t, and even if that overflows, a 2-byte uint16_t will definitely be enough). Either way, it's a numbers problem for other areas.

Suppose you have billions of UTXOs; this equates to several GB of UTXO data. You can't simply compress the UTXO set, as that will only delay the inevitable.

So perhaps a shredding algorithm could be implemented, where the oldest UTXOs past a particular age threshold are ruthlessly shredded from the UTXO set (even if they represent thousands of bitcoins), and spending such a UTXO would require a node to scan for it from the beginning of the blockchain. In other words, we don't cache extremely old UTXOs.

And in case blockchain culling is also implemented - where, instead of the Genesis block and the first X thousand blocks, you have a "coinbase transaction" containing the unspent outputs from those X blocks - there still isn't any fear of UTXOs being wiped out of the blockchain, because they would still be there.

I think what you are suggesting is a commitment-based pruning of very old UTXOs, which might be efficient for things that are not considered full nodes, but which I think would be controversial to push into a full node. My idea may be controversial too, but not because it would require old UTXO owners to maintain their chainstate; rather, because it implies a new convolution in the way that an output can be spent. However, I want to be clear that my idea does not involve altering the way old chainstate is processed; it would only affect blocks that adopt it.

Edit: Although, on second thought, I think there could be a way to use this model to scale old chainstate at the discretion of the node operator.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
By creating a UTXO that aggregates other UTXOs, you would be able to greatly reduce the size of the UTXO set on the blockchain. This could potentially lead to faster and more efficient transactions.
However, this approach may lead to increased complexity in the verification process. In order to verify that a UTXO is valid and can be spent, the blockchain would need to verify not only the inclusion proof of the aggregate UTXO, but also the inclusion proofs of all of the individual UTXOs that make up the aggregate UTXO.
Can you come up with a solution to this problem?

All a UTXO is, is a pair of a hex string (in this case, a 32-byte transaction hash) and an integer denoting the output number (which can usually be represented in just 1 byte with a uint8_t, and even if that overflows, a 2-byte uint16_t will definitely be enough). Either way, it's a numbers problem for other areas.
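For concreteness, a small Python sketch of those sizes (the 36-byte figure is Bitcoin's actual outpoint wire encoding with a 4-byte uint32 index; the 2-byte-index variant is a hypothetical compact encoding):
Code:
import struct

txid = bytes.fromhex("aa" * 32)        # placeholder 32-byte transaction hash
wire = txid + struct.pack("<I", 5)     # consensus outpoint: 32 + 4 = 36 bytes
compact = txid + struct.pack("<H", 5)  # hypothetical uint16 index: 34 bytes
print(len(wire), len(compact))         # -> 36 34
# At a billion outpoints that's still ~36 GB before scripts and amounts,
# so tighter integer encodings alone barely move the needle.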

Suppose you have billions of UTXOs; this equates to several GB of UTXO data. You can't simply compress the UTXO set, as that will only delay the inevitable.

So perhaps a shredding algorithm could be implemented, where the oldest UTXOs past a particular age threshold are ruthlessly shredded from the UTXO set (even if they represent thousands of bitcoins), and spending such a UTXO would require a node to scan for it from the beginning of the blockchain. In other words, we don't cache extremely old UTXOs.

And in case blockchain culling is also implemented - where, instead of the Genesis block and the first X thousand blocks, you have a "coinbase transaction" containing the unspent outputs from those X blocks - there still isn't any fear of UTXOs being wiped out of the blockchain, because they would still be there.
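A rough Python sketch of that shredding rule (the data layout, threshold, and roughly-one-year figure are all made up for illustration):
Code:
def shred(utxo_set: dict, tip_height: int, max_age: int) -> dict:
    # utxo_set maps (txid, vout) -> (creation_height, amount, script).
    # Anything older than the cutoff falls out of the cached set and
    # would need a full chain rescan to spend.
    cutoff = tip_height - max_age
    return {op: data for op, data in utxo_set.items()
            if data[0] >= cutoff}

utxos = {
    ("old_tx", 0): (100_000, 50, b"..."),
    ("new_tx", 1): (800_000, 30, b"..."),
}
cached = shred(utxos, tip_height=810_000, max_age=52_560)  # ~1 year of blocks
assert ("old_tx", 0) not in cached and ("new_tx", 1) in cached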
newbie
Activity: 21
Merit: 7
By creating a UTXO that aggregates other UTXOs, you would be able to greatly reduce the size of the UTXO set on the blockchain. This could potentially lead to faster and more efficient transactions.
However, this approach may lead to increased complexity in the verification process. In order to verify that a UTXO is valid and can be spent, the blockchain would need to verify not only the inclusion proof of the aggregate UTXO, but also the inclusion proofs of all of the individual UTXOs that make up the aggregate UTXO.
Can you come up with a solution to this problem?
member
Activity: 126
Merit: 30
In a Schnorr signature we basically commit to an inclusion proof of an aggregate public key.

What if we could create a UTXO which is an aggregate of other UTXOs, so that when someone wants to spend, they commit to an inclusion proof similarly to how one would in Schnorr?

Thoughts? Obviously this is just a rough idea at the moment.
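To make the Schnorr part slightly more concrete, here is a toy Python sketch over a small discrete-log group (NOT secp256k1, NOT secure parameters, and naive key-summing like this is vulnerable to rogue-key attacks, which MuSig-style per-key coefficients exist to prevent):
Code:
import hashlib, secrets

p = 2**127 - 1  # toy prime modulus (not a safe real-world choice)
q = p - 1       # toy exponent modulus
g = 3           # toy generator

def H(*parts) -> int:
    data = b"".join(x.to_bytes(16, "big") for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Two signers; the on-chain key is just the product of their pubkeys.
x1, x2 = secrets.randbelow(q), secrets.randbelow(q)
P_agg = pow(g, x1, p) * pow(g, x2, p) % p  # = g^(x1+x2)

# Naive co-signing; in practice each party contributes to k and s,
# and a real scheme also hashes the message being signed.
k = secrets.randbelow(q)
R = pow(g, k, p)
e = H(R, P_agg)
s = (k + e * (x1 + x2)) % q

# The verifier sees one key and one signature, with no hint that the
# key was an aggregate of several parties.
assert pow(g, s, p) == R * pow(P_agg, e, p) % p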