
Topic: 1000x throughput on transaction batching + privacy boost using Script Bitfields.

hero member
Activity: 718
Merit: 545
Just noticed this,

https://github.com/JeremyRubin/bips/blob/op-checkoutputshashverify/bip-coshv.mediawiki

from,

https://twitter.com/JeremyRubin/status/1130580983923654661

Seems similar, although the implementation is different. Some great use cases.

This bit rang a bell..

Quote
Congestion Controlled Transactions

When there is a large demand for blockspace it can become very expensive to make payments. By using CHECKOUTPUTSHASHVERIFY, a large volume payment processor may aggregate all their payments into a single O(1) transaction for purposes of confirmation. Then, some time later, the payments can be expanded out of that UTXO when the demand for blockspace is decreased.
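To make that concrete, here's a minimal sketch of committing a batch of payments to a single 32-byte outputs hash, in the spirit of the quote above (the serialization and names here are illustrative assumptions, not the BIP's exact format):

Code:
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def outputs_hash(payments: list[tuple[int, bytes]]) -> bytes:
    """Commit to a whole batch of (amount, scriptPubKey) payments.

    The confirmed transaction carries only this 32-byte digest; the
    full output list is expanded later, when blockspace is cheap.
    Serialization here is illustrative, not the BIP's exact format.
    """
    blob = b"".join(
        amount.to_bytes(8, "little") + len(spk).to_bytes(1, "little") + spk
        for amount, spk in payments
    )
    return sha256d(blob)

# A payment processor batches 1024 payouts into one O(1) commitment:
batch = [(50_000, bytes(22)) for _ in range(1024)]
print(outputs_hash(batch).hex())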

Nice name..
legendary
Activity: 1456
Merit: 1176
Always remember the cause!
@spartacus,
We need also to elaborate more on your idea about data being distributed between users instead of replicated in nodes.

Although your proposed algorithm/protocol does not exactly comply with this idea, as users need to disclose the data eventually and it goes to the blockchain and stays there permanently, hence replicated, it would be important to understand what this idea actually implies: sharding of the state!

The data under consideration is nothing other than a part of the bitcoin state machine: which nested outputs of this output are still unspent? Trying to keep this data out of traditional full nodes is exactly such a concept. But we know that it is not the original design target and we are not in such a context. That is why I strongly recommend dropping the idea and remaining focused on the core Spartacus Nested Output Protocol, SNOP. Invented this term just now.
legendary
Activity: 1456
Merit: 1176
Always remember the cause!
@Ali

The amount of data we are now talking about is quite significant. To keep track of 1 billion outputs (1024*1024*1024) as a bitfield, in a centralised fashion, requires 128 MB. And that is for a single output, and does not include all the proofs (then it's really big)!
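The arithmetic behind that figure, as a quick sanity check:

Code:
# One spent/unspent bit per nested output, centrally stored:
outputs = 1024 ** 3                 # three nested levels of 1024 leaves
bitfield_bytes = outputs // 8       # 134,217,728 bytes
print(bitfield_bytes // (1024 ** 2), "MiB")   # -> 128 MiB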

By using this scripting technique (with the dreaded covenants.. i like, you no like), instead of the miners storing it, all the data is kept by the users. And each user will only store the tiny amount relevant to themselves  (no different to storing a public key - it's not even secure data.. just proofs).
A hypothetical 1-billion-nested-outputs txn can be handled by making a time/space trade-off. Nodes can implement it without storing a single extra byte, by querying the blockchain for the history, or they can maintain an index to make it more efficient.

There is no way to "distribute" data between users: eventually every node has to approve that a single nested output is not double spent, and they need the data, a copy of it, to reach consensus; it is how blockchains work, remember? On each spending attempt, nodes have to verify that it has not been attempted before, and they need to either query the history or check an exclusively maintained data structure for speed purposes. A bitfield, definitely.


I'm going to have to re-raise your '..whole covenant thing and txn commitment to bitfields which is absolutely unnecessary..' and say 'Come on then - what's your way that is simpler, cleaner and more efficient than this way?'

In my proposal everything is straightforward: users maintain their leaf proofs and supply them, along with other supplementary data, to fulfill the nested output script (pubkeys, signatures, scripts, etc.); nodes maintain 1024-bit bitfields internally and eventually tick the respective nested output as spent.
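A minimal sketch of that node-side bookkeeping (class and method names are mine, purely illustrative): one 1024-bit bitfield per batch output, with a spend ticking the respective bit and a repeat spend rejected.

Code:
class NestedOutputTracker:
    """Spent/unspent state a full node might keep per batch output.

    One 1024-bit bitfield (128 bytes) per tracked outpoint; spending
    leaf `index` sets its bit, and a repeat spend is rejected.
    Illustrative sketch only, not any real node's data structure.
    """
    LEAVES = 1024

    def __init__(self) -> None:
        self.bitfields: dict[str, bytearray] = {}   # "txid:vout" -> bitfield

    def register_batch(self, outpoint: str) -> None:
        self.bitfields[outpoint] = bytearray(self.LEAVES // 8)

    def spend_leaf(self, outpoint: str, index: int) -> None:
        field = self.bitfields[outpoint]
        byte, bit = divmod(index, 8)
        if field[byte] & (1 << bit):
            raise ValueError(f"leaf {index} already spent (double spend)")
        field[byte] |= 1 << bit

# Usage: register the batch when its txn confirms, tick leaves on spends.
tracker = NestedOutputTracker()
tracker.register_batch("deadbeef:0")
tracker.spend_leaf("deadbeef:0", 41)      # fine
# tracker.spend_leaf("deadbeef:0", 41)    # would raise: double spend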

Only the first 'spend' of the original transaction would need to post the next bitfield transaction - (not all of them - a nice optimisation). And then as usual 1024 normal transactions can be made.

..

Go large! .. with a Triple-Decker.. and we get a billion outputs.. from a single txn output.. (not sure what for..)

This 'optimization' you are talking about could be 'optimized' even more by not requiring the bitfield at all! Because it is not the users' problem.

It would be an anomaly to have weird txns carrying such scripts that nodes have to update incrementally (in your optimized version); it is cleaner and more consistent to have nodes keep track of nested outputs the way they already do for normal outputs.
hero member
Activity: 718
Merit: 545
@Ali

The amount of data we are now talking about is quite significant. To keep track of 1 billion outputs (1024*1024*1024) as a bitfield, in a centralised fashion, requires 128 MB. And that is for a single output, and does not include all the proofs (then it's really big)!

By using this scripting technique (with the dreaded covenants.. i like, you no like), instead of the miners storing it, all the data is kept by the users. And each user will only store the tiny amount relevant to themselves  (no different to storing a public key - it's not even secure data.. just proofs).

The amount of potential data could be very large for any entity to hold in full. By using these scripts we get around all of that. Each user will only have to store a small, relevant amount of data, which they present at spend.

....

I'm going to have to re-raise your '..whole covenant thing and txn commitment to bitfields which is absolutely unnecessary..' and say 'Come on then - what's your way that is simpler, cleaner and more efficient than this way?'

hero member
Activity: 718
Merit: 545
Actually - it seems perfectly possible to off-board 1 million users in a single transaction.. and without making the Bitfield any bigger.

Recursive Bitfield Scripts! .. (lol.. of course)

So same as before - we have a 4-hash bitfield (4 x 32 bytes = 1024 bits) to allow 1024 outputs from a single output.

But each output is to another Bitfield script.

Only the first 'spend' of the original transaction would need to post the next bitfield transaction - (not all of them - a nice optimisation). And then as usual 1024 normal transactions can be made.

..

Go large! .. with a Triple-Decker.. and we get a billion outputs.. from a single txn output.. (not sure what for..)
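Back-of-envelope numbers for the deckers, assuming 32-byte hashes and a binary Merkle tree over each 1024-leaf bitfield (so 10 path hashes per level of nesting) - rough estimates only:

Code:
LEAVES = 1024        # outputs per bitfield level
HASH_BYTES = 32
DEPTH = 10           # log2(1024) Merkle levels per deck

for decks in (1, 2, 3):
    total = LEAVES ** decks
    proof = decks * DEPTH * HASH_BYTES   # one Merkle path per deck
    print(f"{decks}-decker: {total:,} outputs, ~{proof} proof bytes per spend")
# 1-decker: 1,024 outputs, ~320 proof bytes per spend
# 2-decker: 1,048,576 outputs, ~640 proof bytes per spend
# 3-decker: 1,073,741,824 outputs, ~960 proof bytes per spend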
legendary
Activity: 1456
Merit: 1176
Always remember the cause!
I don't think it is any different in regard to implementation complexities and probable side-effects, no matter whether you are championing an improvement in script processing or whatever. Maintaining an extra data infrastructure for the UTXO set is not that complex, and the costs involved in projecting the problem onto the scripting layer are not justifiable. It is generally a bad idea to solve a problem in the scripting layer whenever it could be solved in the core layer.

I'm a big bitcoin fan. Period.. If we want this tech to be used it needs to function within the realms available. Scripting upgrades are all soft-fork. It can _already_ be done on Liquid. That makes it 1,000,000 times more likely to be used. Frankly it _will_ be usable if Bitcoin simply follows the current upgrade path. No changes to the currently proposed changes required..
I'm a bigger fan, but I don't see bitcoin as a dead project.
My proposal requires no hard-fork. Implementation problems have nothing to do with the protocol. For instance, you can implement the bitcoin protocol (very inefficiently, tho) without maintaining a data structure for the UTXO set; hence, adding an extra data structure or not is an implementation choice, which you are choosing to avoid because you are afraid of touching the sacred bitcoin core code, and you are ready to sacrifice the whole idea to convince them that their precious "core" thing is not being touched.

In either approach you have no chance to get it done, 0.000000 * 10^6 = 0

We need to forget about what Core devs say and think; they are not good at improving bitcoin, we are far better. They are under the pressure of real-world bitcoin and whales; we are not. We can do anything and implement any idea. It is not our mission to keep bitcoin "un-compromised", that's Greg Maxwell's job and theme song; we need to get rid of such stupid considerations and innovate and innovate, forever!
The work Core does cannot be overestimated. They get a big THANK YOU! from me every day of the week and twice on Sundays.
You mean underestimated, obviously, and I'm not the one who underestimates anybody. We already know what happens to this idea: it will be neglected, or somebody will show up lecturing about the unacceptable consequences of the backup problem while overlooking the huge on-chain scaling advantages, because we have a stupid second-layer solution for it and all we have to do is keep bitcoin as is. Period.


So, I'm considering a far more efficient version of your idea, eliminating the whole covenant thing and the txn commitment to bitfields, which is absolutely unnecessary, and to give you the whole credit I'm calling it the "SpartacusRex protocol". Are you in or not?

Awful name.

----------------------------

IF I was integrating this into a brand new coin.. then there might be ways of making this process cooler.

And for that I'm all ears.

I was thinking that if you had to off-board 1 million users from a side-chain that was under attack, you could get them all back on-chain in 1 block. If you made the bitfield larger.. even less.

I like the spirit.
hero member
Activity: 718
Merit: 545
I don't think it is any different in regard to implementation complexities and probable side-effects, no matter whether you are championing an improvement in script processing or whatever. Maintaining an extra data infrastructure for the UTXO set is not that complex, and the costs involved in projecting the problem onto the scripting layer are not justifiable. It is generally a bad idea to solve a problem in the scripting layer whenever it could be solved in the core layer.

I'm a big bitcoin fan. Period.. If we want this tech to be used it needs to function within the realms available. Scripting upgrades are all soft-fork. It can _already_ be done on Liquid. That makes it 1,000,000 times more likely to be used. Frankly it _will_ be usable if Bitcoin simply follows the current upgrade path. No changes to the currently proposed changes required..

We need to forget about what Core devs say and think; they are not good at improving bitcoin, we are far better. They are under the pressure of real-world bitcoin and whales; we are not. We can do anything and implement any idea. It is not our mission to keep bitcoin "un-compromised", that's Greg Maxwell's job and theme song; we need to get rid of such stupid considerations and innovate and innovate, forever!

The work Core does cannot be overestimated. They get a big THANK YOU! from me every day of the week and twice on Sundays.

So, I'm considering a far more efficient version of your idea, eliminating the whole covenant thing and the txn commitment to bitfields, which is absolutely unnecessary, and to give you the whole credit I'm calling it the "SpartacusRex protocol". Are you in or not?

Awful name.

----------------------------

IF I was integrating this into a brand new coin.. then there might be ways of making this process cooler.

And for that I'm all ears.

I was thinking that if you had to off-board 1 million users from a side-chain that was under attack, you could get them all back on-chain in 1 block. If you made the bitfield larger.. even less.
legendary
Activity: 1456
Merit: 1176
Always remember the cause!
We store it in a bitfield. 1024 bits is 128 bytes.
This bitfield goes into every transaction 1024 times. Adding to this the Merkle path, and comparing with the 250 bytes of an average transaction, I would doubt even 1x throughput.

The original transaction (the one where the 'transaction batching' is going on).. is still 1000x smaller.. with a fee which is 1000x smaller. And you'll still have paid out to 1024 users.
He is arguing that if you need large proofs to spend, it would make things even worse. Actually, it would be a fair objection if you insisted on including the bitfield in spend txns. My proposed improvement doesn't need such an overhead, tho.


Yes, yes..

The point about including the bitfield is that it doesn't ask the miners to fundamentally change. My technique uses a little clever scripting, that's all. The miners do exactly what they normally do. They don't need to start storing extra data or changing their core functionality. It's just a scripting upgrade - and miners process scripts very well. I think that has simplicity benefits, but we can disagree.
I don't think it is any different in regard to implementation complexities and probable side-effects, no matter whether you are championing an improvement in script processing or whatever. Maintaining an extra data infrastructure for the UTXO set is not that complex, and the costs involved in projecting the problem onto the scripting layer are not justifiable. It is generally a bad idea to solve a problem in the scripting layer whenever it could be solved in the core layer.

We need to forget about what Core devs say and think; they are not good at improving bitcoin, we are far better. They are under the pressure of real-world bitcoin and whales; we are not. We can do anything and implement any idea. It is not our mission to keep bitcoin "un-compromised", that's Greg Maxwell's job and theme song; we need to get rid of such stupid considerations and innovate and innovate, forever!

I think you are too excited about the covenant stuff. I've no doubt there would be applications for covenant scripting, but this is not the one!

Let's just not start from covenants and focus on the core idea. I know you've started from covenants, but here we are: no need for covenants at all!

I'll fork from your idea. I don't care about covenants and their applications; what I care about is the core idea: scaling batch processing in bitcoin, and right now we have the solution (thanks to your original idea): adding a recursive definition of "unspent txn output" such that you can encapsulate more data in a transaction. It is what actually matters.

So, I'm considering a far more efficient version of your idea, eliminating the whole covenant thing and the txn commitment to bitfields, which is absolutely unnecessary, and to give you the whole credit I'm calling it the "SpartacusRex protocol". Are you in or not?

hero member
Activity: 718
Merit: 545
We store it in a bitfield. 1024 bits is 128 bytes.
This bitfield goes into every transaction 1024 times. Adding to this the Merkle path, and comparing with the 250 bytes of an average transaction, I would doubt even 1x throughput.

The original transaction (the one where the 'transaction batching' is going on).. is still 1000x smaller.. with a fee which is 1000x smaller. And you'll still have paid out to 1024 users.
He is arguing that if you need large proofs to spend, it would make things even worse. Actually, it would be a fair objection if you insisted on including the bitfield in spend txns. My proposed improvement doesn't need such an overhead, tho.


Yes, yes..

The point about including the bitfield is that it doesn't ask the miners to fundamentally change. My technique uses a little clever scripting, that's all. The miners do exactly what they normally do. They don't need to start storing extra data or changing their core functionality. It's just a scripting upgrade - and miners process scripts very well. I think that has simplicity benefits, but we can disagree.

We store it in a bitfield. 1024 bits is 128 bytes.
This bitfield goes into every transaction 1024 times. Adding to this the Merkle path, and comparing with the 250 bytes of an average transaction, I would doubt even 1x throughput.

The original transaction (the one where the 'transaction batching' is going on).. is still 1000x smaller.. with a fee which is 1000x smaller. And you'll still have paid out to 1024 users.

Merkle trees are useful when plain cryptography is not enough, like in inter-chain communication. In this proposal you are attempting the uneasy task of compressing thirty-something bytes per output even further.

I fully accept that the spend transaction will include more data.

The point was simply that the exchange or sidechain or whoever it is that needs to do a large batch transaction can now do so at a fraction of the fee and space requirements _initially_. In reality, as you say, by passing the burden of the fee/space to the spender. (Although spending may be a long time in the future, when space is less of an issue - at least we are on-chain.)
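Rough numbers for that trade-off, assuming ~34 bytes per conventional output and a ~100-byte transaction skeleton (both ballpark assumptions, not measured figures):

Code:
USERS = 1024
OUTPUT_BYTES = 34      # ballpark size of one standard output
TX_SKELETON = 100      # version, input, locktime, etc. (ballpark)

naive = TX_SKELETON + USERS * OUTPUT_BYTES   # pay all 1024 users directly
committed = TX_SKELETON + OUTPUT_BYTES       # one hash-committed output
print(f"naive batch: {naive:,} B, committed: {committed} B "
      f"(~{naive // committed}x smaller initially)")
# The exact multiplier depends on output and skeleton sizes; the
# bitfield + Merkle path cost resurfaces per user at spend time,
# which is the fee/space burden being passed to the spender.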
mda
member
Activity: 144
Merit: 13
We store it in a bitfield. 1024 bits is 128 bytes.
This bitfield goes into every transaction 1024 times. Adding to this the Merkle path, and comparing with the 250 bytes of an average transaction, I would doubt even 1x throughput.

The original transaction (the one where the 'transaction batching' is going on).. is still 1000x smaller.. with a fee which is 1000x smaller. And you'll still have paid out to 1024 users.

Merkle trees are useful when plain cryptography is not enough, like in inter-chain communication. In this proposal you are attempting the uneasy task of compressing thirty-something bytes per output even further.
legendary
Activity: 1456
Merit: 1176
Always remember the cause!
We store it in a bitfield. 1024 bits is 128 bytes.
This bitfield goes into every transaction 1024 times. Adding to this the Merkle path, and comparing with the 250 bytes of an average transaction, I would doubt even 1x throughput.

The original transaction (the one where the 'transaction batching' is going on).. is still 1000x smaller.. with a fee which is 1000x smaller. And you'll still have paid out to 1024 users.
He is arguing that if you need large proofs to spend, it would make things even worse. Actually, it would be a fair objection if you insisted on including the bitfield in spend txns. My proposed improvement doesn't need such an overhead, tho.
hero member
Activity: 718
Merit: 545
We store it in a bitfield. 1024 bits is 128 bytes.
This bitfield goes into every transaction 1024 times. Adding to this the Merkle path, and comparing with the 250 bytes of an average transaction, I would doubt even 1x throughput.

The original transaction (the one where the 'transaction batching' is going on).. is still 1000x smaller.. with a fee which is 1000x smaller. And you'll still have paid out to 1024 users.
mda
member
Activity: 144
Merit: 13
We store it in a bitfield. 1024 bits is 128 bytes.
This bitfield goes into every transaction 1024 times. Adding to this the Merkle path, and comparing with the 250 bytes of an average transaction, I would doubt even 1x throughput.
legendary
Activity: 1456
Merit: 1176
Always remember the cause!
OP, I think we have a common perspective now. It looks very simple and obvious to me: no covenants; complementary UTXO data and extra amountsList data being hash-committed in the txn and passed to each user whom we wish to convince of our fidelity.

Now let's take a look at the bigger picture:

One important problem would be wallet maintenance by users. They now have to keep track not only of their private keys/seeds; to retrieve their balance they need the extra proofs, backed up and kept safe for each output, and that is a great inconvenience.
legendary
Activity: 1456
Merit: 1176
Always remember the cause!
4- Full nodes maintain an additional bitfield data structure for such txn types to keep track of the spent/unspent status of leaves.

No - this data is available in the txn. Nothing special required. Using a covenant you can add data, in this case 4 hashes, to the scriptsig of the output. And enforce it. So it's available at the next spend. This is then updated, with a single bit set to 1, new data put on scriptsig, rinse, repeat.  
No need to store it in the txn, and if you do, it should be considered immutable, useless for keeping track of further events (spends).
... It is stored in the scriptsig of the new output. This IS NOT IMMUTABLE. This is EXACTLY what covenants are for. The covenant makes sure the correct data is appended to the scriptsig of the output.. storing which indexes have been spent and which are still to be spent - as a single bit each.
Well, what I'm saying is that there is absolutely no need for all of this covenants thing, complicating the proposal. It is just like how nodes maintain the UTXO set right now; they can simply have extra bitfields for this class of outputs. Period.

Again, you are addressing the wrong problem. Spending is OK, but confirming initially is a hurdle: full nodes don't have access to / don't store the full information, and users are not supposed to.

Please, carefully examine my solution and let me know about your concerns.

This?.. Please elaborate.. '..confirming initially is a hurdle..' (I think we are not seeing exactly the same picture..)
When the original txn is to be confirmed, it may be considered a problem whether the creator has distributed the output amount faithfully or not. My solution projects this problem onto each single receiver, while keeping the nodes' responsibility at the level of controlling the total sum.
hero member
Activity: 718
Merit: 545
4- Full nodes maintain an additional bitfield data structure for such txn types to keep track of the spent/unspent status of leaves.

No - this data is available in the txn. Nothing special required. Using a covenant you can add data, in this case 4 hashes, to the scriptsig of the output. And enforce it. So it's available at the next spend. This is then updated, with a single bit set to 1, new data put on scriptsig, rinse, repeat.  
No need to store it in the txn, and if you do, it should be considered immutable, useless for keeping track of further events (spends).

... It is stored in the scriptsig of the new output. This IS NOT IMMUTABLE. This is EXACTLY what covenants are for. The covenant makes sure the correct data is appended to the scriptsig of the output.. storing which indexes have been spent and which are still to be spent - as a single bit each.
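A sketch of the covenant rule being described, in plain Python standing in for script opcodes (the function name is mine): the bitfield appended to the new output must equal the old one with exactly the spent index flipped from 0 to 1.

Code:
def valid_bitfield_update(old: bytes, new: bytes, index: int) -> bool:
    """Covenant check: `new` is `old` with bit `index` set, nothing else."""
    byte, bit = divmod(index, 8)
    if old[byte] & (1 << bit):
        return False                 # leaf already spent: reject double spend
    expected = bytearray(old)
    expected[byte] |= 1 << bit
    return bytes(expected) == new

old = bytes(128)                     # fresh 1024-bit field, all unspent
new = bytearray(old)
new[5] |= 1 << 3                     # spend leaf index 43 (5*8 + 3)
assert valid_bitfield_update(old, bytes(new), 43)
assert not valid_bitfield_update(bytes(new), bytes(new), 43)   # repeat fails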

As for cheating a user who does not have the full tree, it would be simple to use a SUM hash tree, so the parent uses the sum of the children in the hash value. The root has the total amount. Now the user KNOWS he has been given the correct amount - or the hash tree won't add up correctly. They do not need full access to the tree..
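A minimal sketch of such a SUM hash tree (illustrative choices: SHA-256, 8-byte little-endian amounts, a power-of-two leaf count such as 1024):

Code:
import hashlib

def node_hash(amount: int, payload: bytes) -> bytes:
    return hashlib.sha256(amount.to_bytes(8, "little") + payload).digest()

def build_sum_root(leaves: list[tuple[int, bytes]]) -> tuple[int, bytes]:
    """Fold (amount, hash) pairs upward; each parent's amount is the
    sum of its children, so the root carries the batch total."""
    level = [(amt, node_hash(amt, h)) for amt, h in leaves]
    while len(level) > 1:
        level = [
            (la + ra, node_hash(la + ra, lh + rh))
            for (la, lh), (ra, rh) in zip(level[::2], level[1::2])
        ]
    return level[0]

# Four users, total 100: any inflated leaf amount changes every hash
# up to the committed root, so a user holding only their own branch
# can verify their allotted share against the known total.
leaves = [(amt, hashlib.sha256(bytes([i])).digest())
          for i, amt in enumerate((10, 20, 30, 40))]
total, root = build_sum_root(leaves)
assert total == 100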

Again, you are addressing the wrong problem. Spending is OK, but confirming initially is a hurdle: full nodes don't have access to / don't store the full information, and users are not supposed to.

Please, carefully examine my solution and let me know about your concerns.

This?.. Please elaborate.. '..confirming initially is a hurdle..' (I think we are not seeing exactly the same picture..)
legendary
Activity: 1456
Merit: 1176
Always remember the cause!
4- Full nodes maintain an additional bitfield data structure for such txn types to keep track of the spent/unspent status of leaves.

No - this data is available in the txn. Nothing special required. Using a covenant you can add data, in this case 4 hashes, to the scriptsig of the output. And enforce it. So it's available at the next spend. This is then updated, with a single bit set to 1, new data put on scriptsig, rinse, repeat.  
No need to store it in the txn, and if you do, it should be considered immutable, useless for keeping track of further events (spends).

How could full nodes ever verify that the payer has not overspent the input(s)?
The script allows a user to spend an exact amount. It does this by enforcing that the new output with the same address be of a certain amount (current - user_amount). All this info is available in the proof. If the user doesn't collect all that is his, it'll go to the miners. You get 1 shot, then your bitfield is set and you can't spend again. You have to get it ALL in 1 go.
Users don't need to have access to the whole hash tree and raw outputs. They need partial/relevant proof.

The user has to collect all of his output, minus fees; it is how bitcoin works. That is not the problem. The problem arises when the original txn first gets to the blockchain (no spends yet): it would be possible for malicious actors to pay n users multiple times (each payment less than the total output) and convince each of them of the validity of their respective output, because it is less than the total input.

Users having full access to the whole hash tree and raw outputs is just naive.

1- You may consider including the list of amounts in the txn body. It adds like 1024*8 bytes (max) to the txn size and reduces the effect you wish for from 1000x to like 5x-10x. Not a smart solution.

The amount is already stored in the proof, and presented at point of use. Only the root of the hash tree is stored in the txn. All the information is presented by the user and it either fits or it doesn't. HASH(INDEX AMOUNT ADDRESS) + MERKLE_PROOF

Again, you are addressing the wrong problem. Spending is OK, but confirming initially is a hurdle: full nodes don't have access to / don't store the full information, and users are not supposed to.

Please, carefully examine my solution and let me know about your concerns.
hero member
Activity: 718
Merit: 545
4- Full nodes maintain an additional bitfield data structure for such txn types to keep track of the spent/unspent status of leaves.

No - this data is available in the txn. Nothing special required. Using a covenant you can add data, in this case 4 hashes, to the scriptsig of the output. And enforce it. So it's available at the next spend. This is then updated, with a single bit set to 1, new data put on scriptsig, rinse, repeat. 

How could full nodes ever verify that the payer has not overspent the input(s)?

The script allows a user to spend an exact amount. It does this by enforcing that the new output with the same address be of a certain amount (current - user_amount). All this info is available in the proof. If the user doesn't collect all that is his, it'll go to the miners. You get 1 shot, then your bitfield is set and you can't spend again. You have to get it ALL in 1 go.

1- You may consider including the list of amounts in the txn body. It adds like 1024*8 bytes (max) to the txn size and reduces the effect you wish for from 1000x to like 5x-10x. Not a smart solution.

The amount is already stored in the proof, and presented at point of use. Only the root of the hash tree is stored in the txn. All the information is presented by the user and it either fits or it doesn't. HASH(INDEX AMOUNT ADDRESS) + MERKLE_PROOF
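A sketch of that check as described - hash the (index, amount, address) leaf and walk the Merkle path up to the root committed in the txn (binary tree; field widths are illustrative assumptions):

Code:
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(index: int, amount: int, address: bytes) -> bytes:
    return h(index.to_bytes(4, "little")
             + amount.to_bytes(8, "little")
             + address)

def verify_spend(root: bytes, index: int, amount: int,
                 address: bytes, path: list[bytes]) -> bool:
    """Walk the Merkle path from the user's leaf to the committed root.

    The bits of `index` choose left/right at each level, so the proof
    also pins down *which* leaf is being spent - and hence which
    bitfield bit must still be 0. It either fits or it doesn't."""
    node = leaf_hash(index, amount, address)
    for level, sibling in enumerate(path):
        if (index >> level) & 1:
            node = h(sibling + node)
        else:
            node = h(node + sibling)
    return node == root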

An interesting idea, but you will need another 1024 transactions to spend from the batch. Therefore throughput becomes 2x, or even less taking all the overhead into account.

This address is exactly the same as any other address you control. Just that to use it you need a private key AND a Merkle proof. You can't lose your funds or have them spent.

Therefore, sure, spending it still requires a transaction.. BUT initially this setup, paying 1024 people, would have taken 1024 outputs, and now it only takes 1.

You can keep your coins there as long as you like.
legendary
Activity: 1456
Merit: 1176
Always remember the cause!
An interesting idea, but you will need another 1024 transactions to spend from the batch. Therefore throughput becomes 2x, or even less taking all the overhead into account.
Spending outputs is not relevant. You are mixing things up.
mda
member
Activity: 144
Merit: 13
An interesting idea, but you will need another 1024 transactions to spend from the batch. Therefore throughput becomes 2x, or even less taking all the overhead into account.