
Topic: Segregated witness - The solution to Scalability (short term)? - page 18. (Read 23163 times)

legendary
Activity: 4396
Merit: 4755
Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born

The size of the block chain can be cut in half by moving all signatures (not needed to be stored anyway) into a separate data structure and only keeping the transactions without signatures in the block chain.

Side benefits as well
    Much simpler future opcode additions/upgrades
    Solves malleability problems
    Fraud proofs for every single consensus rule, making SPV much more secure and enabling lazy validation
   
Also, it can be deployed with a soft fork; without this upgrade, that would have been extremely difficult to implement.



Read the bolded part. Addition: By changing how the data is stored, they are saving a lot of space (hence the effective block-size of 4 MB).

"Size = 4*BaseSize + WitnessSize <= 4MB. For normal transaction load, it means 1.75 MB, but more for multisig."
https://twitter.com/pwuille/status/673710939678445571

to me this translates to creating a new chain, called the witness (pruned) chain, where all the old blocks are pruned of the signatures.
this is not a soft fork.. this is remaking a new chain.

also, because there still needs to be a chain containing the signatures, that would be the real bitcoin, which will still bloat..
and if anyone still wants to be a full node, they now need to have 2 chains, meaning more data, as some tx data is duplicated by holding both

the witness chain would supposedly be used for lite clients, but it's much easier to just let lite clients download only the tx data of addresses they control, and then find a real solution to bitcoin's data bloat without creating a new chain or risking security bugs related to not having signature checks
legendary
Activity: 994
Merit: 1035
Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born

The size of the block chain can be cut down considerably by moving all signatures (not needed to be stored anyway) into a separate data structure and only keeping the transactions without signatures in the block chain.

Side benefits as well
    Much simpler future opcode additions/upgrades
    Solves malleability problems
    Fraud proofs for every single consensus rule, making SPV much more secure and enabling lazy validation
    
Also, it can be deployed with a soft fork; without this upgrade, that would have been extremely difficult to implement.

Basically, this is an example of a scaling solution with absolutely no tradeoffs; the consequences are all positive. This is just one piece of the puzzle that needs to be rolled out to scale, but objecting to this improvement is nonsensical.

Slides - https://prezi.com/lyghixkrguao/segregated-witness-and-deploying-it-for-bitcoin/

Best yet, the code is already done and has been tested for over 6 months --

https://github.com/ElementsProject/elements/commit/663e9bd32965008a43a08d1d26ea09cbb14e83aa
https://github.com/sipa/bitcoin/commits/segwit


Read the bolded part. Addition: By changing how the data is stored, they are saving a lot of space (hence the effective block-size of 4 MB).

"Size = 4*BaseSize + WitnessSize <= 4MB. For normal transaction load, it means 1.75 MB, but more for multisig."
https://twitter.com/pwuille/status/673710939678445571
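
To see where those numbers come from, here is a back-of-the-envelope sketch (purely illustrative Python; the witness shares it plugs in are assumptions for the sake of the example, not figures from the proposal):

Code:
# Back-of-the-envelope sketch of the "4*BaseSize + WitnessSize <= 4MB" rule
# from the tweet above. The witness shares below are assumptions for
# illustration (signatures are very roughly half to three quarters of a
# transaction's bytes), not numbers from the proposal itself.
LIMIT = 4_000_000  # bytes

def max_total_size(witness_fraction):
    # If witness_fraction of a block's bytes are signatures, then
    # 4*base + witness = (4*(1 - witness_fraction) + witness_fraction) * total,
    # so the largest total (base + witness) size satisfying the limit is:
    return LIMIT / (4 * (1 - witness_fraction) + witness_fraction)

for share in (0.0, 0.5, 0.6, 0.75):
    print(f"witness share {share:.0%}: ~{max_total_size(share) / 1e6:.2f} MB per block")
# 0%  -> 1.00 MB (no segwit transactions, nothing changes)
# 50% -> 1.60 MB
# 60% -> 1.82 MB (roughly the "normal transaction load" case)
# 75% -> 2.29 MB (signature-heavy multisig benefits the most)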

legendary
Activity: 994
Merit: 1035
legendary
Activity: 2674
Merit: 2965
Terminated.
Ok, so how are the transactions signed, and does it increase the possibility of address collision? Hal Finney proposed batch signature verification long ago, where it was believed the shortcut for secp256k1 would bring as much as a 20% speed increase to signature verification. By the time it was modified and implemented, in order to protect security, there was almost no speed advantage. Removing the sig verification from the mined blocks will most likely have some kind of security leak issue. I'm just not knowledgeable enough to tell you what it will be. I'll be eagerly watching the development.
The data comes after the block and is connected via a hash, IIRC. I don't think it increases the possibility of address collision; why would it? Apparently it has been in testing mode for 6 months now, and I'm pretty sure that they would not miss a significant security leak just like that. Besides, they won't be rushing this out either way. For the exact specifics I'll have to get back to this thread, as I'm very busy now and will head out (possibly staying disconnected). 
legendary
Activity: 2156
Merit: 1393
You lead and I'll watch you walk away.
But doesn't the signature verify the transaction was created by the real owner of the address? What about multisig? Is that gone with this new system? Sounds very flaky to me.
You are correct. However, I'm not talking about removing the signature data; I said excluding it from the blocks.
Quote
Wouldn't it be nice to just drop the signatures? The reason why we can't do this is because the signature is part of the transaction hash. If we would just drop the sig from the transaction, the block wouldn't validate, you wouldn't be able to prove an output spend came from that transaction, so that's not something we could do.
Quote
You get a size increase because you no longer store the signatures in the block; you just have all your signatures empty and reference an output like [hash] OP_TRUE, where [hash] is the script hash to execute. Then you can sign for the transaction with an empty scriptSig. Data for the signature is held outside of the block and is referenced by a hash in the block (probably in the sigScript of the coinbase transaction). Because the signature data isn't part of the real block, you can make the block plus the extra sig data add up to more than 1 MB.
It does not eliminate multisig; it actually solves malleability, as I've previously stated and as seen on the slide.

Ok, so how are the transactions signed, and does it increase the possibility of address collision? Hal Finney proposed batch signature verification long ago, where it was believed the shortcut for secp256k1 would bring as much as a 20% speed increase to signature verification. By the time it was modified and implemented, in order to protect security, there was almost no speed advantage. Removing the sig verification from the mined blocks will most likely have some kind of security leak issue. I'm just not knowledgeable enough to tell you what it will be. I'll be eagerly watching the development.
legendary
Activity: 2674
Merit: 2965
Terminated.
But doesn't the signature verify the transaction was created by the real owner of the address? What about multisig? Is that gone with this new system? Sounds very flaky to me.
You are correct. However, I'm not talking about removing the signature data; I said excluding it from the blocks.
Quote
Wouldn't it be nice to just drop the signatures? The reason why we can't do this is because the signature is part of the transaction hash. If we would just drop the sig from the transaction, the block wouldn't validate, you wouldn't be able to prove an output spend came from that transaction, so that's not something we could do.
Quote
You get a size increase because you no longer store the signatures in the block; you just have all your signatures empty and reference an output like [hash] OP_TRUE, where [hash] is the script hash to execute. Then you can sign for the transaction with an empty scriptSig. Data for the signature is held outside of the block and is referenced by a hash in the block (probably in the sigScript of the coinbase transaction). Because the signature data isn't part of the real block, you can make the block plus the extra sig data add up to more than 1 MB.
It does not eliminate multisig; it actually solves malleability, as I've previously stated and as seen on the slide.
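
To make the "referenced by a hash" part a bit more concrete, here is a toy sketch of the commitment idea (made-up data and hashing scheme, not the actual encoding the proposal would use):

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Hypothetical per-transaction signature data, kept outside the block body.
witness_data = [
    b"signatures-for-tx-1",
    b"signatures-for-tx-2",
    b"signatures-for-tx-3",
]

# The block itself only needs to carry one commitment hash over all of it
# (e.g. tucked into the coinbase scriptSig, as the quote above suggests).
commitment = dsha256(b"".join(dsha256(w) for w in witness_data))
print("witness commitment:", commitment.hex())

# A validating node downloads the witness data too, recomputes the hash and
# checks that it matches; a pruning node may discard the data after validation.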
legendary
Activity: 2156
Merit: 1393
You lead and I'll watch you walk away.
But doesn't the signature verify the transaction was created by the real owner of the address? What about multisig? Is that gone with this new system? Sounds very flaky to me.
legendary
Activity: 2674
Merit: 2965
Terminated.
Lauda, explain Segregated Witness to me like I'm five.
It's a bit hard to correctly explain something so complex without leaving out important information. Let me try this: "Normally the transaction ID is the hash of the signature and the transaction." With segregated witness, the signatures are excluded (they consume about 60% of the data on the blockchain now). In other words, they are going to rework how this data is stored (a simplistic explanation that leaves out the merkle tree) by excluding it from the block.

The positive outcome of this is an effective block size of 4 MB with a soft fork. By "effective" I mean that they don't have to change the actual block size limit (the one most people know of today).


And to me as if I'm just born
Read the bolded part. Addition: By changing how the data is stored, they are saving a lot of space (hence the effective block-size of 4 MB).
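
A toy sketch of that transaction-ID point (made-up serialization, nothing like Bitcoin's real format), showing why hashing the transaction without its signature also kills third-party malleability:

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Toy transaction data: inputs/outputs plus a signature blob (illustrative only).
tx_core   = b"in:prevout-abc:0|out:1SomeAddress:0.5BTC"
signature = b"<72-byte-ish DER signature>"

# Today: the txid covers the signature bytes, so a relaying node that
# re-encodes the signature (still valid, different bytes) changes the txid.
old_txid = dsha256(tx_core + signature)
mangled  = signature + b"-re-encoded-in-transit"
assert dsha256(tx_core + mangled) != old_txid  # old-style txid is malleable

# Segregated witness: the txid covers only the core transaction data, and the
# signature (mangled or not) lives in the separate witness structure, so
# nothing a third party does to the signature can change the txid any more.
segwit_txid_with_original_sig = dsha256(tx_core)
segwit_txid_with_mangled_sig  = dsha256(tx_core)
assert segwit_txid_with_original_sig == segwit_txid_with_mangled_sig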
hero member
Activity: 924
Merit: 1005
4 Mana 7/7
Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born
legendary
Activity: 2156
Merit: 1393
You lead and I'll watch you walk away.
Lauda, explain Segregated Witness to me like I'm five.
legendary
Activity: 1162
Merit: 1004

From what I understood, they could discount the witness data by 75% right now, which means that 1 MB blocks could theoretically carry as much transaction volume as 4 MB blocks. Or they increase the block size for the witness part to 4 MB (the non-witness part stays at 1 MB). This is how I understand it so far. It's still a fairly new concept, so I'm also still learning.
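
Roughly how such a discount could be applied per transaction, as a sketch of the idea only (the byte counts are made up, and the exact rule is not settled):

Code:
# Sketch of the "discount the witness data by 75%" idea described above.
# Counting witness bytes at a quarter of their real size is the same
# constraint as 4*base + witness <= 4 MB, just divided by four.
def discounted_size(base_bytes, witness_bytes, discount=0.75):
    return base_bytes + witness_bytes * (1 - discount)

base, witness = 100, 150                  # a rough single-signature payment
print(discounted_size(base, witness))     # 250 real bytes count as only 137.5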



Still learning; but making a post titled "Segregated witness - The solution to scaling".
legendary
Activity: 2674
Merit: 2965
Terminated.
If this is correct and witness information is prunable, how is it a solution to scaling? It would still require a block size increase. Maybe I thought I knew what scaling is, but I'm not quite grasping the concept...
Scaling is not all about the block size, as the majority seem to think. Scaling could be a better way of storing data, an approach at a different layer like LN, or something else.
So, for a lay person like me, this is basically a simple, efficient way to patch the malleability exploit while simultaneously increasing block size? And to do that we only have to make a soft fork?
Is this correct?
Simple? Not exactly. This adds complexity, which BIP101 and XT supporters are probably going to use as an argument. If society wanted simplicity and did not want to harness the benefits that come with complexity, we would have remained in the stone age. This kills all cases of unintentional malleability and can be implemented with a soft fork. This is correct.
legendary
Activity: 1442
Merit: 1016
So, for a lay person like me, this is basically a simple, efficient way to patch the malleability exploit while simultaneously increasing block size? And to do that we only have to make a soft fork?
Is this correct?

legendary
Activity: 1512
Merit: 1012
If this is correct and witness information is prunable, how is it a solution to scaling? It would still require a block size increase. Maybe I thought I knew what scaling is, but I'm not quite grasping the concept...
legendary
Activity: 2674
Merit: 2965
Terminated.
I don't think there's a thread about this yet (after the conference), so here it is.


Here is a transcript of the presentation. A hard fork is possibly not even required.

Gavin's explanation:

Quote
Pieter Wuille gave a fantastic presentation on “Segregated Witness” in Hong Kong. It’s a great idea, and should be rolled into Bitcoin as soon as safely possible. It is the kind of fundamental idea that will have huge benefits in the future. It also needs a better name (“segregated” has all sorts of negative connotations…).

You should watch Pieter’s presentation, but I’ll give a different spin on explaining what it is (I know I often need something explained to me a couple different ways before I really understand it).
So… sending bitcoin into a segregated witness-locked output will look like a weird little beastie in today’s blockchain explorers– it will look like an “anyone can spend” transaction, with a scriptPubKey of:
PUSHDATA [version_byte + validation_script]

Spends of segregated witness-locked outputs will have a trivial one-byte scriptSig of OP_NULL (or maybe OP_NOP – There Will Be Bikeshedding over the details).

The reason that is not insane is because the REAL scriptSig for the transaction will be put in a separate, new data structure, and wallets and miners that are doing validation will use that new data structure to make sure the signatures for the transaction are valid, etc.

That data structure will be a merkle tree that mirrors the transaction merkle tree that is put into the block header of every block. Every transaction with a segregated witness input will have an entry in that second merkle tree with the signature data in it (plus 10 or so extra bytes per input to enable fraud proofs).

The best design is to combine the transaction and segregated witness merkle trees into one tree, with the left side of the tree being the transaction data and the right side the segregated witness data. The merkle root in the block header would just be that combined tree. That could (and should, in my opinion) be done as a hard fork; Pieter proposes doing it as a soft fork, by stuffing the segregated witness merkle root into the first (coinbase) transaction in each block, which is more complicated and less elegant but means it can be rolled out as a soft fork.

Regardless of how it is rolled out, it would be a smooth transition for wallets and most end-users– if you don’t want to use newfangled segregated witness transactions, you don’t have to. Paying to somebody who is using the newfangled transactions looks just like paying to somebody using a newfangled multisig wallet (a ‘3something’ BIP13 bitcoin address).

There is no requirement that wallets upgrade, but anybody generating a lot of transactions will have a strong incentive to produce segregated witness transactions– Pieter proposes to give segregated witness transactions a discount on transaction fees, by not completely counting the segregated witness data when figuring out the fee-per-kilobyte transaction charge. So… how does all of this help with the one megabyte block size limit?

Well, once all the details are worked out, and the soft or hard fork is past, and a significant fraction of transactions are spending segregated witness-locked outputs… more transactions will fit into the 1 megabyte hard limit. For example, the simplest possible one-input, one-output segregated witness transaction would be about 90 bytes of transaction data plus 80 or so bytes of signature– only those 90 bytes need to squeeze into the one megabyte block, instead of 170 bytes. More complicated multi-signature transactions save even more. So once everybody has moved their coins to segregated witness-locked outputs and all transactions are using segregated witness, two or three times as many transactions would squeeze into the one megabyte block limit.

Segregated witness transactions won’t help with the current scaling bottleneck, which is how long it takes a one-megabyte “block” message to propagate across the network– they will take just as much bandwidth as before. There are several projects in progress to try to fix that problem (IBLTs, weak blocks, thin blocks, a “blocktorrent” protocol) and one that is already deployed and making one megabyte block propagation much faster than it would otherwise be (Matt Corallo’s fast relay network).

I think it is wise to design for success. Segregated witness is cool, but it isn’t a short-term solution to the problems we’re already seeing as we run into the one-megabyte block size limit.
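
For the curious, here is a minimal sketch of the "second merkle tree" Gavin describes, built over per-transaction witness data (toy leaves, and the fraud-proof bytes are omitted; not the final design):

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Bitcoin-style merkle root: hash the leaves, then pair hashes level by
    level (duplicating the last hash when a level has an odd count) until a
    single root remains."""
    level = [dsha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# One leaf per transaction, in the same order as the normal transaction tree:
# the transaction's witness (signature) data.
witness_leaves = [b"witness-for-tx-0", b"witness-for-tx-1", b"witness-for-tx-2"]
print("witness merkle root:", merkle_root(witness_leaves).hex())
# Soft-fork deployment: this root goes into the coinbase transaction rather
# than the block header, as Pieter proposes above.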