
Topic: Segregated witness - The solution to Scalability (short term)?

sr. member
Activity: 317
Merit: 1012
Gavin Andresen has posted a pretty good explanation that is not overly technical:

http://gavinandresen.ninja/segregated-witness-is-cool

I like that it seems we are making slow but steady progress toward consensus. My question is: why is Gavin advocating a hard fork for this when a soft fork should be enough?

A more elegant implementation, but he's supportive of the soft fork as proposed. Interesting point earlier about it being a sidechain; I hadn't thought of it that way, but that's kind of right, and there may be ways of exponentially increasing security with parallel chains.
legendary
Activity: 1065
Merit: 1077
Gavin Andresen has posted a pretty good explanation that is not overly technical:

http://gavinandresen.ninja/segregated-witness-is-cool

I like that it seems we are making slow but steady progress toward consensus. My question is: why is Gavin advocating a hard fork for this when a soft fork should be enough?

He posted this on the dev mailing list:

Quote

Thanks for laying out a road-map, Greg.

I'll need to think about it some more, but just a couple of initial
reactions:

Why segwitness as a soft fork? Stuffing the segwitness merkle tree in the
coinbase is messy and will just complicate consensus-critical code (as
opposed to making the right side of the merkle tree in block.version=5
blocks the segwitness data).

It will also make any segwitness fraud proofs significantly larger (merkle
path versus merkle path to coinbase transactions, plus ENTIRE coinbase
transaction, which might be quite large, plus merkle path up to root).


We also need to fix the O(n^2) sighash problem as an additional BIP for ANY
blocksize increase. That also argues for a hard fork-- it is much easier to
fix it correctly and simplify the consensus code than to continue to apply
band-aid fixes on top of something fundamentally broken.


Segwitness will require a hard or soft-fork rollout, then a significant
fraction of the transaction-producing wallets to upgrade and start
supporting segwitness-style transactions. I think it will be much quicker
than the P2SH rollout, because the biggest transaction producers have a
strong motivation to lower their fees, and it won't require a new type of
bitcoin address to fund wallets. But it still feels like it'll be six
months to a year at the earliest before any relief from the current
problems we're seeing from blocks filling up.

Segwitness will make the current bottleneck (block propagation) a little
worse in the short term, because of the extra fraud-proof data. Benefits
well worth the costs.

------------------

I think a barrier to quickly getting consensus might be a fundamental
difference of opinion on this:
"Even without them I believe we’ll be in an acceptable position with
respect to capacity in the near term"

The heaviest users of the Bitcoin network (businesses who generate tens of
thousands of transactions per day on behalf of their customers) would
strongly disagree; the current state of affairs is NOT acceptable to them.
--
Gavin Andresen
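
To put Gavin's O(n^2) sighash point in concrete terms: under the legacy signature-hashing rules, every input's signature check hashes a serialization of roughly the whole transaction, so the total bytes hashed grow with (number of inputs) times (transaction size). A minimal back-of-the-envelope sketch, using assumed typical P2PKH byte sizes (illustrative numbers, not consensus values):

Code:
# Rough sketch (not consensus code) of why legacy sighash is O(n^2):
# each of the n inputs re-hashes a serialization about the size of the tx.
# Byte counts below are assumptions for typical P2PKH inputs/outputs; the
# real legacy serialization blanks other inputs' scriptSigs, so it is a
# little smaller, but still proportional to the transaction size.

def legacy_sighash_bytes(num_inputs, num_outputs=2,
                         bytes_per_input=148, bytes_per_output=34):
    tx_size = 10 + num_inputs * bytes_per_input + num_outputs * bytes_per_output
    return num_inputs * tx_size  # each input hashes ~the whole transaction

for n in (100, 1000, 5000):
    print(n, "inputs ->", legacy_sighash_bytes(n) / 1e6, "MB hashed")

Doubling the number of inputs roughly quadruples the hashing work, which is why any block-size increase arguably needs the sighash fix Gavin mentions.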

legendary
Activity: 1204
Merit: 1028
Gavin Andresen has posted a pretty good explanation that is not overly technical:

http://gavinandresen.ninja/segregated-witness-is-cool

I like that it seems we are making slow but steady progress toward consensus. My question is: why is Gavin advocating a hard fork for this when a soft fork should be enough?
full member
Activity: 174
Merit: 100
It sounds like a way to efficiently compress the weight of blocks by removing something that's not needed when possible.

As merely one question, can we really consider the signature as something that's not needed?

I get that we're not _eliminating_ the sig, merely putting it in a separate (segregated) container, apart from the rest of the transaction. But any entity that wants to operate bitcoin in a trustless manner is going to need to be able to fully validate each transaction. Such entities will need the signature, right? Accordingly, such entities will need both components, so no data reduction for them, right?

Currently, relay nodes verify each transaction before forwarding it, do they not? If they are denied the signature, they can no longer perform this verification. This seems to me to be a drastically altered division of responsibilities. Sure, this may still work, but how do we know whether this is a good repartitioning of the problem?



Full nodes need all the data, which means the signatures as well.



Further, does this open a new attack vector? If 'nodes' are going to stop validating transactions before forwarding them, then there is nothing to stop them from forwarding invalid transactions. What if an attacker were to inject many invalid transactions into the network? Being invalid, they would be essentially free to create in virtually unbounded quantities. If nodes are no longer validating before forwarding, this would result in 'invalid transaction storms', which could consume many times the bandwidth of the relatively small volume of actual valid traffic. If indeed this is a valid concern, then this would work exactly contrary to its stated goal of increasing scalability.

Note I am not making any claims here, but I am asking questions, prompted from my incomplete understanding of this feature.


You can choose to run a lite node instead; this does not require the signatures, but you basically have to trust that the mined block contains only valid transactions (I mean you have to trust that the transactions are signed correctly). It means you cannot trust a 1-confirmation tx with a lite node.


BTW, is there any demand for such lite nodes? (I mean, who would trade security for a little bandwidth + HDD space, i.e. 33% to 75% savings, while trusting miners that the transactions were signed correctly...?)
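
To put rough numbers on those savings: in a standard pay-to-pubkey-hash transaction the scriptSig (DER signature plus public key) is on the order of 107 bytes out of roughly 148 bytes per input, so signature data makes up somewhere between about half and three-quarters of a typical transaction depending on its shape. A small sketch, using assumed standard sizes (approximations, not exact figures):

Code:
# Hypothetical sizing sketch: what fraction of a transaction is signature
# (witness) data that a signature-skipping node could avoid downloading?
# Sizes assume standard P2PKH inputs/outputs and are approximations.

SCRIPTSIG = 107   # ~72-byte DER signature + 33-byte pubkey + push/length bytes
TXIN      = 148   # outpoint (36) + scriptSig (107 + 1) + sequence (4)
TXOUT     = 34    # value (8) + scriptPubKey (~25 + 1)
OVERHEAD  = 10    # version, locktime, input/output counts

def signature_fraction(n_in, n_out=2):
    total = OVERHEAD + n_in * TXIN + n_out * TXOUT
    return n_in * SCRIPTSIG / total

print(round(signature_fraction(1), 2))    # ~0.47 for a simple 1-in/2-out tx
print(round(signature_fraction(10), 2))   # ~0.69 for a consolidation-style tx

Which end of the 33%-75% range applies depends mostly on how many inputs the transactions have.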
legendary
Activity: 2674
Merit: 2965
Terminated.
Which is fine. I get that. But 'I really don't quite understand yet how this is going to work, exactly' is kind of hard to square with 'The solution to Scalability'. If one does not understand all the considerations, how is one able to meaningfully advocate it as any kind of solution?
It is a kind of "return punch" to those BIP101 and XT propaganda posters. Obviously this seems like a very good solution for the short term, one which is going to buy the developers enough time to work out other solutions. In addition to increasing capacity, this proposal has additional benefits which make it even better. These are the things that we need, i.e. better infrastructure, not just changing the block size to random numbers and hoping they are the right call.

-snip-

Further, does this open a new attack vector? If 'nodes' are going to stop validating transactions before forwarding them, then there is nothing to stop them from forwarding invalid transactions. What if an attacker were to inject many invalid transactions into the network? Being invalid, they would be essentially free to create in virtually unbounded quantities. If nodes are no longer validating before forwarding, this would result in 'invalid transaction storms', which could consume many times the bandwidth of the relatively small volume of actual valid traffic. If indeed this is a valid concern, then this would work exactly contrary to its stated goal of increasing scalability.
You should not be asking that here. You should ask it somewhere where it is very likely that the developers are going to see it and answer. Apparently it has been tested for 6 months, so I'm pretty sure they know about those potential attack vectors (or at least some of them). Besides, they aren't going to rush this. It should be available on testnet this month IIRC.
legendary
Activity: 1065
Merit: 1077
Gavin Andresen has posted a pretty good explanation that is not overly technical:

http://gavinandresen.ninja/segregated-witness-is-cool
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
It sounds like a way to efficiently compress the weight of blocks by removing something that's not needed when possible.

As merely one question, can we really consider the signature as something that's not needed?

I get that we're not _eliminating_ the sig, merely putting it in a separate (segregated) container, apart from the rest of the transaction. But any entity that wants to operate bitcoin in a trustless manner is going to need to be able to fully validate each transaction. Such entities will need the signature, right? Accordingly, such entities will need both components, so no data reduction for them, right?

Currently, relay nodes verify each transaction before forwarding it, do they not? If they are denied the signature, they can no longer perform this verification. This seems to me to be a drastically altered division of responsibilities. Sure, this may still work, but how do we know whether this is a good repartitioning of the problem?

Further, does this open a new attack vector? If 'nodes' are going to stop validating transactions before forwarding them, then there is nothing to stop them from forwarding invalid transactions. What if an attacker were to inject many invalid transactions into the network? Being invalid, they would be essentially free to create in virtually unbounded quantities. If nodes are no longer validating before forwarding, this would result in 'invalid transaction storms', which could consume many times the bandwidth of the relatively small volume of actual valid traffic. If indeed this is a valid concern, then this would work exactly contrary to its stated goal of increasing scalability.

Note I am not making any claims here, but I am asking questions, prompted from my incomplete understanding of this feature.
legendary
Activity: 1610
Merit: 1183
'I really don't quite understand yet how this is going to work, exactly' is kind of hard to square with 'The solution to Scalability'. If one does not understand all the considerations, how is one able to meaningfully advocate it as any kind of solution?

Like anything else in life, if it's difficult to figure out at first, then continuous exposure to the logic of the inner workings will eventually bridge the gap. That's how I learned about Bitcoin; not over 2 days worth of reading/thinking, but over 4 years. I'm still going.

I have been here for a while, and as a non-coder and pretty average-IQ person I have done a good job of understanding the various BIPs in a logical and understandable way, but this segregated witness one is probably too abstract for me to understand. It sounds like a way to efficiently compress the weight of blocks by removing something that's not needed when possible.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
'I really don't quite understand yet how this is going to work, exactly' is kind of hard to square with 'The solution to Scalability'. If one does not understand all the considerations, how is one able to meaningfully advocate it as any kind of solution?

Like anything else in life, if it's difficult to figure out at first, then continuous exposure to the logic of the inner workings will eventually bridge the gap. That's how I learned about Bitcoin; not over 2 days worth of reading/thinking, but over 4 years. I'm still going.

Fair enough. The difference I see is that when I entered the Bitcoin world, it was already a demonstrably working system. This SegWit thing, OTOH, which is merely said to have been tested, has in my mind the burden of proof. Is it an answer to the scalability issues? Maybe ONE answer - but seemingly a short-term minor fix even if all claims are validated - certainly not THE answer.

So by all means, let us investigate the efficacy. But in the meantime, let's not shout down those that are asking reasonable questions, and let us not argue for this on the mere appeal to authority. That is not how one sciences.
legendary
Activity: 3430
Merit: 3080
'I really don't quite understand yet how this is going to work, exactly' is kind of hard to square with 'The solution to Scalability'. If one does not understand all the considerations, how is one able to meaningfully advocate it as any kind of solution?

Like anything else in life, if it's difficult to figure out at first, then continuous exposure to the logic of the inner workings will eventually bridge the gap. That's how I learned about Bitcoin; not over 2 days worth of reading/thinking, but over 4 years. I'm still going.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
My worry here is that we seem to have a large cadre of proponents of this new feature who are not able to articulate answers to reasonable questions. I see a lot of demurring of the nature of "perhaps the devs can come by and explain it better". It makes me think that perhaps these proponents likewise don't understand the details of what is being proposed deeply enough to understand the implications of the questions being asked.
Maybe because we only encountered segwit 2 days ago? Maybe it is because Bitcoin is a learning process and these things take more time as people have their personal lives to attend to? I've already said in the thread that I'm also learning on the go. There's nothing wrong with not knowing the answers to the more complex questions.

Which is fine. I get that. But 'I really don't quite understand yet how this is going to work, exactly' is kind of hard to square with 'The solution to Scalability'. If one does not understand all the considerations, how is one able to meaningfully advocate it as any kind of solution?
legendary
Activity: 2576
Merit: 1087
Those things are way outside the scope of an "ELIJustBorn" ;)
sr. member
Activity: 252
Merit: 251
Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born

At the moment everything goes in the block.

With segwit, only the important stuff goes in the block. The other stuff goes into an 'attachment'.

This way more transactions can be put into a full block without increasing the blocksize limit.


I am not sure atm.
Am I correct that miners don't need the "attachment" and can mine blocks of transactions using just the "smaller" chain plus proofs and a UTXO set?
That would reduce bandwidth for them (well, a little... they still need to receive transactions), as they would only have to broadcast smaller blocks without the full tx data, which would surely help.

Same goes for nodes: lower storage requirements (at home I would maybe only store my own transactions, not everything), but who will store and share the full history, and why?
Is the full history even needed (dropping it completely would IMHO change Bitcoin's security model)?
legendary
Activity: 2576
Merit: 1087
Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born

At the moment everything goes in the block.

With segwit, only the important stuff goes in the block. The other stuff goes into an 'attachment'.

This way more transactions can be put into a full block without increasing the blocksize limit.
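
As a toy illustration of how the 'attachment' buys capacity: witness bytes get counted at a discount against the limit. The parameters below (non-witness bytes count 4x, witness bytes 1x, against a 4,000,000 weight cap) are the ones that eventually went into BIP141; the proposal discussed in this thread predates those exact numbers, and the transaction sizes are approximate.

Code:
# Toy sketch of the 'attachment' idea: witness (signature) bytes are counted
# at a discount, so more transactions fit under the same effective limit.
# Parameters are the eventual BIP141 values, used here only for illustration.

WEIGHT_CAP = 4_000_000

def tx_weight(base_bytes, witness_bytes):
    return 4 * base_bytes + witness_bytes  # non-witness counts 4x, witness 1x

# A simple 1-in/2-out segwit tx: ~120 base bytes plus ~107 witness bytes.
per_tx = tx_weight(120, 107)                    # = 587 weight units
print("txs per block with segwit   :", WEIGHT_CAP // per_tx)   # ~6800
print("txs per block under 1 MB cap:", 1_000_000 // 226)       # ~4400

Old nodes never see the witness bytes at all, which is why the stripped block still looks valid, and under 1 MB, to them.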
legendary
Activity: 2674
Merit: 2965
Terminated.
My worry here is that we seem to have a large cadre of proponents of this new feature who are not able to articulate answers to reasonable questions. I see a lot of demurring of the nature of "perhaps the devs can come by and explain it better". It makes me think that perhaps these proponents likewise don't understand the details of what is being proposed deeply enough to understand the implications of the questions being asked.
Maybe because we only encountered segwit 2 days ago? Maybe it is because Bitcoin is a learning process and these things take more time as people have their personal lives to attend to? I've already said in the thread that I'm also learning on the go. There's nothing wrong with not knowing the answers to the more complex questions.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
My worry here is that we seem to have a large cadre of proponents of this new feature who are not able to articulate answers to reasonable questions. I see a lot of demurring of the nature of "perhaps the devs can come by and explain it better". It makes me think that perhaps these proponents likewise don't understand the details of what is being proposed deeply enough to understand the implications of the questions being asked.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination

The biggest drawback of complexity is that it increases the risk of centralization: if only a few guys know how it works, and those guys are compromised, then the whole system is down. Currently Bitcoin is understandable by thousands of developers, but if you go to a two-chain implementation it will take decades to reach that level of understanding; in the meantime simpler solutions will gain more and more supporters.


Do you mean that a trustless system which depends on users' trust in the "brainpool"... isn't really a trustless system anymore? But isn't this already the case?

Let me borrow Peter Todd's famous words: if it is already so, then why make it worse? ;)


legendary
Activity: 994
Merit: 1035
OK, so the fully verifying nodes are not going to benefit from the new design; then how does this design improve the communication speed between fully verifying nodes (which is the bottleneck of the current design)?

Besides all the benefits already described, one benefit of SW is that full nodes could also skip transferring old signatures, which is an unnecessary task. (Existing full nodes already do not validate signatures in the far past.)
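
For context on "do not validate signatures in the far past": during initial sync, nodes skip the expensive script/signature checks for blocks buried below a point they consider settled (Bitcoin Core did something like this with checkpoints at the time, and later via assumed-valid blocks). A hypothetical sketch of the idea, not Core's actual code; the height is an assumption for illustration:

Code:
# Hypothetical sketch (not Bitcoin Core's actual code) of skipping signature
# validation for deep history during initial sync. Proof-of-work, merkle
# roots and spend rules are still checked for every block; only the costly
# script/signature verification is skipped below the assumed-valid height.

ASSUMED_VALID_HEIGHT = 390_000  # assumption: a block height the operator trusts

def needs_script_checks(height: int) -> bool:
    return height > ASSUMED_VALID_HEIGHT

print(needs_script_checks(100_000))   # False: deep history, signatures skipped
print(needs_script_checks(400_000))   # True: recent block, fully verified

With segwit, a node doing this could in principle also skip downloading the old witness data entirely, which is the bandwidth saving the post refers to.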
legendary
Activity: 2338
Merit: 1124

The biggest drawback of complexity is that it increases the risk of centralization: if only a few guys know how it works, and those guys are compromised, then the whole system is down. Currently Bitcoin is understandable by thousands of developers, but if you go to a two-chain implementation it will take decades to reach that level of understanding; in the meantime simpler solutions will gain more and more supporters.


Do you mean that a trustless system which depends on users' trust in the "brainpool"... isn't really a trustless system anymore? But isn't this already the case?
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
The biggest drawback of complexity is that it increases the risk of centralization: if only a few guys know how it works, and those guys are compromised, then the whole system is down. Currently Bitcoin is understandable by thousands of developers, but if you go to a two-chain implementation it will take decades to reach that level of understanding; in the meantime simpler solutions will gain more and more supporters.

jonnyj, your avatar text has always read "Beyond Imagination". May I submit that you have gone too far. Come back!

Thanks, but that's Pieter's imagination to redesign Bitcoin:

"What if we could redesign Bitcoin from scratch? What if you're designing an altcoin, there's really no reason why you would want to do this in Bitcoin. This is actually something we did in sidechain alpha."

Quote
So far, I was talking hypothetically about the scheme presented so far, because the deployment would not be easy. All transaction data structures would have to be changed, which is a huge deployment friction. (...) This seemed like a hard problem. I personally dismissed this as a solution for a long time as something non-viable, until Luke-Jr discovered that it's possible to do this as a soft-fork.

Yes, a soft fork is a much better way to bring in a change: get it thoroughly tested, and maybe it becomes a hard fork once the majority of users have tested it. It also takes time to test. Bitcoin has been tested with live traffic for almost 7 years to reach today's maturity; any large change would also need that kind of testing over time.