
Topic: Segregated witness - The solution to Scalability (short term)? - page 13. (Read 23163 times)

legendary
Activity: 1260
Merit: 1002
so, core devs are now being racists? Tongue


(I don't get the sig being "segregated" off to some soft (alt?!) fork.. aren't sigs a very basic and important 'feature' for Bitcoin?)
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
segwit is indeed providing more capacity and scalability and thus is part of the puzzle in scaling bitcoin.  

As far as I can tell, the only component of the omnibus SegWit proposal that does anything about capacity or scalability is a simple increase of the block size to 4MB (I presume he means 4MiB). You can doublespeak this as "Discount the signature by 75% for block size" if you want, but that's really all it is.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
Some of us are suffering from a sort of whiplash... we've been told (by some factions and their hangers-on) for months that raising max block size even to 2MB is highly dangerous for decentralization. But now, completely reorganizing some of the basic functions of the protocol, with a (somewhat unnecessary) requirement that there be no hard fork... has led us to the point where the same group with those concerns... is offering a fairly drastic solution that effectively raises the requirements for fully validating nodes to a 4MB(or 2?) max equivalent.

It's weirder than that. The 'drastic change' (i.e. moving the signatures to a separate data structure) does absolutely nothing to address scalability for fully validating nodes. To fully validate, such nodes need all the block data and all the signature data. No reduction there. It merely reduces demands on _non-validating_ nodes, by a factor of 1.8x or so.

What the entire SegWit proposal does to address scalability at fully validating nodes is not the segregation, but rather a simple _increase_in_the_block_size_. In Wuille-speak, this is represented as "Discount witness data by 75% for block size Or: block limit to 4MB, but only for witness".
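
To put rough numbers on that reading, here is a quick Python sketch of the discount arithmetic (the 90/80-byte figures are Gavin's illustrative ones quoted later in the thread; the function name and accounting are one plausible reading of the slide, not client code):
Code:
BASE_LIMIT = 1_000_000  # the existing 1 MB block limit, in bytes

def virtual_size(base_bytes, witness_bytes):
    """Witness bytes count only 1/4 as much as base bytes toward the limit."""
    return base_bytes + witness_bytes / 4

# a simple 1-in 1-out tx: ~90 bytes of tx data plus ~80 bytes of signature
full_size = 90 + 80               # 170 bytes: what a fully validating node stores
counted = virtual_size(90, 80)    # 110.0: what counts against the 1 MB limit

print(int(BASE_LIMIT / counted))               # ~9090 such txs per "1 MB" block
print(int(BASE_LIMIT / counted) * full_size)   # ~1.55 MB of data actually stored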

At least as far as I can tell.

Bait & switch?

standard disclaimer: I have an incomplete view of SegWit at this time.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
Exactly. If a solution is not understandable to users with average IT expertise, then it will never be understandable to anyone with even less IT knowledge. And typically the owners of large mining farms and exchanges do not have time to do that learning, so they tend to select the solution that they can understand, or to listen to people they like. This turns the decision making into politics, and whoever is good at lobbying and PR will push their changes through. That is not what people would like to see in bitcoin. So the knowledge gap between different participants means that you really can't reach a wide consensus on a radical or complex solution; XT's failure already proved that.
Understanding can be of different levels: conceptual, algorithmic, implementational... I bet most people don't quite grasp how Bitcoin's Script stack machine is implemented, though that doesn't prevent them from utilising it, as long as they understand it at least conceptually. What's enough for most people is that a particular component has been thoroughly peer-reviewed to establish that it's safe to use.

Indeed, during the early days of bitcoin, developers had much more freedom to do whatever they wanted, partly because no one cared about it, and partly because there were no major interested stakeholders, given its low value.

But now the situation is different: the network has attracted so much venture capital and so many investors, and these guys all have their own agendas, so the political landscape has changed. A good example is kncminer: they took the crowdfunding money, realized their products, and secretly started running their own mining operation.

At this stage, posting on a forum or reddit or checking some code into git does not make a lot of sense, because the decision-making power is not in the hands of developers, but in the hands of large mining pools, exchanges, and payment processors. If devs present a complex solution which those large players do not understand thoroughly, they will just ignore it (they have to protect their million-dollar investments as best they can). They could just keep running the old client, and build their own clearing and settlement channels to avoid the scaling problem altogether.

Imagine that when the blocks are full and each transaction costs a lot to clear, only large service providers will be able to use the blockchain to clear with their business partners. Users will find that using web wallet services still costs just a few cents and clears instantly, while using the core client costs $100 and may only confirm after a day, so they will definitely move to blockchain.info or a similar web wallet instead.

You see, this is also a solution: since the risk on an individual service provider is much smaller than the risk to the whole network, it can be accepted. And this solution is much easier for every investor to understand than that Segregated Witness complication. In fact, most people are still very used to centralized service providers, so they would accept a locally centralized solution easily.

The best scenario would be that all the large players out there have deep IT expertise and can easily grasp the pros and cons of those new changes, but in my experience that is not the case. Rich people have a totally different set of criteria for decision making.
legendary
Activity: 994
Merit: 1035
so segwit WILL NOT resolve scaling.. because upping the limit is just the standard thing to do and not a special feature segwit is offering.

Which developer is claiming it will "resolve" scaling? segwit is indeed providing more capacity and scalability and thus is part of the puzzle in scaling bitcoin.  

You are ignoring the nuanced benefits SW provides (vs simply increasing the block limit) that allow for better scalability in the future:

- one benefit with SW is that full nodes could skip transferring old signatures, which is an unnecessary task. (Existing full nodes already do not validate signatures in the far past, but still have the burden of transferring them.)

- resolves Tx malleability, which is an important step that needs to be accomplished to roll out LN. Yes, there are numerous ways to fix tx malleability, but this is a simple and elegant one (see the sketch at the end of this post).

- Allows for lite nodes to have fraud proofs, where we add an extra layer of security to potentially compensate for the further centralization of full nodes caused by partially increasing the block limit

You appear to be insinuating that we should just take the simpler approach and increase the block limit... which is something the core devs are suggesting we do in addition to segwit, when needed. Why don't they simply increase the block limit? Because of the benefits cited above, which increasing the block limit doesn't provide.
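
To make the malleability point concrete, here is a toy Python sketch of why hashing the txid without the signature kills third-party malleability. The serialization is deliberately fake; real txids are double-SHA256 over a specific binary format:
Code:
import hashlib

def txid(serialized):
    # double SHA-256, as Bitcoin uses for transaction ids
    return hashlib.sha256(hashlib.sha256(serialized).digest()).hexdigest()

core = b"version|inputs(outpoints)|outputs|locktime"  # everything but signatures
sig_a = b"<sig encoding A>"  # a valid signature
sig_b = b"<sig encoding B>"  # the same signature, re-encoded by a third party

# legacy: signatures sit inside the hashed data, so re-encoding one changes
# the txid even though the transaction is semantically identical:
print(txid(core + sig_a) == txid(core + sig_b))  # False: malleable

# segwit: the txid commits only to the core data, so every signature
# encoding yields the same txid (witnesses are committed separately):
print(txid(core) == txid(core))                  # True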
legendary
Activity: 4396
Merit: 4755
ok
full nodes (the real bitcoin-core) that mining POOL operators and true bitcoin fanboys run will keep needing to store both tx data and signatures..
thus to them changing Block=1mb into blockA=0.25mb blockB=0.75mb makes no difference. it's still 1mb of bloat per blocktime..
thus to them changing Block=4mb into blockA=1mb blockB=3mb makes no difference. it's still 4mb of bloat per blocktime..
you can paint certain data any colour.. it doesn't make it invisible to full nodes
you can put certain data into different drawers.. it doesn't make the cabinet any lighter

secondly, miners (not pool operators) don't need the full blockchain.. unscrew a mining rig and you will see no 60gb hard drive.. so yeah, miners do not care; they know how to grab what they need to do the job, and how the data is saved means nothing to them..

thirdly, lite users. anyone can easily code a lite client right now (without protocol changes) that reads the blockchain and simply does not save the signature part of the json data to file, so they don't even need anything new to do this.. and in actual fact anyone not wanting to download bitcoin core definitely ain't going to want 20gb of lite segwit blockchain either... it's an "all or nothing" game.. not something in the middle.

all i can see is that it's like talking to a 5 year old
kid (lite): "mum, there's peas (sig) on my plate. i just want the meat (tx), i don't want the peas (sig)"
mom (full node): "ok, here is a bigger plate. let me put everything on it.. and now move the peas to the side. now shut up and grab your meat in your lite hands and ignore the peas"
kid: "mom, there are still peas on the plate. every day you are still going to cook (store) both meat and peas, and all you are doing is putting it on a bigger plate and telling me i can just take the meat. you're not helping yourself, because you're still making peas. yeah, i know i will never eat (store) peas, but you know you can't take the peas off the plate, because all the other moms will tell you it's not a healthy (verified) meal. yes, i can just grab the meat and eat it from my light hands separately, but i could have done that anyway.. just putting it on a bigger plate means nothing. if you think it means you can now cook 10x more meat, you have to realise that you still end up cooking more peas as well.. if there is more meat there's more peas, simple fact. you have not removed the need for moms to cook peas, nor have you solved me needing to grab the meat off the main plate, as i could always do that. even if you tell me that it's on 2 plates and i only see the plate with the meat on it, you have still cooked meat and peas"


so segwit WILL NOT resolve scaling.. because upping the limit is just the standard thing to do and not a special feature segwit is offering. the meat-and-peas ratio will still be there: mining will still produce meat-and-peas data bloat for true nodes. you're just increasing the meat and peas, which is no different than just making a larger limit..

using gavin's example
Quote
Well, once all the details are worked out, and the soft or hard fork is past, and a significant fraction of transactions are spending segregated witness-locked outputs… more transactions will fit into the 1 megabyte hard limit. For example, the simplest possible one-input, one-output segregated witness transaction would be about 90 bytes of transaction data plus 80 or so bytes of signature– only those 90 bytes need to squeeze into the one megabyte block, instead of 170 bytes. More complicated multi-signature transactions save even more. So once everybody has moved their coins to segregated witness-locked outputs and all transactions are using segregated witness, two or three times as many transactions would squeeze into the one megabyte block limit.
wrong

bitcoin-core users will still have 170 bytes per tx.. whether you want to colour 90 bytes green and colour 80 bytes red, it's still 170 bytes saved to full nodes' hard drives
trying to con people into thinking that making a plate 4 times bigger and saying "oh look, you can fit 8x more green bytes" is just wrong.. full node blocks will still hold the same 170-byte total. all that is happening is splitting the chain into two and branding the green chain as "bitcoin" and the red chain as "please don't look"
but full nodes will still be holding both chains, and thus the total data a full node stores is still 170 bytes for a basic tx...

so take a 2014 simple tx of 170 bytes. that's 5882 tx a block
so just up the block limit to 4mb. 23529 tx a block

now segwit
simple tx of A=90 B=80: full node storage is still 170 bytes = 23529 tx per 4mb block. but segwit lite clients' storage is 2.117mb for a 23529-tx segwit block
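
(For reference, a quick Python check of those numbers, using the same illustrative 90 + 80 byte transaction:)
Code:
TX_BASE, TX_SIG = 90, 80
TX_TOTAL = TX_BASE + TX_SIG     # 170 bytes: what a full node stores per tx

print(1_000_000 // TX_TOTAL)    # 5882 txs in a 1mb block
print(4_000_000 // TX_TOTAL)    # 23529 txs in a 4mb block
print(23529 * TX_BASE / 1e6)    # ~2.117mb if a client keeps only the tx part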

lite clients could have 90 bytes per tx. but their chain is not the real chain and won't help network security, nor will it help lite users that don't want any bloat
lite clients won't be part of the network security, and so this is not a solution to help real network-supporting users (bitcoin core); it's not helping lite users either

lite clients can already have 90 bytes just by looking at a full tx and ignoring the json strings they don't need when saving files.
i've been doing it for years now, as my lite client only grabs tx data of addresses the client holds, and just saves the txids, vins, vouts and values.. lite clients won't want to store 20gb of useless history that doesn't help the network.. they either want full history to protect the network, which they can verify, or just the data that applies to them specifically to sign transactions, which is far far less than 20gb

having 20gb of non-secure tx data is not a lite client. it's a medium-weight client. and to be honest, i'll say it again: anyone can make their own medium-weight client right now, only saving part of the json data to file, without doing anything special to bitcoin's protocol.
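
That medium-weight approach is easy to sketch. Here is a toy Python version that takes a decoded transaction shaped like Bitcoin Core's decoderawtransaction JSON and saves only the txid, vins, vouts and values; the field filtering and file layout are made up for illustration:
Code:
import json

def strip_signatures(decoded):
    # keep only what a lightweight wallet needs; scriptSig is simply not saved
    return {
        "txid": decoded["txid"],
        "vin":  [{"txid": i.get("txid"), "vout": i.get("vout")}
                 for i in decoded["vin"]],
        "vout": [{"n": o["n"], "value": o["value"]}
                 for o in decoded["vout"]],
    }

decoded_tx = {  # stand-in for a full node's RPC output
    "txid": "ab..cd",
    "vin":  [{"txid": "00..11", "vout": 0, "scriptSig": {"hex": "47.."}}],
    "vout": [{"n": 0, "value": 0.5, "scriptPubKey": {"hex": "76a9.."}}],
}
with open("wallet_history.json", "a") as f:
    f.write(json.dumps(strip_signatures(decoded_tx)) + "\n")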

so now onto the malleability..
once a tx is confirmed.. it's locked into history.. and then when segwit grabs just a portion of the block data.. of course it's malle-proof.. BECAUSE IT'S ALREADY CONFIRMED!
which is the same as anyone grabbing tx data on confirmed transactions: they get the same malle-proofness..
now onto bandwidth
segwit lite clients will not just relay 90 bytes of unconfirmed txs, as mining pools need the whole thing and each relay needs to check it.. so segwit will still transmit the full 170 bytes. full nodes will still store/transmit 170 bytes too, and thus it's not helping the bandwidth of the network.

anyone right now can create a client that only grabs the txid, vins, vouts and values of a user's relevant addresses.. right now, without any soft or hard forks..
i still can't see why people think segwit is so special..

summary
i still cannot rationalise why bitcoin-core needs to split the blockchain just for useless lite clients.. who are not going to help the network, nor want any bloat
lite clients can more effectively grab the json data, put the json strings into individual variables.. and then just not save the signature variable to file..
this to me seems like a dysfunctional attempt at a solution
far easier to just keep the chain as 1 chain, just put code in to raise the limit to 4mb, and solve the malleability with code that ignores a relayed tx variant if the same vin has already been relayed by another tx saved in the mempool, thus stopping people using the same vin until it's confirmed (goodbye doublespend). something like the sketch below:
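
A rough Python sketch of that rule, with toy data structures. (Note it resembles the "first seen" relay policy nodes already apply, and it only helps per node: different nodes may still see different variants first.)
Code:
mempool_spends = {}  # (funding txid, vout index) -> txid first seen spending it

def accept_to_mempool(tx):
    outpoints = [(i["txid"], i["vout"]) for i in tx["vin"]]
    if any(op in mempool_spends for op in outpoints):
        return False                 # a variant or doublespend: ignore it
    for op in outpoints:
        mempool_spends[op] = tx["txid"]
    return True

tx_a = {"txid": "aaa", "vin": [{"txid": "fund", "vout": 0}]}
tx_b = {"txid": "bbb", "vin": [{"txid": "fund", "vout": 0}]}  # same vin, new txid
print(accept_to_mempool(tx_a))  # True
print(accept_to_mempool(tx_b))  # False: that vin is already spent in the mempool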
hero member
Activity: 798
Merit: 1000
Move On !!!!!!
No solution can be final in this. There is always going to be a need for more upgrades. I don't see how that could possibly be an argument.

And that's OK! What we need now is to buy some time, get something done, and observe how this solution impacts the whole network.

Also, a good message needs to be sent out to the whole community that something is being done towards a long-term solution to this problem!
legendary
Activity: 2674
Merit: 2965
Terminated.
No solution can be final in this. There is always going to be a need for more upgrades. I don't see how that could possibly be an argument.
staff
Activity: 4256
Merit: 1208
I support freedom of choice
but this will not solve the problem completely; it only delays the point when we will need to increase the block size again
https://www.reddit.com/r/bitcoinxt/comments/3w2w17/segregated_witness_is_cool_gavin_andresen/#cxt01bu
legendary
Activity: 3248
Merit: 1070
Lauda, explain Segregated Witness to me like I'm five.
And to me as if I'm just born

At the moment everything goes in the block.

With segwit, only the important stuff goes in the block. The other stuff goes into an 'attachment'.

This way more transactions can be put into a full block without increasing the blocksize limit.


but this will not solve the problem completely; it only delays the point when we will need to increase the block size again

in this case it seems that we have a margin of 3 more megabytes; it will effectively be like having a 4MB block, but when we need 5MB we will be forced to increase the block size anyway

therefore this is only a temporary solution... one problem at a time, I understand...
legendary
Activity: 1386
Merit: 1000
English <-> Portuguese translations
Quote
and we will continue to be without the base data that garzik's 102 would provide
And also without the 1+ hour long block validations that a simple "just increase the constant to 2MB" enables. Smiley


But BIP102 is still a hard fork, with all the stress of needing everybody to upgrade their Bitcoin servers ASAP, no?

And sorry, but why would blocks need more than 1 hour to validate? Is this segregated witness proposal that bad?
legendary
Activity: 1162
Merit: 1004
It's sort of like a way to efficiently compress the weight of blocks by removing something that's not needed when possible.

As merely one question, can we really consider the signature as something that's not needed?

I get that we're not _eliminating_ the sig, merely putting it in a separate (segregated) container, apart from the rest of the transaction. But any entity that wants to operate bitcoin in a trustless manner is going to need to be able to fully validate each transaction. Such entities will need the signature, right? Accordingly, such entities will need both components, so no data reduction for them, right?

Currently, relay nodes verify each transaction before forwarding it, do they not? If they are denied the signature, they can no longer perform this verification. This seems to me to be a drastically altered division of responsibilities. Sure, this may still work, but how do we know whether this is a good repartitioning of the problem?

Further, does this open a new attack vector? If 'nodes' are going to stop validating transactions before forwarding them, then there is nothing to stop them from forwarding invalid transactions. What if an attacker were to inject many invalid transactions into the network? Being invalid, they would be essentially free to create in virtually unbounded quantities. If nodes are no longer validating before forwarding, this would result in 'invalid transaction storms', which could consume many times the bandwidth of the relatively small amount of actual valid traffic. If indeed this is a valid concern, then this would work exactly contrary to its stated goal of increasing scalability.

Note I am not making any claims here, but I am asking questions, prompted from my incomplete understanding of this feature.

Some of us are suffering from a sort of whiplash... we've been told (by some factions and their hangers-on) for months that raising max block size even to 2MB is highly dangerous for decentralization. But now, completely reorganizing some of the basic functions of the protocol, with a (somewhat unnecessary) requirement that there be no hard fork... has led us to the point where the same group with those concerns... is offering a fairly drastic solution that effectively raises the requirements for fully validating nodes to a 4MB(or 2?) max equivalent.


Yes, this is a very interesting scaling strategy. Quadrupling the cap to get double the throughput is okay. Quadrupling the cap to get quadruple the throughput is not okay.
staff
Activity: 4242
Merit: 8672
My fear is that we will be into 2017 before anything is deployed,
I don't think you have to worry about that.

Quote
and we will continue to be without the base data that garzik's 102 would provide
And also without the 1+ hour long block validations that a simple "just increase the constant to 2MB" enables. Smiley
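
(For readers wondering where hour-long validation comes from: under the legacy signature-hash algorithm, checking each input rehashes roughly the whole transaction, so one block-filling transaction does quadratic work. A back-of-the-envelope Python sketch, with made-up constants:)
Code:
BYTES_PER_INPUT = 41  # rough minimum size of one input

for block_size in (1_000_000, 2_000_000):
    n_inputs = block_size // BYTES_PER_INPUT  # one huge tx filling the block
    hashed = block_size * n_inputs            # ~whole tx hashed once per input
    print(block_size, round(hashed / 1e9, 1), "GB hashed")
# 1000000 24.4 GB hashed
# 2000000 97.6 GB hashed: doubling the size quadruples the worst case
Part of the segwit proposal is a new signature-hash algorithm that scales linearly instead.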
legendary
Activity: 1386
Merit: 1009
Exactly. If a solution is not understandable to users with average IT expertise, then it will never be understandable to anyone with even less IT knowledge. And typically the owners of large mining farms and exchanges do not have time to do that learning, so they tend to select the solution that they can understand, or to listen to people they like. This turns the decision making into politics, and whoever is good at lobbying and PR will push their changes through. That is not what people would like to see in bitcoin. So the knowledge gap between different participants means that you really can't reach a wide consensus on a radical or complex solution; XT's failure already proved that.
Understanding can be of different levels: conceptual, algorithmic, implementational... I bet most people don't quite grasp how Bitcoin's Script stack machine is implemented, though that doesn't prevent them from utilising it, as long as they understand it at least conceptually. What's enough for most people is that a particular component has been thoroughly peer-reviewed to establish that it's safe to use.

I still don't really understand how that can be implemented as a soft fork. A soft fork means backward compatible: when the upgraded SW clients broadcast new blocks throughout the network, how can the original core client accept such a strange block which does not contain signature data?
There are two modifications to be made for it to be soft-fork compatible:
1) SW outputs are made anyone-can-spend, so that for older clients it won't matter how they are spent; to them the scriptSig will be empty.
2) The merkle root of the SW data hashes is stored in the coinbase.
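
A simplified Python sketch of modification 2), committing to all witness data via a merkle root placed in the coinbase (real segwit commits wtxids inside a particular coinbase output; the duplicate-last-leaf tree below is the usual Bitcoin merkle shape):
Code:
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    layer = [dsha256(l) for l in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate the last hash, Bitcoin-style
        layer = [dsha256(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

witnesses = [b"witness of tx 1", b"witness of tx 2", b"witness of tx 3"]
commitment = merkle_root(witnesses)
print(commitment.hex())  # old clients see just another output and ignore it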
sr. member
Activity: 392
Merit: 250
It's sort of like a way to efficiently compress the weight of blocks by removing something that's not needed when possible.

As merely one question, can we really consider the signature as something that's not needed?

I get that we're not _eliminating_ the sig, merely putting it in a separate (segregated) container, apart from the rest of the transaction. But any entity that wants to operate bitcoin in a trustless manner is going to need to be able to fully validate each transaction. Such entities will need the signature, right? Accordingly, such entities will need both components, so no data reduction for them, right?

Currently, relay nodes verify each transaction before forwarding it, do they not? If they are denied the signature, they can no longer perform this verification. This seems to me to be a drastically altered division of responsibilities. Sure, this may still work, but how do we know whether this is a good repartitioning of the problem?

Further, does this open a new attack vector? If 'nodes' are going to stop validating transactions before forwarding them, then there is nothing to stop them from forwarding invalid transactions. What if an attacker were to inject many invalid transactions into the network? Being invalid, they would be essentially free to create in virtually unbounded quantities. If nodes are no longer validating before forwarding, this would result in 'invalid transaction storms', which could consume many times the bandwidth of the relatively small amount of actual valid traffic. If indeed this is a valid concern, then this would work exactly contrary to its stated goal of increasing scalability.

Note I am not making any claims here, but I am asking questions, prompted from my incomplete understanding of this feature.

Some of us are suffering from a sort of whiplash... we've been told (by some factions and their hangers-on) for months that raising max block size even to 2MB is highly dangerous for decentralization. But now, completely reorganizing some of the basic functions of the protocol, with a (somewhat unnecessary) requirement that there be no hard fork... has led us to the point where the same group with those concerns... is offering a fairly drastic solution that effectively raises the requirements for fully validating nodes to a 4MB(or 2?) max equivalent.

SegWit is widely agreed to be a net positive to incorporate into Bitcoin (especially if it can kill the malleability problems), but the burden of vetting and testing will be much more involved than for a one-line patch like BIP102. My fear is that we will be into 2017 before anything is deployed, and we will continue to be without the base data that garzik's 102 would provide. And the precedent that "hard forks r bad n scary" would still be firmly in place, and would be rolled out to stifle any possibility of future main-chain capacity growth.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
My worry here is that we seem to have a large cadre of proponents of this new feature who are not able to articulate answers to reasonable questions. I see a lot of demurring of the nature of "perhaps the devs can come by and explain it better". It makes me think that perhaps these proponents likewise don't understand the details of what is being proposed deeply enough to understand its implications for the questions being asked.

Exactly. If a solution is not understandable to users with average IT expertise, then it will never be understandable to anyone with even less IT knowledge. And typically the owners of large mining farms and exchanges do not have time to do that learning, so they tend to select the solution that they can understand, or to listen to people they like. This turns the decision making into politics, and whoever is good at lobbying and PR will push their changes through. That is not what people would like to see in bitcoin. So the knowledge gap between different participants means that you really can't reach a wide consensus on a radical or complex solution; XT's failure already proved that.

I still don't really understand how that can be implemented as a soft fork. A soft fork means backward compatible: when the upgraded SW clients broadcast new blocks throughout the network, how can the original core client accept such a strange block which does not contain signature data?
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
Maybe ONE answer - but seemingly a short-term minor fix even if all claims are validated - certainly not THE answer.

I agree with you here also, less caps-lock and I'll be convinced you're not shouting at anyone Smiley

Thanks. I wasn't meaning to shout, so much as to ensure my emphasis (on two words in a single post? c'mon) survived any quoting.
legendary
Activity: 3430
Merit: 3080
The difference I see is that when I entered the Bitcoin world, it was already a demonstrably working system. This SegWit thing, OTOH, which is merely said to have been tested, has in my mind the burden of proof. Is it an answer to the scalability issues? Maybe ONE answer - but seemingly a short-term minor fix even if all claims are validated - certainly not THE answer.

I agree with you about the scaling issues. It's not going to give us the kind of scaling needed to serve billions on its own. But the reasoning from Pieter Wuille is that it lays important groundwork for scaling up to billions of users, while providing ~3.5x the transaction rate for the immediate term.

So by all means, let us investigate the efficacy. But in the meantime, let's not shout down those that are asking reasonable questions, and let us not argue for this on the mere appeal to authority. That is not how one sciences.

I agree with you here also, less caps-lock and I'll be convinced you're not shouting at anyone Smiley
legendary
Activity: 2674
Merit: 2965
Terminated.
Lauda, can you update the OP with the detailed explanation of the idea?
I'm sometimes just too lazy to search for it to remember exactly how this idea would be implemented.
I've added Gavin's explanation with a link to it. Hopefully that helps everyone. I've also updated the thread title by adding a question mark; hopefully now it fits better.
legendary
Activity: 1386
Merit: 1000
English <-> Portuguese translations
Lauda, can you update the OP with the detailed explanation of the idea?
I'm sometimes just too lazy to search for it to remember exactly how this idea would be implemented.