
Topic: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF

legendary
Activity: 1176
Merit: 1134
The modified hash only applies to signature operations initiated from witness data, so signature operations from the base block will continue to require lower limits.

The way it is worded makes it sound fantastic...

However, I couldn't find info about the witness data's immunity from these attacks. Are you saying that signature attacks are not possible inside the witness data?

Clearly, if signatures are moved from location A to location B, then you can say signature attacks are not possible in location A. OK, that is good. But what about location B?

Are sigs in the witness data immune from malicious txs with lots of sigs? It is strange this isn't specifically addressed. Maybe it's just me and my low reading comprehension. But all the text on that segwit marketing page seems quite one sided and of the form:

###
things are removed from the base block so now there are no problems with the base block, without addressing whether the problems that used to be in the base block are actually solved, or just moved into the witness data.
###

We could easily say SPV solves all signature attack problems. Just make it so your node doesn't do much at all and it avoids all these pesky problems, but the important issue to many people is the effect on full nodes. And by full, I mean a node that doesn't prune, relays, validates signatures and enables other nodes to do the bootstrapping.

Without that, doesn't bitcoin's security model change to PoS level? I know how much you hate PoS

James
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
Absent the Schnorr sigs enabled by segwit, 2mb blocks would require "other general tweaks" in the form of restricting old style quadratic scaling sigs to some magic number maximum.

The initial deployment of SegWit will not enable Schnorr signatures, will it? Won't they require a hard fork anyway?

Even with Schnorr signatures, the miners would still have to accept old-style multisigs produced by old clients, right?  Then an attacker could still generate those hard-to-validate blocks, no?

As a temporary fix, a soft fork can be deployed limiting the max number of signatures.  Even a low limit like 100 is no restriction, only a small annoyance for the few users who would want to use more.  It would be a good use of an "arbitrary numerical limit", like the 1 MB limit was when it was introduced.

But there is no logical reason why signature validation should take quadratic time.  That is a bug in the protocol that should be fixed by changing the algorithm -- with a hard fork if need be.

(By the way,  [for a couple of hours today](https://statoshi.info/dashboard/db/transactions?from=1458258715516&to=1458259562505) there was an apparent "stress test" where each transaction was 10 kB long (rather than the usual 0.5 kB).  Was the "tester" trying to generate such troll blocks?)

Good questions.  Let's try reading the fantastic manual:

https://bitcoincore.org/en/2016/01/26/segwit-benefits/#linear-scaling-of-sighash-operations

Quote
Linear scaling of sighash operations

A major problem with simple approaches to increasing the Bitcoin blocksize is that for certain transactions, signature-hashing scales quadratically rather than linearly.

Linear versus quadratic

In essence, doubling the size of a transaction can double both the number of signature operations, and the amount of data that has to be hashed for each of those signatures to be verified. This has been seen in the wild, where an individual block required 25 seconds to validate, and maliciously designed transactions could take over 3 minutes.

Segwit resolves this by changing the calculation of the transaction hash for signatures so that each byte of a transaction only needs to be hashed at most twice. This provides the same functionality more efficiently, so that large transactions can still be generated without running into problems due to signature hashing, even if they are generated maliciously or much larger blocks (and therefore larger transactions) are supported.
Who benefits?

Removing the quadratic scaling of hashed data for verifying signatures makes increasing the block size safer. Doing that without also limiting transaction sizes allows Bitcoin to continue to support payments that go to or come from large groups, such as payments of mining rewards or crowdfunding services.

The modified hash only applies to signature operations initiated from witness data, so signature operations from the base block will continue to require lower limits.
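
To make the linear-versus-quadratic contrast concrete, here's a back-of-the-envelope sketch (a toy cost model, not Bitcoin's actual serialization; the per-input preimage figure is illustrative only):

```python
def legacy_sighash_bytes(tx_size: int, num_inputs: int) -> int:
    # Legacy SIGHASH_ALL: verifying each input's signature re-hashes
    # (roughly) the whole transaction, so the total bytes hashed grow
    # as size * inputs. Doubling a tx can double both factors.
    return tx_size * num_inputs

def segwit_sighash_bytes(tx_size: int, num_inputs: int) -> int:
    # BIP143 hashes the shared parts (prevouts, sequences, outputs)
    # once, then a small fixed-size preimage per input, so each byte
    # is hashed at most a constant number of times.
    PER_INPUT_PREIMAGE = 200   # rough figure, for illustration only
    return tx_size + num_inputs * PER_INPUT_PREIMAGE

# A 1 MB transaction with 5,000 inputs:
print(legacy_sighash_bytes(1_000_000, 5_000))  # 5000000000 bytes, ~5 GB
print(segwit_sighash_bytes(1_000_000, 5_000))  # 2000000 bytes, ~2 MB
```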
hero member
Activity: 910
Merit: 1003
Absent the Schnorr sigs enabled by segwit, 2mb blocks would require "other general tweaks" in the form of restricting old style quadratic scaling sigs to some magic number maximum.

The initial deployment of SegWit will not enable Schnorr signatures, will it? Won't they require a hard fork anyway?

Even with Schnorr signatures, the miners would still have to accept old-style multisigs produced by old clients, right?  Then an attacker could still generate those hard-to-validate blocks, no?

As a temporary fix, a soft fork can be deployed limiting the max number of signatures.  Even a low limit like 100 is no restriction, only a small annoyance for the few users who would want to use more.  It would be a good use of an "arbitrary numerical limit", like the 1 MB limit was when it was introduced.

But there is no logical reason why signature validation should take quadratic time.  That is a bug in the protocol that should be fixed by changing the algorithm -- with a hard fork if need be.

(By the way,  [for a couple of hours today](https://statoshi.info/dashboard/db/transactions?from=1458258715516&to=1458259562505) there was an apparent "stress test" where each transaction was 10 kB long (rather than the usual 0.5 kB).  Was the "tester" trying to generate such troll blocks?)
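
A minimal sketch of what such a soft-forked limit could look like (a hypothetical rule with a toy opcode scan, not any actual proposal's code; real counting happens while parsing scripts):

```python
MAX_SIGOPS_PER_TX = 100   # the suggested "arbitrary numerical limit"

OP_CHECKSIG = 0xAC
OP_CHECKMULTISIG = 0xAE

def count_sigops(script: bytes) -> int:
    # Toy count: 1 per CHECKSIG, 20 (the worst case) per CHECKMULTISIG,
    # mirroring how legacy sigop counting charges bare multisig.
    return sum(1 if op == OP_CHECKSIG else 20 if op == OP_CHECKMULTISIG else 0
               for op in script)

def passes_sigop_limit(scripts) -> bool:
    # Hypothetical soft-fork rule: reject transactions whose scripts
    # contain an absurd number of signature checks. Validity only
    # narrows, so old nodes still accept whatever new nodes accept.
    return sum(count_sigops(s) for s in scripts) <= MAX_SIGOPS_PER_TX
```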
hero member
Activity: 910
Merit: 1003
Also "Soft forks are preferred because they are backwards compatible. In this case, the backwards compatibility is that if you run non-upgraded software, you can continue as you were and have no ill effect. You just won't be able to take advantage of the new functionality provided by segwit."

That is not quite true.  After a soft fork, old clients may issue transactions that are invalid by the new rules, and not understand why they are never confirmed.  A soft fork can also introduce new ways of storing transactions in the blockchain, implicitly or explicitly, that are invisible to old clients, as in this example.   In this case, the old clients will not see coins that new clients send them. 

Quote
90% could be against segwit, yet they are the ones excluded from a fully verified blockchain.

Yes. 

More precisely, if 51% of the miners decide to do a soft fork, the soft fork happens -- even if no one was told about it in advance -- and all other miners and clients have to accept it. 

Quote
"if this were done as a hard fork, then everyone would be required to upgrade in order to deploy segwit and then that would essentially force everyone to use segwit."

Not exactly.

With a hard fork, all users would have to be warned in advance, with explanation of what the change is and why it is a good idea. 

If there is not enough support from the miners, the hard fork does not happen and nothing changes.

If there is enough support from the miners to execute the hard fork, the users would have to be warned again to upgrade to a version that is at most K releases old by date D.  Hopefully, most everybody will convert in time, and then the few laggards will be unable to use their coins until they upgrade too.

However, if a substantial minority *of the miners* remains absolutely opposed to the changes, the coin will split into new-rule and old-rule coins.  Each user will see his own coins replicated in both branches, and will be able to use both independently.  Is freedom of choice such a bad thing?
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
Core could propose a 2mb hard limit be implemented first, as 2mb is also on their to do list? (nothing else. no other general tweaks)

Absent the Schnorr sigs enabled by segwit, 2mb blocks would require "other general tweaks" in the form of restricting old style quadratic scaling sigs to some magic number maximum.

Even Gavin concluded that was a Bad Idea, because otherwise we get obnoxiously constructed troll blocks that take a minute or longer to process.  And that means more empty blocks, because miners aren't going to stop mining while the troll blocks' sigops finish validating.

Please stop suggesting and advocating nonsense and misinformation about "The One Simple Trick To Scale Bitcoin That Core Doesn't Want You To Know."
hero member
Activity: 812
Merit: 1001
If/(when) segwit SF is abandoned/(postponed), a 2mb increase, in its simplest safe form, should be implemented. (no partisan additions)
Is that compromise?

That will buy time for a proper re-assessment for all in the Bitcoin space.

Abandon/postpone segwit sf? (How) do you think that could happen?


It could happen. (I particularly speak of a soft fork. segwit HF has different connotations)

How? oh, erm..

Core could pro-actively declare segwit a work in progress undergoing rigorous testing.
Due to this extended development time, Core could propose a 2mb hard limit be implemented first, as 2mb is also on their to do list? (nothing else. no other general tweaks)

Or maybe the test net will crash the night before launch, forcing segwit release to be abandoned at the last minute.

Or possibly users will become more aware of the implications of segwit SF, or fear segwit is not fully tested yet or needed, and rise up.

Could even be that segwit coding naturally runs into bugs, leading to delays, and again users lose patience and fork to classic.


So, I think it could happen in many ways.
How will possibly depend on core; the ball's in their court atm. Or users if they get restless, or the interaction between the two, or a tech failure.

However it should be abandoned as a SF.
"90% could be against segwit, yet THEY are the ones excluded from a fully verified blockchain." That is an attack.
(if 90% want segwit, then prove it. HF)
Why is core attacking bitcoin?



donator
Activity: 2772
Merit: 1019
If/(when) segwit SF is abandoned/(postponed), a 2mb increase, in its simplest safe form, should be implemented. (no partisan additions)
Is that compromise?

That will buy time for a proper re-assessment for all in the Bitcoin space.

Abandon/postpone segwit sf? (How) do you think that could happen?
hero member
Activity: 812
Merit: 1001
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering this one question I had about the malleability fix. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)
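
A minimal sketch of that "skip the signatures when hashing" idea (toy serialization, not Bitcoin's real wire format):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class TxIn:
    prev_txid: bytes     # 32-byte id of the output being spent
    prev_index: int
    script_sig: bytes    # signatures live here in legacy transactions

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def serialize(inputs, outputs) -> bytes:
    # Toy serialization, just enough to show the idea.
    blob = b""
    for i in inputs:
        blob += i.prev_txid + i.prev_index.to_bytes(4, "little") + i.script_sig
    for amount, script_pubkey in outputs:
        blob += amount.to_bytes(8, "little") + script_pubkey
    return blob

def normalized_txid(inputs, outputs) -> bytes:
    # Blank every scriptSig before hashing, so a third party who
    # re-encodes a signature can no longer change the transaction id.
    stripped = [TxIn(i.prev_txid, i.prev_index, b"") for i in inputs]
    return dsha256(serialize(stripped, outputs))
```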

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?



Yeah, both hackish (although possibly beautiful code) and the economic model, if I understand that correctly.

I don't think segwit could ever achieve HF consensus, in my opinion. However if a winning hard fork was achieved, I would respect that.
A soft fork is not right here, and could well be considered an attack.

Why not 2mb first, which is on every partisan roadmap. Then segwit maybe. maybe not.
(I am assuming 2mb is more easily coded than segwit, and not as complicated as segwit as was stated earlier. Although the ease of coding is only a small part of the reason segwit should not be introduced yet. certainly not introduced by core. a SF attack on nodes.)

I didn't mean "do segwit as a hardfork", I meant do a hf that achieves the same things (more capacity, malleability fix, bandwidth savings, prune signatures from storage,...) just more -- let's say -- directly. A package with something for everybody but nothing too bad for anybody to swallow. A compromise.

That's why I was asking whether the "change of economic model" (which would be missing from that package) was something core devs couldn't live without. So far I haven't seen this desirability in itself argued; it seemed to me this was understood by everyone as just a side-effect of soft-forking higher capacity.


Ok a compromise,

(but would that mean effectively starting to code from scratch? timewise, could that happen now, sensible as it may sound, as expectations of some sort of block size increase soon have been stoked.)


Either way, why should segwit SF be abandoned?
Indeed, because of the "change of economic model". But particularly through a SF.
segwit SF is an attack on bitcoin. More so than a segwit HF.

knightdk says "It was originally proposed as a hard fork, but someone (luke-jr I think) pointed out that it could be done as a soft fork."

luke-jr found a technical fix to enable the possibility of a segwit SF.

Also "Soft forks are preferred because they are backwards compatible. In this case, the backwards compatibility is that if you run non-upgraded software, you can continue as you were and have no ill effect. You just won't be able to take advantage of the new functionality provided by segwit."

(functionality, i knew i'd seen a desire argued somewhere)
If he didn't upgrade he won't be able to verify segwit txs.
Trust is now introduced to his blockchain.
That is an ill effect, and would "normally" require HF.
90% could be against segwit, yet they are the ones excluded from a fully verified blockchain.
And segwit will always be on the blockchain.
That goes against all the principles of bitcoin I thought I knew.


And "if this were done as a hard fork, then everyone would be required to upgrade in order to deploy segwit and then that would essentially force everyone to use segwit."

He cannot force everyone to use segwit through HF
(any more than XT could force anyone to upgrade and adopt)
Everyone would be required to upgrade, or not if they didn't want to.
If segwit was not wanted, he would lose the fork and segwit would be gone.
He cannot force the majority to do anything.


If/(when) segwit SF is abandoned/(postponed), a 2mb increase, in its simplest safe form, should be implemented. (no partisan additions)
Is that compromise?

That will buy time for a proper re-assessment for all in the Bitcoin space.



legendary
Activity: 1232
Merit: 1094
What do you mean? They are committed in the same block, at the same time, right?

Yes, but with a separate merkle tree with the root in the coinbase.
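
Roughly, per BIP141 (a simplified sketch; the commitment goes into an OP_RETURN output of the coinbase transaction):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(hashes):
    # Bitcoin-style merkle tree: pair up, duplicating the last entry
    # when a level has an odd count.
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def witness_commitment(wtxids, reserved=b"\x00" * 32):
    # The coinbase's own wtxid is defined as 32 zero bytes; the
    # witness merkle root is hashed together with a reserved value
    # taken from the coinbase's witness.
    root = merkle_root([b"\x00" * 32] + list(wtxids))
    return dsha256(root + reserved)
```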
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
That may be a good argument to phase out nLocktime in favor of CLTV.

Huh?

You do realise that CLTV actually checks the nLocktime (hence its name) so if you got rid of nLocktime then it wouldn't do anything at all?

Also scripts can exist "outside the blockchain" (signed but not broadcast which was the very point being made about nLocktime) so you can't rely upon at what block they appear to determine the rules at all.
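
A simplified sketch of the BIP65 check, to show that dependency (real evaluation happens inside the script interpreter):

```python
SEQUENCE_FINAL = 0xFFFFFFFF
LOCKTIME_THRESHOLD = 500_000_000  # below: block height; above: unix time

def cltv_passes(stack_top: int, tx_locktime: int, input_sequence: int) -> bool:
    # CHECKLOCKTIMEVERIFY succeeds only by comparing its argument
    # against the spending transaction's nLockTime, which is why the
    # opcode is meaningless without nLockTime.
    if stack_top < 0:
        return False
    # Both values must be the same kind of lock (height vs timestamp).
    if (stack_top < LOCKTIME_THRESHOLD) != (tx_locktime < LOCKTIME_THRESHOLD):
        return False
    if stack_top > tx_locktime:
        return False
    # A final sequence number disables nLockTime, so CLTV must fail.
    if input_sequence == SEQUENCE_FINAL:
        return False
    return True
```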
hero member
Activity: 910
Merit: 1003
The non-signed transactions are committed separately from the signatures. 

What do you mean? They are committed in the same block, at the same time, right?

Quote
Script versioning means that it is easier to change the script language. 

The position of a transaction in the blockchain should define which version of the rules is applicable to it (in particular, which version of the scripting language it uses).
hero member
Activity: 910
Merit: 1003
One example gmaxwell gives: all presigned nlocktime transactions would be broken. For users keeping these in storage they may well represent a lot of security. Gone... the moment a new version of the software no longer sees the transaction as being valid.

As far as I see it, if malleability can be fixed in such a way that older versions of the software still see immalleable transactions as valid transactions then, well…  do it.

In a soft fork, by definition, the new version of the software can reject transactions that the previous version considered OK.

For example, IIUC the soft-forked SegWit proposal implies redefining an op code that previously meant "no-op" to mean "check the signatures in the extension record" or something like that.  Thus, a transaction that used that opcode (for some bizarre reason of its own, possibly fraudulent) could be valid before SegWit was enabled, but become invalid after it.
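
A toy illustration of that narrowing (hypothetical opcode pick and condition, not the actual segwit deployment code):

```python
OP_NOP3 = 0xB2   # one of the reserved no-op opcodes (illustrative pick)

def old_node_script_ok(ops) -> bool:
    # Pre-fork rule: the opcode does nothing, so it can never fail a
    # script on its own.
    return True

def new_node_script_ok(ops, extra_condition_holds: bool) -> bool:
    # Post-fork rule: the same opcode now enforces an extra condition,
    # so some previously valid scripts become invalid. Validity only
    # narrows, never widens -- that is what makes it a soft fork.
    return all(op != OP_NOP3 or extra_condition_holds for op in ops)
```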

That may be a good argument to phase out nLocktime in favor of CLTV.  Once a transaction is in the blockchain, its position in it defines the rules by which it should be validated, which allows proper handling of old time locks.
legendary
Activity: 1232
Merit: 1094
One example gmaxwell gives: all presigned nlocktime transactions would be broken. For users keeping these in storage they may well represent a lot of security. Gone... the moment a new version of the software no longer sees the transaction as being valid.

You could have a rule that you can refer to inputs using either txid or normalized-txid.  That maintains backwards compatibility.  The problem is that you need twice the lookup table size.  You need to store both the txid to transaction lookup and the n-txid to transaction lookup.

The rule could be changed so that transactions starting at version 2 use n-txid and version 1 transactions use txid.  This means that each transaction only needs 1 lookup entry depending on its version number.  If version 1 transactions cannot spend outputs from version 2 transactions, then the network will eventually update over time.  It is still a hard fork though.
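
A sketch of the two-table idea (illustrative names; assumes each stored transaction knows both of its ids):

```python
class TxIndex:
    def __init__(self):
        self.by_txid = {}    # legacy id (signatures included in hash)
        self.by_ntxid = {}   # normalized id (signatures blanked)

    def add(self, tx):
        # Two entries per transaction: roughly double the lookup state.
        self.by_txid[tx.txid] = tx
        self.by_ntxid[tx.ntxid] = tx

    def resolve_input(self, ref_id, spender_version: int):
        # Under the version rule above, each spender uses exactly one
        # id type, so validation needs only one lookup per input.
        table = self.by_ntxid if spender_version >= 2 else self.by_txid
        return table.get(ref_id)
```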

Segregated witness has additional benefits with regards to data organization.  The non-signed transactions are committed separately from the signatures.  Script versioning means that it is easier to change the script language. 

It looks like they have added improvements to how transaction signing works.
newbie
Activity: 26
Merit: 3
Are you suggesting fixing malleability by storing transactions as they are now and omitting signatures from the txid calculation? In effect, a hard fork.

Yes.  That fix should be done by a hard fork: because the code will be much cleaner, and because hard forks are safer than soft forks. (More precisely: ensuring that old versions are inoperable after 3-4 releases is safer than deploying changes to the protocol without alerting users, and letting them discover later that they must upgrade to understand why their transactions are not confirming anymore.)

I've seen enough in this thread to convince me that that approach would make deployment a disaster for bitcoin. People would lose funds.

One example gmaxwell gives: all presigned nlocktime transactions would be broken. For users keeping these in storage they may well represent a lot of security. Gone... the moment a new version of the software no longer sees the transaction as being valid.

As far as I see it, if malleability can be fixed in such a way that older versions of the software still see immalleable transactions as valid transactions then, well…  do it.
hero member
Activity: 910
Merit: 1003
Are you suggesting fixing malleability by storing transactions as they are now and omitting signatures from the txid calculation? In effect, a hard fork.

Yes.  That fix should be done by a hard fork: because the code will be much cleaner, and because hard forks are safer than soft forks. (More precisely: ensuring that old versions are inoperable after 3-4 releases is safer than deploying changes to the protocol without alerting users, and letting them discover later that they must upgrade to understand why their transactions are not confirming anymore.)
newbie
Activity: 26
Merit: 3
* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  
That _is_ segregation of the signatures up to completely non-normative ordering of data transferred. Segwit could just as well order the data into the same place in the serialized transactions when sending them, but it's cleaner to not do so.

On the contrary, rearranging the data in transactions and blocks is an unnecessary and ugly hack to get that effect.  It means hundreds of lines of new code scattered all over the place, in the Core source and wallets, rather than a few lines in one library routine that everybody else can copy.


I think I can see what you are arguing against, but not what for.

Are you suggesting fixing malleability by storing transactions as they are now and omitting signatures from the txid calculation? In effect, a hard fork.
hero member
Activity: 910
Merit: 1003
* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  
That _is_ segregation of the signatures up to completely non-normative ordering of data transferred. Segwit could just as well order the data into the same place in the serialized transactions when sending them, but it's cleaner to not do so.

On the contrary, rearranging the data in transactions and blocks is an unnecessary and ugly hack to get that effect.  It means hundreds of lines of new code scattered all over the place, in the Core source and wallets, rather than a few lines in one library routine that everybody else can copy.

Quote
Quote
* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  
This would be no greater and it would have _no_ security at all. The clients would be _utterly_ beholden to the third party randomly selected servers to tell them correct information

If a client fetches a block without signatures, with SegWit or not, he cannot check whether the transactions contained in it were properly signed.  With SegWit, he can check the hash of the non-signature data; but if he is an old client, he will not even be aware that he is not checking the signatures.  

With the special call solution, if the client wants to validate a particular block, he asks for it in full, and then he can validate everything (except the parent link), as now.  The extra call can be implemented with no fork, so clients who do not upgrade, or do not wish to use that special call, will still be able to verify everything as they do now.

In other words, soft-forked SegWit *forces* old clients to fetch only part of the data, and limits them to verify only that part, *without them being aware of it*.  The special call solution lets clients decide case by case whether they want to verify a block or trust the node (that they are already trusting to some extent); and it does not change the behavior or security of existing client software.

The savings would be greater because clients who choose to use this call for old blocks would get fewer data, whereas with soft-forked SegWit everybody would have to fetch old blocks in full, signatures included.
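
A sketch of the kind of call being proposed (entirely hypothetical; no such RPC exists in Core today, and the dict-shaped transactions are illustrative):

```python
def get_block(block_store, block_hash: bytes, strip_signatures: bool = False):
    # Hypothetical server-side call: the client chooses, per request,
    # whether it wants the full block (to verify everything) or a
    # signature-stripped copy (to save bandwidth, trusting the node).
    block = block_store[block_hash]
    if not strip_signatures:
        return block
    return [
        {**tx, "inputs": [{**i, "script_sig": b""} for i in tx["inputs"]]}
        for tx in block
    ]
```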
staff
Activity: 4242
Merit: 8672
* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  
That _is_ segregation of the signatures up to completely non-normative ordering of data transferred. Segwit could just as well order the data into the same place in the serialized transactions when sending them, but it's cleaner to not do so.

Quote
* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  
This would be no greater and it would have _no_ security at all. The clients would be _utterly_ beholden to the third party randomly selected servers to tell them correct information and they would have no way to verify it.

I normally don't expect people advocating Bitcoin Classic to put security first, but completely tossing it out is a new turn. I guess it's consistent with the latest validation removal changes in classic.

Quote
* Pruning signature data from old transactions can be done the same way.
Has been for years.
staff
Activity: 4242
Merit: 8672
So far I haven't seen this desirability in itself argued,
Please read the fine thread here.
donator
Activity: 2772
Merit: 1019
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering this one question I had about the malleability fix. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?



Yeah, both hackish (although possibly beautiful code) and the economic model, if I understand that correctly.

I don't think segwit could ever achieve HF consensus, in my opinion. However if a winning hard fork was achieved, I would respect that.
A soft fork is not right here, and could well be considered an attack.

Why not 2mb first, which is on every partisan roadmap. Then segwit maybe. maybe not.
(I am assuming 2mb is more easily coded than segwit, and not as complicated as segwit as was stated earlier. Although the ease of coding is only a small part of the reason segwit should not be introduced yet. certainly not introduced by core. a SF attack on nodes.)

I didn't mean "do segwit as a hardfork", I meant do a hf that achieves the same things (more capacity, malleability fix, bandwidth savings, prune signatures from storage,...) just more -- let's say -- directly. A package with something for everybody but nothing too bad for anybody to swallow. A compromise.

That's why I was asking whether the "change of economic model" (which would be missing from that package) was something core devs couldn't live without. So far I haven't seen this desirability in itself argued; it seemed to me this was understood by everyone as just a side-effect of soft-forking higher capacity.