
Topic: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF

legendary
Activity: 2576
Merit: 1087
I asked some of these questions 3 months ago.  Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can make some quite radical changes, like imposing a negative-interest (demurrage) tax, or raising the 21 million limit.  One could also raise the block size limit that way.  These tricks would all let old clients work for a while, but eventually everybody would be forced to upgrade to use coins sent by the new version.

A hard-fork-based consensus mechanism, far from being dangerous, is actually the solution to centralised control over consensus.

Script versioning is essentially about changing this consensus mechanism so that any change can be made without any consensus. Giving this control to anyone, even satoshi himself, entirely undermines the whole idea of bitcoin. *Decentralised* something something.

[b]Script versioning[/b]
Changes to Bitcoin's script allow for both improved security and improved functionality. However, the design of script only allows backwards-compatible (soft-forking) changes to be implemented by replacing one of the ten extra OP_NOP opcodes with a new opcode that can conditionally fail the script, but which otherwise does nothing. This is sufficient for many changes, such as introducing a new signature method or a feature like OP_CLTV, but it is both slightly hacky (for example, OP_CLTV usually has to be accompanied by an OP_DROP) and cannot be used to enable even features as simple as joining two strings.

Segwit resolves this by including a version number for scripts, so that additional opcodes that would have required a hard-fork to be used in non-segwit transactions can instead be supported by simply increasing the script version.
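
To make the "slightly hacky" part concrete, here is a rough Python sketch of the two upgrade paths (the opcode semantics are heavily simplified and the function names are mine; this is an illustration, not actual Bitcoin Core code):

Code:
# OP_NOP-style soft-fork upgrade: the redefined opcode may only
# conditionally FAIL the script; it must leave the stack untouched,
# or old nodes (which treat it as a no-op) would disagree with new ones.
def op_checklocktimeverify(stack, tx_locktime):
    if int.from_bytes(stack[-1], "little") > tx_locktime:
        raise ValueError("script failed: locktime not yet reached")
    # cannot pop the operand here, hence the trailing OP_DROP in scripts

def op_nop2_legacy(stack, tx_locktime):
    pass  # old nodes: do nothing at all

# With script versioning, a new script version can introduce opcodes
# that really change the stack, e.g. a concatenation opcode, which
# the OP_NOP trick can never express:
def op_cat(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)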

It doesn't matter where you stand on the blocksize debate, which dev team you support, or any of the myriad disagreements. As Gregory Maxwell himself states:

"Anyone who /understood/ it would [shut down bitcoin], if somehow control of it were turned over to them."
legendary
Activity: 1065
Merit: 1077
My point, perhaps poorly expressed, was that if you think these problems are 'not hard', you must have solutions in mind, no?  I'd be interested in hearing your ideas.  I am genuinely interested, not being sarcastic here.
It wasn't only me who had those solutions in mind. In fact, they are already included in the "segregated witness" proposal, but without the "segregation" part. The "segregation" just splits the transaction into two parts. One could even come up with a deficient "segregated witness" proposal that wouldn't fix the discussed problems; they are orthogonal concepts.
 

Which solutions are you referring to here?
legendary
Activity: 1065
Merit: 1077
[EDIT]How does it not help scaling, if it increases the number of transactions that can be included in each block?
Block size is easy to change. There's an arguably popular client (Bitcoin Classic) that solves that problem today. To help scaling you need to invent tech to make running a full node easier, such as thin blocks or IBLT. Shameless plug: I recently produced a video on Xtreme Thin Blocks.

That may be true, but you didn't answer the question I asked (see above).  I don't think segwit is being proposed as the solution to scaling.  It wasn't really meant to be a scaling solution at all; the increased transaction capacity is just a side-effect, right?
newbie
Activity: 25
Merit: 0
[EDIT]How does it not help scaling, if it increases the number of transactions that can be included in each block?
Block size is easy to change. There's an arguably popular client (Bitcoin Classic) that solves that problem today. To help scaling you need to invent tech to make running a full node easier, such as thin blocks or IBLT. Shameless plug: I recently produced a video on Xtreme Thin Blocks.
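
For anyone unfamiliar with the idea, here is a toy sketch of what thin blocks do (an illustration of the concept only, not the actual Xtreme Thin Blocks or IBLT protocol, and the function names are mine):

Code:
# Toy thin-block relay: announce a block as short tx ids and let the
# peer rebuild it from its own mempool, fetching only what is missing.
def make_thin_block(block_txids):
    return [txid[:8] for txid in block_txids]  # short ids instead of full txs

def rebuild_block(short_ids, mempool):  # mempool: dict of txid -> raw tx
    by_short = {txid[:8]: tx for txid, tx in mempool.items()}
    missing = [sid for sid in short_ids if sid not in by_short]
    if missing:
        return None, missing  # ask the peer for just these transactions
    return [by_short[sid] for sid in short_ids], []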
legendary
Activity: 2128
Merit: 1073
My point, perhaps poorly expressed, was that if you think these problems are 'not hard', you must have solutions in mind, no?  I'd be interested in hearing your ideas.  I am genuinely interested, not being sarcastic here.
It wasn't only me who had those solutions in mind. In fact, they are already included in the "segregated witness" proposal, but without the "segregation" part. The "segregation" just splits the transaction into two parts. One could even come up with a deficient "segregated witness" proposal that wouldn't fix the discussed problems; they are orthogonal concepts.
 
legendary
Activity: 1176
Merit: 1134

I did not make this a political thing.


I didn't think you did.  One of us replying messed up the quoting.  I know you are not the one who 'went there'.

ah, the crosspost.

I am just so confused about how being a soft fork makes fundamentally changing (breaking) things OK.
legendary
Activity: 1065
Merit: 1077

I did not make this a political thing.

segwit is marketed as a way to enable scaling, when it is no such thing.


I didn't think you did.  One of us replying messed up the quoting.  I know you are not the one who 'went there'.

[EDIT]How does it not help scaling, if it increases the number of transactions that can be included in each block?
legendary
Activity: 1176
Merit: 1134
No need to attack with sarcasm.

If you are addressing that to me, I assure you my reply was not meant to be sarcastic at all.  Not sure how anyone could take it that way.
Oh, my bad, I should have been more clear. It's directed at statements like this:
it is better for bitcoin to require trust

Isn't it nice to have all the hard choices made for you. We can trust in the math done by the central planners. Don't worry, be happy.

Yeah, I agree with that.  I was really interested in reading this thread 'til that comment made it political.
I did not make this a political thing.
segwit is marketed as a way to enable scaling, when it is no such thing.

my analysis so far is that it creates a much more complicated, error-prone system with potential attack vectors, one that is not peer-reviewed and that reduces the ability to scale. Maybe my problem is that I am just not smart enough to understand it well enough to appreciate it?

but in a few weeks it will be soft-forked, so it's OK; there is no need to worry about it.

So if the bitcoin supply is increased to 1 billion with a soft fork, that's OK?

All I see is that segwit txs require more work, more space, and more confusion, and we end up with txs in the blockchain that need to be trusted. Bitcoin becomes partly a trusted ledger; but Ripple is doing fine, so why not.

legendary
Activity: 1065
Merit: 1077
I don't claim to know the answer to that question, but your reply raises the question:  Have you submitted a pull request with code that fixes these problems that you see as 'not "hard" by themselves'?
Submitting a pull request without first discussing the viability of the proposed "pull" is only for the terminally naïve.

Normal programmers design first and code later, especially on a large financial project.


My point, perhaps poorly expressed, was that if you think these problems are 'not hard', you must have solutions in mind, no?  I'd be interested in hearing your ideas.  I am genuinely interested, not being sarcastic here.

legendary
Activity: 2128
Merit: 1073
I don't claim to know the answer to that question, but your reply raises the question:  Have you submitted a pull request with code that fixes these problems that you see as 'not "hard" by themselves'?
Submitting a pull request without first discussing the viability of the proposed "pull" is only for the terminally naïve.

Normal programmers design first and code later, especially on a large financial project.
legendary
Activity: 1065
Merit: 1077
No need to attack with sarcasm.

If you are addressing that to me, I assure you my reply was not meant to be sarcastic at all.  Not sure how anyone could take it that way.
Oh, my bad, I should have been more clear. It's directed at statements like this:
it is better for bitcoin to require trust

Isn't it nice to have all the hard choices made for you. We can trust in the math done by the central planners. Don't worry, be happy.

Yeah, I agree with that.  I was really interested in reading this thread 'til that comment made it political.
newbie
Activity: 25
Merit: 0
No need to attack with sarcasm.

If you are addressing that to me, I assure you my reply was not meant to be sarcastic at all.  Not sure how anyone could take it that way.
Oh, my bad, I should have been more clear. It's directed at statements like this:
it is better for bitcoin to require trust

Isn't it nice to have all the hard choices made for you. We can trust in the math done by the central planners. Don't worry, be happy.
legendary
Activity: 1065
Merit: 1077
The advantage of segwit is that it elegantly fixes a couple of other hard problems (malleability, O(n^2) sigops issue)
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?

I don't claim to know the answer to that question, but your reply raises the question:  Have you submitted a pull request with code that fixes these problems that you see as 'not "hard" by themselves'?

legendary
Activity: 2128
Merit: 1073
The advantage of segwit is that it elegantly fixes a couple of other hard problems (malleability, O(n^2) sigops issue)
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?
legendary
Activity: 1065
Merit: 1077
Hold up. I'd like to hear from Wuille (one of the creators of segwit) about the size difference between a standard 2-input, 2-output transaction and its equivalent using segwit, for a fully-validating node. No need to attack with sarcasm.

BTW, I am also curious whether the O(n^2) sigops issue can be solved in a much simpler way.

If you are addressing that to me, I assure you my reply was not meant to be sarcastic at all.  Not sure how anyone could take it that way.
legendary
Activity: 1260
Merit: 1019
Am I misunderstanding the concern here?
The problems are
1) SegWit does not exist
2) Nobody knows how it works
3) Nobody needs it

There is only one goal for everyone: to double their fiat money with cryptocurrency.
SegWit does not solve this problem, but the developers are trying to convince you that it does.
 
newbie
Activity: 25
Merit: 0
Hold up. I'd like to hear from Wuille (one of the creators of segwit) about the size difference between a standard 2-input, 2-output transaction and its equivalent using segwit, for a fully-validating node. No need to attack with sarcasm.

BTW, I am also curious whether the O(n^2) sigops issue can be solved in a much simpler way.
legendary
Activity: 1065
Merit: 1077
It is possible that there is a fundamental misunderstanding here.

I don't think anyone ever claimed that segwit expands capacity more efficiently than (or even as efficiently as) simply increasing the block size.

The advantage of segwit is that it elegantly fixes a couple of other hard problems (malleability, O(n^2) sigops issue) while ALSO allowing more transactions per block without requiring a hard fork for the block size.  The amount of data in the blockchain for fully-validating nodes will definitely increase, just as it would if there were a 2MB block-size hard-fork.

Am I misunderstanding the concern here?
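
To illustrate the O(n^2) sighash point with toy numbers (the constants are made up for illustration; only the shape of the curves matters):

Code:
# Legacy sighash (simplified): every input signs its own modified copy
# of the WHOLE transaction, so bytes hashed ~ inputs * tx_size, and
# tx_size itself grows with the number of inputs: O(n^2) overall.
def legacy_bytes_hashed(tx_size, num_inputs):
    return num_inputs * tx_size

# BIP143 (segwit) hashes shared midstates (hashPrevouts, hashSequence,
# hashOutputs) once, plus a small constant amount per input: O(n).
def segwit_bytes_hashed(tx_size, num_inputs, per_input=200):
    return tx_size + num_inputs * per_input

for n in (10, 100, 1000):
    size = 150 * n  # assume roughly 150 bytes per input, a toy figure
    print(n, legacy_bytes_hashed(size, n), segwit_bytes_hashed(size, n))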
legendary
Activity: 1176
Merit: 1134
Whoa, whoa, wait...

From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space".

So, 47 bytes per block. That's not too unreasonable. But...

This brings us to my question, which is not being answered. On average, how many bytes in the blockchain will be needed for a standard payment sent via segwit?

Is this ever less than it would be now?
Is this ever the same as it is now?
Is this usually about 50 bytes more per tx?

50 bytes per transaction for fully-validating nodes? This needs to be answered.

I would like to see an answer to this too; it seems everyone is avoiding this question. Fully validating nodes are very important for those of us who want to verify the blockchain ourselves, and they are required for bootstrapping new nodes.


01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f00000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5cdd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eeffffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000

the above is from https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki
It is a 2-input, 2-output tx serialized with its witness data.

In addition to the above, the much smaller anyone-can-spend tx is needed too. I think it will be about 100 bytes?

so we have a combined space of around 800 bytes against the 1000 bytes the usual 2-input/2-output tx occupies. Or was it 400 bytes that the 2-input/2-output tx takes?
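
Rather than guessing, the example can simply be measured (Python 3; the hex is the BIP143 example above, rejoined into one string):

Code:
tx_hex = (
    "01000000000102fff7f7881a8099afa6940d42d1e7f6362bec38171ea3edf433541db4e4ad969f0"
    "0000000494830450221008b9d1dc26ba6a9cb62127b02742fa9d754cd3bebf337f7a55d114c8e5c"
    "dd30be022040529b194ba3f9281a99f2b1c0a19c0489bc22ede944ccf4ecbab4cc618ef3ed01eef"
    "fffffef51e1b804cc89d182d279655c3aa89e815b1b309fe287d9b2b55d57b90ec68a0100000000"
    "ffffffff02202cb206000000001976a9148280b37df378db99f66f85c95a783a76ac7a6d5988ac9"
    "093510d000000001976a9143bde42dbee7e4dbe6a21b2d50ce2f0167faa815988ac000247304402"
    "203609e17b84f6a7d30c80bfa610b5b4542f32a8a0d5447a12fb1366d7f01cc44a0220573a954c4"
    "518331561406f90300e8f3358f51928d43c212a8caed02de67eebee0121025476c2e83188368da1"
    "ff3e292e7acafcdb3566bb0ad253f62fc70f07aeee635711000000"
)
tx = bytes.fromhex(tx_hex)
print(len(tx), "bytes, witness included")  # total serialized size
assert tx[4] == 0x00 and tx[5] == 0x01     # segwit marker and flag bytes
# Per BIP141, weight = base_size*3 + total_size and vsize = weight/4,
# where base_size is the serialization with marker, flag and witness stripped.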

I was told that all nodes are expected to be pruning nodes anyway, so you don't have to worry about any full-node requirements. They will make sure all the archive copies are forever kept safe and not tampered with. You can trust them. It is better for bitcoin to require trust.

Isn't it nice to have all the hard choices made for you. We can trust in the math done by the central planners. Don't worry, be happy.

James
legendary
Activity: 1176
Merit: 1134
Whoa, whoa, wait...

From the point of view of old clients, segwit adds one coinbase output that contains the root hash of the Merkle tree that commits to the witness transaction ids. It uses 47 extra bytes per block, so technically, yes, it "wastes precious blockchain space".

So, 47 bytes per block. That's not too unreasonable. But...

This brings us to my question, which is not being answered. On average, how many bytes in the blockchain will be needed for a standard payment sent via segwit?

Is this ever less than it would be now?
Is this ever the same as it is now?
Is this usually about 50 bytes more per tx?

50 bytes per transaction for fully-validating nodes? This needs to be answered.
It is answered in the BIP:

Quote
Transaction ID

A new data structure, witness, is defined. Each transaction will have 2 IDs.

Definition of txid remains unchanged: the double SHA256 of the traditional serialization format:

  [nVersion][txins][txouts][nLockTime]
  
A new wtxid is defined: the double SHA256 of the new serialization with witness data:

  [nVersion][marker][flag][txins][txouts][witness][nLockTime]
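
To make those two serializations concrete, here is a minimal Python sketch (the fields are taken as already-serialized byte strings; an illustration, not reference code; note that displayed txids are byte-reversed):

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def txid(nversion, txins, txouts, nlocktime):
    # traditional serialization: [nVersion][txins][txouts][nLockTime]
    return dsha256(nversion + txins + txouts + nlocktime)[::-1].hex()

def wtxid(nversion, txins, txouts, witness, nlocktime):
    # extended serialization adds marker (0x00), flag (0x01) and witness:
    # [nVersion][marker][flag][txins][txouts][witness][nLockTime]
    return dsha256(nversion + b"\x00\x01" + txins + txouts
                   + witness + nlocktime)[::-1].hex()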

from the BIP...

the wtxid is based on all of the original, plus a marker (1 byte?), a flag (1 byte), and the witness, which appears to be:

   1-byte - OP_RETURN (0x6a)
   1-byte - Push the following 36 bytes (0x24)
   4-byte - Commitment header (0xaa21a9ed)
  32-byte - Commitment hash: Double-SHA256(witness root hash|witness nonce)

all this seems to be above and beyond what would be needed for the normal serialization, plus the nVersion (4 bytes) and nLockTime (4 bytes) are duplicated. To a simple C programmer like me it sure looks like, instead of reducing the net amount as required by anything claiming to save space, it is increasing the size by approx. 50 bytes.

Maybe it's 32 + 4 + 1 + 1 + 4, so 42 bytes?
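
For what it's worth, the items listed above tally with the "47 extra bytes per block" figure quoted earlier, once the output's value and script-length fields are counted (my arithmetic connecting the quoted figures, not from the BIP):

Code:
# Coinbase witness-commitment output, per the breakdown above:
script = 1 + 1 + 4 + 32  # OP_RETURN + push + header + hash = 38 bytes
output = 8 + 1 + script  # plus 8-byte value + 1-byte script length = 47 bytes
print(script, output)    # -> 38 47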

I am trying to understand enough to implement this, but unless the original tx is reduced by more than the witness data uses, it will cost more per tx.

But don't worry, I was told that it is likely that 100% of nodes will be pruning nodes in the future and that all that matters is the size of the utxo set. I still await an explanation of how any new node can bootstrap if all nodes are pruning nodes...

James