
Topic: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF - page 9. (Read 21415 times)

legendary
Activity: 1176
Merit: 1134
Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

Techno-eunuchs in Classic (Peter R, HashFastDefendantDoc, many others) are hypnotized by jl777's technobabble, and are now forced to type with one hand while they rub themselves up a genie with the other.

Classic/jl777 or GOP/Trump - which is more entertaining/sad/doomed?
I did ask for an iguana childboard here, but was totally ignored on that, not even a rejection. Maybe if that wasn't just ignored, I wouldn't be so active elsewhere. bitco.in gave me a child board the next day, so I am more active there. It is as simple as that.

I do not agree with classic's position against RBF; that does not make sense to me. I still have not heard any rational explanation of how RBF breaks zeroconf. Zeroconf can't work when the blocks are full, as when the blocks are full you can't know when a tx in the mempool is likely to confirm. If anything, defining the RBF behavior allows a much better statistical model to predict when an unconfirmed tx will confirm.

Convince me with the math, or you can call me names and I stay unconvinced.

With RBF, I came here, asked some questions, got reasonable answers and made my analysis. I don't like the changing of sequenceid into the RBF field, but it isn't the horrible devil's spawn that it is made out to be. However, the people there are much better behaved and nobody trolled me or insinuated that I don't understand bitcoin at all.

James
legendary
Activity: 1176
Merit: 1134
luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how luke-jr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. I will update the title to match my understanding, without shame, when I see my mistake. Imagine I am like Rainman. I just care about the numbers.
Where did luke-jr tell you this? Did he explain why? I don't understand the 1 byte per vin part and would like to see the explanation for it.

What gmaxwell is saying is that segwit allows for future upgrades. One of those future upgrades could be an upgrade to a different signature scheme which does have the 30% reduction.
luke-jr: https://www.reddit.com/r/Bitcoin/comments/4amg1f/iguana_bitcoin_full_node_developer_jl777_argues/d12bcz7

plz note i didnt start any of the reddit threads, they seem to spontaneously start

OK, so maybe the fact that I am trying to analyze what the segwit softfork in the upcoming weeks will do explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or can the usual softfork change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.

maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then that could be added to the coinbase and then we won't need any witness data at all. Did I get that right?
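(For concreteness, here is a rough back-of-the-envelope sketch of the overhead figures quoted above -- 2 bytes per tx for the segwit marker and flag, and at least 1 byte per vin for the witness item count. The block numbers in the example are made up; this only shows the order of magnitude, not measured data.)

Code:
# Back-of-the-envelope sketch of the extra serialized bytes a segwit
# transaction carries versus the legacy serialization, using the
# 2-bytes-per-tx and 1-byte-per-vin figures discussed above.

def segwit_serialization_overhead(num_txs: int, total_vins: int) -> int:
    """Extra bytes versus legacy serialization for the same transactions."""
    marker_and_flag = 2 * num_txs    # 0x00 marker + 0x01 flag per transaction
    witness_counts = 1 * total_vins  # compact-size count of witness items per input
    return marker_and_flag + witness_counts

# Hypothetical block: 2500 transactions averaging 2 inputs each
print(segwit_serialization_overhead(2500, 5000))  # -> 10000 bytes (~10 kB per block)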
legendary
Activity: 1260
Merit: 1116
Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

Techno-eunuchs in Classic (Peter R, HashFastDefendantDoc, many others) are hypnotized by jl777's technobabble, and are now forced to type with one hand while they rub themselves up a genie with the other.

Classic/jl777 or GOP/Trump - which is more entertaining/sad/doomed?

I'm so glad you're here! Can you put this in words the r/btc crowd could relate to? Me included.  Undecided

Quote from: Gmax
What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not well reflect the costs of a transaction to the system. This has created a misalignment of incentives which has been previously misused (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte twiddling around with dust-spam (known private keys)).  

At the end of the day signatures are transmitted at most once to a node and can be pruned. But data in the UTXO set must be in perpetual online storage. Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of blocksize increase: e.g. http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in ram, no version of Bitcoin Core has ever done that; that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.

At Scaling Bitcoin in Montreal, fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement to a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse-- then many of the concerns related to capacity increases would be satisfied. So I guess it's no shock to see avowed long time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.

One of the challenges coming out of Montreal was that it wasn't clear how to decide on how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, cpu, initial sync delays, etc., which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open parameter, because optimizing its capacity required a discount which achieved a dual effect of also fixing the misaligned costing.
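(To make the witness "discount" concrete: under BIP141 the relevant limit is block weight rather than raw size -- non-witness bytes count four times, witness bytes once, with a 4,000,000 weight cap. A minimal sketch, with made-up byte counts in the example:)

Code:
import math

MAX_BLOCK_WEIGHT = 4_000_000  # BIP141 consensus limit

def tx_weight(base_size: int, witness_size: int) -> int:
    """BIP141 weight: non-witness bytes count 4x, witness bytes count 1x."""
    total_size = base_size + witness_size
    return base_size * 3 + total_size

def virtual_size(base_size: int, witness_size: int) -> int:
    """'vsize' in vbytes, the unit that fees are effectively charged against."""
    return math.ceil(tx_weight(base_size, witness_size) / 4)

# Example: a tx with 150 non-witness bytes and 110 witness bytes
print(tx_weight(150, 110))     # 710 weight units
print(virtual_size(150, 110))  # 178 vbytes, versus 260 raw bytes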
staff
Activity: 3458
Merit: 6793
Just writing some code
luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how luke-jr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. I will update the title to match my understanding, without shame, when I see my mistake. Imagine I am like Rainman. I just care about the numbers.
Where did luke-jr tell you this? Did he explain why? I don't understand the 1 byte per vin part and would like to see the explanation for it.

What gmaxwell is saying is that segwit allows for future upgrades. One of those future upgrades could be an upgrade to a different signature scheme which does have the 30% reduction.
legendary
Activity: 1638
Merit: 1001
Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

Techno-eunuchs in Classic (Peter R, HashFastDefendantDoc, many others) are hypnotized by jl777's technobabble, and are now forced to type with one hand while they rub themselves up a genie with the other.

Classic/jl777 or GOP/Trump - which is more entertaining/sad/doomed?





legendary
Activity: 1176
Merit: 1134
N + 2*numtxids + numvins > N
I still claim that is true, not sure how that loses me any credibility
In one post you were claiming 42 bytes for a one-in / one-out transaction, in the other you appeared to be claiming 800 bytes.  In any case, even your formula depends on what serialization is used; one could choose one where it was smaller and not bigger. The actual amount of true entropy added is on the order of a couple of bits per transaction (whether segwit coins are being spent or not, and what script versions).

To characterize that as "SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY", when the same signaling will allow the use of new signature schemes that reduce the size of transactions on average about _30%_ seems really deceptive, and it makes me sad that you're continuing with this argument even after having your misunderstandings corrected.

I thought you said you were actually going to write the software you keep talking about and speak through results, rather than continuing the factually incorrect criticisms you keep making of software and designs which you don't care to spend a minute to learn the first thing about? We're waiting.

In the mean time: Shame on you, and shame on you for having no shame.
I corrected my mistaken estimates and I made it clear I didn't know the exact overheads. I did, after all, just start looking into segwit yesterday. Unlike you, I do make mistakes, but when I understand my mistake, I admit it. Maybe you can understand the limitations of mortals who are prone to make errors.

Last I was told, the vinscript that would otherwise be in the normal 1MB blockchain needs to go into the witness area. Is that not correct? If it goes from the 1MB space to the witness space, how is that 30% smaller? (I am talking about permanent storage for full relaying/verifying nodes)

I only responded to knightdk's questions, should I have ignored his direct question?

luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%, if that is really the case then I will change my view.

please explain to me how luke-jr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. I will update the title to match my understanding, without shame, when I see my mistake. Imagine I am like Rainman. I just care about the numbers.

James
staff
Activity: 4284
Merit: 8808
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?
A strong malleability fix _requires_ segregation of signatures.

A less strong fix could be achieved without it if generality is abandoned (e.g. only works for a subset of script types, rather than all without question) and a new cryptographic signature system (something that provides unique signatures, not ECC signatures) were deployed.

And even with giving up on fixing malleability for most smart contracts, it's very challenging to be absolutely sure that a specific instance is actually non-malleable. This can be seen in the history of BIP62-- where at several points it was believed that it addressed all forms of malleability for the subset of transactions it attempted to fix, only to  later discover that there were additional forms.  If a design is inherently subject to malleability but you hope to fix it by disallowing all but one possible representation there is a near endless source of ways to get it wrong.

Segregation removes that problem. Scripts using segwit achieve a strong base level of non-malleability without doubt or risk of getting it wrong, both in design and by script authors. And only segregation applies to all scripts, not just a careful subset of "inherently non-malleable rules".

Getting signatures out from under TXIDs is the natural design to prevent problems from malleability and engineers were lamenting that Bitcoin didn't work that way as far back as 2011/late-2012.
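(A minimal sketch of what "getting signatures out from under TXIDs" buys you: the segwit txid is computed over the serialization with the witness stripped, so mutating a signature can move the wtxid but never the txid. The serialization below is deliberately simplified -- it is not a real BIP141 encoder.)

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Simplified stand-ins: 'core' represents the version/inputs/outputs/locktime
# bytes, 'witness' the segregated signature data.
def txid(core: bytes, witness: bytes) -> bytes:
    # The witness is deliberately excluded, so signature malleation cannot move the txid.
    return dsha256(core)

def wtxid(core: bytes, witness: bytes) -> bytes:
    return dsha256(core + witness)

core = b"version|inputs|outputs|locktime"
sig_a = b"signature-encoding-A"
sig_b = b"signature-encoding-B"  # a differently-encoded but still-valid signature

assert txid(core, sig_a) == txid(core, sig_b)    # spenders of unconfirmed outputs can rely on this id
assert wtxid(core, sig_a) != wtxid(core, sig_b)  # only the witness-inclusive id moves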

Can't you, gmaxwell and knightdk, settle on verifying txid at last?
It's really hard to get info on SegWit here if even such an obvious thing (one would think) gets contradictory answers. Wink
Knightdk will tell you to defer to me if there is a conflict on such things.

But here there isn't really, I think-- we're answering different statements. I was answering "The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature".

Knightdk is responding about verifying loose transactions; there is no "verify the transaction ID", because no ID is even sent. You have nothing to verify against. All you can do is compute the ID.

I was referring to processing blocks. Generally the first step of validating a block, after connecting it to a chain, is checking the proof of work. The second step is hashing the transactions in the block to verify that the block hash is consistent with the data you received. If it is not, the information is discarded before performing further processing. Unlike a loose transaction, you have a block header, and can actually validate against something.
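(For reference, a minimal sketch of that block-level check: hash the received transactions, rebuild the merkle root, and compare it against the root in the already-PoW-checked header before doing any expensive signature work. This assumes the standard Bitcoin merkle construction, with the last hash duplicated on odd levels, and legacy-style txids for simplicity.)

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Bitcoin-style merkle root over a list of 32-byte txids."""
    level = list(txids)
    if not level:
        raise ValueError("empty block")
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash on odd-sized levels
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def block_payload_matches_header(raw_txs: list, header_merkle_root: bytes) -> bool:
    """Cheap integrity check before any signature validation is attempted."""
    txids = [dsha256(tx) for tx in raw_txs]  # legacy serialization assumed here
    return merkle_root(txids) == header_merkle_root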

In fact, if you do it in a hard fork, you can redesign the whole transaction format at will, no need to do so many different hacks everywhere to make old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power do a roll back)
No, you can't-- not if you live in a world with other people in it.  The spherical cow "hardforks can change anything" ignores that a hardfork that requires all users shutting down the Bitcoin network, destroying all in flight transactions, and invalidating presigned transactions (thus confiscating some amount of coins) will just not be deployed.

Last year I tried proposing an utterly technically simple hard fork to fix the time-warp vulnerability and provide extranonce in the block header using the prev-hash bits that are currently always forced to zero (often requested by miners and ASIC makers-- and important for avoiding hardcoding block logic in asics) and it was _vigorously_ opposed by Mike Hearn and Gavin Andresen-- because it would have required that smartphone wallets upgrade to fix their header checks and difficulty calculation.  ... and that was for something that would be just a well contained four or five lines of code changed.

I hope that that change eventually happens; but given that it was attacked so aggressively by the two biggest advocates of "hard forks are no big deal", I can't imagine a radical backwards incompatible change to the transaction format happening; especially when the alternative is so easy and good that I'd prefer to use it for increased similarity even in an explicitly incompatible system.

The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.
What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not well reflect the costs of a transaction to the system. This has created a misalignment of incentives which has been previously misused (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte twiddling around with dust-spam (known private keys)).  

At the end of the day signatures are transmitted at most once to a node and can be pruned. But data in the UTXO set must be in perpetual online storage. Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of blocksize increase: e.g. http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in ram, no version of Bitcoin Core has ever done that; that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.

At Scaling Bitcoin in Montreal, fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement to a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse-- then many of the concerns related to capacity increases would be satisfied. So I guess it's no shock to see avowed long time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.

One of the challenges coming out of Montreal was that it wasn't clear how to decide on how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, cpu, initial sync delays, etc., which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open parameter, because optimizing its capacity required a discount which achieved a dual effect of also fixing the misaligned costing.

The claims that the discounts have something to do with lightning and blockstream have no substance at all.
(1) Lightning predates Segwit significantly.
(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.
(3) Blockstream has no plans to make any money from running Lightning in Bitcoin in any case; we started funding some work on Lightning because we believed it was long term important for Bitcoin and Mike Hearn criticized us for not funding it if we thought it important, because one of our engineers _really_ wanted to work on it himself, and because we were able to work out a business case for using it to make sidechains scalable too.

N + 2*numtxids + numvins > N
I still claim that is true, not sure how that loses me any credibility
In one post you were claiming 42 bytes for a one-in / one-out transaction, in the other you appeared to be claiming 800 bytes.  In any case, even your formula depends on what serialization is used; one could choose one where it was smaller and not bigger. The actual amount of true entropy added is on the order of a couple of bits per transaction (whether segwit coins are being spent or not, and what script versions).

To characterize that as "SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY", when the same signaling will allow the use of new signature schemes that reduce the size of transactions on average about _30%_ seems really deceptive, and it makes me sad that you're continuing with this argument even after having your misunderstandings corrected.

I thought you said you were actually going to write the software you keep talking about and speak through results, rather than continuing the factually incorrect criticisms you keep making of software and designs which you don't care to spend a minute to learn the first thing about? We're waiting.

In the mean time: Shame on you, and shame on you for having no shame.
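(Purely illustrative arithmetic for the ~30% figure mentioned above: the savings come from replacing one per-input DER signature with a single aggregate signature per transaction. The byte counts below are rough assumptions, not measurements, and the real average depends on the input/output mix.)

Code:
# Rough, assumed byte counts -- not measurements.
DER_ECDSA_SIG = 72           # typical DER-encoded ECDSA signature
AGGREGATE_SIG = 64           # one aggregate (Schnorr-style) signature per tx
OTHER_INPUT_BYTES = 41 + 33  # outpoint, sequence, lengths, plus a 33-byte pubkey per input
OUTPUT_BYTES = 34
TX_OVERHEAD = 10

def tx_size(n_in: int, n_out: int, aggregated: bool) -> int:
    sigs = AGGREGATE_SIG if aggregated else DER_ECDSA_SIG * n_in
    return TX_OVERHEAD + n_in * OTHER_INPUT_BYTES + n_out * OUTPUT_BYTES + sigs

before = tx_size(3, 2, aggregated=False)
after = tx_size(3, 2, aggregated=True)
print(before, after, f"{100 * (before - after) / before:.0f}% smaller")  # 516 364 29% smaller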
staff
Activity: 3458
Merit: 6793
Just writing some code
I was told the extra space needed was 2 bytes per segwit tx plus 1 byte per vin, though maybe the 1 byte per vin can be reduced to 1 bit. Not sure how that is possible without new script opcodes, so maybe that is a possibility in the fullness of time sort of thing.
I think it might actually be 33 bytes per vin because of the implementation being used, which does not introduce the new address type. This is so that the p2sh script will still verify true to old nodes. It is a 0 byte followed by a 32-byte hash of the witness script.
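(For concreteness, a minimal sketch of that nesting as BIP141 describes it for P2SH-wrapped witness programs: the redeem script is the version byte 0x00 followed by a push of the 32-byte SHA256 of the witness script, so old nodes just see an ordinary P2SH spend. The witness script below is a hypothetical placeholder.)

Code:
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Hypothetical witness script, standing in for whatever the spender actually uses.
witness_script = b"\x51"  # OP_TRUE, purely a placeholder

# P2WSH-in-P2SH redeem script: version byte 0x00, then a push of
# SHA256(witness_script) -- the "0 byte followed by 32 byte hash" above.
redeem_script = b"\x00" + b"\x20" + sha256(witness_script)

# Old nodes see an ordinary P2SH spend whose scriptSig just pushes this
# redeem script (roughly the 33+ bytes per vin discussed above); the
# signatures themselves live in the segregated witness.
script_sig = bytes([len(redeem_script)]) + redeem_script
print(len(redeem_script), len(script_sig))  # 34 35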

Regardless, the total space needed is more for a segwit tx than for a normal tx; this is confirmed by wuille, lukejr and gmaxwell.

Now, I never said segwit wasn't impressive tech, as that is quite a small overhead. My point is that segwit does not reduce the permanent space needed, and if you feel that the HDD space needed to store the blockchain (or the data that needs to be shared between full nodes) is a factor that is important to scalability, then segwit does not help scalability regarding those two factors.
And I don't think that anybody has ever said that it would reduce the space needed to store it. If you are believing everything you read on the internet, you need a reality check. When you read these things, make sure that they are actually backed up by reputable sources e.g. the technical papers.

I do not speak about any other factors, only the permanent space used. Originally I was told that segwit did everything, including allowing improved scalability, and what confused me was that it was presented in a way that led me (and many others) to believe that segwit reduced the permanent storage needed.
Could you cite the article(s) which did that? If it was something on bitcoin.org or bitcoincore.org then that could be fixed.

Now that it is clarified that segwit does not reduce the space needed, and that the segwit softfork will force any node that wants to be able to validate segwit tx to also upgrade to segwit, I think the rest is about implementation details.
Sure. Any other questions about implementation?

And maybe someone can clarify the text on the bitcoincore.org site that presents segwit as curing cancer and world hunger?
Does it really portray segwit that positively? I read it and it didn't seem that way to me.
legendary
Activity: 1260
Merit: 1116
I asked some of these questions 3 months ago.  Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can do some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit.  One could also raise the block size limit that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new version.

You've come to the right place for answers, professor. Openness is our middle name!

Now that that's all settled: What's Stolfi on about here? The 75% discount?

The discount is the question you won't get a good answer for
. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.

How come? Huh
legendary
Activity: 1176
Merit: 1134
N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility
I believe I have forgotten to address this. Can you please explain how you are getting this?

AFAIK the txids aren't in any structure used by Bitcoin except in the inventories. Those might be stored, depends on the implementation. However, when it comes to the wtxids, there is absolutely no reason to store them. Their sole purpose is to simply have a hash of all of the data in a segwit transaction and have that be applied to the witness root hash in the coinbase transaction. There is no need to store the wtxids since nothing ever references them.

Where are you getting numvins from?

Anyways, your formula is wrong if you assume that the regular txid is currently being stored. Rather it should be

N + wtxid + numvins > N

and that is only if you are going to store wtxids which are not necessary to store anyways.
I was told the extra space needed was 2 bytes per segwit tx plus 1 byte per vin, though maybe the 1 byte per vin can be reduced to 1 bit. Not sure how that is possible without new script opcodes, so maybe that is a possibility in the fullness of time sort of thing.

Regardless, the total space needed is more for a segwit tx than for a normal tx; this is confirmed by wuille, lukejr and gmaxwell.

Now, I never said segwit wasn't impressive tech, as that is quite a small overhead. My point is that segwit does not reduce the permanent space needed, and if you feel that the HDD space needed to store the blockchain (or the data that needs to be shared between full nodes) is a factor that is important to scalability, then segwit does not help scalability regarding those two factors.

I do not speak about any other factors, only the permanent space used. Originally I was told that segwit did everything, including allowing improved scalability, and what confused me was that it was presented in a way that led me (and many others) to believe that segwit reduced the permanent storage needed.

Now that it is clarified that segwit does not reduce the space needed, and that the segwit softfork will force any node that wants to be able to validate segwit tx to also upgrade to segwit, I think the rest is about implementation details.

And maybe someone can clarify the text on the bitcoincore.org site that presents segwit as curing cancer and world hunger?

James
staff
Activity: 3458
Merit: 6793
Just writing some code
N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility
I believe I have forgotten to address this. Can you please explain how you are getting this?

AFAIK the txids aren't in any structure used by Bitcoin except in the inventories. Those might be stored, depends on the implementation. However, when it comes to the wtxids, there is absolutely no reason to store them. Their sole purpose is to simply have a hash of all of the data in a segwit transaction and have that be applied to the witness root hash in the coinbase transaction. There is no need to store the wtxids since nothing ever references them.

Where are you getting numvins from?

Anyways, your formula is wrong if you assume that the regular txid is currently being stored. Rather it should be

N + wtxid + numvins > N

and that is only if you are going to store wtxids which are not necessary to store anyways.
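(For anyone following along, a minimal sketch of the commitment being described, per BIP141: the wtxids are merkle-hashed into a witness root, which is committed in an OP_RETURN output of the coinbase, and the coinbase's own wtxid is treated as 32 zero bytes. Only a sketch -- as said above, nodes never need to store the wtxids themselves.)

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(hashes: list) -> bytes:
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def witness_commitment_output(wtxids: list, witness_reserved_value: bytes = b"\x00" * 32) -> bytes:
    """scriptPubKey of the coinbase output that commits to the block's witness data."""
    witness_root = merkle_root([b"\x00" * 32] + wtxids)  # coinbase wtxid defined as 32 zero bytes
    commitment = dsha256(witness_root + witness_reserved_value)
    # OP_RETURN, push of 36 bytes, magic 0xaa21a9ed, then the 32-byte commitment.
    return b"\x6a" + b"\x24" + bytes.fromhex("aa21a9ed") + commitment

# wtxids here would be the dsha256 of each full, witness-including serialization.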
legendary
Activity: 1176
Merit: 1134
Segwit is not about saving space for plain full nodes; the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit does and I was misled by errant internet posts that segwit saved HDD space for the blockchain.

thank you for clarifying that it wont save space for full nodes.

Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh, with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I dont agree with, but I see no point to debate with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James

The problem is that you lost a lot of credibility by making your claims earlier, and now it'll be hard to take your software seriously. Basically, you are asking us to check out your rocket after you argued against the laws of gravity.

N + 2*numtxids + numvins > N

I still claim that is true, not sure how that loses me any credibility

member
Activity: 117
Merit: 10
I asked some of these questions 3 months ago.  Never got a decent answer.

Blockstream wants soft-forked SegWit to fix the malleability problems (that would be needed for the LN, if they ever get it to work), and to force ordinary p2p bitcoin users to subsidize the costs of complicated multisig transactions (ditto).  But these reasons do not seem to explain the urgency and energy that they are putting into the SegWit soft fork.  Maybe they have other undeclared reasons?  Perhaps they intend to stuff more data into the extension records, which they would not have to justify or explain since, being in the extension part, "ordinary users can ignore it anyway"?

As for SegWit being a soft fork, that is technically true; but a soft fork can do some quite radical changes, like imposing a negative interest (demurrage) tax, or raising the 21 million limit.  One could also raise the block size limit that way.  These tricks would all let old clients work for a while, but eventually everybody will be forced to upgrade to use coins sent by the new version.

You've come to the right place for answers, professor. Openness is our middle name!

Now that that's all settled: What's Stolfi on about here? The 75% discount?

The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.
sr. member
Activity: 467
Merit: 267
Segwit is not about saving space for plain full nodes; the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit does and I was misled by errant internet posts that segwit saved HDD space for the blockchain.

thank you for clarifying that it wont save space for full nodes.

Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh, with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I dont agree with, but I see no point to debate with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James

The problem is that you lost a lot of credibility by making your claims earlier, and now it'll be hard to take your software seriously. Basically, you are asking us to check out your rocket after you argued against the laws of gravity.
RHA
sr. member
Activity: 392
Merit: 250
Also, I made the mistake of making sure the transaction hash matches for a transaction. I had assumed that if the transaction hash doesn't match, it is invalid rawbytes. Are you saying that we don't need to verify that the transaction hashes match? As you know, verifying signatures is very time consuming compared to verifying the txid. So if verifying the txid is not available anymore, that would dramatically increase the CPU load for any validating node.
Anymore? It was never done in the first place. Verifying the transaction has always been checking the signatures, because creating and verifying signatures involves the hash of the transaction.
Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents doesnt match the txid. The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature

Also, I am making an optimized bitcoin core and one of these optimizations is rejecting a tx whose contents doesnt match the txid. The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature
Every piece of Bitcoin software does this.  It is a little obnoxious that you spend so much time talking about these optimizations you're "adding", which are basic behaviors that _every_ piece of Bitcoin software ever written has always done, as if you're the only person to have thought of them or as if they distinguish this hypothetical node software you claim to be writing.

Can't you, gmaxwell and knightdk, settle on verifying txid at last?
It's really hard to get info on SegWit here if even such an obvious thing (one would think) gets contradictory answers. Wink
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
https://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/segregated-witness-and-its-impact-on-scalability/

Quote
There are still malleability problems that remain, like Bitcoin selecting which part of the transaction is being signed, like the sighash flags. This remains possible, obviously. That's something that you opt-in to, though. This directly has an effect on scalability for various network payment transaction channels and systems like lightning and others

IMO, segwit is a clean-up of the transaction format, but in order to do that without a hard fork, it uses a strange twin-block structure, which causes unnecessary complexity. A raised level of complexity typically opens many new attack vectors, and so far this has not been fully analyzed.

And the 75% discount on witness data also changes the economics of blockchain space, so that it is specially designed to benefit the lightning network and other stuff.

In fact, if you do it in a hard fork, you can redesign the whole transaction format at will, no need to do so many different hacks everywhere to make old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power do a roll back)
legendary
Activity: 2128
Merit: 1073
My point, perhaps poorly expressed, was that if you think these problems are 'not hard', you must have solutions in mind, no?  I'd be interested in hearing your ideas.  I am genuinely interested, not being sarcastic here.
It wasn't only me that had those solutions in mind. In fact they are already included in the "segregated witness" proposal, but without the "segregation" part. The "segregation" just splits the transaction in two parts. In fact one could come up with a deficient "segregated witness" proposal that wouldn't fix the discussed problems. They are orthogonal concepts.
 

Which solutions are you referring to here?

The same we discussed less than an hour ago; 9:20am vs. 10:10am.
The advantage of segwit is that it elegantly fixes a couple of other hard problems (malleability, O(n^2) sigops issue)
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?
staff
Activity: 3458
Merit: 6793
Just writing some code
Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh, with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.
Yes. If you are following the standardness and validation rules that Bitcoin Core uses, then it should be a non-issue.
legendary
Activity: 1176
Merit: 1134
Segwit is not about saving space for plain full nodes; the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again I apologize for not being smart enough to instantly understand all the changes segwit does and I was misled by errant internet posts that segwit saved HDD space for the blockchain.

thank you for clarifying that it wont save space for full nodes.

Also, my understanding now is that iguana can just treat the segwit tx as standard p2sh, with the caveat that until it fully processes the witness data, it would just need to trust that any such tx that are mined are valid.

I would debate with you on many claims you make that I dont agree with, but I see no point to debate with words. I will make an iguana release that will demonstrate my claims. Fair enough?

James
staff
Activity: 4284
Merit: 8808
I was told by gmax himself that a node that doesn't validate all signatures shouldn't call itself a fully validating node.
A node not verifying signatures in blocks during the initial block download with years of POW on them is not at all equivalent to not verifying signatures _at all_.

I agree it is preferable to verify more-- but we live in the real world, not black and white land; and offering multiple trade-offs is essential to decentralized scalability.  If there are only two choices-- run a thin client and verify _nothing_, or run a maximally costly node and verify EVERYTHING-- then large amounts of decentralization will be lost, because everyone who cannot justify or afford the full cost will have no option but to not participate in running a full node.  This makes it essential to support half steps-- it's better to allow people to choose to save resources and not verify months-old data-- which is very likely correct unless the system has failed-- since the alternative is them verifying nothing at all.

What I still don't understand is how things will work when a segwit tx is sent to a non-segwit node and that is spent to another non-segwit node. How will the existing wallets deal with that? What happens if an attacker created segwit rawtransactions and sent them to non-segwit nodes? Are there no attack vectors? What about in zeroconf environments? How does a full relaying node mine a block with segwit inputs? Or do existing full nodes cease to be able to mine blocks after the segwit softfork?
jl777, I already responded to pretty much this question directly just above. It seems like you are failing to put in any effort to read these things, disrespecting me and everyone else in this thread; it makes it seem like responding to you further is a waste of time. Sad

The segwit transactions are non-standard to old nodes. This means that old nodes/wallets ignore them until they are confirmed-- they don't show them in the wallet, they don't relay them, they don't mine them, so even confusion about unconfirmed transactions is avoided.
If you don't understand the concept of transaction standardness, you can learn about it from a few minutes of reading the Bitcoin developer guide: https://bitcoin.org/en/developer-guide#non-standard-transactions and by searching around a bit.

This is a really good explanation, thanks for taking the time to write it up. My understanding of Bitcoin doesn't come directly from the code (yet!), so I have to rely on second-hand information. The information you just provided has really deepened my understanding of the purpose of the scripting system over and above "it exists, and it makes the transactions work herp", which probably helps address your final paragraph...
[...]

Indeed it does. I am sincerely sorry for being a bit abrasive there: I've suffered too much exposure to people who aren't willing to reconsider positions-- and I was reading a stronger argument into your post than you intended-- and this isn't your fault.

Quote
I'm trying not to get (too) sucked into the conspiracy theories on either side, I'm only human though so sometimes I do end up with five when adding together two and two.

A question that still niggles me is segwit as a soft fork. I know that just dredges up the same old discussion about pros and cons of soft vs hard but for a simpleton such as me it seems that if the benefits of segwit are so clear, then compromising on the elegance of implementation in order to make it a soft fork seems a strange decision.
It would be a perfectly reasonable question, if it were the case that there was indeed a compromise here.

If segwit were to be a hardfork? What would it be?

Would it change how transaction IDs were computed, like elements alpha did? Doing so is conceptually simpler and might save 20 lines of code in the implementation... But it's undeployable, even as a hardfork-- it would break all software, web wallets, thin wallets, lite wallets, hardware wallets, block explorers-- it would break them completely, along with all presigned nlocktime transactions and all transactions in flight. It would add more than 20 lines of code in having to handle the flag day.  So while that design might be 'cleaner' conceptually, the deployment would be so unclean as to be basically inconceivable. Functionally it would be no better; in flexibility it would be no better.  No one has proposed doing this.

Would it instead do the same as it does now, but put the commitment someplace else in the block rather than in a coinbase transaction OP_RETURN-- at the top of the hashtree?  This is what Gavin Andresen responded to segwit proposing.  This would be deployable as a lite-client compatible semi-hardfork, like the blocksize increase. Would this be more elegant?

In that case... All that changes is the position of the commitment from one location to another: writing the 32+small extra bytes of data in one place in the block rather than another place. It would not change the implementation except some constants about where it reads from. It would not change storage, it would not change performance. It wouldn't be the most logical and natural way to deploy it (the above undeployable method would be).  Because it would be a hard fork, all nodes would have to upgrade for it at the same time.  So if you're currently on 0.10.2 because you have business related patches against that version which are costly to rebase-- or just because you are prohibited from upgrading without a security audit-- you'll be kicked off the network under the hard fork model when you don't upgrade by the flag day. Under the proposed deployment mechanism you can simply ignore it with no cost to you (beyond the general costs of being on an older version) and upgrade whenever it makes sense to do so-- maybe against 0.14 when there finally are some new features that you feel justify your upgrade, rather than paying the upgrade costs multiple times.  One place vs the other doesn't make a meaningful difference in the functionality, though I agree the top 'feels' a little more orderly. But again, it doesn't change the functionality, efficiency or performance, and it wouldn't make the implementation simpler at all. And there is other data that would make more sense to move to the top (e.g. stxo/utxo commitments) which hasn't been designed yet, so if segwit were moved to the top now, that commitment at the top would later need to be redesigned for these other things in any case.  It's not clear that even greenfield this would be more elegant than the proposal, and the deployment-- while not impossible for this one-- would be much less elegant and more costly.

So in summary: the elegance of a feature must be considered holistically. We must think about the feature itself, how it interacts with the future, and-- critically-- the effect of deploying it.  Considered together, the segwit deployment proposed is clearly the most elegant approach.  If deployment were ignored, the elements alpha approach would be slightly preferable, but only slightly-- it makes no practical difference-- but it is so unrealistic to deploy that in Bitcoin today that no one has proposed it. One person did propose changing the commitment location to a place that would only be possible in a hardfork, but the location makes no functional difference for the feature and would add significant amounts of deployment cost and risk.