What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?
A strong malleability fix _requires_ segregation of signatures.
A less strong fix could be achieved without it if generality were abandoned (e.g. it only works for a subset of script types, rather than all without question) and a new cryptographic signature system (something that provides unique signatures, not ECC signatures) were deployed.
And even after giving up on fixing malleability for most smart contracts, it's very challenging to be absolutely sure that a specific instance is actually non-malleable. This can be seen in the history of BIP62-- where at several points it was believed to address all forms of malleability for the subset of transactions it attempted to fix, only for additional forms to be discovered later. If a design is inherently subject to malleability but you hope to fix it by disallowing all but one possible representation, there is a near-endless supply of ways to get it wrong.
Segregation removes that problem. Scripts using segwit achieve a strong base level of non-malleability without doubt or risk of getting it wrong, both in the design and by script authors. And only segregation applies to all scripts, not just a carefully chosen subset of "inherently non-malleable" rules.
Getting signatures out from under TXIDs is the natural design for preventing malleability problems, and engineers were lamenting that Bitcoin didn't work that way as far back as 2011/late 2012.
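To make the mechanics concrete, here is a toy sketch (Python; the serialization helpers are left abstract and the names are mine, only the hashing pattern is the point) of why moving signatures out of the data the txid commits to removes third-party malleability:

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Legacy transactions: the txid hashes a serialization that *includes*
# the scriptSigs, so anyone who can re-encode a signature (without
# invalidating it) changes the txid of an in-flight payment.
def legacy_txid(serialized_tx_with_scriptsigs: bytes) -> str:
    return dsha256(serialized_tx_with_scriptsigs)[::-1].hex()

# Segregated witness: the txid hashes a serialization that *excludes*
# the witness data, so mutating a signature cannot change the txid
# that dependent (child) transactions spend from.
def segwit_txid(serialized_tx_without_witness: bytes) -> str:
    return dsha256(serialized_tx_without_witness)[::-1].hex()
```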
Can't you, gmaxwell and knightdk, settle on verifying txid at last?
It's really hard to get info on SegWit here if even such an obvious thing (one would think) gets contradictory answers.
Knightdk will tell you to defer to me if there is a conflict on such things.
But here there isn't really a conflict, I think-- we're answering different statements. I was answering "The thinking being that if the hashes dont match, there is no point in wasting time calculating the signature".
Knightdk is responding about verifying loose transactions: there is no "verify the transaction ID" step, because no ID is even sent. You have nothing to verify against. All you can do is compute the ID.
I was referring to processing blocks. Generally the first step of validating a block, after connecting it to a chain, is checking the proof of work. The second step is hashing the transactions in the block to verify that the block hash is consistent with the data you received. If it is not, the information is discarded before any further processing. Unlike with a loose transaction, you have a block header, and can actually validate against something.
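As a rough sketch of that "check the received data against the header" step (Python; the function and argument names are mine, and header/transaction parsing is assumed to have already happened):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    # txids are the raw 32-byte transaction hashes, in block order.
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:           # Bitcoin duplicates the last hash
            level.append(level[-1])  # when a level has an odd count
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def block_data_consistent(header_merkle_root: bytes, raw_txs) -> bool:
    # Hash every transaction we received and check the result against
    # the commitment in the (already PoW-checked) header before doing
    # any expensive script/signature validation.
    txids = [dsha256(tx) for tx in raw_txs]
    return merkle_root(txids) == header_merkle_root
```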
In fact, if you do it in a hard fork, you can redesign the whole transaction format at will; there's no need for so many different hacks everywhere to keep old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power does a rollback)
No, you can't-- not if you live in a world with other people in it. The spherical-cow "hardforks can change anything" view ignores that a hardfork which requires all users to shut down the Bitcoin network, destroys all in-flight transactions, and invalidates presigned transactions (thus confiscating some amount of coins) will just not be deployed.
Last year I tried proposing an utterly technically simple hard fork to fix the time-warp vulnerability and provide an extranonce in the block header using the prev-hash bits that are currently always forced to zero (often requested by miners and ASIC makers-- and important for avoiding hardcoding block logic in ASICs), and it was _vigorously_ opposed by Mike Hearn and Gavin Andresen-- because it would have required that smartphone wallets upgrade to fix their header checks and difficulty calculation. ... and that was for something that would have been a well-contained four or five lines of code changed.
I hope that that change eventually happens; but given that it was attacked so aggressively by the two biggest advocates of "hard forks are no big deal", I can't imagine a radical backwards incompatible change to the transaction format happening; especially when the alternative is so easy and good that I'd prefer to use it for increased similarity even in an explicitly incompatible system.
The discount is the question you won't get a good answer for. Fundamental economics of Bitcoin, price per byte, changed drastically, with a soft fork.
What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily the one used for transmission or storage) does not reflect the costs of a transaction to the system well. This has created a misalignment of incentives which has been misused before (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte by twiddling around with dust-spam sent to known private keys).
At the end of the day signatures are transmitted at most once to a node and can be pruned, but data in the UTXO set must remain in perpetual online storage. Its size sets a hard lower bound on the amount of resources needed to run a node. The fact that the size limit doesn't reflect the true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of a blocksize increase, e.g. http://gavinandresen.ninja/utxo-uhoh -- ignore anything in it about storing the UTXO set in RAM; no version of Bitcoin Core has ever done that, that was just some confusion on the part of the author). Prior problems with UTXO-bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.
At the Montreal Scaling Bitcoin conference, fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement on a capacity bump could be had: if capacity could be increased while _derisking_ the UTXO impact, or at least making it no worse-- then many of the concerns related to capacity increases would be satisfied. So I guess it's no shock to see avowed long-time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.
One of the challenges coming out of Montreal was that it wasn't clear how to decide how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, CPU, initial sync delays, etc., which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open question because optimizing its capacity required a discount, which had the dual effect of also fixing the misaligned costing.
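For reference, this is roughly how that discount ended up being expressed in BIP141's "weight" metric (a minimal sketch; base_size is the transaction serialized without witness data, total_size with it):

```python
def tx_weight(base_size: int, total_size: int) -> int:
    # BIP141 counts non-witness ("base") bytes four times and witness
    # bytes once: weight = base_size * 3 + total_size.
    return base_size * 3 + total_size

def virtual_size(base_size: int, total_size: int) -> int:
    # "Virtual size" rounds the weight up to whole 4-weight-unit bytes,
    # so witness bytes are effectively discounted ~75% relative to the
    # bytes that create or reference UTXO data.
    return (tx_weight(base_size, total_size) + 3) // 4

# Blocks are limited to 4,000,000 weight units rather than 1,000,000
# serialized bytes, which is where the capacity increase comes from.
MAX_BLOCK_WEIGHT = 4_000_000
```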
The claims that the discounts have something to do with lightning and blockstream have no substance at all.
(1) Lightning predates Segwit significantly.
(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.
(3) Blockstream has no plans to make any money from running Lightning on Bitcoin in any case; we started funding some work on Lightning because we believed it was long-term important for Bitcoin and Mike Hearn had criticized us for not funding it if we thought it important, because one of our engineers _really_ wanted to work on it himself, and because we were able to work out a business case for using it to make sidechains scalable too.
N + 2*numtxids + numvins > N
I still claim that is true, not sure how that loses me any credibility
In one post you were claiming 42 bytes per one-in/one-out transaction; in the other you appeared to be claiming 800 bytes. In any case, even your formula depends on what serialization is used; one could choose one where it was smaller, not bigger. The actual amount of true entropy added is on the order of a couple of bits per transaction (whether segwit coins are being spent or not, and which script versions).
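For what it's worth, if that formula is counting the extra bytes of the BIP144 extended serialization (my assumption: the 2-byte marker/flag per transaction plus a 1-byte witness stack count per input), a worked example shows it comes to single-digit bytes per transaction:

```python
# Assumption: "2*numtxids + numvins" counts the 2-byte marker/flag per
# transaction plus the 1-byte witness stack count per input that the
# BIP144 extended serialization adds over the legacy format.
def extended_serialization_overhead(num_txs: int, num_inputs: int) -> int:
    return 2 * num_txs + num_inputs

# A one-input, one-output transaction under that reading:
print(extended_serialization_overhead(1, 1))  # -> 3 bytes
```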
To characterize that as "SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY", when the same signaling will allow the use of new signature schemes that reduce the size of transactions by about _30%_ on average, seems really deceptive, and it makes me sad that you're continuing with this argument even after having your misunderstandings corrected.
I thought you said you were actually going to write the software you keep talking about and speak through results, rather than continuing the factually incorrect criticisms you keep making of software and designs which you don't care to spend a minute learning the first thing about? We're waiting.
In the mean time: Shame on you, and shame on you for having no shame.