The discount is the question you won't get a good answer to. The fundamental economics of Bitcoin, the price per byte, were changed drastically, with a soft fork.
What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not reflect the costs of a transaction to the system well. This has created a misalignment of incentives which has previously been misused (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte by twiddling around with dust-spam to known private keys).
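To put rough numbers on that kind of attack (back-of-the-envelope assumptions, not measurements): if nearly all of a block's bytes go into minimal outputs, almost the whole block ends up as new UTXO data.

```python
# Back-of-the-envelope sketch of the UTXO-bloat incentive problem.
# All byte counts are rough assumptions, not exact consensus or chainstate figures.

BLOCK_BYTES = 1_000_000        # pre-segwit block size limit
P2PKH_OUTPUT_BYTES = 34        # 8 value + 1 script-length + 25 scriptPubKey
NON_OUTPUT_FRACTION = 0.10     # assume ~10% of the block is inputs, headers, etc.

output_bytes = int(BLOCK_BYTES * (1 - NON_OUTPUT_FRACTION))
new_utxos = output_bytes // P2PKH_OUTPUT_BYTES

print(f"~{new_utxos:,} new unspent outputs from one block")
print(f"~{output_bytes / 1e6:.2f} MB of output data entering the UTXO set,")
print("before counting per-entry indexing overhead.")
```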
“What?” Yes, it is an explicit goal, an under-publicized one. Glad to hear you acknowledge that you are realigning, in your view, the misaligned incentives of the current system, via a soft fork without a full node referendum.
At the end of the day signatures are transmitted at most once to a node and can be pruned, but data in the UTXO set must be kept in perpetual online storage. Its size sets a hard lower bound on the amount of resources needed to run a node. The fact that the size limit doesn't reflect the true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of a blocksize increase: e.g. http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in RAM; no version of Bitcoin Core has ever done that; that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.
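For the curious, the dust rule works roughly like this (a sketch of current Bitcoin Core relay policy; the constants are the commonly cited defaults and are assumptions here, not consensus rules): an output is refused as dust when its value wouldn't even cover the cost of creating and spending it at the dust relay fee rate.

```python
# Sketch of Bitcoin Core's dust-threshold relay policy (policy, not consensus).
# The constants are commonly cited defaults; treat exact values as assumptions.

DUST_RELAY_FEE = 3000  # satoshis per 1000 vbytes (default -dustrelayfee)

def dust_threshold(output_vbytes: int, spend_vbytes: int) -> int:
    """An output is 'dust' if its value is below the fee needed to create
    and later spend it at the dust relay fee rate."""
    return (output_vbytes + spend_vbytes) * DUST_RELAY_FEE // 1000

print(dust_threshold(34, 148))  # P2PKH: ~34-byte output, ~148 vbytes to spend -> 546
print(dust_threshold(31, 67))   # P2WPKH: ~31-byte output, ~67 vbytes to spend -> 294
```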
I’m aware that Core is focused on encouraging a gradation of nodes on the network. To me, a full node means a full, archival, fully validating node, and that’s what I’m concerned with. You are applying economic favoritism in order to achieve benefits for these new partial full nodes, which is ok, as long as everyone is aware of it. With a handful of miners activating it, I’m not sure you have the full consent of the network to pursue this goal. With a soft fork, full consent is not required or even relevant.
At Scaling Bitcoin in Montreal, fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement on a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse, then many of the concerns related to capacity increases would be satisfied. So I guess it's no shock to see avowed long-time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.
So… changing these incentives was _the_ ray of light that let “lots of people” (assuming Blockstream here) think that a capacity increase could be had, fascinating. Before your email became the Core roadmap, and before the conclusion of the HK conference, almost everyone thought that we would be hard forking in at least some block size increase. Interesting to hear that perspective was wrong all along.
One of the challenges coming out of Montreal was that it wasn't clear how to decide how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, CPU, initial sync delays, etc., which differ from party to party and over time, though the current size counting is clearly poor across the board. Segwit settled that open parameter because optimizing its capacity required a discount, which achieved the dual effect of also fixing the misaligned costing.
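Concretely, the discount segwit settled on (per BIP141) counts each witness byte at a quarter of a non-witness byte; a minimal sketch, with the example transaction sizes being made-up numbers:

```python
# Sketch of BIP141 weight/vsize accounting (the segwit "discount").
from math import ceil

MAX_BLOCK_WEIGHT = 4_000_000   # consensus cap on total block weight

def weight(base_size: int, total_size: int) -> int:
    """base_size:  tx bytes serialized without witness data
       total_size: tx bytes serialized with witness data
       Non-witness bytes count 4x, witness bytes count 1x."""
    return 3 * base_size + total_size

def vsize(base_size: int, total_size: int) -> int:
    """'Virtual size': weight scaled back into byte-like units for fee purposes."""
    return ceil(weight(base_size, total_size) / 4)

# Made-up example: 200 non-witness bytes plus 300 witness bytes.
# Old costing charged 500 bytes; the weight rule charges 275 vbytes.
print(weight(200, 500), vsize(200, 500))            # 1100 275
print(MAX_BLOCK_WEIGHT // weight(200, 500), "such transactions fit per block")
```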
This is all just you playing economic central planner, and the 1 MB anti-DoS limit from 2010 has become your most valued control lever, kudos.
The claims that the discounts have something to do with lightning and blockstream have no substance at all.
(1) Lightning predates Segwit significantly.
Not surprising; segwit was designed with the "side" benefit of making signature-heavy settlement transactions cheaper, and the main benefit of fixing malleability, which LN requires.
(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.
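A rough sketch of that recosting effect, using approximate input sizes I'm assuming for illustration rather than exact figures:

```python
# Rough old-vs-new costing comparison for two input styles.
# Byte counts below are approximate assumptions, not exact figures.
from math import ceil

def vbytes(non_witness: int, witness: int) -> int:
    # BIP141: weight = 4 * non-witness bytes + 1 * witness bytes; vsize = weight / 4
    return ceil((4 * non_witness + witness) / 4)

# 2-of-3 P2WSH multisig input: ~41 non-witness bytes (outpoint, empty scriptSig,
# sequence) plus ~255 witness bytes (two signatures + witness script).
print("multisig:  ", 41 + 255, "raw bytes ->", vbytes(41, 255), "vbytes")
# Single-sig P2WPKH input: ~41 non-witness bytes plus ~107 witness bytes.
print("single-sig:", 41 + 107, "raw bytes ->", vbytes(41, 107), "vbytes")
```

The signature-heavy input's charged size drops by roughly two thirds, while the single-sig input drops by about half, which is the sense in which the recosting favors large multisigs over small-signature transactions like HTLC closes.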
Waves hands.
(3) Blockstream has no plans to make any money from running Lightning in Bitcoin in any case; we started funding some work on Lightning because we believed it was long-term important for Bitcoin and Mike Hearn criticized us for not funding it if we thought it important, because one of our engineers _really_ wanted to work on it himself, and because we were able to work out a business case for using it to make sidechains scalable too.
I will be paying attention to whether this statement remains true. You got your jabs in at both Gavin and Mike, so, kudos again.