
Topic: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF

staff
Activity: 3458
Merit: 6793
Just writing some code
jl777, this link: https://bitcoincore.org/en/segwit_wallet_dev/ might be useful to you for help with implementing segwit.
newbie
Activity: 26
Merit: 3
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering the one question about the malleability fix I had. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)
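A minimal sketch of what "omitting sigs from the txid hash input" could look like, in toy Python (simplified varints and illustrative field names; note the actual SegWit design instead moves signatures into a separate witness structure):

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def varint(n: int) -> bytes:
    assert n < 0xfd  # toy: small counts only
    return bytes([n])

def serialize(tx, blank_sigs: bool) -> bytes:
    out = tx["version"].to_bytes(4, "little")
    out += varint(len(tx["vin"]))
    for vin in tx["vin"]:
        sig = b"" if blank_sigs else vin["scriptSig"]
        out += vin["prev_txid"] + vin["prev_n"].to_bytes(4, "little")
        out += varint(len(sig)) + sig
        out += vin["sequence"].to_bytes(4, "little")
    out += varint(len(tx["vout"]))
    for vout in tx["vout"]:
        out += vout["value"].to_bytes(8, "little")
        out += varint(len(vout["scriptPubKey"])) + vout["scriptPubKey"]
    out += tx["locktime"].to_bytes(4, "little")
    return out

def legacy_txid(tx) -> bytes:
    return dsha256(serialize(tx, blank_sigs=False))  # sig bytes included: malleable

def skip_sig_txid(tx) -> bytes:
    return dsha256(serialize(tx, blank_sigs=True))   # stable if sigs are malleated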

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?


Maybe you should read gmaxwell's posts about doing a hard fork to change that calculation. They are a few posts above this in this thread.
Yes. Well said.

It's unsettling how often information that has already been provided is left ignored or unreferenced.

More excusable for the one-off contributors, I realise, but not for those whose apparent strong interest in the issues leads them to post again and again.

Sometimes it's been like wading through treacle, but I'm glad Gregory Maxwell contributed (I don't know anything about bitcoin developers, and had never heard of him until today); what he wrote about the different methods of fixing transaction malleability, and their implications, particularly made an impression.

I'd recommend reading his posts in full. Advisory notice - he does lose his patience occasionally! And for balance, read those he references and those who reference him. And if that prevents just one unnecessary post, I'll have done my…
newbie
Activity: 25
Merit: 0
(I am assuming 2mb is more easily coded than segwit
You're right about that. It's so much easier that it's already been finished for some time now, on the second-most-popular bitcoin client. See http://bitcoinclassic.com
hero member
Activity: 812
Merit: 1001
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering the one question about the malleability fix I had. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?



Yeah, both hackish (although possibly beautiful code) and the economic model, if I understand that correctly.

I don't think segwit could ever achieve HF consensus, in my opinion. However, if a winning hard fork was achieved, I would respect that.
A soft fork is not right here, and could well be considered an attack.

Why not 2mb first, which is on every partisan roadmap. Then segwit, maybe. Maybe not.
(I am assuming 2mb is more easily coded than segwit, and not as complicated as segwit, as was stated earlier. Although the ease of coding is only a small part of the reason segwit should not be introduced yet. Certainly not introduced by Core. A SF attack on nodes.)
sr. member
Activity: 409
Merit: 286
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

Quote
Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits

Biggest issue of this week, perhaps?  

Surely you know that the non-mining relay nodes invalidate the few security guarantees that the protocol can offer. Simple clients should not connect to them, but to miners (or relay nodes that are known to be run by miners). It makes no sense to twist the protocol inside out in order to meet CONJECTURAL needs of those nodes.

The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block.  That is the cost that the transaction fees have to cover.  The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. But it seems to be very small (at least for competent miners) and is probably dependent only on the total size of the transaction.  But anyway the developers have no business worrying about that cost: the fees are payment for the miners; it should be the miners who decide how much to charge, and for what.

According to Adam Back, the SegWit discount applied to signature data will fix an incentive bug in bitcoin; see:

https://www.reddit.com/r/btc/comments/4aka3f/over_3000_classic_nodes/d11atxc

Funny, r/btc gave him a symbol as president of blockstream.

Not funny how he plays with words.

He is asked

Quote
Next you'll claim "Classic isn't doing anything to combat UTXO bloat but Blockstream is!"

and he answers

Quote
well Bitcoin developers are yes, via the mechanism I described. Classic isnt doing that ...

Just a notice, slightly off-topic --
staff
Activity: 3458
Merit: 6793
Just writing some code
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering the one question about the malleability fix I had. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?


Maybe you should read gmaxwell's posts about doing a hard fork to change that calculation. They are a few posts above this in this thread.
donator
Activity: 2772
Merit: 1019
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

(Thanks for answering the one question about the malleability fix I had. So it can simply be done by omitting sigs from the txid hash input, cool. If not, please let me know)

It seems to me many people have a problem with segwit because of the "hackish" softfork and/or because of the change of the economic model (2 classes of blockspace).

If we did the points listed by JorgeStolfi above as a hardfork, would that be an option for the proponents of segwit? Seems to me such a hardfork could gain wide consensus, maybe wide enough to be considered safe by everyone? It would certainly appeal to the people who just want a simple blocksize increase and it should (I don't know, though) also satisfy the people who want segwit now.

What would be missing compared to segwit? fraud proofs? change of economic model?

legendary
Activity: 1260
Merit: 1008
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

Quote
Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits

Biggest issue of this week, perhaps?  

Surely you know that the non-mining relay nodes invalidate the few security guarantees that the protocol can offer. Simple clients should not connect to them, but to miners (or relay nodes that are known to be run by miners). It makes no sense to twist the protocol inside out in order to meet CONJECTURAL needs of those nodes.

The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block.  That is the cost that the transaction fees have to cover.  The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. But it seems to be very small (at least for competent miners) and is probably dependent only on the total size of the transaction.  But anyway the developers have no business worrying about that cost: the fees are payment for the miners; it should be the miners who decide how much to charge, and for what.

According to Adam Back, the SegWit discount applied to signature data will fix an incentive bug in bitcoin; see:

https://www.reddit.com/r/btc/comments/4aka3f/over_3000_classic_nodes/d11atxc
staff
Activity: 3458
Merit: 6793
Just writing some code
Are Core developers against a hard fork because it will somehow confiscate time-locked coins? How many people aside from Blockstream employees have time-locked coins now? (I know this is off-topic, might need a new thread.)
No, they were against hard forking for changing the way that a txid was calculated.
legendary
Activity: 1260
Merit: 1116
Are Core developers against a hard fork because it will somehow confiscate time-locked coins? How many people aside from Blockstream employees have time-locked coins now? (I know this is off-topic, might need a new thread.)
newbie
Activity: 25
Merit: 0
Thanks for your answers, gmax. I understand segwit much better now, in areas like the backward-compatibility in the soft-fork scenario, and the changes to the "base" block.
hero member
Activity: 910
Merit: 1003
A strong malleability fix _requires_ segregation of signatures.

No, none of the alleged benefits of SegWit requires segregation of signatures:

* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  

* GREATER bandwidth savings could be obtained by providing API/RPC calls that simple clients can use to fetch the data that they need from full nodes, sans the signatures etc.  This enhancement would not even be a soft fork, and would let simple clients save bandwidth even when fetching legacy (non-SegWit) blocks and transactions.  

* Pruning signature data from old transactions can be done the same way.

* Increasing the network capacity can be achieved by increasing the block size limit, of course.

Quote
Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits

Biggest issue of this week, perhaps?  

Surely you know that the non-mining relay nodes invalidate the few security guarantees that the protocol can offer. Simple clients should not connect to them, but to miners (or relay nodes that are known to be run by miners). It makes no sense to twist the protocol inside out in order to meet CONJECTURAL needs of those nodes.

The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block.  That is the cost that the transaction fees have to cover.  The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. But it seems to be very small (at least for competent miners) and is probably dependent only on the total size of the transaction.  But anyway the developers have no business worrying about that cost: the fees are payment for the miners; it should be the miners who decide how much to charge, and for what.
legendary
Activity: 1260
Merit: 1116

Quote
Just like a soft fork, you have a long period to inform all the users to upgrade; for those who don't care, their software will just not be able to talk to the network and their transactions will be dropped.

That isn't like a soft fork; soft forks don't kick anyone out of the network. And you seem to have missed what I said: because of nLockTime'd transactions, changing the transaction format would effectively confiscate some people's Bitcoins.


The world will not collapse because of a bitcoin hard fork, and since it has been advertised as an experiment, everyone knows it can have many disruptions; they all play with risk capital and will tighten their security belts if well informed. By successfully doing a hard fork, you clear the way for many difficult changes in the future. You can't conjure up a new soft-fork trick every time you want a backward-incompatible change. If you have to do a hard fork anyway in the future, the earlier the better

If you are aiming for a million dollars per bitcoin, it is still a very early stage of development

I didn't know anything about anybody losing time-locked coins in a hard fork. That's not cool. Powerfully not cool!  Angry
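In toy form, the time-locked-coins problem looks like this (illustrative names and placeholder txids; the point is only that a pre-signed spend references its funding outpoint by txid):

Code:
# A pre-signed, far-future nLockTime transaction spends an outpoint that is
# identified by a txid computed under the OLD rules. If a hard fork changes
# how txids are computed, new nodes index the funding tx under a different
# id, the stored reference never matches, and the spend can never confirm.
# Often it cannot be re-signed either (keys lost, counterparty gone).
funding_txid_old_rules = "ab" * 32  # placeholder: txid under the old rules
funding_txid_new_rules = "cd" * 32  # placeholder: same tx, new-rules txid

presigned_spend = {"prev_txid": funding_txid_old_rules, "prev_n": 0,
                   "nLockTime": 500000}

def can_ever_confirm(spend, current_funding_txid):
    return spend["prev_txid"] == current_funding_txid

print(can_ever_confirm(presigned_spend, funding_txid_new_rules))  # False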
legendary
Activity: 1988
Merit: 1012
Beyond Imagination

Quote
Just like a soft fork, you have a long period to inform all the users to upgrade; for those who don't care, their software will just not be able to talk to the network and their transactions will be dropped.

That isn't like a soft fork; soft forks don't kick anyone out of the network. And you seem to have missed what I said: because of nLockTime'd transactions, changing the transaction format would effectively confiscate some people's Bitcoins.


The world will not collapse because of a bitcoin hard fork, and since it has been advertised as an experiment, everyone knows it can have many disruptions; they all play with risk capital and will tighten their security belts if well informed. By successfully doing a hard fork, you clear the way for many difficult changes in the future. You can't conjure up a new soft-fork trick every time you want a backward-incompatible change. If you have to do a hard fork anyway in the future, the earlier the better

If you are aiming for a million dollars per bitcoin, it is still a very early stage of development
staff
Activity: 4242
Merit: 8672
This is a networked society; I don't think a hard fork is as difficult as you said. Ethereum just had one and no one complained
You're getting caught up on terms, thinking that all hard forks are the same. They aren't.  Replacing the entire Bitcoin system with Ethereum, complete with the infinite inflation schedule of Ethereum, would just be a hardfork... but uhhh.. it's not the same thing as, say, increasing the Bitcoin blocksize, which is not the same as allowing coinbase txn to spend coinbase outputs...

I’m aware that Core is focused on encouraging a gradation of nodes on the network. To me, a full node means a full, archival, fully validating node, and that’s what I’m concerned with.
Your usage of the word "full node" is inconsistent with the Bitcoin community's since something like 2010 at least. A pruned node is a full node. You can invent new words if you like, but keep in mind the purpose of words is to communicate, and so when you make up new meanings just to argue that you're right, you are just wasting time.

You claim to be concerned with validating, but I do not see you complaining that classic has functionality so that miners will skip validation: https://www.reddit.com/r/Bitcoin/comments/4apl97/gavins_head_first_mining_thoughts/

(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.

Waves hands.

luke-jr told me it takes 2 bytes per tx and 1 byte per vin extra using segwit as opposed to a 2MB hardfork. I thought you also confirmed this. Now you are saying that using segwit reduces the total permanent space used by 30%; if that is really the case then I will change my view.

Please explain to me how lukejr is wrong when he says it takes 2 bytes per tx and 1 byte per vin. I will update the title to match my understanding, without shame, when I see my mistake. Imagine I am like rainman. I just care about the numbers
Luke told you what the Bitcoin Core segwitness implementation stores. For ease of implementation it stores the flags that way. Any implementation could do something more efficient to save the tiny amount of additional space there; Core probably won't bother-- not worth the engineering effort because it's a tiny amount.
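To put numbers on "tiny": using the figures quoted above (2 marker/flag bytes per tx and 1 witness-count byte per vin; these are the quoted figures, not consensus constants), the extra space in the thread title works out as:

Code:
def segwit_serialization_overhead(num_txids: int, num_vins: int) -> int:
    # Bytes on top of the base size N, per the thread title's formula.
    return 2 * num_txids + num_vins

# e.g. a full block with 2500 txs and 5000 inputs:
print(segwit_serialization_overhead(2500, 5000))  # 10000 bytes, ~0.01 MB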

Part of what segwitness does is facilitate signature system upgrades. One of the proposed upgrades now saves an average of 30% on current usage patterns-- I linked it in an earlier response. It would save more if users did whole-block coinjoins. The required infrastructure to do that is exactly the same as coinjoin (because it is a coinjoin), with a two-round-trip signature-- but the asymptotic gain is only a bit over 41%.  It'll be nice for coinjoins to have lower marginal fees than non-coinjoins; but given the modest improvement possible over current usage, it isn't particularly important to have whole-block joins with that scheme; existing usage gets most of the gains.
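A rough back-of-envelope for that kind of saving, assuming (these sizes are typical values supplied here for illustration, not part of the proposal) that aggregation replaces one ~72-byte ECDSA signature per input with a single ~64-byte signature:

Code:
SIG_ECDSA = 72  # typical DER-encoded ECDSA signature size, in bytes
SIG_AGG = 64    # one Schnorr-style aggregate signature for the whole tx

def witness_sig_bytes(num_inputs: int, aggregated: bool) -> int:
    return SIG_AGG if aggregated else num_inputs * SIG_ECDSA

for n in (1, 2, 10, 100):
    before = witness_sig_bytes(n, False)
    after = witness_sig_bytes(n, True)
    print(n, before, after, "%.0f%% of sig bytes saved" % (100.0 * (before - after) / before))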
legendary
Activity: 1988
Merit: 1012
Beyond Imagination

In fact, if you do it in a hard fork, you can redesign the whole transaction format at will; there is no need to do so many different hacks everywhere to make old nodes unaware of the change (these nodes can work against upgraded nodes in certain cases, especially when some of the upgraded hashing power does a rollback)
No, you can't-- not if you live in a world with other people in it.  The spherical cow "hardforks can change anything" ignores that a hardfork that requires all users to shut down the Bitcoin network, destroying all in-flight transactions and invalidating presigned transactions (thus confiscating some amount of coins), will just not be deployed.


This is a networked society; I don't think a hard fork is as difficult as you said. Ethereum just had one and no one complained

Just like a soft fork, you have a long period to inform all the users to upgrade; for those who don't care, their software will just not be able to talk to the network and their transactions will be dropped. Anyone can make a hard fork right away, but if major exchanges, major service providers/merchants are not accepting his coins, there is no point of that minority coin

When a large bank upgrades its system, all the users of that bank cannot access the banking service for hours or a whole night/weekend, and no one complains. And sometimes when they have an incident, it can happen in the middle of the day and suddenly no payments can be made in the whole country; still no one cares, only a piece of news appears in the newspaper

Of course banks can always reverse transactions, so it's a bit different from bitcoin. However, bitcoin is use-at-your-own-risk; no one will compensate anyone's bitcoin losses due to incompetent devs or forks, so it is the user's responsibility to keep himself updated with the latest changes in bitcoin

legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
Classic cargo cult now in bed with jl777, treating him like Jim Jones?  Amazing.  

jl777 views Classic as a headless steamroller with an empty driver's seat he can fill - accruing all glory and power.

I did ask for an iguana childboard here, but was totally ignored on that, not even a rejection. Maybe if that wasn't just ignored, I wouldn't be so active elsewhere. bitco.in gave me a child board the next day, so I am more active there. It is as simple as that.

I do not agree with classic's position against RBF; that does not make sense to me. I still have not heard any rational explanation of how RBF breaks zeroconf. Zeroconf can't work when the blocks are full, as you can't know when a tx in the mempool is likely to confirm. If anything, defining the RBF behavior allows a much better statistical model to predict when an unconfirmed tx will confirm.

Convince me with the math, or you can call me names and I stay unconvinced.

With RBF, I came here, asked some questions, got reasonable answers and made my analysis. I don't like the changing of the sequence id into the RBF field, but it isn't the horrible devil's spawn that it is made out to be. However, the people there are much better behaved and nobody trolled me or insinuated that I don't understand bitcoin at all.

James

We can't make a new childboard for every project you think of.  You start like 4 new things every month, and none of them are ever completed.  If you demonstrate the capacity to follow through on things you begin, perhaps your latest weekly brain fart vaporware might be taken seriously.

Glad you saw through Classic's FUD about RBF.  It doesn't make Classic look good to reject such an obviously beneficial feature, especially for the sake of preserving their false hope about zero-conf tx viability.
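For reference, the opt-in RBF that shipped in Bitcoin Core 0.12 (BIP125) is signaled per input through the sequence field jl777 mentions; a minimal check looks like:

Code:
MAX_RBF_SEQUENCE = 0xFFFFFFFD  # BIP125: any input at or below this signals

def signals_rbf(tx) -> bool:
    # tx["vin"]: list of inputs, each with an integer "sequence" field.
    # (BIP125 replaceability is also inherited from unconfirmed ancestors
    # that signal it; that part is omitted here.)
    return any(vin["sequence"] <= MAX_RBF_SEQUENCE for vin in tx["vin"])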

The problem in this thread is you contradicting yourself:
Quote
jl777: 'I'm just a simple C programmer' (everybody take a shot, per alt sub rules!  Cheesy)
jl777: 'I'm just here to ask questions'
jl777: 'I just heard about SEGWIT and I'm here to fix it!'
jl777: 'I can't be bothered to read the freaking SEGWIT manual (BIP docs) but will still post my melodramatic FUDDY conclusions about "wasting precious blockchain space"'

Do you see the problem ^there?

IMO, it looks like Gavin helped hype Iguana with the name-drop in order to introduce you as Classic's latest FUDster-In-Chief, following in the ignoble footsteps of Hearn's and Toomin's failures.

Your usual 'baffle them with techno-babble BS' strategy may work in the altcoin space, but there are much higher standards in Bitcoin.   Wink
legendary
Activity: 1176
Merit: 1134
OK, so maybe the fact that I am trying to analyze what the segwit softfork in the upcoming weeks will do explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or can the usual softfork change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.
Well I think those changes could be soft forked in because they change the script version number, which I think would only affect the address type. I could be wrong though.

maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then that could be added to the coinbase and then we won't need any witness data at all. Did I get that right?
I am fairly certain that this isn't possible since it would require the private keys that can spend the inputs of all of the transactions to sign it. However, I could be wrong as I am not well versed in many parts of cryptography. Maybe there is an algorithm which could combine all of the signatures; I don't know. You'll have to ask gmaxwell, he is the "chief cryptographer".
I would think that implementing a blockwide aggregated signature would at the least require a four-step process:

1. block is mined to determine the tx that are in it
2. the txids of this protoblock would need to be broadcast
3. nodes that are running and part of the protoblock txid would need to sign and return to miner(s)?
4. miner prunes out all the signatures that are aggregated and publishes optimized block

Not sure if the libsecp256k1-zkp lib's schnorr routines are sufficient for this, and clearly it can't be done with all sigs; and of course the details about timing and protocol for the above remain to be defined, like when the mining reward is earned, etc. So this is just a fantasy protocol for now

I am not saying the above is possible, just that the above is the minimum back and forth that would be needed, and it has some privacy issues, so some privacy enhancements are probably needed too. A bitmap of the aggregate signers would probably be needed, but that can be run-length encoded to take up a relatively small amount of space
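A toy run-length encoding (the scheme here is invented for illustration) shows why such a bitmap stays small when most inputs opt in:

Code:
def rle_encode(bits):
    # Assumes a non-empty bitmap; returns (bit, run_length) pairs.
    runs, prev, count = [], bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

bitmap = [1] * 4970 + [0] * 30  # 5000 inputs, 30 of them not aggregated
print(rle_encode(bitmap))       # [(1, 4970), (0, 30)] -- just two runs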
member
Activity: 117
Merit: 10
The discount is the question you won't get a good answer for. The fundamental economics of Bitcoin (price per byte) changed drastically, with a soft fork.

What? It's an explicit goal. Transaction "size" in a particular serialization (which isn't necessarily used for transmission or storage) does not well reflect the costs of a transaction to the system. This has created a misalignment of incentives which has been previously misused (e.g. a miner creating blocks which expand the UTXO set size by almost a megabyte twiddling around with dust-spam (known private keys)).  

“What?” Yes, it is an explicit goal, an under-publicized one. Glad to hear you acknowledge that you are realigning, in your view, the misaligned incentives of the current system, via a soft fork without a full node referendum.

At the end of the day signatures are transmitted at most once to a node and can be pruned. But data in the UTXO set must be in perpetual online storage. Its size sets a hard lower bound on the amount of resources to run a node. The fact that the size limit doesn't reflect the true cost has been a long-term concern, and it's one of the biggest issues raised with respect to blocksize limits (even acknowledged by strong proponents of blocksize increase: e.g.  http://gavinandresen.ninja/utxo-uhoh (ignore anything in it about storing the UTXO set in ram, no version of Bitcoin Core has ever done that; that was just some confusion on the part of the author)). Prior problems with UTXO bloating attacks forced the introduction of the "dust limit" standardness rule, which is an ugly hack to reduce the bleeding from this misalignment of incentives.

I’m aware that Core is focused on encouraging a gradation of nodes on the network. To me, a full node means a full, archival, fully validating node, and that’s what I’m concerned with. You are applying economic favoritism in order to achieve benefits for these new partial full nodes, which is ok, as long as everyone is aware of it. With a handful of miners activating it, I’m not sure you have the full consent of the network to pursue this goal. With a soft fork, full consent is not required or even relevant.

At Scaling Bitcoin in Montreal, fixing this costing imbalance was _the_ ray of light that got lots of people thinking that some agreement to a capacity bump could be had: if capacity could be increased while _derisking_ UTXO impact, or at least making it no worse-- then many of the concerns related to capacity increases would be satisfied.  So I guess it's no shock to see avowed long-time Bitcoin attackers like jstolfi particularly picking on this aspect of a fix as a measure to try to undermine the ecosystem.

So… changing these incentives was _the_ ray of light that let "lots of people" (assuming blockstream here) think that a capacity increase could be had, fascinating. Before your email became the core roadmap, and before the conclusion of the HK conference, almost everyone thought that we would be hard forking at least some block size increase. Interesting to hear that perspective was wrong all along.


One of the challenges coming out of Montreal was that it wasn't clear how to decide on how the corrected costing should work. The "perfect" figures depend on the relative costs of storage, bandwidth, cpu, initial sync delays, etc.. which differ from party to party and over time-- though the current size counting is clearly poor across the board. Segwit addressed that open parameter, because optimizing its capacity required a discount which achieved a dual effect of also fixing the misaligned costing.

This is all just you playing economic central planner, and the 1MB anti-DoS limit from 2010 has become your most valued control lever, kudos.

The claims that the discounts have something to do with lightning and blockstream have no substance at all.
(1) Lightning predates Segwit significantly.

Not surprising; segwit was designed with the "side" benefit of making sig-heavy settlement tx cheaper, and a main benefit of fixing malleability, which LN requires.

(2) Lightning HTLC transactions have tiny signatures, and benefit less than many transaction styles (in other words the recosting should slightly increase their relative costs), though no one should care because channel closures are relatively rare. Transactions that do large multisigs would benefit more, because the current size model radically over-costs them relative to their total cost to Bitcoin nodes.

Waves hands.

(3) Blockstream has no plans to make any money from running Lightning in Bitcoin in any case;  we started funding some work on Lightning because we believed it was long-term important for Bitcoin and Mike Hearn criticized us for not funding it if we thought it important, because one of our engineers _really_ wanted to work on it himself, and because we were able to work out a business case for using it to make sidechains scalable too.

I will be paying attention as to whether this statement remains true. You got your jabs in at both Gavin and Mike, so, kudos again.
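For readers following along, the "discount" being argued over is concretely the BIP141 weight formula: witness bytes count once, all other bytes count four times, with a 4,000,000-weight block limit. In toy Python:

Code:
MAX_BLOCK_WEIGHT = 4000000  # BIP141 limit, replacing the old 1 MB size check

def weight(base_size: int, total_size: int) -> int:
    # base_size: serialized bytes without witness data (counted 4x)
    # total_size: serialized bytes including witness data (counted 1x extra)
    return 3 * base_size + total_size

# A 400-byte tx of which 250 bytes are witness data:
print(weight(150, 400))  # 850 weight units, i.e. 212.5 "virtual" bytes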
staff
Activity: 3458
Merit: 6793
Just writing some code
OK, so maybe the fact that I am trying to analyze what the segwit softfork in the upcoming weeks will do explains my not understanding that future upgrades with a new signature scheme are part of the analysis... Would these changes require a hardfork, or can the usual softfork change the signature scheme? It is kind of hard to analyze something based on unspecified future upgrades with a different signature scheme.
Well I think those changes could be soft forked in because they change the script version number, which I think would only affect the address type. I could be wrong though.

maybe there can be just a single aggregated signature for all the tx in a block? I have no idea if that is possible, but if it is, then that could be added to the coinbase and then we won't need any witness data at all. Did I get that right?
I am fairly certain that this isn't possible since it would require the private keys that can spend the inputs of all of the transactions to sign it. However, I could be wrong as I am not well versed in many parts of cryptography. Maybe there is an algorithm which could combine all of the signatures; I don't know. You'll have to ask gmaxwell, he is the "chief cryptographer".
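For what it's worth, the soft-fork upgrade path mentioned above comes from the BIP141 witness program layout: a version byte plus a single 2-40 byte push, where old nodes treat unknown versions as anyone-can-spend and therefore accept them, so a new signature scheme can be assigned a new version without a hard fork. A rough sketch of the parsing:

Code:
def parse_witness_program(script_pub_key: bytes):
    # BIP141: a witness program is one version opcode (OP_0 or OP_1..OP_16)
    # followed by a single direct push of 2 to 40 bytes.
    if not (4 <= len(script_pub_key) <= 42):
        return None
    v = script_pub_key[0]
    version = 0 if v == 0x00 else (v - 0x50 if 0x51 <= v <= 0x60 else None)
    if version is None or script_pub_key[1] != len(script_pub_key) - 2:
        return None
    return version, script_pub_key[2:]  # (witness version, program bytes)

# Version 0 is defined today (20-byte P2WPKH, 32-byte P2WSH programs);
# versions 1..16 are open for future upgrades such as new signature schemes.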