
Topic: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF - page 5. (Read 21405 times)

newbie
Activity: 25
Merit: 0
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?
There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).
There isn't. 100kB is a huge transaction (100 times bigger than a "normal" large transaction). IMO, that's a perfectly acceptable threshold. If larger is needed, you can always create a second transaction.

The "one every 100 blocks" exception really isn't needed here. It's more cool than useful.
legendary
Activity: 1176
Merit: 1134
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?

The actual fork under discussion has this property.  Restricting all transactions to 1MB would prevent the O(N²) part of the hashing problem.

Even better would be to restrict transactions to 100kB.  As I understand it, core already considers transactions above 100kB as non-standard.

The benefit of restricting transactions to 100kB should improve things by a factor of 100 (assuming O(N²)).  The problem with doing that is locked transactions.  There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).

A soft fork which restricted transactions to 100kB unless the height is evenly divisible by 100 would be a reasonable compromise here.  Locked transactions can still be spent, but only in every 100th block.  Most likely nobody has 100kB+ locked transactions anyway.
If >100kB is nonstandard, then the odds are very high that there are no such pending tx, and moving forward CLTV can be used.

Cool idea to have an anything-goes block every 100. It probably isn't an issue, but since it is impossible to know for sure, it's probably a good idea to have something like that. For something that probably doesn't exist, though, 1 in 1000 should be good enough. Or just make such transactions nonstandard; as long as any single miner is mining them, they will eventually get confirmed.
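The rule being discussed can be sketched as a toy validity check (hypothetical constants and names, not real consensus code):

```python
# Toy validity check (hypothetical constants, not real consensus code) for
# the proposed soft fork: transactions over 100kB are only valid in blocks
# whose height is evenly divisible by 100, leaving an escape hatch for any
# oversized pre-signed locked transactions.

MAX_TX_SIZE = 100_000  # the 100kB standardness threshold

def tx_allowed(tx_size: int, block_height: int) -> bool:
    """True if a transaction of tx_size bytes may appear at block_height."""
    if tx_size <= MAX_TX_SIZE:
        return True
    return block_height % 100 == 0  # the "anything goes" exception blocks
```

Swapping `% 100` for `% 1000` would give the "1 in 1000" variant with the same shape.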
legendary
Activity: 1232
Merit: 1094
But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?

The actual fork under discussion has this property.  Restricting all transactions to 1MB would prevent the O(N²) part of the hashing problem.

Even better would be to restrict transactions to 100kB.  As I understand it, core already considers transactions above 100kB as non-standard.

The benefit of restricting transactions to 100kB should improve things by a factor of 100 (assuming O(N²)).  The problem with doing that is locked transactions.  There could be a locked 200kB transaction that spends some outputs and where an alternative transaction can no longer be created (private keys lost and/or multisig outputs).

A soft fork which restricted transactions to 100kB unless the height is evenly divisible by 100 would be a reasonable compromise here.  Locked transactions can still be spent, but only in every 100th block.  Most likely nobody has 100kB+ locked transactions anyway.
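The factor-of-100 claim can be seen with a rough cost model of legacy signature hashing (the average input size here is an assumption; the exact constant does not matter for the quadratic shape):

```python
# Rough cost model (assumed average input size, not real consensus code) of
# legacy signature hashing: each input's sighash covers a serialization of
# roughly the whole transaction, so total hashed bytes grow quadratically
# with transaction size.

def hashed_bytes(tx_size_bytes: int, avg_input_bytes: int = 180) -> int:
    """Approximate bytes fed through SHA256 when every input re-hashes the
    full serialization of the transaction."""
    num_inputs = tx_size_bytes // avg_input_bytes
    return num_inputs * tx_size_bytes

# Capping transactions at 100kB cuts the worst single-transaction cost by
# roughly the factor of 100 mentioned above:
ratio = hashed_bytes(1_000_000) / hashed_bytes(100_000)
```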
legendary
Activity: 1176
Merit: 1134
I am confused. In your example, it is in the blockchain and since you have the ability to spend it, why would any fork make it so you can't spend it?

The spending transaction isn't in the block chain.

You create transaction A and then create the refund transaction B.  B is signed by both parties.  A is submitted to the blockchain.  B has a locktime of 2 years in the future.

A soft fork happens that makes B unspendable for some reason.  Perhaps it requires signatures signed with the original private keys.  In that case, it is impossible for either party to create the new spending transaction.

This has already happened with the P2SH fork.  If you happened to create a P2SH output, then it would be unspendable.  On the plus side, I assume they actually checked that there were no such outputs when the fork was proposed. 

The key point is that a (chain of) timelocked transactions that are spendable now, should also be spendable in the future.
I see, this was before CLTV, when future-locktime tx couldn't be confirmed.

Theoretically any unspent multisig output could be in this state, and any p2sh output could also have this issue.

But why does the fork have to make the existing outputs unspendable? I know it is possible to make any sort of fork, but who is proposing anything that would make these locktime tx unspendable?
legendary
Activity: 1176
Merit: 1134
On the plus side, I assume they actually checked that there were no such outputs when the fork was proposed. 
Sorry, my English is too poor. Who checked what?
Are you sure that these addresses are spendable today?
https://blockchain.info/address/3Dnnf49MfH6yUntqY6SxPactLGP16mhTUq
https://blockchain.info/address/3NukJ6fYZJ5Kk8bPjycAnruZkE5Q7UW7i8
From a practical point of view, if the amounts that are lost are small, then it could be solved via compensation. Practically speaking, it doesn't make sense to me to spend 1000 BTC of costs to make sure 0.001 BTC is preserved, assuming there are good justifications.

But that's just me
legendary
Activity: 1260
Merit: 1019
On the plus side, I assume they actually checked that there were no such outputs when the fork was proposed. 
Sorry, my English is too poor. Who checked what?
Are you sure that these addresses are spendable today?
https://blockchain.info/address/3Dnnf49MfH6yUntqY6SxPactLGP16mhTUq
https://blockchain.info/address/3NukJ6fYZJ5Kk8bPjycAnruZkE5Q7UW7i8
legendary
Activity: 1232
Merit: 1094
I am confused. In your example, it is in the blockchain and since you have the ability to spend it, why would any fork make it so you can't spend it?

The spending transaction isn't in the block chain.

You create transaction A and then create the refund transaction B.  B is signed by both parties.  A is submitted to the blockchain.  B has a locktime of 2 years in the future.

A soft fork happens that makes B unspendable for some reason.  Perhaps it requires signatures signed with the original private keys.  In that case, it is impossible for either party to create the new spending transaction.

This has already happened with the P2SH fork.  If you happened to create a P2SH output, then it would be unspendable.  On the plus side, I assume they actually checked that there were no such outputs when the fork was proposed. 

The key point is that a (chain of) timelocked transactions that are spendable now, should also be spendable in the future.
legendary
Activity: 1260
Merit: 1019
...because it only needs the agreement of a [...] majority, who need not inform or convince anyone else
Please let me know if you find a globe with a different law.  Grin
legendary
Activity: 2128
Merit: 1073
Similar difficulties exist in handling an old transaction that was created before a soft fork but was broadcast only after it, and became invalid under new rules.  The rules must have changed for a reason, so the transaction cannot simply be included in the blockchain as such.   For example, suppose that the change consisted in imposing a strict limit to the complexity of signatures, to prevent "costly transaction" attacks.  The miners cannot continue to accept old transactions according to old rules, because that would frustrate the goal of the fork. 
(Note that there is no way for a miner to determine when a transaction T1 was signed.  Even if it spends an UTXO in a transaction T2 that was confirmed only yesterday, it is possible that both T1 and T2 were signed a long time ago.)
Your argument is technically specious. Transactions in Bitcoin have a 4-byte version field, which gives us the potential for billions of rule-sets to apply to old transactions. The correct question to ask is: why wasn't, and isn't, this field changed as the rules get changed?
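The version-field idea could be sketched as a dispatch table (hypothetical rule-sets, not Bitcoin Core code; note that the caveat from the quoted post still holds — the version says which rules the signer opted into, not when the transaction was actually signed):

```python
# Sketch of version-based rule dispatch (hypothetical rule-sets): the
# 4-byte version field selects which validation rules apply, so old-style
# transactions can still be checked under the rules they were created for.

def validate_v1(tx: dict) -> bool:
    return True  # placeholder for the original, permissive rule-set

def validate_v2(tx: dict) -> bool:
    return tx.get("sig_ops", 0) <= 1000  # placeholder tightened rule

RULESETS = {1: validate_v1, 2: validate_v2}

def validate(tx: dict) -> bool:
    checker = RULESETS.get(tx["version"])
    return checker is not None and checker(tx)  # unknown versions rejected
```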

hero member
Activity: 910
Merit: 1003
Quote
No, you can't-- not if you live in a world with other people in it.  The spherical cow "hardforks can change anything" ignores that a hardfork that requires all users shutting down the Bitcoin network, destroying all in flight transactions, and invalidating presigned transactions (thus confiscating some amount of coins) will just not be deployed.

What a load of FUD. How you expect people to take you seriously when you make ridiculous statements like this I will never know..

A hard fork cannot "change anything" that easily, because the proponents must explain the change and convince most miners and most users to upgrade, before the change is activated.

A soft fork, on the other hand, can "change anything" much more easily, because it only needs the agreement of a simple mining majority, who need not inform or convince anyone else beforehand.
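The asymmetry can be illustrated with a toy model of validity rules as predicates (hypothetical sizes, not consensus code): a soft fork shrinks the set of valid blocks, so non-upgraded nodes still accept everything the upgraded majority produces; a hard fork widens it, so old nodes reject the new blocks until they upgrade.

```python
# Toy model (hypothetical sizes): soft forks tighten rules, hard forks
# loosen them, which is why only the former works with a bare mining
# majority.

def old_rules(block: dict) -> bool:
    return block["size"] <= 1_000_000

def soft_fork_rules(block: dict) -> bool:
    # Strictly tighter: everything valid here is also valid under old rules.
    return old_rules(block) and block["size"] <= 500_000

def hard_fork_rules(block: dict) -> bool:
    # Strictly looser: accepts blocks the old rules reject.
    return block["size"] <= 2_000_000
```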
legendary
Activity: 1176
Merit: 1000
Quote
No, you can't-- not if you live in a world with other people in it.  The spherical cow "hardforks can change anything" ignores that a hardfork that requires all users shutting down the Bitcoin network, destroying all in flight transactions, and invalidating presigned transactions (thus confiscating some amount of coins) will just not be deployed.

What a load of FUD. How you expect people to take you seriously when you make ridiculous statements like this I will never know..
hero member
Activity: 910
Merit: 1003
I remember seeing someone post a softfork that allowed to issue more than 21 million bitcoins, so clearly any sort of thing is possible via softfork/hardfork.

I believe the idea was posted first on reddit by /u/seweso . Here is my version of it.

Quote
Since a hardfork attack (or softfork) can always be attempted, it seems the only defense against something that is wrong is for there to be an outcry about it.

But what would the outcry achieve?

Quote
P.S. We can avoid the extreme N*N sig tx attack without breaking any existing tx by setting the limit to allow 1MB tx; that still avoids the problems from larger blocks.

1 MB transactions already can take a long time to validate. 
hero member
Activity: 910
Merit: 1003
A hard fork which makes the refund transaction invalid effectively steals that output. 

You mean a soft fork.

A hard fork should not cause that.  It should only make invalid transactions valid, not the other way around.

However, a hard fork could enable a new type of "lock breaking" transaction that allows the locked coins to be spent before the expiration date.  That would invalidate the refund transaction, which would be rejected as a double spend.

I don't know whether such a change would still qualify as a hard fork, though. 
legendary
Activity: 1176
Merit: 1134
Anyway, other possible soft-fork changes that could prevent confirmation of a currently valid transaction include reduction of the block size limit (as Luke has been demanding), imposing a minimum output value (an antispam measure proposed by Charlie Lee), limiting the number of inputs and outputs, extending the wait period for spending coinbase UTXOs, and many more.
I remember seeing someone post a softfork that allowed to issue more than 21 million bitcoins, so clearly any sort of thing is possible via softfork/hardfork.

Since a hardfork attack (or softfork) can always be attempted, it seems the only defense against something that is wrong is for there to be an outcry about it.

James

P.S. We can avoid the extreme N*N sig tx attack without breaking any existing tx by setting the limit to allow 1MB tx; that still avoids the problems from larger blocks.
legendary
Activity: 1176
Merit: 1134
Consider the standard refund transaction setup.  A transaction with a 2-of-2 output is committed to the blockchain, and that output is spent by a refund transaction.

If the refund transaction has a locktime 2 years into the future, then it cannot be spent for at least two years.

On the one hand, the refund transaction is unconfirmed.  But on the other hand, there is no risk of its input being double spent.  Both parties are safe to assume that the transaction will eventually be included.

A hard fork which makes the refund transaction invalid effectively steals that output.  At the absolute minimum, there should be a notice period, but it is better to just not have that problem in the first place.

If someone has a 1MB transaction that spends a 2 of 2 output but is locked for 5 years, is it fair to say to them that it is no longer spendable?
I am confused. In your example, it is in the blockchain and since you have the ability to spend it, why would any fork make it so you can't spend it?

If you are saying there are some 1MB tx that have timelocked tx in the future that are already confirmed, I am not sure why that is relevant. Clearly all existing tx that are already confirmed would be grandfathered in.

So the limit on tx size (however it is done) would apply to post-fork tx.

Sorry to be slow on this, but I don't see what type of unconfirmed tx we need to make sure is valid post-fork. If it requires creating a new spend that is less than 1MB tx size, that doesn't lose funds, so I don't see the issue.

hero member
Activity: 910
Merit: 1003
I meant to point out that there is no way that a client can make sure that an unconfirmed transaction will ever be confirmed, even if it seems to be valid by current rules.  Everybody agrees on that?

In fact, there is no way to put a probability value on that event, even with the usual assumptions of well-distributed mining etc.  Everybody still agrees?

I disagree.

If you create a transaction that spends your own outputs, then it is possible to be sure that that transaction will be included in the blockchain.  You might have to pay extra fees though (assuming some miners have child pays for parent).

A rule change can make the transaction invalid and that is a reason for not making those rule changes.


I insist: you cannot be sure, because a fee hike is not the only change that might prevent confirmation. Especially if the transaction is held for months before being broadcast.

Rule changes are inevitable.  They are likely to be needed to fix bugs and to meet new demands and constraints.  Many rule changes have happened already, and many more are in the pipeline.

As I pointed out, if Antpool, F2Pool, and any third miner decide to impose a soft-fork change, they can do it, and no one can stop them.
 
Curiously, it is soft-fork changes that can prevent confirmation of signed and validated but unconfirmed transactions.  Hard-fork changes (that only make rules more permissive) will not affect them.

CPFP is a mempool management rule only.  If a min fee hike is implemented as a mempool management rule only, or is an individual option of each miner, then one can hope that some miner may also implement CPFP, and then the low-fee transaction will be pulled through.  But there is no way for the client to know whether some miner is doing that, so he cannot put a probability on that.

On the other hand, if the min fee is implemented as a rule change (meaning that miners are prohibited from accepting low-paying transactions) then it seems unlikely that CPFP will be implemented too.  The validity rules must be verifiable "on-line", meaning that the validity of a block in the blockchain can only depend on the contents of the blockchain up to and including that block.  In particular, the rules cannot say "a transaction with low fee is valid if there is a transaction further ahead in the blockchain that pays for it."
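The distinction can be sketched: CPFP is a ranking policy computable over a package of mempool transactions, while a consensus rule may only look backwards in the chain. A minimal sketch of package fee-rate sorting, with hypothetical numbers:

```python
# Sketch of CPFP as a mempool *policy* (hypothetical numbers): a
# CPFP-aware miner ranks a low-fee parent together with its high-fee child
# by their combined package fee rate. This cannot be a consensus rule,
# since block validity may not depend on transactions further ahead.

def package_fee_rate(parent: dict, child: dict) -> float:
    """Fee rate (sat/byte) a CPFP-aware miner would use for the pair."""
    total_fee = parent["fee"] + child["fee"]
    total_size = parent["size"] + child["size"]
    return total_fee / total_size

parent = {"fee": 1_000, "size": 500}   # only 2 sat/B on its own
child = {"fee": 50_000, "size": 500}   # 100 sat/B child pays for parent
rate = package_fee_rate(parent, child)
```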

Anyway, other possible soft-fork changes that could prevent confirmation of a currently valid transaction include reduction of the block size limit (as Luke has been demanding), imposing a minimum output value (an antispam measure proposed by Charlie Lee), limiting the number of inputs and outputs, extending the wait period for spending coinbase UTXOs, and many more.
legendary
Activity: 1232
Merit: 1094
Support for legacy files and programs in newer releases of an OS is similar to the "clean fork" approach that I described.  Namely, the new software is aware of the old semantics and can use it when required.  Any hard fork must have such backwards compatibility, because it must recognize as valid all blocks and transactions that were confirmed before the fork.

You could just checkpoint the block where the rule change happened and then just include code for the new rules.  The client would still need to be able to read old blocks, but wouldn't need to be able to validate them.

Checkpoints aren't very popular though, and they take away from claims that everything is p2p.

Quote
I meant to point out that there is no way that a client can make sure that an unconfirmed transaction will ever be confirmed, even if it seems to be valid by current rules.  Everybody agrees on that?

In fact, there is no way to put a probability value on that event, even with the usual assumptions of well-distributed mining etc.  Everybody still agrees?

I disagree.

If you create a transaction that spends your own outputs, then it is possible to be sure that that transaction will be included in the blockchain.  You might have to pay extra fees though (assuming some miners have child pays for parent).

A rule change can make the transaction invalid and that is a reason for not making those rule changes.
hero member
Activity: 910
Merit: 1003
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem. 
I disagree on the "tough" part. In my opinion this is less difficult than DOSbox/Wine on Linux or the DOS subsystem in 32-bit Windows (and the Itanium editions of 64-bit Windows). It is more a problem of how much energy to spend on scoping the required area of backward compatibility and preparing/verifying test cases.

I don't think it is the same thing at all. 

Support for legacy files and programs in newer releases of an OS is similar to the "clean fork" approach that I described.  Namely, the new software is aware of the old semantics and can use it when required.  Any hard fork must have such backwards compatibility, because it must recognize as valid all blocks and transactions that were confirmed before the fork.

Backwards compatibility in general is feasible as long as there is a feasible mapping of old semantics to the new infrastructure, and there is no technical or other reason to deny the conversion.    However, that sometimes is impossible; e.g. if an old program tries to access hardware functions that are not accessible in newer hardware, or if the mapping would require decrypting and re-encrypting data without access to the keys.

Similar difficulties exist in handling an old transaction that was created before a soft fork but was broadcast only after it, and became invalid under new rules.  The rules must have changed for a reason, so the transaction cannot simply be included in the blockchain as such.   For example, suppose that the change consisted in imposing a strict limit to the complexity of signatures, to prevent "costly transaction" attacks.  The miners cannot continue to accept old transactions according to old rules, because that would frustrate the goal of the fork. 

(Note that there is no way for a miner to determine when a transaction T1 was signed.  Even if it spends an UTXO in a transaction T2 that was confirmed only yesterday, it is possible that both T1 and T2 were signed a long time ago.)

Maybe I am being a bit simplistic about this, but "unconfirmed" to me means that it hasn't been confirmed. So to require that all unconfirmed transactions must be confirmed contradicts the fundamental meaning of unconfirmed. What is the meaning of the word 'unconfirmed'?

I don't think that anyone is proposing to change the definition.  Transactions that have not been broadcast yet and transactions that are in the queue (mempool) of some nodes or miners, but are not safely buried into the blockchain, are equally unconfirmed. 

I meant to point out that there is no way that a client can make sure that an unconfirmed transaction will ever be confirmed, even if it seems to be valid by current rules.  Everybody agrees on that?

In fact, there is no way to put a probability value on that event, even with the usual assumptions of well-distributed mining etc.  Everybody still agrees?

But then it follows that clients who hold signed transactions for broadcast at a later date cannot trust that they will be confirmed, even if they seem to be valid at the time of signing.  Everybody OK with this?

Thus, there is no weight in the argument "we cannot do X because it would invalidate all pre-signed transactions that people are holding". 
legendary
Activity: 1232
Merit: 1094
Maybe I am being a bit simplistic about this, but "unconfirmed" to me means that it hasn't been confirmed. So to require that all unconfirmed transactions must be confirmed contradicts the fundamental meaning of unconfirmed. What is the meaning of the word 'unconfirmed'?

Consider the standard refund transaction setup.  A transaction with a 2-of-2 output is committed to the blockchain, and that output is spent by a refund transaction.

If the refund transaction has a locktime 2 years into the future, then it cannot be spent for at least two years.

On the one hand, the refund transaction is unconfirmed.  But on the other hand, there is no risk of its input being double spent.  Both parties are safe to assume that the transaction will eventually be included.

A hard fork which makes the refund transaction invalid effectively steals that output.  At the absolute minimum, there should be a notice period, but it is better to just not have that problem in the first place.
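The setup above can be sketched with a toy locktime check (hypothetical heights; real nodes also distinguish height-based from timestamp-based locktimes):

```python
# Toy locktime check for the refund setup (hypothetical heights): the
# pre-signed refund cannot confirm until the chain reaches its nLockTime
# height, roughly two years after funding.

BLOCKS_PER_YEAR = 52_560  # ~144 blocks/day * 365 days

def locktime_satisfied(tx_locktime: int, current_height: int) -> bool:
    """True once the refund's locktime height has been reached."""
    return current_height >= tx_locktime

funding_height = 400_000
refund_locktime = funding_height + 2 * BLOCKS_PER_YEAR  # ~2 years out
```

Until that height is reached, both parties rely on the refund remaining valid under the rules in force, which is exactly the guarantee a fork can break.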

There was at least one thread that asked about leaving money to someone for their 18th birthday.  A payment like that could very easily be locked for 10+ years.  I think the conclusion in the thread was that leaving a letter with a lawyer was probably safer.

If someone has a 1MB transaction that spends a 2 of 2 output but is locked for 5 years, is it fair to say to them that it is no longer spendable?

There is probably a reasonable compromise, but it should err on the side of not invalidating locked transactions.

That is why increasing the version number helps.  If someone has a locked transaction that uses a non-defined transaction version number, then I think it is fair enough that their locked transaction ends up not working.  For the time being, only version 1 transactions are safe to use with locktime.

I made a post on the dev list at the end of last year with some suggestions for rules. 

  • Transaction version numbers will be increased, if possible
  • Transactions with unknown/large version numbers are unsafe to use with locktime
  • Reasonable notice is given that the change is being contemplated
  • Non-opt-in changes will only be to protect the integrity of the network

I think if a particular format of transaction has mass use, then it is probably safer for locking than an obscure or very unusual transaction.  A transaction that uses one of the IsStandard forms would be safer than one that is 500kB and has lots of OP_CHECKSIG calls.

The guidelines could say that transactions which put an 'excessive' load on the network are riskier.
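The guideline could be reduced to a conservative helper (hypothetical, reflecting the post's advice that only version 1 is currently safe):

```python
# Conservative check (hypothetical helper): treat locktime as safe only for
# transaction versions whose semantics are already defined, since a future
# fork may assign new meaning to higher versions and strand the pre-signed
# transaction.

MAX_DEFINED_TX_VERSION = 1  # per the post: only version 1 is safe today

def safe_to_presign_with_locktime(tx_version: int) -> bool:
    """True only for transaction versions with defined semantics."""
    return 1 <= tx_version <= MAX_DEFINED_TX_VERSION
```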
legendary
Activity: 1176
Merit: 1134
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem. 
I disagree on the "tough" part. In my opinion this is less difficult than DOSbox/Wine on Linux or DOS subsystem in Windows 32 (and Itanium editions of Windows 64). It is more of the problem how much energy to spend on scoping the required area of backward compatibility and preparing/verifying test cases.

The initial step is already done in the form of libconsensus. It is a matter of slightly broadening libconsensus' interface to allow for full processing of compatibility-mode transactions off the wire and old-style blocks out of the disk archive.

Then it is just a matter of keeping track of the versions of libconsensus.

To my nose this whole "segregated witness as a soft fork" has a strong whiff of the "This program cannot be run in DOS mode" from Redmond, WA. Initially there were paeans written about how great it was that one could start Aldus PageMaker both by typing PAGEMKR at the C> prompt (to start Windows) and by clicking the PageMaker icon in the Program Manager (if you already had Windows started). Only years later did the designers admit this was one of the worst choices in the history of backward compatibility.


I agree