Topic: BIP 17 - page 4. (Read 9151 times)

hero member
Activity: 868
Merit: 1007
January 25, 2012, 03:57:23 PM
#43
Also, this whole debate makes me wonder whether it wouldn't be worthwhile to factor out the script validation/execution engine so that it can be upgraded independently from the rest of the bitcoin client.  Keeping such a critical bit of code separated from everything else might also help with unit testing and verification.
hero member
Activity: 868
Merit: 1007
January 25, 2012, 03:52:37 PM
#42
What frightens me is that there is no way back if we make this step and it turns out to be a wrong direction.
Please explain to me how ANY of the proposals (the original OP_EVAL, BIP 16, and BIP 17) are any different in the "what if we change our minds and want to remove support" case?

Removing support for BIP 17 would be harder than removing support for BIP 16, because if the network doesn't support it, all BIP 17 transactions can be stolen as soon as they're broadcast.

Imagine there are a bunch of un-redeemed BIP 17 transactions in the block chain and support for BIP 17 goes away.  Every single one of them could be immediately redeemed by anybody.

The situation is better with BIP 16, because of the "half validation" done by old nodes.  Admittedly not a lot better, but it is the "belt and suspenders" nature of BIP 16 that makes me prefer it.
Isn't it still possible for anyone to spend out of a BIP 16 transaction if someone has ever revealed the public key (so, if you ever spend from a BIP 16 transaction, you would certainly want to spend out of every transaction funding that address, to avoid the chance that someone will spend those other transactions)?

So, let me see if I can sum up the choices when changing the protocol (regardless of which proposal is adopted):

1) Make things backward compatible in order to avoid a chain split - this creates a risk that coins can be stolen because old nodes/miners aren't performing full validation on the new transactions…to avoid this risk you do what? Wait until 100% of all mining capacity has upgraded before using the new transaction types?

2) Ensure that no nodes (old or new, miner or non-miner) allow any transaction that doesn't pass all validation, and accept that a chain split could (will) happen - this creates a risk that people who don't upgrade can end up with unspendable coins in their wallets (a double spend could be executed by creating a transaction accepted in the new chain but not in the old, then spending those coins again on the old chain).

I'm not sure which is the better approach…just trying to make sense of it.  However, it feels like it's better to ensure coins can't be accidentally spent and that full validation is always performed.  If the supermajority of the network is switching, then people will upgrade or patch…that's just part of the deal with bitcoin IMO.  I also question the wisdom of the OP_NOP bytecodes.  It seems like it would be smarter to make them cause a script to immediately fail (if that were the case, we wouldn't be having this discussion; we would be talking about a clean implementation and a proper transition of the network).
legendary
Activity: 1652
Merit: 2216
Chief Scientist
January 25, 2012, 03:20:58 PM
#41
What frightens me is that there is no way back if we make this step and it turns out to be a wrong direction.
Please explain to me how ANY of the proposals (the original OP_EVAL, BIP 16, and BIP 17) are any different in the "what if we change our minds and want to remove support" case?

Removing support for BIP 17 would be harder than removing support for BIP 16, because if the network doesn't support it, all BIP 17 transactions can be stolen as soon as they're broadcast.

Imagine there are a bunch of un-redeemed BIP 17 transactions in the block chain and support for BIP 17 goes away.  Every single one of them could be immediately redeemed by anybody.

The situation is better with BIP 16, because of the "half validation" done by old nodes.  Admittedly not a lot better, but it is the "belt and suspenders" nature of BIP 16 that makes me prefer it.
hero member
Activity: 496
Merit: 500
January 25, 2012, 02:38:32 PM
#40
IsStandard() is a permanent part of the protocol with BIP 16.
Can you elaborate why that's the case?  If true, I think it's very bad.  IsStandard() needs to be lifted at some point (probably when there is a suite of tests around each and every opcode that verify that it does the specified thing and leaves all execution context in a valid state).
Gavin is correct that the actual function named IsStandard() could be removed or replaced. However, with BIP 16, all implementations are required to check for the specific BIP 16 standard transaction and treat it differently.
+1
I'm still with Luke on this one.
While Gavin admits that he would love to make IsStandard more generic (moving from pre-defined scripts to generic resource-constrained ones), he still leaves the handling of multisig as an eternal "special case" for some reason.

On the other hand this "special case" might just be a first step into another way of doing things, and if there are other "special cases" in the future it wouldn't look that odd. What frightens me is that there is no way back if we make this step and it turns out to be a wrong direction.
With Luke's solution we are not making any steps into the unknown; we just implement the feature within the current framework.
This will allow us to have multisig as soon as we want it, while giving us more time to think about what kinds of steps we want to make, and in what direction.
legendary
Activity: 2576
Merit: 1186
January 25, 2012, 12:57:30 PM
#39
IsStandard() is a permanent part of the protocol with BIP 16.
Can you elaborate why that's the case?  If true, I think it's very bad.  IsStandard() needs to be lifted at some point (probably when there is a suite of tests around each and every opcode that verify that it does the specified thing and leaves all execution context in a valid state).
Gavin is correct that the actual function named IsStandard() could be removed or replaced. However, with BIP 16, all implementations are required to check for the specific BIP 16 standard transaction and treat it differently.
hero member
Activity: 868
Merit: 1007
January 25, 2012, 12:14:24 PM
#38
IsStandard() is a permanent part of the protocol with BIP 16.
Can you elaborate why that's the case?  If true, I think it's very bad.  IsStandard() needs to be lifted at some point (probably when there is a suite of tests around each and every opcode that verify that it does the specified thing and leaves all execution context in a valid state).

Edit: by "lifted" I mean not restricting scripts to a small set of well-known ones…but limits on resource utilization by a script are definitely needed.
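For example, such a per-opcode test might look roughly like this (purely illustrative; the interpreter interface, opcode value, and helper names below are assumptions, not the actual client code):

Code:
// Hypothetical sketch of a per-opcode unit test: run one opcode on a known
// stack, then check both the result and that no other execution state leaks.
#include <cassert>
#include <cstdint>
#include <vector>

typedef std::vector<uint8_t> StackItem;

struct ExecContext {
    std::vector<StackItem> stack;
    std::vector<StackItem> altStack;
    // ... conditional-execution state, flags, etc.
};

// Minimal stand-in that only knows OP_ADD (0x93), just enough for the test.
bool EvalOpcode(uint8_t opcode, ExecContext& ctx)
{
    if (opcode == 0x93) {  // OP_ADD
        if (ctx.stack.size() < 2) return false;
        uint8_t b = ctx.stack.back()[0]; ctx.stack.pop_back();  // 1-byte ints only
        uint8_t a = ctx.stack.back()[0]; ctx.stack.pop_back();
        ctx.stack.push_back(StackItem{static_cast<uint8_t>(a + b)});
        return true;
    }
    return false;
}

void TestOpAdd()
{
    ExecContext ctx;
    ctx.stack.push_back(StackItem{2});
    ctx.stack.push_back(StackItem{3});

    assert(EvalOpcode(0x93, ctx));
    assert(ctx.stack.size() == 1);              // operands consumed, one result
    assert(ctx.stack.back() == (StackItem{5}));
    assert(ctx.altStack.empty());               // no stray execution context left behind
}

int main() { TestOpAdd(); return 0; }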
legendary
Activity: 1652
Merit: 2216
Chief Scientist
January 25, 2012, 12:13:55 PM
#37
IsStandard() is a permanent part of the protocol with BIP 16.

No, it really isn't.

Here's a possible future implementation of IsStandard():

Code:
bool
IsStandard()
{
    return true;
}

I like the idea of a future IsStandard() that allows more transaction types, but only if they're under some (sane) resource limits.
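For instance, a rough sketch of what such a resource-limited check might look like (the limits and opcode byte values below are made up for illustration, not a proposal):

Code:
// Illustrative only: accept any script that stays under fixed resource
// limits, instead of matching a whitelist of known templates.
#include <cstddef>
#include <cstdint>
#include <vector>

struct ParsedOp {
    uint8_t opcode;
    std::vector<uint8_t> pushData;
};

bool IsStandard(const std::vector<ParsedOp>& script, size_t scriptSizeBytes)
{
    const size_t MAX_SCRIPT_BYTES = 10000;  // made-up limit
    const size_t MAX_OPS          = 200;    // made-up limit
    const size_t MAX_SIGOPS       = 20;     // made-up limit

    if (scriptSizeBytes > MAX_SCRIPT_BYTES) return false;
    if (script.size() > MAX_OPS) return false;

    size_t sigOps = 0;
    for (const ParsedOp& op : script) {
        if (op.opcode == 0xac || op.opcode == 0xad)        // OP_CHECKSIG(VERIFY)
            sigOps += 1;
        else if (op.opcode == 0xae || op.opcode == 0xaf)   // OP_CHECKMULTISIG(VERIFY)
            sigOps += 20;
    }
    return sigOps <= MAX_SIGOPS;
}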
member
Activity: 97
Merit: 10
January 25, 2012, 12:12:09 PM
#36

So far, I favor BIP 16 because of this point alone.  Eyeballing the client version distributions suggests that roughly 70% of users are running clients older than 0.5.  We probably can't expect that 70% to upgrade anytime soon.

For Bitcoin businesses, slow propagation is a customer support headache.  In my experience, customers send their payment and expect prompt acknowledgement that it was received even if they still have to wait for confirmations.  The longer the propagation delay, the more customer support emails I get.  I'm guessing propagation delay will slow adoption of BIP 17 transactions among those needing them most: businesses with large balances.

Incidentally, I agree with sentiments expressed elsewhere on this thread that IsStandard() should be replaced with actual resource consumption metrics in the scripting evaluation engine.  It seems fairly straightforward to assign a cost to each opcode and fail any script that hits the resource limits.  That seems both more flexible and more durable than IsStandard().

It would be pretty easy to fix this. All you need to do is modify the networking code (as well as the dnsseeds) to make sure there is connectivity between everyone running the new version without needing to pass through old versions in the process. Could someone more experienced with p2p networks than me propose an algorithm?
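One naive sketch of that idea, for illustration only (the version threshold, slot count, and names are invented, and this is not a worked-out algorithm): hold back a few outbound connection slots for peers that advertise a new-enough version, so upgraded nodes always keep direct links to each other.

Code:
// Illustrative only: reserve some outbound slots for upgraded peers so
// new-version nodes stay directly connected without relying on old nodes.
struct PeerCandidate {
    int nVersion;   // protocol/client version the peer advertises
    // ... address, services, etc.
};

const int MIN_UPGRADED_VERSION    = 50100;  // made-up threshold for "new version"
const int RESERVED_UPGRADED_SLOTS = 3;      // made-up number of held-back slots

// Decide whether a candidate may take the next outbound slot.
bool MayUseSlot(const PeerCandidate& candidate, int slotsUsed, int maxOutbound)
{
    if (candidate.nVersion >= MIN_UPGRADED_VERSION)
        return slotsUsed < maxOutbound;                        // may use any free slot
    return slotsUsed < maxOutbound - RESERVED_UPGRADED_SLOTS;  // unreserved slots only
}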
legendary
Activity: 2576
Merit: 1186
January 25, 2012, 12:04:34 PM
#35
With BIP 17, both transaction outputs and inputs fail the old IsStandard() check, so old clients and miners will refuse to relay or mine both transactions that send coins into a multisignature transaction and transactions that spend multisignature transactions.  BIP 16 scriptSigs look like standard scriptSigs to old clients and miners. The practical effect is that, as long as less than 100% of the network is upgraded, it will take longer for BIP 17 transactions to get confirmed compared to BIP 16 transactions.
Since scriptSigs must always follow scriptPubKey, does this really make a big difference? ie, if people can't send them, they can't receive them anyway.

So far, I favor BIP 16 because of this point alone.  Eyeballing the client version distributions suggests that roughly 70% of users are running clients older than 0.5.  We probably can't expect that 70% to upgrade anytime soon.
BIP 16 has the same problem as BIP 17 when sending to them.

Incidentally, I agree with sentiments expressed elsewhere on this thread that IsStandard() should be replaced with actual resource consumption metrics in the scripting evaluation engine.  It seems fairly straightforward to assign a cost to each opcode and fail any script that hits the resource limits.  That seems both more flexible and more durable than IsStandard().
IsStandard() is a permanent part of the protocol with BIP 16.
vip
Activity: 447
Merit: 258
January 25, 2012, 11:57:47 AM
#34
With BIP 17, both transaction outputs and inputs fail the old IsStandard() check, so old clients and miners will refuse to relay or mine both transactions that send coins into a multisignature transaction and transactions that spend multisignature transactions.  BIP 16 scriptSigs look like standard scriptSigs to old clients and miners. The practical effect is that, as long as less than 100% of the network is upgraded, it will take longer for BIP 17 transactions to get confirmed compared to BIP 16 transactions.
Since scriptSigs must always follow scriptPubKey, does this really make a big difference? ie, if people can't send them, they can't receive them anyway.

So far, I favor BIP 16 because of this point alone.  Eyeballing the client version distributions suggests that roughly 70% of users are running clients older than 0.5.  We probably can't expect that 70% to upgrade anytime soon.

For Bitcoin businesses, slow propagation is a customer support headache.  In my experience, customers send their payment and expect prompt acknowledgement that it was received even if they still have to wait for confirmations.  The longer the propagation delay, the more customer support emails I get.  I'm guessing propagation delay will slow adoption of BIP 17 transactions among those needing them most: businesses with large balances.

Incidentally, I agree with sentiments expressed elsewhere on this thread that IsStandard() should be replaced with actual resource consumption metrics in the scripting evaluation engine.  It seems fairly straightforward to assign a cost to each opcode and fail any script that hits the resource limits.  That seems both more flexible and more durable than IsStandard().
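As a rough sketch of that idea (the cost table, budget, and names here are invented for illustration; they are not from any client or proposal):

Code:
// Illustrative only: charge each opcode against a budget while the script
// runs, and fail the script when the budget is exhausted, instead of
// pattern-matching a short list of "standard" scripts up front.
#include <cstdint>

// Invented cost table: cheap stack ops cost 1, signature checks cost more.
int OpcodeCost(uint8_t opcode)
{
    if (opcode == 0xac || opcode == 0xad) return 50;    // OP_CHECKSIG(VERIFY)
    if (opcode == 0xae || opcode == 0xaf) return 1000;  // OP_CHECKMULTISIG(VERIFY)
    return 1;                                           // everything else
}

struct ExecBudget {
    int64_t remaining;   // e.g. set from a per-transaction or per-block limit

    // Called by the interpreter before executing each opcode; returns false
    // (script invalid) once the cumulative cost exceeds the budget.
    bool Charge(uint8_t opcode)
    {
        remaining -= OpcodeCost(opcode);
        return remaining >= 0;
    }
};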
legendary
Activity: 1652
Merit: 2216
Chief Scientist
January 25, 2012, 11:00:17 AM
#33
Why the hard dates?
You both are struggling and rushing because the dates you set keep coming again and again, before miners notice and upgrade. Here's an alternate idea:
  • Have P2SH* implemented, and announce P2SH support in the coinbase, but it's disabled until a certain condition is met.
  • The condition is to have 55% or more blocks in any 2016-block span announcing support for P2SH.
  • When that condition is met, the remaining 45% of the hashing power will have two weeks to update.
  • Remove the P2SH announcement and just reject blocks with invalid P2SH transactions. Also the changes in the block limits will be made effective here.
  • A future version of the software will remove the automatic switch logic.

That's non-trivial to implement; it seems to me that a conscious decision by the miners/pools to support or not support is less work and safer.

Luke proposed something similar earlier, though; I'm surprised his patches don't implement it.

I like whoever proposed that the string in the coinbase refer to the BIP; in the future, that's the way it should be done.


RE: schedules:

Deadlines, as we've just seen, have a way of focusing attention.  OP_EVAL got, essentially, zero review/testing (aside from my own) until a month before the deadline.

It seems to me one-to-two months is about the right amount of time to get thorough review and testing of this type of backwards-compatible change. Longer deadlines just mean people get busy working on other things and ignore the issue.
legendary
Activity: 2576
Merit: 1186
January 24, 2012, 01:28:39 PM
#32
I set the vote/deadline for BIP 17 soon because I figured Gavin wouldn't accept any delays. If Gavin is willing to tolerate a later schedule for BIP 17, I can update it.
legendary
Activity: 1260
Merit: 1000
January 24, 2012, 01:22:10 PM
#31
Yes, my biggest problem with either BIP is the timeframe as well.  For my pool, I've disabled broadcasting of support for either BIP in all of my mining daemons for the moment.  I would like to see a consensus reached and a sane timeframe proposed before re-enabling support.

Two, three, four weeks is not a sane timeframe for the types of changes proposed to be tested, vetted and deployed.
hero member
Activity: 910
Merit: 1005
January 24, 2012, 12:41:51 PM
#30
  • The condition is to have 55% or more blocks in any 2016-block span announcing support for P2SH.

This is an excellent idea. It would give everyone in the network a longer time to upgrade. Also, the voting process should be standardised so that it includes a BIP number, e.g. "/BIP_0016/" rather than "/P2SH/" or "CHC", etc.
hero member
Activity: 868
Merit: 1007
January 24, 2012, 09:28:48 AM
#29
Why the hard dates?
You both are struggling and rushing because the dates you set keep coming again and again, before miners notice and upgrade. Here's an alternate idea:
  • Have P2SH* implemented, and announce P2SH support in the coinbase, but it's disabled until a certain condition is met.
  • The condition is to have 55% or more blocks in any 2016-block span announcing support for P2SH.
  • When that condition is met, the remaining 45% of the hashing power will have two weeks to update.
  • Remove the P2SH announcement and just reject blocks with invalid P2SH transactions. Also the changes in the block limits will be made effective here.
  • A future version of the software will remove the automatic switch logic.

* With P2SH I'm referring to both BIP 16 and 17. I have no preference for any of them if a soft schedule is chosen.

Unless I'm entirely mistaken, there was a rather nasty vulnerability in OP_EVAL caused by this added bit of complexity, one that BIP 16 would've inherited if you hadn't spotted and fixed it. While technically it was only a denial-of-service vulnerability that prevented nodes that supported it from mining any blocks, a denial-of-service vulnerability of this kind is enough to let an attacker create transactions spending other people's bitcoins from their P2SH addresses and get non-upgraded nodes to accept them even after the switch-on date, which is kind of a big deal.

BIP 16 was made specifically to address this. All concerns expressed in this thread are solved IMHO with a soft schedule with an automatic 55% switchover, as long as clients honor the 6-confirmation convention.

I agree (though I might suggest 64 or 70%…and maybe a month instead of 2 weeks for the activation).  It seems to me that people are compromising design out of an irrational fear of a chain fork.  If you get the overwhelming majority of people to add p2sh support (without actually activating it yet), you've then built a consensus that people want it.  If you then set a date for activation, you give everyone else that hasn't yet upgraded a chance to do so…and they will do it because the consequence of not doing so is that they end up with some unspendable coins in their wallet (it's also worth noting that the majority of coins in old wallets would actually still be usable on both chain forks long after the activation).  After activation, you can then begin the work needed to add support for these transactions in the user interface.
full member
Activity: 156
Merit: 100
Firstbits: 1dithi
January 24, 2012, 08:58:40 AM
#28
Why the hard dates?
You both are struggling and rushing because the dates you set keep coming again and again, before miners notice and upgrade. Here's an alternate idea:
  • Have P2SH* implemented, and announce P2SH support in the coinbase, but it's disabled until a certain condition is met.
  • The condition is to have 55% or more blocks in any 2016-block span announcing support for P2SH.
  • When that condition is met, the remaining 45% of the hashing power will have two weeks to update.
  • Remove the P2SH announcement and just reject blocks with invalid P2SH transactions. Also the changes in the block limits will be made effective here.
  • A future version of the software will remove the automatic switch logic.

* With P2SH I'm referring to both BIP 16 and 17. I have no preference for any of them if a soft schedule is chosen.

Unless I'm entirely mistaken, there was a rather nasty vulnerability in OP_EVAL caused by this added bit of complexity, one that BIP 16 would've inherited if you hadn't spotted and fixed it. While technically it was only a denial-of-service vulnerability that prevented nodes that supported it from mining any blocks, a denial-of-service vulnerability of this kind is enough to let an attacker create transactions spending other people's bitcoins from their P2SH addresses and get non-upgraded nodes to accept them even after the switch-on date, which is kind of a big deal.

BIP 16 was made specifically to address this. All concerns expressed in this thread are solved IMHO with a soft schedule with an automatic 55% switchover, as long as clients honor the 6-confirmation convention.
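A minimal sketch of that switchover condition (illustrative only; the coinbase-tag convention, types, and names here are assumptions, not a spec):

Code:
// Illustrative only: the P2SH rules switch on once 55% of the blocks in
// some window of 2016 consecutive blocks announce support in their coinbase.
#include <cstddef>
#include <string>
#include <vector>

struct BlockInfo {
    std::string coinbaseText;   // text the miner embedded in the coinbase
};

bool AnnouncesP2SH(const BlockInfo& block)
{
    // Assumed convention: miners put a tag such as "/P2SH/" in the coinbase.
    return block.coinbaseText.find("/P2SH/") != std::string::npos;
}

// True if any 2016-block window in the chain reaches 55% support.
bool P2SHConditionMet(const std::vector<BlockInfo>& chain)
{
    const size_t WINDOW    = 2016;
    const size_t THRESHOLD = (WINDOW * 55 + 99) / 100;   // 55%, rounded up (1109)

    if (chain.size() < WINDOW) return false;

    size_t support = 0;
    for (size_t i = 0; i < chain.size(); ++i) {
        if (AnnouncesP2SH(chain[i])) ++support;                          // block enters window
        if (i >= WINDOW && AnnouncesP2SH(chain[i - WINDOW])) --support;  // block leaves window
        if (i + 1 >= WINDOW && support >= THRESHOLD) return true;
    }
    return false;
}

When the condition triggers, the client would then start the proposed two-week countdown before enforcing the new rules.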
hero member
Activity: 686
Merit: 564
January 24, 2012, 07:20:28 AM
#27
  • Old clients and miners count each OP_CHECKMULTISIG in a scriptSig or scriptPubKey as 20 "signature operations (sigops)."  And there is a maximum of 20,000 sigops per block.  That means a maximum of 1,000 BIP-17-style multisig inputs per block.  BIP 16 "hides" the CHECKMULTISIGs from old clients, and (for example) counts a 2-of-2 CHECKMULTISIG as 2 sigops instead of 20. Increasing the MAX_SIGOPS limit would require a 'hard' blockchain split; BIP 16 gives 5-10 times more room for transaction growth than BIP 17 before bumping into block limits.
Unless I'm entirely mistaken, there was a rather nasty vulnerability in OP_EVAL caused by this added bit of complexity, one that BIP 16 would've inherited if you hadn't spotted and fixed it. While technically it was only a denial-of-service vulnerability that prevented nodes that supported it from mining any blocks, a denial-of-service vulnerability of this kind is enough to let an attacker create transactions spending other people's bitcoins from their P2SH addresses and get non-upgraded nodes to accept them even after the switch-on date, which is kind of a big deal.
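To make the arithmetic in the quoted bullet concrete (a back-of-the-envelope sketch, not client code):

Code:
// Back-of-the-envelope: how many 2-of-2 multisig inputs fit under the
// existing 20,000-sigop block limit, depending on how OP_CHECKMULTISIG
// is counted.
#include <iostream>

int main()
{
    const int MAX_BLOCK_SIGOPS = 20000;

    const int sigopsPerInputOldRule = 20;  // every OP_CHECKMULTISIG counts as 20
    const int sigopsPerInputBIP16   = 2;   // BIP 16 counts a 2-of-2 as 2

    std::cout << "Max BIP 17-style multisig inputs per block: "
              << MAX_BLOCK_SIGOPS / sigopsPerInputOldRule << "\n";  // 1,000
    std::cout << "Max BIP 16 2-of-2 inputs per block:         "
              << MAX_BLOCK_SIGOPS / sigopsPerInputBIP16 << "\n";    // 10,000
    return 0;
}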
hero member
Activity: 868
Merit: 1007
January 23, 2012, 07:31:58 PM
#26
...And even BIP16, which also evaluates code you push on the stack, seems wrong to me (and would make the implementation more complex and static analysis more difficult).

BIP 16 explicitly states:
"Validation fails if there are any operations other than "push data" operations in the scriptSig."

Let me try again for why I think it is a bad idea to put anything besides "push data" in the scriptSig:

Bitcoin version 0.1 evaluated transactions by doing this:

Code:
Evaluate(scriptSig + OP_CODESEPARATOR + scriptPubKey)

That turned out to be a bad idea, because one person controls what is in the scriptPubKey and another the scriptSig.

Part of the fix was to change evaluation to:

Code:
stack = Evaluate(scriptSig)
Evaluate(scriptPubKey, stack)

That gives a potential attacker much less ability to leverage some bug or flaw in the scripting system.
The only practical difference between these is that by restarting evaluation you ensure that all execution context other than the stack is cleared.  I think you could have made OP_CODESEPARATOR ensure that everything other than the stack is wiped and have achieved essentially the same objective.  If you have good tests for every opcode that ensure it behaves correctly and leaves all execution context in the correct state, you could breathe a little easier about such possible exploits.

Both BIP 16 and BIP 17 have an OP_CHECKSIG in the scriptSig.  It seems you're concerned about whether it executes in the same context as the rest of the scriptSig or is run in some other context (i.e., the scriptPubKey).  It seems the concern is the same one that originally motivated you to split the execution of scriptSig and scriptPubKey.  It feels like an irrational fear, but maybe the implementation of the opcodes has been historically buggy and the fear is warranted.

Quote
Little known fact of bitcoin as it exists right now: you can insert extra "push data" opcodes at the beginning of the scriptsigs of transactions that don't belong to you, relay them, and the modified transaction (with a different transaction id!) may be mined.
Well that seems bad (though I can't imagine how it could be actually exploited…other than creating confusion about transaction IDs).  Is there a problem with the scope of what is getting signed?
hero member
Activity: 868
Merit: 1007
January 23, 2012, 07:12:17 PM
#25
Quote
As for backward compatibility & chain forks, I think I would prefer a clean solution rather than one that is compromised for the sake of backward compatibility.  Then I would lobby to get people to upgrade to clients that accept/propagate the new transactions and perhaps create patches for some of the more popular old versions designed just to allow and propagate these new types of transactions.  Then when it's clear that the vast majority of nodes support the new transactions, declare it safe to start using them.  Any stragglers that haven't updated might find themselves off on a dying fork of the block chain…which will be a great motivator for them to upgrade.  Wink

Are you volunteering to make that happen? After working really hard for over four months now to get a backwards-compatible change done I'm not about to suggest an "entire network must upgrade" change...
I think just about every developer agrees with Gavin that this is not worth a blockchain fork...

So, instead of a fork, you try to create a hacky solution that old clients won't completely reject, but at the same time won't actually fully verify?  It doesn't seem so clear cut that this is preferable to a fork.  It seems like a bigger risk to have clients passing along transactions that they aren't really validating.  And I'm not sure a block chain fork is the end of the world.  Consider that when a fork occurs (the first block with a newer transaction type not accepted by the old miners and clients), most transactions will still be completely valid in both forks.  The exceptions are coinbase transactions, p2sh transactions and any downstream transactions from those.  If you've convinced the majority of miners to upgrade before the split occurs (and miners take steps to avoid creating a fork block until some date after it's been confirmed that most miners support it), then miners that have chosen not to upgrade will quickly realize that they risk their block rewards being unmarketable coins.  So, I'm pretty sure they'll quickly update after that point.  People running non-mining clients will also quickly follow when they realize mining activity on their fork is quickly dying off (but the vast majority of the transactions that appear valid to them will also be just as valid in the newer fork).  The big risk for people that haven't upgraded is that someone double-spends by sending them a plain old transaction after they've already spent those coins via a new-style transaction that the old clients don't accept.  But even then it might be difficult for such a transaction to propagate if the vast majority of people have upgraded.
legendary
Activity: 2576
Merit: 1186
January 23, 2012, 06:11:43 PM
#24
Let me try again for why I think it is a bad idea to put anything besides "push data" in the scriptSig:

...

That turned out to be a bad idea, because one person controls what is in the scriptPubKey and another the scriptSig.
And BIP 16 makes this true again: the receiver now controls (to a degree) both scriptSig and "scriptPubKey". BIP 17 retains the current rules.

Little known fact of bitcoin as it exists right now: you can insert extra "push data" opcodes at the beginning of the scriptsigs of transactions that don't belong to you, relay them, and the modified transaction (with a different transaction id!) may be mined.
You can insert extra non-PUSH opcodes too, and mine them yourself... Basically, we already can put non-PUSH stuff in scriptSig, so if there is a vulnerability here, it's already in effect.

Quote
As for backward compatibility & chain forks, I think I would prefer a clean solution rather than one that is compromised for the sake of backward compatibility.  Then I would lobby to get people to upgrade to clients that accept/propagate the new transactions and perhaps create patches for some of the more popular old versions designed just to allow and propagate these new types of transactions.  Then when it's clear that the vast majority of nodes support the new transactions, declare it safe to start using them.  Any stragglers that haven't updated might find themselves off on a dying fork of the block chain…which will be a great motivator for them to upgrade.  Wink

Are you volunteering to make that happen? After working really hard for over four months now to get a backwards-compatible change done I'm not about to suggest an "entire network must upgrade" change...
I think just about every developer agrees with Gavin that this is not worth a blockchain fork...