
Topic: BIP 17 - page 5. (Read 9151 times)

legendary
Activity: 1652
Merit: 2216
Chief Scientist
January 23, 2012, 05:59:37 PM
#23
...And even BIP16, which also evaluates code you push on the stack, seems wrong to me (and would make the implementation more complex and static analysis more difficult).

BIP 16 explicitly states:
"Validation fails if there are any operations other than "push data" operations in the scriptSig."

Let me try again to explain why I think it is a bad idea to put anything besides "push data" operations in the scriptSig:

Bitcoin version 0.1 evaluated transactions by doing this:

Code:
Evaluate(scriptSig + OP_CODESEPARATOR + scriptPubKey)

That turned out to be a bad idea, because one party controls what is in the scriptPubKey and another controls the scriptSig.

Part of the fix was to change evaluation to:

Code:
stack = Evaluate(scriptSig)
Evaluate(scriptPubKey, stack)

That gives a potential attacker much less ability to leverage some bug or flaw in the scripting system.
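To make the difference concrete, here is a minimal sketch of the two models (EvalScript, VerifyOld and VerifyNew are illustrative names and simplified signatures, not the actual client code):

Code:
typedef std::vector<std::vector<unsigned char> > Stack;

bool VerifyOld(const CScript& scriptSig, const CScript& scriptPubKey)
{
    // Version 0.1 style: both halves run as ONE script, so the
    // attacker-supplied scriptSig shares control flow with the
    // recipient's scriptPubKey.
    CScript combined = scriptSig;
    combined << OP_CODESEPARATOR;
    combined += scriptPubKey;
    Stack stack;
    return EvalScript(stack, combined);
}

bool VerifyNew(const CScript& scriptSig, const CScript& scriptPubKey)
{
    // Current style: the scripts run in sequence and only the data
    // stack is carried over, so the scriptSig can leave data behind
    // but cannot change how the scriptPubKey itself executes.
    Stack stack;
    if (!EvalScript(stack, scriptSig))
        return false;
    return EvalScript(stack, scriptPubKey);
}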

Little-known fact of bitcoin as it exists right now: you can insert extra "push data" opcodes at the beginning of the scriptSigs of transactions that don't belong to you, relay them, and the modified transaction (with a different transaction id!) may be mined.
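For illustration (hypothetical scripts; the extra item just sits unused at the bottom of the stack):

   original scriptSig: [signature]
   modified scriptSig: OP_1 [signature]

The scriptPubKey still validates either way, but the serialized transaction, and therefore its transaction id, has changed.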

Quote
As for backward compatibility & chain forks, I think I would prefer a clean solution rather than one that is compromised for the sake of backward compatibility.  Then I would lobby to get people to upgrade to clients that accept/propagate the new transactions and perhaps create patches for some of the more popular old versions designed just to allow and propagate these new types of transactions.  Then when it's clear that the vast majority of nodes support the new transactions, declare it safe to start using them.  Any stragglers that haven't updated might find themselves off on a dying fork of the block chain…which will be a great motivator for them to upgrade.  Wink

Are you volunteering to make that happen? After working really hard for over four months now to get a backwards-compatible change done, I'm not about to suggest an "entire network must upgrade" change...
legendary
Activity: 2576
Merit: 1186
January 23, 2012, 05:48:58 PM
#22
Why do you say there's no practical need?  One practical need is to determine a priori whether a script is too computationally expensive to be allowed.  With OP_EVAL, I could push some code on the stack and evaluate many times over…
…and then stop evaluating when you hit the limit. Even with a static-analysis limit in place, I could waste just as much of your computation time with a script that comes in just under the limit and then fails with OP_0 or such. Knowing the cost beforehand doesn't stop any known attacks.
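For example (illustrative only; suppose the static limit allows N signature checks):

   [sig] [pubkey] OP_CHECKSIG OP_DROP   ...repeated N times...
   OP_0

The validator pays for N full signature checks before hitting the guaranteed failure at the end: exactly as much work as a script that was going to succeed.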

Here is a BIP 17 transaction on testnet created with my new checkhashverify branch. Please help test/review!
hero member
Activity: 868
Merit: 1007
January 23, 2012, 05:41:28 PM
#21
Can someone remind me why BIP-12 has fallen out of favor?  OP_EVAL might add some amount of flexibility (regarding where a script is defined vs when it is actually executed), but none of these proposals seems radically different from one another.
BIP 12 cannot be statically analyzed. AFAIK no practical need for static analysis has surfaced yet, but so long as things are being rushed, it's not safe to say one won't surface in the future either...
Ah, right.  Why do you say there's no practical need?  One practical need is to determine a priori whether a script is too computationally expensive to be allowed.  With OP_EVAL, I could push some code on the stack and evaluate many times over…it's possible you could trivially mitigate that problem by limiting the number of allowed OP_EVALs, but it does make it difficult (if not impossible) to determine up front, in all cases, the cost of running a given script.  You could push code onto the stack that pushes more code onto the stack and executes another OP_EVAL (creating an infinite recursion that may be difficult to detect).  For reasoning similar to the omission of loops and jumps, I would be hesitant to have an OP_EVAL.  I think the code-separator & checkhashverify (or ideally pushcode) approach is the cleaner one.  The whole objective here is a mechanism to hash the script required to spend a transaction.  OP_EVAL goes way beyond that.  And even BIP16, which also evaluates code you push on the stack, seems wrong to me (and would make the implementation more complex and static analysis more difficult).
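To sketch that recursion hazard (illustrative, assuming BIP 12's semantics of OP_EVAL executing the top stack item as a script):

   scriptSig: {OP_DUP OP_EVAL}
   scriptPubKey: OP_DUP OP_EVAL

The scriptPubKey duplicates the pushed script and evaluates it; the pushed script then duplicates and evaluates itself again, and so on without end unless an explicit depth limit cuts it off.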

With a general OP_CODESEPARATOR/OP_PUSHCODE mechanism, you would have the flexibility of pushing any sequence of code onto the stack for the purposes of later computing a hash value, but you would never be executing code that was pushed onto the stack.  One improvement upon that would be to somehow ensure that only code which will actually execute gets hashed (to make it impossible to just push a random value onto the stack and then use its hash).  OP_CODEHASHVERIFY has that advantage, but requires that the hashed code immediately precede the operation.  It's possible I'm being overly paranoid though.

As for backward compatibility & chain forks, I think I would prefer a clean solution rather than one that is compromised for the sake of backward compatibility.  Then I would lobby to get people to upgrade to clients that accept/propagate the new transactions and perhaps create patches for some of the more popular old versions designed just to allow and propagate these new types of transactions.  Then when it's clear that the vast majority of nodes support the new transactions, declare it safe to start using them.  Any stragglers that haven't updated might find themselves off on a dying fork of the block chain…which will be a great motivator for them to upgrade.  Wink
legendary
Activity: 2576
Merit: 1186
January 23, 2012, 04:54:24 PM
#20
Can someone remind me why BIP-12 has fallen out of favor?  OP_EVAL might add some amount of flexibility (regarding where a script is defined vs when it is actually executed), but none of these proposals seems radically different from one another.
BIP 12 cannot be statically analyzed. AFAIK no practical need for static analysis has surfaced yet, but so long as things are being rushed, it's not safe to say one won't surface in the future either...
hero member
Activity: 868
Merit: 1007
January 23, 2012, 04:33:50 PM
#19
Can someone remind me why BIP-12 has fallen out of favor?  OP_EVAL might add some amount of flexibility (regarding where a script is defined vs when it is actually executed), but none of these proposals seems radically different from one another.
legendary
Activity: 2576
Merit: 1186
January 23, 2012, 04:10:02 PM
#18
I'm wondering, which of the two approaches is easier to deprecate?
Or is it cut in stone after it is deployed?
Imagine somebody comes up with a new way of doing multisig in a year and everybody agrees that it's the right way.
It's easy to deprecate (= stop using) just about anything. However, dropping support for it is not quite as simple - at a minimum, we'd need everyone to spend any coins still locked with it first.

Also what approach makes it easier to add new features on top of?
Think of the list of features for the next couple of years and see if the discussed multisig proposals fit nicely or not.
BIP 12 is the most flexible. BIP 16 gets one shot at changing any aspect of the scripting system (and it's being wasted on merely recounting sigops...). BIP 17 cannot change any scripting fundamentals, but it can be used for one additional check at any point in a scriptPubKey. If BIP 17 is deployed, something like a combination of BIPs 12 and 16 could still easily be added in the future as an upgrade.
hero member
Activity: 496
Merit: 500
January 23, 2012, 03:49:51 PM
#17
I'm wondering, which of the two approaches is easier to deprecate?
Or is it cut in stone after it is deployed?
Imagine somebody comes up with a new way of doing multisig in a year and everybody agrees that it's the right way.

Also what approach makes it easier to add new features on top of?
Think of the list of features for the next couple of years and see if the discussed multisig proposals fit nicely or not.
hero member
Activity: 868
Merit: 1007
January 22, 2012, 01:41:28 AM
#16
This kind of coordinated upgrade should only be necessary if you're adding or changing the behavior of the opcodes.
That's exactly what this is: "adding or changing the behavior of the opcodes". A non-P2SH multisig could be released today and individual miners could start accepting it immediately. The only downside is that the newly-whitelisted transactions wouldn't be considered standard yet by most clients, so they won't be relayed as easily to the miners that would accept them. Also, the transactions wouldn't be mined as quickly.
I'm aware of that.  But right after P2SH is supported, you then have to whitelist new transactions to get multi-sig.  This whitelisting process requires coordination.  I just wonder whether most (if not all) valid scripts should be allowed along with the P2SH change (subject to limitations on the number and complexity of operations).  Since the script language is not Turing complete, it is possible to estimate the cost of executing a script up front and impose such limitations.  Otherwise, every time a new type of transaction becomes desirable, you have to go through this process of coordinating the upgrade of clients to support it.
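To sketch the kind of up-front estimate that becomes possible (illustrative only: the opcode weights are made up, and EstimateScriptCost is a hypothetical name; the real client only counts signature operations):

Code:
// Walk the script once, opcode by opcode, to bound its execution
// cost before ever running it. Illustrative weights only.
int EstimateScriptCost(const CScript& script)
{
    int nCost = 0;
    opcodetype opcode;
    std::vector<unsigned char> vchData;
    CScript::const_iterator pc = script.begin();
    while (pc < script.end() && script.GetOp(pc, opcode, vchData))
    {
        if (opcode == OP_CHECKSIG || opcode == OP_CHECKSIGVERIFY)
            nCost += 20;        // signature checks dominate
        else if (opcode == OP_CHECKMULTISIG || opcode == OP_CHECKMULTISIGVERIFY)
            nCost += 20 * 20;   // worst case, without inspecting n
        else
            nCost += 1;
    }
    return nCost;
}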
legendary
Activity: 1204
Merit: 1015
January 21, 2012, 11:48:47 PM
#15
This kind of coordinated upgrade should only be necessary if you're adding or changing the behavior of the opcodes.
That's exactly what this is: "adding or changing the behavior of the opcodes". A non-P2SH multisig could be released today and individual miners could start accepting it immediately. The only downside is that the newly-whitelisted transactions wouldn't be considered standard yet by most clients, so they won't be relayed as easily to the miners that would accept them. Also, the transactions wouldn't be mined as quickly.
sr. member
Activity: 448
Merit: 252
January 21, 2012, 11:36:41 PM
#14
The more I study this, the more I'm starting to feel that allowing just a specific set of standard transaction types is an unnecessary constraint on the system.  Is it an irrational fear?  Why not allow all valid scripts?  What is the risk?

From what I understand, part of the risk is that scripts could consume unpredictably large amounts of resources (time, memory, etc.), so we try to constrain the system to put upper bounds on what is possible.  The different proposals are seeking to find the best way to add as many new features as possible (including the ones we haven't dreamed up yet), without making the scripts too "powerful."

I suppose the others can explain more, or correct me if I'm wrong. Wink
hero member
Activity: 868
Merit: 1007
January 21, 2012, 10:44:26 PM
#13
Regarding backward compatibility, I think Gavin makes good points about BIP-17 in the OP (especially the point about spending transactions automatically being considered valid by old nodes, no matter what).

The more I study this, the more I'm starting to feel that allowing just a specific set of standard transaction types is an unnecessary constraint on the system.  Is it an irrational fear?  Why not allow all valid scripts?  What is the risk?  It seems like it's going to be a real problem going forward if, every time someone comes up with some new type of transaction they want to craft, they need to get the entire network to upgrade in order to do it (especially with P2SH).  This kind of coordinated upgrade should only be necessary if you're adding or changing the behavior of the opcodes.  Soon after the P2SH transition happens, you're then going to need to convince miners to start accepting various multi-sig transaction types.  Each time you do this, you have to be very careful or risk a serious chain split.  It seems to me that the need for more interesting transaction types, and the risk of chain split when adopting them, warrants serious study of lifting the "standard transactions" restrictions.
hero member
Activity: 868
Merit: 1007
January 21, 2012, 02:20:29 PM
#12
On further thought, I would revise this:

   scriptSig: [signature] OP_CODESEPARATOR [pubkey] OP_CHECKSIG OP_PUSHCODE
   scriptPubKey: OP_HASH160 [20-byte-hash of {[pubkey] OP_CHECKSIG} ] OP_EQUAL

To be:

   scriptSig: [signature] OP_CODESEPARATOR [pubkey] OP_CHECKSIG
   scriptPubKey: OP_PUSHCODE OP_HASH160 [20-byte-hash of {[pubkey] OP_CHECKSIG} ] OP_EQUAL

The reason is that in the first form, an attacker only needs to find something the scriptSig can leave on the stack whose hash160 matches.  In the second form, not only would the attacker need to defeat the hash160 function, but the preimage would also have to be a valid script sequence that just executed successfully (a bit more secure).
legendary
Activity: 2576
Merit: 1186
January 21, 2012, 01:37:08 PM
#11
I assume that there is an implicit OP_CODESEPARATOR that sits between the ScriptSig and ScriptPubKey (the terminology here should probably be updated to better reflect what these two things really are…but I'm not sure how I would describe them).
There used to be, but before the protocol was made immutable it was changed so that the two scripts are instead executed in sequence, with only the main stack carried between them.

I suppose if I had it to do over, I would have created an opcode to push a segment of script onto the stack…it would push just the portion of code since the last OP_CODESEPARATOR.  Then you could have something like

   scriptSig: [signature] OP_CODESEPARATOR [pubkey] OP_CHECKSIG OP_PUSHCODE
   scriptPubKey: OP_HASH160 [20-byte-hash of {[pubkey] OP_CHECKSIG} ] OP_EQUAL

Now that I type this, I realize this would only require one new opcode, OP_PUSHCODE instead of OP_CHECKHASHVERIFY…maybe BIP18?  Like BIP17 (unlike BIP12 and BIP16), there are no special execution semantics, and OP_PUSHCODE seems a little more generally useful than OP_CHECKHASHVERIFY (general usefulness being desirable for anything that is going to consume an opcode).
I agree this would be better, but I'm pretty sure it's impossible to make it backward compatible.
hero member
Activity: 868
Merit: 1007
January 21, 2012, 01:22:26 PM
#10
I sat down and studied these proposals this morning.  However these proposals came about, that history should be set aside in favor of doing what's best for bitcoin.  I have no idea what transpired, so I feel my judgement isn't clouded in that respect (it could be in other respects, but not that one).

Disclaimers: I've not considered any backward compatibility issues which might trump everything I have to say.  I've also not considered any implementation concerns (complexity of the code, etc).  That might also trump what I have to say.  And finally, while I think I have a good grasp of bitcoin scripting, I am by no means an expert.

I think sending coins to a script hash is a good thing and perhaps the way it should have always been done, even for simple transactions.  The BIPs say that P2SH makes it possible for the recipient to define the script required to spend coins.  But here's an observation:  it has always been the case that the recipient defines the script required to spend coins.  It's just that there has, to this point, been a universal assumption that when the recipient sends you an address, the recipient desires a specific kind of script (a simple single signature).  It's also not true that, with P2SH, the recipient doesn't need to tell the sender the form of the script that is desired.  The recipient is still telling the sender the form of the script.  It's just that since the script is only executed when spending coins, the only thing necessary to communicate to the sender is a hash of the script, not the entire script.  All transactions should work this way (the sender will never have to ask whether this is a simple, single-signature transaction or a P2SH transaction…they will all be P2SH).

I think BIP17 is a superior proposal to BIP16 and BIP12.  BIP12 (OP_EVAL) simply adds complexity to the method of executing scripts.  Validation of scripts now needs to look into the content of data-push opcodes.  I can't see that it introduces any particular security holes (i.e. AFAIK there's no way to put something onto the stack that didn't originate inside the script itself…akin to a SQL injection attack).  However, I also can't see that it adds value.  BIP16 feels like it's caught somewhere between BIP12 and BIP17.  It's not a general-purpose OP_EVAL proposal, and yet it still requires special handling and execution of code pushed onto the stack for no other reason than to compute its hash.  This just feels a bit hacky to me, especially in light of OP_CODESEPARATOR, which seems to be designed for this sort of thing.

I'm not sure why the use of OP_CODESEPARATOR is a concern.  It seems to me that the way BIP17 is using it is exactly what it was intended for.  Its purpose, as far as I can tell, is to provide a means of isolating the portions of a script that should be subject to a hash/signature from those that shouldn't (i.e. you obviously wouldn't want a signature to actually be included as part of the hash used for signing purposes).  I assume that there is an implicit OP_CODESEPARATOR that sits between the ScriptSig and ScriptPubKey (the terminology here should probably be updated to better reflect what these two things really are…but I'm not sure how I would describe them).  I suppose if I had it to do over, I would have created an opcode to push a segment of script onto the stack…it would push just the portion of code since the last OP_CODESEPARATOR.  Then you could have something like

   scriptSig: [signature] OP_CODESEPARATOR [pubkey] OP_CHECKSIG OP_PUSHCODE
   scriptPubKey: OP_HASH160 [20-byte-hash of {[pubkey] OP_CHECKSIG} ] OP_EQUAL

Now that I type this, I realize this would only require one new opcode, OP_PUSHCODE instead of OP_CHECKHASHVERIFY…maybe BIP18?  Like BIP17 (unlike BIP12 and BIP16), there are no special execution semantics, and OP_PUSHCODE seems a little more generally useful than OP_CHECKHASHVERIFY (general usefulness being desirable for anything that is going to consume an opcode).

I don't see any issue with using OP_CHECKSIG in the ScriptSig.  And, in fact, it's in the scriptSig in both BIP16 and BIP17 (the only difference, as far as I can see, is a meaningless one in the mechanics of how it gets executed).  But if someone can come up with a plausible reason to be concerned, they should certainly voice it.  I don't, however, think it's good to make a decision based on vague feelings (for or against).
legendary
Activity: 1652
Merit: 2216
Chief Scientist
January 21, 2012, 01:12:10 PM
#9
I haven't seen discussion of BIP 17 anywhere besides IRC, so I thought I'd start one.
I was waiting until I finished a reference implementation to post about it, but thanks for the early review. Smiley

By the way... if there is no fully-functional reference implementation yet, you really shouldn't be putting "CHV" in your coinbases yet. The string in the coinbase really ought to mean "this code is all ready to support this feature," because full support from a majority of hashing power is what we want to measure.

With BIP 17, both transaction outputs and inputs fail the old IsStandard() check, so old clients and miners will refuse to relay or mine both transactions that send coins into a multisignature transaction and transactions that spend multisignature transactions.
Since scriptSigs must always follow scriptPubKey, does this really make a big difference? ie, if people can't send them, they can't receive them anyway.

Imagine you're an early adopter.  You ask people to send you money into your spiffy new ultra-secure wallet.

With BIP 16, transactions TO you will take longer to get into a block because not everybody is supporting the new feature.

But transactions FROM you will look like regular transactions, so the people you are paying won't have to wait.

That is not a big difference, but it is an advantage of the BIP 16 approach.

OP_CHECKSIG feels like it was originally designed to be in the scriptPubKey-- "scriptSig is for signatures." Although I can't see any way to exploit an OP_CHECKSIG that appears in the scriptSig instead of the scriptPubKey, I'm much less confident that I haven't missed something.
It's evaluated the exact same way in all 3 scripts, and already accepted in scriptPubKey. If there is an attack vector here (which seems very unlikely), it is there both with or without BIP 17.

No, they are not evaluated in the same way.  The bit of code in bitcoin transaction validation that makes me nervous is:
Code:
    txTmp.vin[nIn].scriptSig = scriptCode;
... in SignatureHash(), which is called from the CHECKSIG opcodes.  scriptCode is the scriptPubKey from the previous (funding) transaction; txTmp is a copy of the transaction being signed.

This is the "Copy the scriptPubKey into the scriptSig before computing the hash that is signed" part of what OP_CHECKSIG does.
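For reference, a heavily simplified sketch of that routine (illustrative only; SignatureHashSketch is a hypothetical name, and the real SignatureHash() also strips OP_CODESEPARATORs from scriptCode and handles the different SIGHASH types):

Code:
uint256 SignatureHashSketch(const CScript& scriptCode,
                            const CTransaction& txTo, unsigned int nIn,
                            int nHashType)
{
    CTransaction txTmp(txTo);
    // Blank out every input's scriptSig (a signature cannot cover
    // itself)...
    for (unsigned int i = 0; i < txTmp.vin.size(); i++)
        txTmp.vin[i].scriptSig = CScript();
    // ...then copy the funding scriptPubKey into the input being
    // signed -- the line quoted above.
    txTmp.vin[nIn].scriptSig = scriptCode;
    // Hash the modified transaction plus the hashtype; this is the
    // digest the CHECKSIG opcodes verify signatures against.
    CDataStream ss(SER_GETHASH);
    ss << txTmp << nHashType;
    return Hash(ss.begin(), ss.end());
}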

I like BIP 16 better than OP_EVAL or BIP 17 because BIP 16 does two complete validations: once with the scriptPubKey set to OP_HASH160 [20-byte-hash] OP_EQUAL, and then once again with the scriptPubKey set to (for example) [pubkey] OP_CHECKSIG.

BIP 16 essentially says "If we see a P2SH transaction, validate it, then treat it as a normal, standard transaction and validate it again."
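In code, the flow is roughly this (an illustrative sketch, not the actual BIP 16 patch; VerifyP2SHSketch and the stack handling are simplified):

Code:
bool VerifyP2SHSketch(const CScript& scriptSig, const CScript& scriptPubKey)
{
    std::vector<std::vector<unsigned char> > stack;
    if (!EvalScript(stack, scriptSig))
        return false;
    // Remember the scriptSig's pushes; the last one is the
    // serialized redemption script.
    std::vector<std::vector<unsigned char> > stackCopy(stack);

    // First validation: the hash check
    // (scriptPubKey = OP_HASH160 [20-byte-hash] OP_EQUAL).
    if (!EvalScript(stack, scriptPubKey))
        return false;
    if (stack.empty() || !CastToBool(stack.back()))
        return false;

    // Second validation: deserialize the last push and run it as if
    // it were the scriptPubKey of a normal, standard transaction.
    CScript subscript(stackCopy.back().begin(), stackCopy.back().end());
    stackCopy.pop_back();
    if (!EvalScript(stackCopy, subscript))
        return false;
    return !stackCopy.empty() && CastToBool(stackCopy.back());
}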

BIP 17 will run OP_CHECKSIGs when it is executing the scriptSig part of the transaction, which is a completely different context from where they are executed for the standard transactions we have now.

Again, I can't see a way to exploit that but it makes me very nervous.
legendary
Activity: 2576
Merit: 1186
January 21, 2012, 11:36:29 AM
#8
Litecoin
How about we leave scams out of this? Off topic.
hero member
Activity: 496
Merit: 500
January 21, 2012, 11:31:40 AM
#7
What if we implement both approaches on two different networks: BIP16 on Bitcoin and BIP17 on Litecoin?
I consider Litecoin a healthy alternative to Bitcoin; it might be lacking some marketing as of now, but that could change in the future.
That way we would have a backup solution for Bitcoin in case the approach taken turns out to be a mistake.
legendary
Activity: 2576
Merit: 1186
January 20, 2012, 05:19:00 PM
#6
Is either proposal easier for a client to manage?  It looks like both require the client to keep track of the script since it is not in the blockchain.  Is this correct?  Is there a potential to "lose" the script, or can it be recovered in some way?
That's the same for all 3 BIPs; the client already needs to manage a private key. The scripting sugar is easy to reproduce if you have that.
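For instance (an illustrative sketch; Make1of2Script is a hypothetical helper): given the stored keys, a 1-of-2 multisig redeem script, and therefore its hash, is fully reproducible:

Code:
// The redeem script (and hence its hash/address) is fully
// determined by the stored public keys.
CScript Make1of2Script(const std::vector<unsigned char>& vchPubKey1,
                       const std::vector<unsigned char>& vchPubKey2)
{
    CScript script;
    script << OP_1 << vchPubKey1 << vchPubKey2 << OP_2 << OP_CHECKMULTISIG;
    return script;
}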
hero member
Activity: 742
Merit: 500
January 20, 2012, 05:16:34 PM
#5
I'm glad to see discussion about these proposals.  It's always nice to have working code now, but considering we are building the currency of the future, let's take our time and make sure we do it right.

I'm still not sure which proposal I like the best.

Is either proposal easier for a client to manage?  It looks like both require the client to keep track of the script since it is not in the blockchain.  Is this correct?  Is there a potential to "lose" the script, or can it be recovered in some way?
newbie
Activity: 22
Merit: 0
January 20, 2012, 02:26:28 PM
#4
I support BIP 17 or a proposal like it. I think a standard method should be used. Worrying about old clients is not a valid concern, IMO. Bitcoin is still evolving. If every change has to accommodate old clients, the bitcoin protocol will be a convoluted mess ten years from now. Choosing hackish solutions over elegant ones will only get us there faster.

On release, a big warning can be placed on it to not use the feature until >50% of clients support it. Easy as that. Bitcoin has survived years without these features. It will survive a couple more months for a solid implementation to be created and adopted.