
Topic: Using OP_CAT (Read 4574 times)

sr. member
Activity: 352
Merit: 252
https://www.realitykeys.com
September 09, 2014, 09:11:03 PM
#46
You just have to have alice and bob prove that they know the discrete logs of their own pubkeys (e.g. they sign a message with it while setting up their exchange). Work through the algebra and you'll see why.

Ah, sweet.

I guess we already fulfill this in my demo example (this script run with the --ecc-voodoo flag):
https://github.com/edmundedgar/realitykeys-examples/blob/master/realitykeysdemo.py
...because for convenience we have Alice and Bob temporarily queue up the funds that they intend to put into the contract transaction by paying to the hash of their own pubkey, so the contract transaction never happens unless they can both fund it by signing with the private keys of the pubkeys they've exchanged.

If you have logs for #bitcoin from last year, I walked through the somewhat different pay to contract case, which is I think mostly what you're referring to-- in that case you always make the contract key G*HMAC(message,pubkey) to avoid someone setting their pubkey to an alternative contract.

I haven't been able to find this so any more hints would be helpful, although I'm learning a lot wading through various IRC logs looking for it...
staff
Activity: 4326
Merit: 8951
September 07, 2014, 02:17:29 PM
#45
You just have to have alice and bob prove that they know the discrete logs of their own pubkeys (e.g. they sign a message with it while setting up their exchange). Work through the algebra and you'll see why.

If you have logs for #bitcoin from last year, I walked through the somewhat different pay to contract case, which is I think mostly what you're referring to-- in that case you always make the contract key G*HMAC(message,pubkey) to avoid someone setting their pubkey to an alternative contract.
sr. member
Activity: 352
Merit: 252
https://www.realitykeys.com
September 06, 2014, 10:26:07 PM
#44
If you really just want something conditionally redeemable by one person or another, I would recommend the transaction type I recommend for reality keys:

Reality keys will reveal private key A if a true/false fact is true, and private key B if it's false.

Alice and Bob want to make a contract to hedge the outcome of a fact because they each have opposing short positions relative to the fact.  Alice will be paid if the fact is true, Bob will be paid if the fact is false.

Reality keys publishes the pubkey pairs  a := g^A ; b := g^B

Alice has private key X and corresponding pubkey x, Bob has private key Y and corresponding pubkey y.

Alice and Bob compute new pubkeys  q:=x+a  and r:=y+b  and they send their coins to a 1 of 2 multisig of those new pubkeys, q,r.

The values q,r are zero-knowledge indistinguishable from a and b unless you know x and/or y, so no one except alice and bob, not even reality keys can tell which transaction on the network is mediated by the release of A vs B.

Later, realitykeys releases A or B; let's say alice wins.  She computes a new private key X+A, and uses it to redeem the multisig.  Bob cannot redeem the multisig because he knows neither X nor B.

This looks like a perfectly boring transaction to everyone else. Alice and Bob collectively cannot be robbed by a third party, though they could be held up or if realitykeys conspires with Alice or Bob then there could be cheating. This risk could be reduced by using a threshold of multiple observers— which this scheme naturally extends to.

Sorry for hijacking the thread over this but just one thing to add on this pattern to avoid people getting into trouble - hopefully gmaxwell will correct me if I've got this wrong.

Since these x + a and y + b operations are invertible, you need to be careful that Alice doesn't know a, and Bob doesn't know b, until they've committed to their own keys, x and y respectively. If they know the Reality Keys pubkeys in advance, it seems to be possible to make a special tricksy x or y key such that, when combined to make q or r, the owner could get the combined private key Q or R without knowing the relevant Reality Keys private key A or B.

See Alan Reiner's comment here:
http://permalink.gmane.org/gmane.comp.bitcoin.devel/4173

In theory it looks like everything should be OK as long as you don't register the Reality Key until after you've seen the other party's public key, but it would be safer if you could avoid relying on this restriction: not only do you have to make sure you do everything in the right order, you also have to rely on us (Reality Keys) keeping our public keys secret until after you've made the exchange. The public keys are indeed supposed to be secret until they're allocated, but they're harder for us to secure than the private keys, since we need to be able to give them out to people on demand in real time. (Also there are some benefits to the same keys being shared for multiple people's contracts, rather than being assigned uniquely every time.)

I suspect you could defeat tricksy key attacks by Alice and Bob each producing an extra private key, sending it to their counter-party and requiring them to combine it with their Reality Key in a non-invertible operation, but I'm too far out of my depth to assess this. (It might also have some privacy benefits, so I'd be interested to hear what knowledgeable people think.) For now when I've been building demo applications on Reality Keys I've gone with the non-standard branching transaction approach, since sending these transactions to Eligius is practical at the moment and they should become standard in the next major Bitcoin release.
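To make the concern concrete, here is a minimal sketch of the tricksy-key construction, following the notation of the quoted scheme (X, A, x, a, q). A toy multiplicative group modulo a prime stands in for secp256k1, so combining pubkeys is modular multiplication rather than point addition; the numbers and names are illustrative only.
Code:
# Illustrative sketch only: a toy multiplicative group mod a prime stands in for
# secp256k1, so pub(priv) = g^priv mod p and "adding" pubkeys is multiplication.
import random

p = 2**127 - 1        # a Mersenne prime, NOT the secp256k1 field
g = 3
n = p - 1             # exponents live mod p-1

def pub(priv):
    return pow(g, priv, p)

A = random.randrange(1, n)     # Reality Keys' private key, never shown to Alice
a = pub(A)                     # ...but suppose Alice learns the pubkey a in advance

# Honest Alice picks X and publishes x = pub(X); the combined key q = x*a then needs
# X+A to spend.  Tricksy Alice instead picks X_prime and publishes x = pub(X_prime)
# divided by a, so the combined key collapses to a key she fully controls:
X_prime = random.randrange(1, n)
x_tricksy = (pub(X_prime) * pow(a, -1, p)) % p

q = (x_tricksy * a) % p
assert q == pub(X_prime)       # Alice can spend q without ever learning A

# The fix discussed above: require each party to prove they know the discrete log of
# their own pubkey (e.g. by signing with it).  Alice cannot do that for x_tricksy,
# because its discrete log is X_prime - A, which she does not know.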
newbie
Activity: 12
Merit: 0
September 05, 2014, 06:43:20 PM
#43
I'd just like to emphasize that it is the core of my last post.  Can anyone answer that?

There is a plan that P2SH will effectively accept nearly all scripts.

https://github.com/gavinandresen/bitcoin-git/commit/7f3b4e95695d50a4970e6eb91faa956ab276f161
https://github.com/bitcoin/bitcoin/blob/master/src/main.cpp#L692

https://gist.github.com/gavinandresen/88be40c141bc67acb247

The limit is that your script must be less than 520 bytes and can't have more than 15 signature operations.

The update will presumably be in the next major release, since it is part of the main branch.

Does that mean yes or no?

What does it mean and imply that it's part of the main branch?

Is there ETA for the next major release?
legendary
Activity: 1232
Merit: 1094
September 05, 2014, 05:32:27 PM
#42
I'd just like to emphasize that it is the core of my last post.  Can anyone answer that?

There is a plan that P2SH will effectively accept nearly all scripts.

https://github.com/gavinandresen/bitcoin-git/commit/7f3b4e95695d50a4970e6eb91faa956ab276f161
https://github.com/bitcoin/bitcoin/blob/master/src/main.cpp#L692

https://gist.github.com/gavinandresen/88be40c141bc67acb247

The limit is that your script must be less than 520 bytes and can't have more than 15 signature operations.

The update will presumably be in the next major release, since it is part of the main branch.
newbie
Activity: 12
Merit: 0
September 05, 2014, 02:01:40 PM
#41
Code:
inputs: pubkey signature secret
OP_DUP OP_EQUAL
OP_IF OP_ELSE OP_EQUALVERIF OP_ENDIF
OP_ROT OP_CHECKSIGVERIFY OP_SWAP OP_HASH160 OP_EQUAL
That script is perfectly standard as a P2SH in current code.

Though I suspect you've confused the operation of the machine somewhat.


The script may be perfectly fine, but would bitcoin mainnet nodes broadcast transactions that contain that script?

I'd just like to emphasize that it is the core of my last post.  Can anyone answer that?
newbie
Activity: 12
Merit: 0
September 04, 2014, 09:43:55 PM
#40
Code:
inputs: pubkey signature secret
OP_DUP OP_EQUAL
OP_IF OP_ELSE OP_EQUALVERIF OP_ENDIF
OP_ROT OP_CHECKSIGVERIFY OP_SWAP OP_HASH160 OP_EQUAL
That script is perfectly standard as a P2SH in current code.

Though I suspect you've confused the operation of the machine somewhat.


The script may be perfectly fine, but would bitcoin mainnet nodes broadcast transactions that contain that script?


I need a definition of what a standard script is and what a non-standard script is; the bitcoind gives me:
Code:
"scriptPubKey" : {
"asm" : "OP_DUP aaaaaaaaaa OP_IF bbbbbbbbbb OP_ELSE cccccccccc OP_EQUALVERIFY dddddddddd OP_ENDIF OP_ROT OP_CHECKSIGVERIFY OP_SWAP OP_HASH160 OP_EQUAL",
"hex" : "7605aaaaaaaaaa6305bbbbbbbbbb6705cccccccccc8805dddddddddd687bad7ca987",
"type" : "nonstandard"
}

And I think I missed a few OP_DROPs
What operation do you suspect I am confusing?
The stack goes like this:
inputs            -> pubkey signature password
OP_DUP            -> pubkey pubkey signature password
<pubkeyA>         -> pubkeyA pubkey pubkey signature password
OP_EQUAL          -> isAlice pubkey signature password
OP_IF             -> pubkey signature password
<BobsHash>        -> BobsHash pubkey signature password
OP_ELSE
<pubkeyB>         -> pubkeyB pubkey signature password
OP_EQUALVERIF     -> isBob pubkey signature password
OP_DROP           -> pubkey signature password
<AlicesHash>      -> AlicesHash pubkey signature password
OP_ENDIF          -> HashY pubkeyX signature passwordY
OP_ROT            -> pubkeyX signature HashY passwordY
OP_CHECKSIGVERIFY -> true HashY passwordY
OP_DROP           -> HashY passwordY
OP_SWAP           -> passwordY HashY
OP_HASH160        -> PasswordY_hash HashY
OP_EQUAL          -> Signature-Password match

I have seen OP_EVAL in some places, as well as other OPs that I don't find at https://en.bitcoin.it/wiki/Script.  What are they, and where do I find documentation about them?

Quote
I am simultaneously trying to prove that I need OP_CAT and trying to find a way to do without it. I guess I can't have it both ways.
I'm not asking you to prove that OP_CAT is necessary,  I'm asking you to describe a specific, complete, protocol for which it is sufficient— something that starts with Alice and Bob and Charlie who want to accomplish a task, and a series of specific messages they send, and a series of guaranteed outcomes. Then I could try to help you reimagine a functionally equivalent protocol without it.

Starting from the fact that I don't have OP_CAT and that I can't count on having it, I focus on finding alternative ways rather than developing protocols that require something I don't have.  I'll give an example if (and when) I get there; what I have is still too incomplete and abstract.

Quote
As it is, this would allow either of the two parties to claim the output if they can provide the other party's secret, a secret which hashes to the hardcoded hashA or hashB, depending on who is signing.
It sounds like you're describing an atomic swap or a related transaction. Often they don't need two hashes.  If you really just want something conditionally redeemable by one person or another, I would recommend the transaction type I recommend for reality keys:

Yes, an atomic transaction: for a balance update, or for winning/losing a dice bet.  A third party is unacceptable, which rules out Reality Keys.
legendary
Activity: 2128
Merit: 1074
September 04, 2014, 05:33:50 PM
#39
Not so. We proved there were no semantic leaks from OpenSSL in the numbers on the stack via exhaustive testing some time ago and removed all use of OpenSSL from the script code (except the calls out for signature verification, of course— so only signature verification and the accompanying signature serialization are handled by it).
So this "semantic leak" is now only apparent in the block layout on the wire and on the disk? But the "abstract virtual machine" of Bitcoin script cannot discover its internal bit ordering? Do I understand you right?

Quote
1b) ostensibly allowing emulating iteration by mutual recursion of P2SH invocations
Also not so, very intentionally not.
Then can you state again what is the possible attack that the "opcode limit" is protecting against?

Thanks.
staff
Activity: 4326
Merit: 8951
September 04, 2014, 05:19:28 PM
#38
1a) implicit conversions between integers and bit strings with semantics depending on precise detail of OpenSSL implementation (word size, word order in a large integer, byte order in a word)
Not so. We proved there were no semantic leaks from OpenSSL in the numbers on the stack via exhaustive testing some time ago and removed all use of OpenSSL from the script code (except the calls out for signature verification, of course— so only signature verification and the accompanying signature serialization are handled by it).
Quote
1b) ostensibly allowing emulating iteration by mutual recursion of P2SH invocations
Also not so, very intentionally not.
full member
Activity: 179
Merit: 157
-
September 04, 2014, 05:15:25 PM
#37
But I believe we have already discussed the "non-TC" chestnut here and the consensus was that one can abuse P2SH to escape the "no-loops" restriction.

You can't use P2SH to create loops and nobody said anything about loops anyway.
legendary
Activity: 2128
Merit: 1074
September 04, 2014, 05:09:34 PM
#36
We are not talking about von Neumann architecture. We are talking about a small non-TC stack machine without mutability and a fixed opcode limit. In this case the set of allowable programs absolutely does shrink, and more importantly, the space of accepting inputs for (most) given scripts shrinks. This is easy to see --- consider the program OP_VERIFY. There would be one permissible top stack element in a typed script; in untyped script every legal stack element in (0x80|0x00)0x00* is permissible.

That said, nobody actually said anything about the space of provable programs. What I said is that script would be easier to analyze. This is obviously true because of the tighter restrictions on stack elements, as I already illustrated. As another example, consider the sequence OP_CHECKSIG OP_CHECKSIG which always returns zero. One reason this is true today is that the output of OP_CHECKSIG always has length one while the top element of its accepting input always has length > one. To analyze script today you need to carry around these sorts of length restrictions; with typing you only need to carry around the data that CHECKSIG's output is boolean and its input is a bytestring.

I'm sorry I haven't kept up with the advances in theoretical computer science. But I believe we have already discussed the "non-TC" chestnut here and the consensus was that one can abuse P2SH to escape the "no-loops" restriction.

Let me try to dig up the thread and I will edit this message later. Edit:

https://bitcointalksearch.org/topic/m.6533466

The operative words were "opcode limit" in the "Turing complete language vs non-Turing complete (Ethereum vs Bitcoin)" thread.

legendary
Activity: 2128
Merit: 1074
September 04, 2014, 05:04:26 PM
#35
I absolutely agree that additional type data makes for software which is easier to analyze. The question isn't the result of the program being provable, the question is of the implementations of the interpreter being simple enough to have even a small chance of having multiple absolutely identically behaving implementations, since we are performing this inside of a consensus system.

You continue to miss the point completely.
I apologize for writing too ambiguously the first time. I'm going to try to linearize my thoughts better now:

1) Given the current Bitcoin script language with the following problems (amongst others):

1a) implicit conversions between integers and bit strings with semantics depending on precise detail of OpenSSL implementation (word size, word order in a large integer, byte order in a word)
1b) ostensibly allowing emulating iteration by mutual recursion of P2SH invocations

2) a non-binary compatible but only morally-compatible scripting language featuring:

2a) explicit type conversion operators and type tagging of the stack storage, in particular clean conversions between integers and bit strings
2b) somehow type-safe or type-checking implementation of P2SH invocation that verifies both arguments and return values

3) will allow writing a completely new scripting interpreter

3a) in a theoretically strong programming language like a Lisp subset that is provable (Lisp because I'm most familiar with it, but there are many other candidates, I did not keep up with recent developments in the theoretical computer science)
3b) that can be mechanically/automatically verified and proven to obey certain theorems and conditions

4) said interpreter then can be translated

4a) to C/C++/Java/etc. via completely mechanical translation or manual pattern-based transliteration of a very restricted subset Lisp to be incorporated in a software-only implementation
4b) to SystemC/Verilog/VHDL/etc. to be synthesized into a logic circuit (with stack memory) for the hardware-assisted implementations and for additional verification

The 4a) output in a restricted C++ subset could then replace the current, completely improvised, implementation in Bitcoin core. Because of subset C++ use it most likely would be longer in terms of lines of code, but it would be also much simpler to analyze.

The 3b) step has an additional problem that all the existing Lisp provers use only conventional ring of integers arithmetic. Since Bitcoin depends on an elliptic curve over a finite field the proving software would have to be extended to efficiently handle that. From my school days algebra I remember that the stratification group->ring->field significantly influences the complexity of proofs. Sliding back from "ring of integers" to "abelian group of elliptic curves" could potentially greatly reduce the set of theorems that could be mechanically proven.

I realize that the points 1-4 still read like a complex sentence in a patent application. I'm not good at writing easy to read essays. But from the purely technical point of view the two-level process is the way to maximize correctness (1st language for proving/verification, 2nd language for implementation/integration).
 
full member
Activity: 179
Merit: 157
-
September 04, 2014, 07:58:38 AM
#34
This claim about "typed data" and "provability" is false. There are actual proofs of that coming from the people involved in designing/implementing Algol 68. I don't have any references handy, but in broad terms the progression "classic Von Neumann" -> "type-tagged Von Neumann" -> "static-typed Von Neumann/Harvard modification" strictly increases the set of programs that have provable results.

We are not talking about von Neumann architecture. We are talking about a small non-TC stack machine without mutability and a fixed opcode limit. In this case the set of allowable programs absolutely does shrink, and more importantly, the space of accepting inputs for (most) given scripts shrinks. This is easy to see --- consider the program OP_VERIFY. There would be one permissible top stack element in a typed script; in untyped script every legal stack element in (0x80|0x00)0x00* is permissible.

That said, nobody actually said that anything about the space of provable programs. What I said is that script would be easier to analyze. This is obviously true because of the tighter restrictions on stack elements, as I already illustrated. As another example, consider the sequence OP_CHECKSIG OP_CHECKSIG which always returns zero. One reason this is true today is that the output of OP_CHECKSIG always has length one while the top element of its accepting input always has length > one. To analyze script today you need to carry around these sorts of length restrictions; with typing you only need to carry around the data that CHECKSIG's output is boolean and its input is a bytestring.
staff
Activity: 4326
Merit: 8951
September 04, 2014, 01:39:55 AM
#33
Code:
inputs: pubkey signature secret
OP_DUP OP_EQUAL
OP_IF OP_ELSE OP_EQUALVERIF OP_ENDIF
OP_ROT OP_CHECKSIGVERIFY OP_SWAP OP_HASH160 OP_EQUAL
That script is perfectly standard as a P2SH in current code.

Though I suspect you've confused the operation of the machine somewhat.

Quote
I am simultaneously trying to prove that I need OP_CAT and trying to find a way to do without it. I guess I can't have it both ways.
I'm not asking you to prove that OP_CAT is necessary,  I'm asking you to describe a specific, complete, protocol for which it is sufficient— something that starts with Alice and Bob and Charlie who want to accomplish a task, and a series of specific messages they send, and a series of guaranteed outcomes. Then I could try to help you reimagine a functionally equivalent protocol without it.

Quote
As it is, this would allow either of the two parties to claim the output if they can provide the other party's secret, a secret which hashes to the hardcoded hashA or hashB, depending on who is signing.
It sounds like you're describing an atomic swap or a related transaction. Often they don't need two hashes.  If you really just want something conditionally redeemable by one person or another, I would recommend the transaction type I recommend for reality keys:

Reality keys will reveal private key A if a true/false fact is true, and private key B if it's false.

Alice and Bob want to make a contract to hedge the outcome of a fact because they each have opposing short positions relative to the fact.  Alice will be paid if the fact is true, Bob will be paid if the fact is false.

Reality keys publishes the pubkey pairs  a := g^A ; b := g^B

Alice has private key X and corresponding pubkey x, Bob has private key Y and corresponding pubkey y.

Alice and Bob compute new pubkeys  q:=x+a  and r:=y+b  and they send their coins to a 1 of 2 multisig of those new pubkeys, q,r.

The values q,r are zero-knowledge indistinguishable from a and b unless you know x and/or y, so no one except alice and bob, not even reality keys can tell which transaction on the network is mediated by the release of A vs B.

Later, realitykeys releases A or B; let's say alice wins.  She computes a new private key X+A, and uses it to redeem the multisig.  Bob cannot redeem the multisig because he knows neither X nor B.

This looks like a perfectly boring transaction to everyone else. Alice and Bob collectively cannot be robbed by a third party, though they could be held up or if realitykeys conspires with Alice or Bob then there could be cheating. This risk could be reduced by using a threshold of multiple observers— which this scheme naturally extends to.
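To make the algebra concrete, here is a minimal sketch of the key combination. A toy multiplicative group modulo a prime stands in for secp256k1, so q := x + a on the curve becomes q := x*a mod p; everything here is illustrative only, not a secure implementation.
Code:
# Illustrative sketch only: a toy multiplicative group mod a prime stands in for
# secp256k1, so pub(priv) = g^priv mod p and combining pubkeys is multiplication
# instead of elliptic-curve point addition.
import random

p = 2**127 - 1        # a Mersenne prime, NOT the secp256k1 field
g = 3
n = p - 1             # exponents live mod p-1

def pub(priv):
    return pow(g, priv, p)

X = random.randrange(1, n)     # Alice's private key
A = random.randrange(1, n)     # Reality Keys' "true" private key
x, a = pub(X), pub(A)          # the corresponding public keys

q = (x * a) % p                # combined pubkey, the analogue of q := x + a

# Once Reality Keys releases A, Alice's redemption key is X + A:
assert pub((X + A) % n) == q
# Anyone who lacks X (or A) learns nothing from seeing q, which is why the spend
# looks like an ordinary 1-of-2 multisig to everyone else.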
staff
Activity: 4326
Merit: 8951
September 04, 2014, 01:22:40 AM
#32
I absolutely agree that additional type data makes for software which is easier to analyze. The question isn't the result of the program being provable, the question is of the implementations of the interpreter being simple enough to have even a small chance of having multiple absolutely identically behaving implementations, since we are performing this inside of a consensus system.

You continue to miss the point completely.
legendary
Activity: 2128
Merit: 1074
September 04, 2014, 12:35:23 AM
#31
Typed data on the stack makes writing correct code much harder, I can't say that I've ever wished for that. I generally prefer the stack be "bytes" and everything "converts" them to the right type. Yes, additional constraints would make things like your provably undependable code easier, but they do so by adding more corner cases that an implementation must get right.

I'm also a fan of analyzability, but that always has to take a back seat to consensus safety. Smiley
This claim about "typed data" and "provability" is false. There are actual proofs of that coming from the people involved in designing/implementing Algol 68. I don't have any references handy, but in broad terms the progression "classic Von Neumann" -> "type-tagged Von Neumann" -> "static-typed Von Neumann/Harvard modification" strictly increases the set of programs that have provable results. I also remember that in the USA IBM paid for some academic research about "PL/I without implicit type coercion" that had similar results.

As an aside to the theoretical results: in school I had side income helping debug/fix/extend several RPN-style / Forth-style language interpreters including then-popular commercial implementations by Tektronix & HP in their IEEE-488 lab-control equipment. For that application type-tagging was (and is) a godsend both for human programming and for automated program analysis/translation.
newbie
Activity: 12
Merit: 0
September 04, 2014, 12:08:42 AM
#30
Assuming OP_CAT is still available,

Right there, I started feeling skeptical about your post.


How is that useful?  The problem is not about being able to do nice scripts and use them properly, but about avoiding the possibility of making any script that could potentially... get naughty.


Call it OP_LIMITCAT, and disallow the use of a bare OP_CAT.

How is that better than implementing OP_CAT correctly in the first place?  Wouldn't that "alias" require the nasty bare OP_CAT to be "present"?  It sounds like you are saying "Instead of doing the checks in the OP_CAT itself, let's make an improper OP_CAT that we cannot use directly, and make an alias that does the checks before calling the improper OP_CAT that doesn't do these checks"

Am I totally misunderstanding you?
newbie
Activity: 12
Merit: 0
September 03, 2014, 11:24:09 PM
#29
gmaxwell, you remind that part of me that says "not so fast..." to "of course..." statements.  You are thoughtful; thanks for your help and insight!


I'm not seeing how OP_CAT (at least by itself) facilitates any of the high level examples there. Can you give me a specific protocol and set of scripts to show me how it would work?

I am simultaneously trying to prove that I need OP_CAT and trying to find a way to do without it. I guess I can't have it both ways.

In short, it involved verifying that hash(salt+data) indeed equals [suchhash]. Given salt and data, the script can confirm hash(salt), hash(data) and hash(salt+data), and validate the transaction based on whether the hashes match what was claimed. However, I think I found a way around that, which will involve more never-broadcast transactions and more multisig addresses, etc. The process is more complex and exhaustive, but I think the available OPs allow an alternative implementation.

As I feel like I'm finding my way toward "a solution without OP_CAT", the next biggest wall seems to be: even if I can make scripts that do exactly what I want, will the network accept and broadcast them?

I am about to start experimenting on testnet, but even if it works on testnet, that doesn't tell me if it is going to work on the mainnet.

The OP_CAT would indeed simplify the process of what I am thinking of, but it seems that the main scenarios would be resolved, i.e. I wouldn't "need" OP_CAT, even though it would have made it easier.

One of the protocols is about two parties escrowing money for future instant payments, i.e. off-net renegotiation of the balances.  An extended version could allow decentralised banking for Bitcoin.  (Isn't that what Bitcoin IS?)  Yes, but I am talking about an instant proof/guarantee of receiving a minimum of n confirmations, after escrow/deposits are placed.

It seems that it would only require one non-standard script, which would look like this:
Code:
inputs: pubkey signature secret
OP_DUP OP_EQUAL
OP_IF OP_ELSE OP_EQUALVERIF OP_ENDIF
OP_ROT OP_CHECKSIGVERIFY OP_SWAP OP_HASH160 OP_EQUAL

As it is, this would allow either of the two parties to claim the output if they can provide the other party's secret, a secret which hashes to the hardcoded hashA or hashB, depending on who is signing.
(How is that useful is part of a bigger picture that I will talk about later in another thread.)
This script is untested and incomplete; I also need to allow both signatures to validate the transaction.
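For reference, the condition described in words above amounts to roughly the following Python model. It models only the intended logic, not the opcode sequence; sha256 stands in for HASH160 to keep the sketch dependency-free, and check_sig stands in for OP_CHECKSIGVERIFY.
Code:
# Rough model of the intended spending condition (not of the opcode sequence above).
# sha256 stands in for Bitcoin's HASH160, and check_sig for OP_CHECKSIGVERIFY.
import hashlib

def digest(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def can_spend(pubkey, signature, secret, pubkeyA, pubkeyB, hashA, hashB, check_sig):
    # Whoever signs must reveal the *other* party's secret.
    if pubkey == pubkeyA:
        required = hashB
    elif pubkey == pubkeyB:
        required = hashA
    else:
        return False
    return check_sig(pubkey, signature) and digest(secret) == required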

So the question is now: should I bother testing that on testnet or is it doomed because the network wouldn't like it, e.g. too many nodes not broadcasting unknown, strange and/or non-standard transactions?
legendary
Activity: 1792
Merit: 1122
September 03, 2014, 11:08:37 PM
#28
I'm not a programmer so this may sound very stupid:
[...]
Max OP_CAT output size = 520 bytes: why risky?
I mean, is there any fundamental difference between these cases?
All the limits are risks, all complexity— practically every one of them has been implemented incorrectly by one alternative full node implementation or another (or Bitcoin core itself) at some point. They miss them completely, or count wrong for them, or respond incorrectly when they're violated.  E.g. here what happens if you OP_CAT 520 and 10 bytes? Should the verify fail? Should the result be truncated? But even that wasn't the point here.

Point here was that realizing you _needed_ a limit and where you needed it was a risk.  The reasonable and pedantically correct claim was made that OP_CAT didn't increase memory usage, that it just took two elements and replaced them with one which was just as large as the two... and yet having (unfiltered) OP_CAT in the instruction set bypassed the existing limits and allowed exponential memory usage.

None of it insurmountable, but I was answering the question as to why it's not just something super trivial.

Assuming OP_CAT is still available, we can do everything with existing OP codes. If the size of A, the size of B, and the sum of the sizes of A and B are all less than or equal to 520, the sequence below returns the concatenation of A and B; otherwise, the script fails. So we can create an alias for it:

Code:
SIZE ROT SIZE ROT 2DUP <520> LESSTHANOREQUAL VERIFY <520> LESSTHANOREQUAL VERIFY ADD <520> LESSTHANOREQUAL VERIFY SWAP CAT

Call it OP_LIMITCAT, and disallow the use of a bare OP_CAT.

Unless bugs are already there in the existing OP codes, or in my script, that should be fine.
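Read step by step, the proposed sequence checks both input sizes and their sum against 520 before concatenating. Here is a sketch of the equivalent check with the stack modelled as a Python list; it only mirrors the script above and is not an implementation of any real interpreter.
Code:
# Sketch of what the OP_LIMITCAT sequence above enforces; the stack is a Python
# list with the top of the stack at the end.
MAX_ELEMENT_SIZE = 520   # the existing push/element size limit used in the script

def op_limitcat(stack):
    b = stack.pop()
    a = stack.pop()
    # the three SIZE ... LESSTHANOREQUAL VERIFY checks:
    if len(a) > MAX_ELEMENT_SIZE or len(b) > MAX_ELEMENT_SIZE:
        raise ValueError("script fails: an input exceeds 520 bytes")
    if len(a) + len(b) > MAX_ELEMENT_SIZE:
        raise ValueError("script fails: concatenation would exceed 520 bytes")
    stack.append(a + b)          # the final SWAP CAT, preserving the A||B order

stack = [b"A" * 300, b"B" * 200]
op_limitcat(stack)
print(len(stack[-1]))            # 500, within the limit

try:
    op_limitcat([b"A" * 300, b"B" * 300])
except ValueError as e:
    print(e)                     # 600 > 520, so the script fails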

staff
Activity: 4326
Merit: 8951
September 03, 2014, 08:28:13 PM
#27
Well guys, I broke theoretical bitcoin. My lack of relevant knowledge has theoretically doomed us all.
In all seriousness (not that breaking theoretical bitcoin isn't) the whole take-down-the-network-in-one-transaction is scary as shit. I'd love to be able to use string functions, but I'd rather not advocate risking the network for some silly scriptsigs Tongue
I send you my theoretical condolences. Smiley

No worries, everyone breaks theoretical Bitcoin.
legendary
Activity: 1386
Merit: 1053
Please do not PM me loan requests!
September 03, 2014, 07:48:35 PM
#26
Well guys, I broke theoretical bitcoin. My lack of relevant knowledge has theoretically doomed us all.
In all seriousness (not that breaking theoretical bitcoin isn't) the whole take-down-the-network-in-one-transaction is scary as shit. I'd love to be able to use string functions, but I'd rather not advocate risking the network for some silly scriptsigs Tongue
staff
Activity: 4326
Merit: 8951
September 03, 2014, 07:44:19 PM
#25
Typed data on the stack makes writing correct code much harder, I can't say that I've ever wished for that. I generally prefer the stack be "bytes" and everything "converts" them to the right type. Yes, additional constraints would make things like your provably undependable code easier, but they do so by adding more corner cases that an implementation must get right.

I'm also a fan of analyzability, but that always has to take a back seat to consensus safety. Smiley
full member
Activity: 179
Merit: 157
-
September 03, 2014, 06:17:16 PM
#24
Well, there can only be one OP_CHECKSIG...

That is false. You even could do threshold signatures with multiple OP_CHECKSIGs if you wanted to be a goof.

Quote
This requires setting a size for each data type.  I think it is basically integer, byte array and boolean (which is an int).

Script is not typed; there is only one type, "raw byte data", that is interpreted in various ways by the various opcodes. (This makes accounting quite easy actually.) And today you are required to match the byte representation of all stack objects exactly, since OP_EQUAL requires it, so arguably a total stack size limit would be an easy thing to describe precisely.

My biggest metawish for a script 2.0 would be ease of analysis.... in particular I would like separate types (uint, bool, bytedata) and explicit casts between them. I spent quite a bit of time working on script satisfiability analysis recently, and it seems the best way to describe abstract stack elements is as a bundle of complementary bounds on numeric values, boolean values, length, etc.

Bitcoin-ruby uses a typed script and has each opcode do casts... the idea makes me smile but for consensus code it is really not appropriate, sadly. They have plans one day to replace it with a more-or-less direct port of bitcoind's script parser.
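As an illustration of the "bundle of complementary bounds" idea, here is a minimal sketch; the field names and bounds are invented for the example and are not taken from any existing analyzer. It uses the OP_CHECKSIG OP_CHECKSIG example discussed earlier in the thread.
Code:
# Illustrative only: one way an analyzer could represent an abstract stack element
# as a bundle of complementary bounds.  Field names are invented for this sketch.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AbstractElement:
    length: Tuple[int, int] = (0, 520)            # min/max byte length
    num_range: Optional[Tuple[int, int]] = None   # bounds when read as a number
    bool_value: Optional[bool] = None             # pinned truth value, if known

# The first CHECKSIG leaves a length-one result on the stack,
checksig_output = AbstractElement(length=(1, 1), num_range=(0, 1))
# while anything the second CHECKSIG could accept as a pubkey must be longer than
# one byte, so the constraints can never both hold and the script always returns zero.
acceptable_pubkey = AbstractElement(length=(2, 520))
assert checksig_output.length[1] < acceptable_pubkey.length[0]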
staff
Activity: 4326
Merit: 8951
September 03, 2014, 06:11:18 PM
#23
Well, there can only be one OP_CHECKSIG...
That's not true.

Quote
Why not make that kind of limit for OP_CAT?
All the string functions, in fact, should be enabled (even if they are "expensive words" like checksig)
What if there was a minimum base transaction fee (rendering a tx with an insufficient base fee invalid) that would be incremented by a certain amount for every OP_CAT in the transaction?

No one is saying that things like OP_CAT cannot be done, or that they're bad or whatever. But making them not a danger requires careful work. Case in point: What you're suggesting is obviously broken.  So I write a transaction which pays 100x that (presumably nominal fee) and I crash _EVERY BITCOIN SYSTEM ON THE NETWORK_, and I don't really have to pay the fee at all because a transaction needing a zillion yottabytes of ram to verify will not be mined, so I'll be free to spend it later.  Congrats, you added a severe whole network crashing vulnerability to hypothetical-bitcoin.

You should also remove "enabled" from your dictionary, that those opcodes were "disabled" doesn't mean they can just be enabled. They're completely gone— adding them is precisely equivalent to adding something totally novel in terms of the required deployment procedure.
legendary
Activity: 1386
Merit: 1053
Please do not PM me loan requests!
September 03, 2014, 05:53:28 PM
#22
Well, there can only be one OP_CHECKSIG...
Why not make that kind of limit for OP_CAT?
All the string functions, in fact, should be enabled (even if they are "expensive words" like checksig)
What if there was a minimum base transaction fee (rendering a tx with an insufficient base fee invalid) that would be incremented by a certain amount for every OP_CAT in the transaction?
staff
Activity: 4326
Merit: 8951
September 03, 2014, 05:02:54 PM
#21
Set a maximum total memory for the stack and a script that exceeds that value automatically fails.
Sure, but this requires: a consistent way of measuring it and enforcing it, and being sure that no operation has unlimited intermediate state.

As Bitcoin was originally written it was thought that it had precisely that: There was a limit on the number of pushes, and a limit on the number of operations. This very clearly makes the stack size "limited", but because of operations that allow exponential growth, the limit was effectively useless.  Being absolutely sure that the limits imposed are effective isn't hard for any fundamental reason, as I keep pointing out. "Just have a limit", but being _sure_ that the limit does what you expect is much harder than it seems.

legendary
Activity: 1232
Merit: 1094
September 03, 2014, 04:26:30 PM
#20
And then I thought, "Why did I do that? To stop stack size explosions, I guess...but did I stop that? No idea." because I would have to analyze the interactions with every single other part of the script system, and neither I nor anybody else has a complete model there. (Though I *think* I have a complete model of how everything affects the size of stack objects, at least right now.) So that's where the risk comes from.

Why indirectly solve the problem?  The problem is limited stack size, so why not just directly solve it?

Set a maximum total memory for the stack and a script that exceeds that value automatically fails.

This requires setting a size for each data type.  I think it is basically integer, byte array and boolean (which is an int).

A slightly less direct method is to focus on the opcodes that can increase the stack size.  Even then, it is still really a global limit.
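Here is a sketch of the "cap total stack memory" idea; the 10 kB budget is an arbitrary placeholder, not a proposed consensus value, and, as the reply above stresses, the hard part is not this check itself but making every implementation measure and apply it identically.
Code:
# Sketch only: re-measure the whole stack after every operation and fail the script
# when it exceeds a fixed budget.  The budget here is an arbitrary placeholder.
MAX_STACK_BYTES = 10_000

def check_stack(stack):
    if sum(len(item) for item in stack) > MAX_STACK_BYTES:
        raise ValueError("script fails: stack memory limit exceeded")

# Unrestricted OP_DUP / OP_CAT doubling trips the limit after a handful of steps:
stack = [b"\xab" * 32]
try:
    while True:
        stack.append(stack[-1])                   # OP_DUP
        check_stack(stack)
        stack.append(stack.pop() + stack.pop())   # OP_CAT
        check_stack(stack)
except ValueError as e:
    print(e, "- element size reached", len(stack[-1]))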
full member
Activity: 179
Merit: 157
-
September 03, 2014, 03:53:37 PM
#19
I'd add that the blocksize and scriptsize limits are easy to check -- you just have to look at the data on the wire, not understand it at all. Any limit on script stack objects is going to be nasty: script is an intricate machine with many, many weird corner cases that interact in surprising (at least) ways.

Quote from: gmaxwell
Point here was that realizing you _needed_ a limit and where you needed it was a risk.

This is probably the most important point. I had initially typed up a long reply explaining what needs to be done for OP_PUSH or OP_CAT limits (which are conceptually not so bad, but you have to be exhaustive and expect all implementors to be identically exhaustive). And then I thought, "Why did I do that? To stop stack size explosions, I guess...but did I stop that? No idea." because I would have to analyze the interactions with every single other part of the script system, and neither I nor anybody else has a complete model there. (Though I *think* I have a complete model of how everything affects the size of stack objects, at least right now.) So that's where the risk comes from.
staff
Activity: 4326
Merit: 8951
September 03, 2014, 12:30:50 PM
#18
I'm not a programmer so this may sound very stupid:
[...]
Max OP_CAT output size = 520 bytes: why risky?
I mean, is there any fundamental difference between these cases?
All the limits are risks, all complexity— practically every one of them has been implemented incorrectly by one alternative full node implementation or another (or Bitcoin core itself) at some point. They miss them completely, or count wrong for them, or respond incorrectly when they're violated.  E.g. here what happens if you OP_CAT 520 and 10 bytes? Should the verify fail? Should the result be truncated? But even that wasn't the point here.

Point here was that realizing you _needed_ a limit and where you needed it was a risk.  The reasonable and pedantically correct claim was made that OP_CAT didn't increase memory usage, that it just took two elements and replaced them with one which was just as large as the two... and yet having (unfiltered) OP_CAT in the instruction set bypassed the existing limits and allowed exponential memory usage.

None of it insurmountable, but I was answering the question as to why it's not just something super trivial.
legendary
Activity: 1792
Merit: 1122
September 03, 2014, 04:31:15 AM
#17


When behavior like this is fixed via limits great care must be taken to make sure the limits are implemented absolutely consistently everywhere or the result is a consensus splitting risk. Alt full node implementers have repeatedly implemented the limits wrong— even when they're obvious in the code, called out in the comments, documented on the wiki, etc... even by just simply not implementing them (... coverage analysis and testing against the blockchain can't tell you about limits that you're just missing completely).



I'm not a programmer so this may sound very stupid:

MAX_BLOCK_SIZE = 1MB: not risky(?)
Max push size = 520 bytes: not risky(?)
Max script size = 10000 bytes: not risky(?)
Max OP_CAT output size = 520 bytes: why risky?

I mean, is there any fundamental difference between these cases?
staff
Activity: 4326
Merit: 8951
September 01, 2014, 10:30:40 PM
#16
Sure sure, as I said above, it's not hard in theory, but there's the lesson— even after I pointed out there was exponential memory usage in a naive implementation the OP thought otherwise.  And before anyone else trips over your egos, it's not that the OP was foolish or anything. There are subtle interactions in the dark corners which make making promises about the behavior difficult.  So while the actual safe behavior isn't fundamentally hard, being confident that all the corner cases and interactions are handled is fundamentally hard.

OP_CAT isn't the only "disabled" opcode with those properties.... e.g. multiplying also does it.

When behavior like this is fixed via limits great care must be taken to make sure the limits are implemented absolutely consistently everywhere or the result is a consensus splitting risk. Alt full node implementers have repeatedly implemented the limits wrong— even when they're obvious in the code, called out in the comments, documented on the wiki, etc... even by just simply not implementing them (... coverage analysis and testing against the blockchain can't tell you about limits that you're just missing completely).

Going back to the OP's question. I'm not seeing how OP_CAT (at least by itself) facilitates any of the high level examples there. Can you give me a specific protocol and set of scripts to show me how it would work?
legendary
Activity: 1232
Merit: 1094
September 01, 2014, 04:39:27 PM
#15
I can't get it. If we could overflow the output of OP_ADD, why couldn't we do the same for OP_CAT?

OP_CAT allows exponential memory usage.

OP_DUP means duplicate the top item on the stack.

Assume the stack contains a single entry A (length 32 bytes)

Code:
Stack: A  (length=32)

OP_DUP

Stack: A A

OP_CAT

Stack: AA (length=64)

OP_DUP

Stack: AA AA

OP_CAT

Stack: AAAA (length=128)

OP_DUP

Stack: AAAA AAAA

OP_CAT

Stack: AAAAAAAA (length=256)

Each OP_DUP, OP_CAT call doubles the size of the stack.  After 10 calls, 32 bytes would become 32kB.  After 10 more, it would be 32MB and 10 more would be 32GB and so on (32TB after 10 more). 

An easy fix would be to limit the memory size of the inputs into OP_CAT.
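A sketch of that fix: if OP_CAT refuses inputs larger than the existing 520-byte element limit, the DUP/CAT doubling above stalls almost immediately. (Note that the output of such a CAT could still reach 1040 bytes unless the result is checked as well, which is exactly the kind of boundary question raised elsewhere in the thread.)
Code:
# Sketch of limiting the inputs to OP_CAT to the existing 520-byte element size.
MAX_INPUT = 520

def op_cat_limited(a: bytes, b: bytes) -> bytes:
    if len(a) > MAX_INPUT or len(b) > MAX_INPUT:
        raise ValueError("script fails: OP_CAT input exceeds 520 bytes")
    return a + b

item = b"\x00" * 32
doublings = 0
try:
    while True:                          # the OP_DUP / OP_CAT loop from the post
        item = op_cat_limited(item, item)
        doublings += 1
except ValueError:
    print(doublings, len(item))          # 5 1024: growth stops instead of running away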
newbie
Activity: 12
Merit: 0
September 01, 2014, 11:57:41 AM
#14
I surely know what exponential growth is

I was responding to:

I can't get it. If we could overflow the output of OP_ADD, why couldn't we do the same for OP_CAT?

...pointing out the difference between the two overflows.

But the growth could be easily shut down by limiting the length of the output. For example, the script will stop and fail if the output of OP_CAT is bigger than 520 bytes (which is also the upper limit of OP_PUSHDATA2)

We established that already:

Having a maximum output length for the concat would, I guess, solve it.

It isn't available in testnet either.  It isn't just a question of "enabling it"— you have to prevent it from being a memory exhaustion attack via exponential growth. (this isn't theoretically hard but it would, you know, require a little bit of work).
(my emphasis)

There is also at least one alternative to concatenation, one being:
And how about substr? One could specify the already concatenated string(s) and, if needed, numerical values for the separators.  That would do the job too.

Can you think of any alternative that doesn't require changing the software? (Or any software that wouldn't need to be changed?)


legendary
Activity: 1792
Merit: 1122
September 01, 2014, 11:34:45 AM
#13
As for the exponential growth, I read somewhere something like:  Replacing two inputs from the stack with one output that is only as long as the two inputs together...
Use OP_DUP and OP_CAT in succession, and you will double the size of your (single) input.

To complete the lesson, for those who never liked homework: With a 201 cycle limit, OP_CAT lets you use approximately 534,773,760 YiB memory, vs 102510 bytes without it.

Quote
is unlikely to exhaust the memory.  And I agreed very much.
And maybe you will realize why all these altcoins worry me so?  Or perhaps you've got cheaper sources of ram than I do?


I can't get it. If we could overflow the output of OP_ADD, why couldn't we do the same for OP_CAT?

There is a big difference between overflowing a numerical value and overflowing memory.

Say you have an 8bit unsigned int, then (200+100) would overflow because the highest value is 255.  The result is then (200+100-256)=44, with a carry bit, or overflow.  In that context, it just means that you passed the maximum numerical value which resets to zero.

Concatenation of strings coupled with duplication would cause the memory (not a number) to overflow, and it could look like this:
abc; OP_DUP;
abc abc; OP_CAT; abcabc; OP_DUP;
abcabc abcabc; OP_CAT; abcabcabcabc; OP_DUP;
abcabcabcabc abcabcabcabc; OP_CAT; abcabcabcabcabcabcabcabc; OP_DUP;
abcabcabcabcabcabcabcabc abcabcabcabcabcabcabcabc; OP_CAT; abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc; OP_DUP;
...and after so long, that would cause a memory overflow, which would result in a CPU fart.

You may read www.redefiningthesacred.com/math4.html which illustrates the power of exponential growth.

I surely know what exponential growth is

But the growth could be easily shut down by limiting the length of the output. For example, the script will stop and fail if the output of OP_CAT is bigger than 520 bytes (which is also the upper limit of OP_PUSHDATA2)
newbie
Activity: 12
Merit: 0
September 01, 2014, 11:06:23 AM
#12
As for the exponential growth, I read somewhere something like:  Replacing two inputs from the stack with one output that is only as long as the two inputs together...
Use OP_DUP and OP_CAT in succession, and you will double the size of your (single) input.

To complete the lesson, for those who never liked homework: With a 201 cycle limit, OP_CAT lets you use approximately 534,773,760 YiB memory, vs 102510 bytes without it.

Quote
is unlikely to exhaust the memory.  And I agreed very much.
And maybe you will realize why all these altcoins worry me so?  Or perhaps you've got cheaper sources of ram than I do?


I can't get it. If we could overflow the output of OP_ADD, why couldn't we do the same for OP_CAT?

There is a big difference between overflowing a numerical value and overflowing memory.

Say you have an 8bit unsigned int, then (200+100) would overflow because the highest value is 255.  The result is then (200+100-256)=44, with a carry bit, or overflow.  In that context, it just means that you passed the maximum numerical value which resets to zero.

Concatenation of strings coupled with duplication would cause the memory (not a number) to overflow, and it could look like this:
abc; OP_DUP;
abc abc; OP_CAT; abcabc; OP_DUP;
abcabc abcabc; OP_CAT; abcabcabcabc; OP_DUP;
abcabcabcabc abcabcabcabc; OP_CAT; abcabcabcabcabcabcabcabc; OP_DUP;
abcabcabcabcabcabcabcabc abcabcabcabcabcabcabcabc; OP_CAT; abcabcabcabcabcabcabcabcabcabcabcabcabcabcabcabc; OP_DUP;
...and after so long, that would cause a memory overflow, which would result in a CPU fart.

You may read www.redefiningthesacred.com/math4.html which illustrates the power of exponential growth.
newbie
Activity: 12
Merit: 0
September 01, 2014, 10:41:28 AM
#11
To complete the lesson, for those who never liked homework: With a 201 cycle limit, OP_CAT lets you use approximately 534,773,760 YiB memory, vs 102510 bytes without it.

Quote
is unlikely to exhaust the memory.  And I agreed very much.
And maybe you will realize why all these altcoins worry me so?  Or perhaps you've got cheaper sources of ram than I do?

Touché. Actually, going for a new altcoin... worries me too, which is why I'm here trying to figure out whether something else out there (something that already exists) would allow the type of conditional transaction validation I am talking about.

Having a maximum output length for the concat would, I guess, solve it.

And how about substr? One could specify the already concatenated string(s) and, if needed, numerical values for the separators.  That would do the job too.

Care to describe your protocol some? it turns out that a lot of things are possible with a bit of transformation.

That's the point.  Maybe bitcoin already has something implemented that allows for what I am trying to do.  Not only that, but what I am trying to do might have already been implemented and I just wasn't successful at finding it.


I have no hope of having any more opcodes enabled on mainnet, and this thread is not about enabling them; it's about finding the tool(s) to achieve the goal.  Is there another network that offers, if not exactly what I think I need (op_cat or op_substr), then at least tools that would allow the type of verification I described?  I very much wish that the best answer is NOT an altcoin.
legendary
Activity: 1792
Merit: 1122
September 01, 2014, 09:57:27 AM
#10
As for the exponential growth, I read somewhere something like:  Replacing two inputs from the stack with one output that is only as long as the two inputs together...
Use OP_DUP and OP_CAT in succession, and you will double the size of your (single) input.

To complete the lesson, for those who never liked homework: With a 201 cycle limit, OP_CAT lets you use approximately 534,773,760 YiB memory, vs 102510 bytes without it.

Quote
is unlikely to exhaust the memory.  And I agreed very much.
And maybe you will realize why all these altcoins worry me so?  Or perhaps you've got cheaper sources of ram than I do?


I can't get it. If we could overflow the output of OP_ADD, why couldn't we do the same for OP_CAT?
legendary
Activity: 1260
Merit: 1019
September 01, 2014, 08:41:53 AM
#9
It is possible to set a limit on using OP_CAT. For example, two such operations per script.
I think the original poster can do it himself and play with testnet-in-a-box (or even a bitcoin fork!) with extended capabilities.

Later he could share his results with the bitcoin community, and everyone would get an answer as to whether OP_CAT should be included in mainnet.
staff
Activity: 4326
Merit: 8951
September 01, 2014, 07:56:22 AM
#8
As for the exponential growth, I read somewhere something like:  Replacing two inputs from the stack with one output that is only as long as the two inputs together...
Use OP_DUP and OP_CAT in succession, and you will double the size of your (single) input.

To complete the lesson, for those who never liked homework: With a 201 cycle limit, OP_CAT lets you use approximately 534,773,760 YiB memory, vs 102510 bytes without it.

Quote
is unlikely to exhaust the memory.  And I agreed very much.
And maybe you will realize why all these altcoins worry me so?  Or perhaps you've got cheaper sources of ram than I do?
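For anyone who wants to check those figures, the arithmetic works out if you assume a 510-byte initial push, roughly 100 OP_DUP/OP_CAT pairs within the 201 non-push opcode limit, and about 201 plain OP_DUPs in the no-CAT case; those assumptions are inferred from the numbers, not stated in the post.
Code:
# Worked check of the figures above, under the assumptions stated in the lead-in.
YiB = 2**80

with_cat = 510 * 2**100      # a 510-byte element doubled by 100 DUP/CAT pairs
print(with_cat // YiB)       # 534773760, i.e. ~534,773,760 YiB

without_cat = 201 * 510      # ~201 copies of the 510-byte element via DUP alone
print(without_cat)           # 102510 bytes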
full member
Activity: 179
Merit: 157
-
September 01, 2014, 06:44:48 AM
#7
I'm trying to look into NXT... and I have a hard time.

Don't waste your energy. They claim to have solved various extremely difficult problems without even acknowledging that they are problems, they are closed-source and they pay a lot of shills. There is not and never has been any evidence of technical innovation from them.

It isn't available in testnet either.  It isn't just a question of "enabling it"— you have to prevent it from being a memory exhaustion attack via exponential growth. (this isn't theoretically hard but it would, you know, require a little bit of work).

They are enabled already on Testnet so you can try your experiments there.

I think Peter misunderstood the question he was answering and meant "nonstandard transactions are standard on Testnet". Either that or he was just wrong Smiley.

Quote
As for the expodential groth, I read somewhere something like:  Replacing two inputs from the stack with one output that is only as long as the two inputs together... is unlikely to exhaust the memory.  And I agreed very much.

Use OP_DUP and OP_CAT in succession, and you will double the size of your (single) input.
newbie
Activity: 12
Merit: 0
August 31, 2014, 11:17:06 PM
#6
An altcoin make a technical change? Keep dreaming. Smiley  I am aware of none of them that have this.

Yay!  I like dreaming Smiley

I'm trying to look into NXT... and I have a hard time.

It isn't available in testnet either.  It isn't just a question of "enabling it"— you have to prevent it from being a memory exhaustion attack via exponential growth. (this isn't theoretically hard but it would, you know, require a little bit of work).

They are enabled already on Testnet so you can try your experiments there.

As for the exponential growth, I read somewhere something like:  Replacing two inputs from the stack with one output that is only as long as the two inputs together... is unlikely to exhaust the memory.  And I agreed very much.

Care to describe your protocol some? it turns out that a lot of things are possible with a bit of transformation.


There are several scenarios:
Casino, dice, gambling:
Take a casino (dice-type, for this example) where a secret is held by the house, while the hash of the secret is published in advance of the game.  The numbers "drawn" are determined by hashing the concatenation of the secret and [something else, like a sequence and/or a seed from the player, etc.], which can be verified by the player after the secret is published.

Now, let's say the house is waiting for its "lucky day" and decides that it's the day it cheats, willing to pay the cost of losing its reputation. The player may prove that the house cheated and may say "well, I can prove that I have been scammed", which doesn't get him his money back.

Now, let's say the house made a transaction that would pay the player on proof that hash(secret+seq)!=drawnhash; then the player is guaranteed not to be scammed... unless the house never reveals the secret.

Then the house may make another deposit, which requires the house to publish the secret to the blockchain before a deadline in order to get it refunded.  A transaction that pays the player, with a locktime, takes care of compensating the player if the secret is never released.

After the funds are deposited, the player may safely play with no possibility of being scammed.  If the house cheats or doesn't publish the secret, the player is compensated.
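The check the script would have to perform is just re-hashing a concatenation, which is where OP_CAT comes in. Here is a sketch of the same check done off-chain in Python; the hash function and field layout are placeholders, not a specification of any real casino protocol.
Code:
# Sketch of the provably-fair draw check described above; sha256 and the field
# layout are placeholders for whatever the house actually commits to.
import hashlib

def draw_hash(secret: bytes, seq: bytes) -> bytes:
    return hashlib.sha256(secret + seq).digest()   # the concatenation step

# Before the game, the house commits to its secret and publishes each draw hash:
secret = b"house secret nonce"
commitment = hashlib.sha256(secret).digest()
published_draw = draw_hash(secret, b"round-1|player-seed")

# After the secret is revealed, the player verifies the commitment and the draw.
# Proving the negative case (hash(secret+seq) != drawnhash) inside a script is
# what would need OP_CAT on-chain.
assert hashlib.sha256(secret).digest() == commitment
assert draw_hash(secret, b"round-1|player-seed") == published_draw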

Instant transaction:
A barely-trusted (i.e. untrusted) bank runs its service.

The account holder sends a deposit that is redeemable with:
(proof of bank's wrongdoing + accHolderSig) or
(proof of accHolder's wrongdoing + bankSig) or
(accHolderSig + bankSig)

The account holder signs a transaction, with a locktime that will allow the bank to redeem the funds in the future in case the account holder stops collaborating or disappears.

If the bank wants to send funds to the account holder, it may do so by creating a new transaction that requires the client to declare something like:
"My last transaction was [transaction+amount...] at [sometime in the past]
Today is [date/time]
I collected $y from BankX and acknowledge that [my current balance is now $z]
"

If we concatenate the parts and verify the signature, the bank could prove that the account holder is not being a gentleman if he signed contradicting declarations, such as "As of today, the last transaction was two days ago" while another message states that a transaction occurred yesterday.

The bank may take that withdrawal at any time, unless the account holder already claimed it legitimately.

The same applies the other way around.  The bank does not fulfill its obligation, the account holder remains unharmed.
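Here is a sketch of the kind of signed, concatenated declaration described above. HMAC with a shared key stands in for a real Bitcoin signature, and the statement format is invented for the example; the point is only that the signed message is a simple concatenation of parts that a script could rebuild and check, given OP_CAT.
Code:
# Sketch only: hmac stands in for a real signature, and the statement format is
# invented for the example.  The signed message is a plain concatenation of parts.
import hmac, hashlib

def sign(key: bytes, *parts: bytes) -> bytes:
    message = b"|".join(parts)
    return hmac.new(key, message, hashlib.sha256).digest()

holder_key = b"account holder signing key (stand-in)"

stmt1 = (b"last_txn=2014-08-30", b"today=2014-09-01", b"balance=5")
stmt2 = (b"last_txn=2014-08-29", b"today=2014-09-02", b"balance=9")

sig1 = sign(holder_key, *stmt1)
sig2 = sign(holder_key, *stmt2)

# Holding both signed statements, the bank can show the account holder signed
# contradicting declarations: the later statement claims no transaction since
# 2014-08-29 even though the earlier one already reported one on 2014-08-30.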

And more...
Here, much can be done instantly outside the blockchain, while remaining totally safe for all parties. It is quite interesting to be able to trust a protocol without having to worry about trusting the centralized bank itself. Even though this example is about a centralized bank, a protocol can be made so that banks have accounts with each other... and collaborate in a way that allows inter-bank transactions... and finally a decentralized instant payment network for cryptocurrencies. (Did I say I like dreaming?)

Not only would that allow decentralized instant transactions, but it would also help keep the blockchain smaller.

...and it seems to me that only OP_CAT is missing. And you know what?  I would argue that this actually IS a bit of transformation, even though it seems that getting it done in bitcoin is not at the door step.  But I am still dreaming Smiley

Yes, it also requires the actual engines to run this on top of the cryptocurrency... and now I am looking for one on which we can actually build this.
staff
Activity: 4326
Merit: 8951
August 31, 2014, 05:50:41 PM
#5
An altcoin make a technical change? Keep dreaming. Smiley  I am aware of none of them that have this.

It isn't available in testnet either.  It isn't just a question of "enabling it"— you have to prevent it from being a memory exhaustion attack via exponential growth. (this isn't theoretically hard but it would, you know, require a little bit of work).

Care to describe your protocol some? it turns out that a lot of things are possible with a bit of transformation.
newbie
Activity: 12
Merit: 0
August 31, 2014, 03:09:15 PM
#4
What are you talking about?
For the network to verify whether the transaction is valid, one needs to broadcast the data to the network, and that can't be done offline.

Or did I misunderstand you?

Seems very much that you misunderstood.

I don't want to verify transactions offline.  I want to be able to validate (or not) a transaction based on whether a string hashes to a predetermined value, but that string must first be concatenated, hence the need for OP_CAT, which is disabled in bitcoin even though I read it is enabled on testnet.  (e.g. hash(secret+phrase)=hardcodedstr)

Perhaps my question wasn't clear:
Among the multitude of cryptocurrencies that are out there, most of which are clones of bitcoin or litecoin, are there any that have OP_CAT enabled or that provide a way to validate transactions based on a script that is able to concatenate inputs?
hero member
Activity: 662
Merit: 500
August 31, 2014, 01:29:43 PM
#3
OP_CAT has been disabled for a very long time.

It would be good if you could please explain this OP_CAT thing. I think this concept is not related to Bitcoin!!!

You can find the details on the bitcoin wiki. https://en.bitcoin.it/wiki/Script
legendary
Activity: 1862
Merit: 1009
August 31, 2014, 01:22:34 PM
#2
What are you talking about?
For the network to verify whether the transaction is valid, one needs to broadcast the data to the network, and that can't be done offline.

Or did I misunderstand you?
newbie
Activity: 12
Merit: 0
August 31, 2014, 12:11:24 PM
#1
I want to be able to verify, with the use of secrets and signatures, that the depositor was honest throughout an offline process.  This seems to be achievable only if OP_CAT were enabled, and it unfortunately isn't in bitcoin.

Moreover, I have read that a non-standard transaction script may result in the transaction not being broadcast by many nodes, namely pool miners, even if the transaction is valid.

Among the countless cryptocurrencies that exist, most of which are clones, are there any that allow concatenation (perhaps even a substring function) in the transaction script?