Topic: Fixed length loops for scripts

legendary
Activity: 1232
Merit: 1094
May 02, 2013, 04:02:15 AM
#7
The savings from that approach seem negligible.

Actually running a loop 100 times is slower than just multiplying together all the maximum loop lengths in the script.  However, specifying a cost per operation and failing validation if the total goes over the limit would work too.
full member
Activity: 152
Merit: 100
May 01, 2013, 07:41:16 PM
#6
Sure, but under what circumstances would anyone care about the cost of a script which they wouldn't be running anyway?
It would allow a transaction to be dropped quickly without having to actually run the script.
The savings from that approach seem negligible. You still have to design the system to handle the worst case, which is a script just short enough to get past the filter. A malicious attacker would design the script to use the maximum amount of time without triggering the filter, so you pay the cost of running the script plus the cost of checking its complexity. The only case where you would save anything is where someone accidentally creates a script which requires too much time or memory, which should be rare (akin to creating malformed scripts which fail on valid input for other reasons).
legendary
Activity: 1232
Merit: 1094
May 01, 2013, 06:19:43 PM
#5
Sure, but under what circumstances would anyone care about the cost of a script which they wouldn't be running anyway?

It would allow a transaction to be dropped quickly without having to actually run the script.
full member
Activity: 152
Merit: 100
May 01, 2013, 03:31:56 PM
#4
However, that requires actually running the script.  With declared loop counts, you could just scan the script and look at the number before the OP_LOOP, which is easier.
Sure, but under what circumstances would anyone care about the cost of a script which they wouldn't be running anyway?
legendary
Activity: 1232
Merit: 1094
May 01, 2013, 02:29:36 PM
#3
A simpler approach might be to assign a CPU and memory cost to each operation, and fail the script if it goes over preset limits. The precise costs and limits would need to be defined at the protocol level to ensure compatibility. The script language could then be opened up to permit arbitrarily complex processing (loops, subroutines, recursion, etc.) without worrying about complexity.

However, that requires actually running the script.  With declared loop counts, you could just scan the script and look at the number before the OP_LOOP, which is easier.

They already have hard-coded values for all the operations.
full member
Activity: 152
Merit: 100
May 01, 2013, 01:20:23 PM
#2
A simpler approach might be to assign a CPU and memory cost to each operation, and fail the script if it goes over preset limits. The precise costs and limits would need to be defined at the protocol level to ensure compatibility. The script language could then be opened up to permit arbitrarily complex processing (loops, subroutines, recursion, etc.) without worrying about complexity.
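
A rough sketch of what that metering could look like (Python, purely illustrative; the opcode costs, the limit, and the run_with_budget helper are made-up, not protocol values):

Code:
# Illustrative per-operation cost metering; costs and limit are invented.
OP_COST = {"OP_HASH160": 10, "OP_CHECKSIGVERIFY": 50, "OP_PICK": 1}
MAX_COST = 20000  # hypothetical protocol-level limit

def run_with_budget(script, execute_op):
    spent = 0
    for op in script:
        spent += OP_COST.get(op, 1)   # unknown ops get a default cost of 1
        if spent > MAX_COST:
            raise RuntimeError("cost limit exceeded: script fails validation")
        execute_op(op)                # run the operation as normal
    return spent

# Usage: total cost 60 for one hash plus one signature check.
print(run_with_budget(["OP_HASH160", "OP_CHECKSIGVERIFY"], lambda op: None))

The key point is that the costs and limits would be consensus rules, so every node rejects exactly the same scripts.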
legendary
Activity: 1232
Merit: 1094
May 01, 2013, 08:57:01 AM
#1
One of the reasons not to allow loops is that they make it harder to work out how much processing a script requires.

If each loop had a clearly defined maximum number of iterations, this would not be an issue.

For example, you could define a loop as

[max loops] LOOP ..... [true/false] ENDLOOP

LOOP would pop [max loops] off the stack.

ENDLOOP would jump back to LOOP if the top of the stack contains true/1 and the maximum number of loops hasn't been reached.
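
A sketch of how an interpreter might handle this (Python pseudocode; the string opcode encoding and the execute helper are assumptions for illustration, not actual client code):

Code:
# Sketch of LOOP/ENDLOOP semantics; the opcode encoding is assumed.
def execute(script, stack):
    loops = []                 # entries: [position_of_LOOP, iterations_left]
    pc = 0
    while pc < len(script):
        op = script[pc]
        if op == "LOOP":
            loops.append([pc, stack.pop()])   # LOOP pops [max loops]
        elif op == "ENDLOOP":
            repeat = stack.pop()              # ENDLOOP pops the true/false flag
            if repeat and loops[-1][1] > 1:
                loops[-1][1] -= 1
                pc = loops[-1][0]             # jump back to the matching LOOP
            else:
                loops.pop()                   # flag false or max reached
        else:
            stack.append(op)                  # treat anything else as a push
        pc += 1
    return stack

# [3] LOOP [0] [1] ENDLOOP runs the body three times, leaving three 0s:
print(execute([3, "LOOP", 0, 1, "ENDLOOP"], []))   # -> [0, 0, 0]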

This would be a pay-to-multi-sig implemented with a loop:

[Hash(pub-key1)] [Hash(pub-key2)] [Hash(pub-key3)] [3] LOOP [3] OP_PICK OP_HASH160 OP_EQUALVERIFY [5] OP_PICK [3] OP_PICK OP_CHECKSIGVERIFY [1] ENDLOOP

All sigops between LOOP and ENDLOOP would have their weight multiplied by 3.  If nested loops are allowed, an inner loop's multiplier would be the product of the maxima of all the enclosing loops.
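
A sketch of the corresponding static scan (Python; static_weight and its script encoding are hypothetical). Because the maximum count is pushed immediately before LOOP, a scanner can read it without executing anything:

Code:
# Hypothetical static scan: weight each op by the enclosing loop maxima.
def static_weight(script, op_cost):
    multiplier = 1
    maxima = []                            # maxima of the enclosing loops
    total = 0
    for i, op in enumerate(script):
        if op == "LOOP":
            maxima.append(script[i - 1])   # the push just before LOOP
            multiplier *= maxima[-1]
        elif op == "ENDLOOP":
            multiplier //= maxima.pop()
        else:
            total += op_cost.get(op, 0) * multiplier
    return total

script = ["Hash1", "Hash2", "Hash3", 3, "LOOP", 3, "OP_PICK", "OP_HASH160",
          "OP_EQUALVERIFY", 5, "OP_PICK", 3, "OP_PICK", "OP_CHECKSIGVERIFY",
          1, "ENDLOOP"]
print(static_weight(script, {"OP_CHECKSIGVERIFY": 1}))   # -> 3 sigops

Nested loops fall out naturally: the multiplier is just the running product of all enclosing maxima.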