Topic: [Schnorr] Should batched verification result in reduced weight per sig?

legendary
Activity: 3430
Merit: 3080
Different users experience different pain points: some are CPU limited, some are bandwidth limited, some are power limited, some are storage limited. Many are a mixture of several of these. Because of this, no single weight formula can be optimal. What really matters is that it sets the incentives in the right general direction, in order to break ties in favour of the public interest.

Generally we can assume that in the long run most users are going to do whatever is most cost effective for them. If foobar signatures were a LOT better for the network, it would still be sufficient that they be only slightly better for the end user, even if making them much better would be justifiable under some cost model... even a little better will get them made a default. Some users will have different motivations and make different choices, but a small number of exceptions is mostly irrelevant to overall network health. This is important, because a perfect balance isn't possible. E.g. with weight, you could easily argue that an 8:1 ratio or a 16:1 ratio would have been better -- but a higher ratio means a LOT worse worst-case bandwidth, and so wouldn't be a good trade-off for those users who are bandwidth limited. The fact that only the "direction of incentive" needs to be right, not so much the magnitude, means it's possible to make compromises that give good results for everyone without screwing over some cost models.


So I read this too quickly the first time, and I think I now see your point: one ratio for sig weight doesn't apply to all possible users, given the differing constraints on their nodes. Signature aggregation improves the CPU constraint, but isn't the only consideration.
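
To make the ratio trade-off concrete, here is a minimal sketch (illustrative only, in Python) of the BIP141 weight formula, generalized to a hypothetical discount ratio r. Bitcoin uses r = 4 with a 4,000,000 weight-unit block cap; the function name and the generalization to other ratios are this sketch's own.

Code:
def weight(base_bytes: int, witness_bytes: int, r: int = 4) -> int:
    # BIP141: each non-witness byte costs r weight units, each witness byte costs 1.
    return r * base_bytes + witness_bytes

# A cap of r * 1,000,000 weight units keeps the classic 1 MB limit for
# witness-free blocks, but a block stuffed with nearly-all-witness data
# can approach r megabytes -- the worst case grows with the discount.
for r in (4, 8, 16):
    print(f"ratio {r}:1 -> cap {r * 1_000_000:,} WU, worst case ~{r} MB")
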
legendary
Activity: 3430
Merit: 3080
  • Schnorr permits signature aggregation, that treats the sum of >1 signature as a single valid signature for more than 1 transaction

Multiple concepts are getting confused here, so I can't tell exactly what you're talking about.

There is signature aggregation, which combines signatures from multiple inputs (but probably just one transaction) into one

s/transaction/input/


or efficient threshold signatures, which allow many signers to produce a single signature for a single input.

Both make signatures in transactions much smaller, so they don't justify any change in how weight is computed.

Multisig schnorr allows sig aggregation too; I forgot about that.
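
For a rough sense of the byte counts behind these distinctions, here is an illustrative sketch. It counts signature bytes only (no pubkeys or script), and the cross-input case is the hypothetical full aggregation discussed above, not something BIP-schnorr itself deploys.

Code:
# Approximate witness signature bytes for n_inputs, each guarded by an
# m-of-m multisig, under the schemes being distinguished above.
def witness_sig_bytes(n_inputs: int, m_signers: int, scheme: str) -> int:
    if scheme == "ecdsa":          # one ~72-byte DER sig per signer per input
        return 72 * n_inputs * m_signers
    if scheme == "threshold":      # signers collapse to one 64-byte sig per input
        return 64 * n_inputs
    if scheme == "cross_input":    # the whole transaction shares one 64-byte sig
        return 64
    raise ValueError(scheme)

# e.g. a spend of 4 inputs, each a 3-of-3 multisig:
print(witness_sig_bytes(4, 3, "ecdsa"))        # 864
print(witness_sig_bytes(4, 3, "threshold"))    # 256
print(witness_sig_bytes(4, 3, "cross_input"))  # 64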


  • Taproot will allow conditional branches in more spending scripts to be collapsed into a Merkle root hash for all branches, so only the condition that is met is ever recorded on the blockchain
This directly makes transactions much smaller, so again, no need to change how weight works.
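
A simplified sketch of that collapse: hash each spending condition to a leaf, combine into a Merkle root, and put only the root commitment on chain. This shows just the Merkle shape; BIP341's real construction uses tagged hashes and key tweaking, and the script strings here are made up.

Code:
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(script_leaves: list[bytes]) -> bytes:
    level = [h(s) for s in script_leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate an odd leftover node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Three alternative spending conditions; only the 32-byte root is
# committed on chain, and a spender later reveals just the branch they
# actually use plus its Merkle path.
branches = [b"<A> CHECKSIG", b"<B> CHECKSIG", b"<timeout> CLTV <C> CHECKSIG"]
commitment = merkle_root(branches)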

But batch verification works across an entire block of transactions, which would improve verification performance ~2x according to BIP-schnorr.
Yes, it makes the cold-cache catchup case spend half the time in signature validation. (Non-catchup doesn't do validation in the critical path, due to caching! The small batching you can do as txns come in doesn't get much speedup.)

Ahhhh, I didn't retain that from BIP-schnorr either: batched validation depends on a certain number of cached signatures to work. I assumed that individual blocks were the unit of resolution at which batching would happen, as the batching performance graph shows a 2-2.5x improvement at ~2500 transactions, which is roughly the average maximum number of transactions per block.
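
For intuition about what batching buys, here is a toy batch verifier for Schnorr over a multiplicative group (the scheme's original setting), with tiny insecure parameters; BIP-schnorr actually works over secp256k1, and all names here are illustrative. The structural point: n independent checks collapse into one combined check, which real implementations evaluate as a single large multi-exponentiation -- that is where the speedup comes from.

Code:
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1, g generates the subgroup of prime
# order q. (Far too small to be secure -- illustration only.)
p, q, g = 1019, 509, 4

def challenge(R: int, msg: bytes) -> int:
    h = hashlib.sha256(str(R).encode() + msg).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1       # secret key
    return x, pow(g, x, p)                 # public key y = g^x

def sign(x: int, msg: bytes):
    k = secrets.randbelow(q - 1) + 1       # nonce
    R = pow(g, k, p)
    s = (k + challenge(R, msg) * x) % q
    return R, s

def verify_one(y: int, msg: bytes, sig) -> bool:
    R, s = sig                             # check g^s == R * y^e
    return pow(g, s, p) == R * pow(y, challenge(R, msg), p) % p

def verify_batch(items) -> bool:
    # Random coefficients a_i stop an invalid signature from being
    # cancelled out by another; then all checks merge into one.
    lhs_exp, rhs = 0, 1
    for y, msg, (R, s) in items:
        a = secrets.randbelow(q - 1) + 1
        e = challenge(R, msg)
        lhs_exp = (lhs_exp + a * s) % q
        rhs = rhs * pow(R, a, p) % p * pow(y, a * e % q, p) % p
    return pow(g, lhs_exp, p) == rhs

batch = []
for m in (b"tx input 1", b"tx input 2", b"tx input 3"):
    x, y = keygen()
    batch.append((y, m, sign(x, m)))
assert all(verify_one(y, m, s) for y, m, s in batch)
assert verify_batch(batch)                 # one check instead of three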


The eventual speedup from batching (and the speedup we achieved from caching in the non-catchup case) was part of the justification for having witness data have lower weight to begin with.

With the exception of batching, the other advantages you cite already result in lower weight (in the cross-input case, much lower weight). So they're already rewarded naturally.

Sure, but my point is that although batching doesn't affect the total number of signatures to validate, it does incentivise the same thing that weight differentiation does (validation performance). Really, the other changes I cited don't alter the weight per sigop, simply the aggregate weight of the signatures themselves (of course this can make a huge difference to the total number of sigops per block)


Different users experience different pain points: some are CPU limited, some are bandwidth limited, some are power limited, some are storage limited. Many are a mixture of several of these. Because of this, no single weight formula can be optimal. What really matters is that it sets the incentives in the right general direction, in order to break ties in favour of the public interest.

Generally we can assume that in the long run most users are going to do whatever is most cost effective for them. If foobar signatures were a LOT better for the network, it would still be sufficient that they be only slightly better for the end user, even if making them much better would be justifiable under some cost model... even a little better will get them made a default. Some users will have different motivations and make different choices, but a small number of exceptions is mostly irrelevant to overall network health. This is important, because a perfect balance isn't possible. E.g. with weight, you could easily argue that an 8:1 ratio or a 16:1 ratio would have been better -- but a higher ratio means a LOT worse worst-case bandwidth, and so wouldn't be a good trade-off for those users who are bandwidth limited. The fact that only the "direction of incentive" needs to be right, not so much the magnitude, means it's possible to make compromises that give good results for everyone without screwing over some cost models.

That makes sense. I'm still interested in an argument for why improving (future) IBD by up to 2x shouldn't lessen the weight assigned per schnorr signature (regardless of how many signatures or script hashes are aggregated together). You're essentially saying that the witness discount was formulated to price any signature scheme, no matter its validation performance?
legendary
Activity: 1042
Merit: 2805
Bitcoin and C♯ Enthusiast
    BIP-schnorr defines a standardised 64-byte size, smaller than the typical ECDSA sig size (71-72 bytes)

    To be fair, that has nothing to do with Schnorr; the size is reduced by simply dropping the (in the case of bitcoin) useless DER encoding. You can already drop the extra 6 to 8 bytes from every single signature that has been created in the past 10 years, since they all tell you the same thing (a parsing sketch follows this list):
    - 1x DER sequence tag: 0x30 (we already know it is 2x 32-byte integers)
    - 3x DER length: telling us what we already know about the lengths (32 bytes)
    - 2x DER int tag: 0x02, which we already know is an integer (r and s)
    - possibly up to 2x 0x00 byte prepended to tell us these numbers are positive, which again we already know
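
As an illustration of how redundant that framing is, a minimal parser (names are this sketch's own) that strips a well-formed DER signature down to the raw 64-byte r||s encoding; it assumes any trailing sighash byte has already been removed.

Code:
def der_sig_to_raw64(der: bytes) -> bytes:
    # Walk the framing bytes listed above, keeping only r and s.
    assert der[0] == 0x30                   # DER sequence tag
    assert der[1] == len(der) - 2           # sequence length
    assert der[2] == 0x02                   # integer tag for r
    rlen = der[3]                           # length of r
    r = int.from_bytes(der[4:4 + rlen], "big")
    assert der[4 + rlen] == 0x02            # integer tag for s
    slen = der[5 + rlen]                    # length of s
    s = int.from_bytes(der[6 + rlen:6 + rlen + slen], "big")
    # Fixed 32-byte fields make all tags, lengths and sign-padding
    # zero bytes redundant.
    return r.to_bytes(32, "big") + s.to_bytes(32, "big")
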
    staff
    Activity: 4284
    Merit: 8808
    • BIP-schnorr defines a standardised 64kB size, smaller than the typical ECDSA sig size (71-72kB)
    NIT: 64 bytes instead of 72 bytes.

    Quote
    • Schnorr permits signature aggregation, that treats the sum of >1 signature as a single valid signature for more than 1 transaction

    Multiple concepts are getting confused here, so I can't tell exactly what you're talking about.

    There is signature aggregation, which combines signatures from multiple inputs (but probably just one transaction) into one, or efficient threshold signatures, which allow many signers to produce a single signature for a single input.

    Both make signatures in transactions much smaller, so they don't justify any change in how weight is computed.

    Quote
    • Taproot will allow conditional branches in more spending scripts to be collapsed into a Merkle root hash for all branches, so only the condition that is met is ever recorded on the blockchain
    This directly makes transactions much smaller, so again, no need to change how weight works.

    Quote
    But batch verification works across an entire block of transactions, which would improve verification performance ~2x according to BIP-schnorr.
    Yes, it makes the cold-cache catchup case spend half the time in signature validation. (Non-catchup doesn't do validation in the critical path, due to caching! The small batching you can do as txns come in doesn't get much speedup.)

    Quote
    My question is: to incentivise the gains for the network, should schnorr sigs be assigned a lower weight than ECDSA sigs? It seems to make sense, given how much validation performance can be realised.
    The eventual speedup from batching (and the speedup we achieved from caching in the non-catchup case) was part of the justification for having witness data have lower weight to begin with.

    With the exception of batching, the other advantages you cite already result in lower weight (in the cross-input case, much lower weight). So they're already rewarded naturally.

    Different users experience different pain points: some are CPU limited, some are bandwidth limited, some are power limited, some are storage limited. Many are a mixture of several of these. Because of this, no single weight formula can be optimal. What really matters is that it sets the incentives in the right general direction, in order to break ties in favour of the public interest.

    Generally we can assume that in the long run most users are going to do whatever is most cost effective for them. If foobar signatures were a LOT better for the network, it would still be sufficient that they be only slightly better for the end user, even if making them much better would be justifiable under some cost model... even a little better will get them made a default. Some users will have different motivations and make different choices, but a small number of exceptions is mostly irrelevant to overall network health. This is important, because a perfect balance isn't possible. E.g. with weight, you could easily argue that an 8:1 ratio or a 16:1 ratio would have been better -- but a higher ratio means a LOT worse worst-case bandwidth, and so wouldn't be a good trade-off for those users who are bandwidth limited. The fact that only the "direction of incentive" needs to be right, not so much the magnitude, means it's possible to make compromises that give good results for everyone without screwing over some cost models.
    legendary
    Activity: 3430
    Merit: 3080
    So the rationale for introducing transaction weight is to put a separate price on signature operations, to reflect the resources sigops use when running a fully validating node (i.e. a price component for block space and a price component for sigops when determining tx fee).

    Should this be reflected in the weight value assigned to transactions using schnorr sigs in the future?



    Using schnorr sigs already reduces the proportion of a tx taken up by signatures:

    • BIP-schnorr defines a standardised 64-byte size, smaller than the typical ECDSA sig size (71-72 bytes)
    • Schnorr permits signature aggregation, that treats the sum of >1 signature as a single valid signature for more than 1 tx input
    • Taproot will allow conditional branches in more spending scripts to be collapsed into a Merkle root hash for all branches, so only the condition that is met is ever recorded on the blockchain

    All the above reduce the space that signatures use on chain, and sig-agg can reduce the number of sigops used drastically for transactions with multiple inputs.

    But batch verification works across an entire block of transactions, which would improve verification performance ~2x according to BIP-schnorr. That would make far more difference to validation performance than any of the points above, as it functions whether using sig-agg/taproot or not (and the 64-byte size reduces space on chain, not sigops).


    My question is: to incentivise the gains for the network, should schnorr sigs be assigned a lower weight than ECDSA sigs? It seems to make sense, given how much validation performance can be realised.