- BIP-schnorr defines a standardised 64kB size, smaller than the typical ECDSA sig size (71-72kB)
NIT: the units are bytes, not kB (64 bytes vs. 71-72 bytes).
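For a rough sense of what that difference buys under the existing rules, here is a back-of-the-envelope sketch in Python. The constants are typical figures rather than exact for every transaction, and the arithmetic is just standard segwit accounting (witness bytes count one weight unit each, four weight units per vbyte):

    # Typical signature sizes; DER-encoded ECDSA varies, BIP-schnorr is fixed.
    ECDSA_SIG = 72    # bytes (typically 71-72 in practice)
    SCHNORR_SIG = 64  # bytes (always)

    # Witness bytes cost 1 weight unit (WU) each; 4 WU == 1 vbyte for fees.
    saving_wu = ECDSA_SIG - SCHNORR_SIG
    print(f"per-signature saving: {saving_wu} WU = {saving_wu // 4} vbytes")
    # -> per-signature saving: 8 WU = 2 vbytes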
- Schnorr permits signature aggregation, which treats the sum of more than one signature as a single valid signature for more than one transaction
Multiple concepts are getting confused here, so I can't tell exactly what you're talking about.
There is signature aggregation, which combines signatures from multiple inputs (but probably just one transaction) into one, and there are efficient threshold signatures, which allow many signers to produce a single signature for a single input.
Both make the signatures in transactions much smaller, so they don't justify any change in how weight is computed.
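To make the distinction concrete, here is a toy sketch of the second concept: several signers jointly producing one signature for one input. It uses a tiny multiplicative group instead of secp256k1, and the naive key aggregation shown is insecure against rogue-key attacks (real protocols such as MuSig harden that step); every parameter here is illustrative only:

    import hashlib, random

    # Toy Schnorr group (NOT secure): q = (p-1)/2 is prime and g
    # generates the order-q subgroup mod p.
    p, q = 10007, 5003
    g = pow(5, (p - 1) // q, p)

    def H(*parts) -> int:
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def verify(X, msg, R, s) -> bool:
        # Standard Schnorr check: g^s == R * X^e (mod p)
        return pow(g, s, p) == (R * pow(X, H(R, X, msg), p)) % p

    # Two signers; naive aggregate key X = X1 * X2.
    x1, x2 = random.randrange(1, q), random.randrange(1, q)
    X = (pow(g, x1, p) * pow(g, x2, p)) % p

    msg = "spend input 0"
    k1, k2 = random.randrange(1, q), random.randrange(1, q)
    R = (pow(g, k1, p) * pow(g, k2, p)) % p  # combined nonce
    e = H(R, X, msg)                         # shared challenge
    s = (k1 + e * x1 + k2 + e * x2) % q      # partial s-values simply add

    assert verify(X, msg, R, s)  # one signature, one input, two signers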
- Taproot will allow the conditional branches of a spending script to be collapsed into a Merkle root hash covering all branches, so only the condition that is actually met is ever recorded on the blockchain
This directly makes transactions much smaller, so again, no need to change how weight works.
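A minimal sketch of that idea, using plain SHA-256 pair hashing (real taproot uses tagged hashes, lexicographic ordering, and key tweaking, and the scripts below are placeholders):

    import hashlib

    def h(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    # Hypothetical spending conditions; only one will ever hit the chain.
    branches = [b"<2-of-3 multisig script>",
                b"<timeout refund script>",
                b"<emergency key script>",
                b"<padding branch>"]
    leaves = [h(b) for b in branches]

    # Collapse every branch into one Merkle root.
    n0, n1 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
    root = h(n0 + n1)

    # Spending via branch 1 reveals only that script plus two sibling
    # hashes; the other three scripts never appear on the blockchain.
    node = h(branches[1])
    node = h(leaves[0] + node)  # left sibling
    node = h(node + n1)         # right sibling
    assert node == root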
Yes, it roughly halves the time the cold-cache catchup case spends in signature validation. (The non-catchup case doesn't do validation in the critical path, due to caching; the small batches you can form as transactions come in don't get much speedup.)
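For reference, the batching trick is that many Schnorr verifications collapse into one combined equation. Here is a sketch in the same toy group as the aggregation example above; the random weights a_i prevent one invalid signature from cancelling another. Note that the real ~2x speedup comes from evaluating the combined equation with fast multi-exponentiation on an elliptic curve, which this toy group doesn't exhibit; it only demonstrates the equation:

    import hashlib, random

    # Same toy Schnorr group as above (illustration only).
    p, q = 10007, 5003
    g = pow(5, (p - 1) // q, p)

    def H(*parts) -> int:
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def sign(x, msg):
        k = random.randrange(1, q)
        R = pow(g, k, p)
        return R, (k + H(R, pow(g, x, p), msg) * x) % q

    # A pile of signatures, as during cold-cache catchup.
    sigs = []
    for i in range(100):
        x = random.randrange(1, q)
        X, msg = pow(g, x, p), f"tx {i}"
        sigs.append((X, msg) + sign(x, msg))

    # One combined check: g^(sum a_i*s_i) == prod R_i^a_i * X_i^(a_i*e_i)
    lhs_exp, rhs = 0, 1
    for X, msg, R, s in sigs:
        a = random.randrange(1, q)  # random weight per signature
        e = H(R, X, msg)
        lhs_exp = (lhs_exp + a * s) % q
        rhs = (rhs * pow(R, a, p) * pow(X, a * e, p)) % p
    assert pow(g, lhs_exp, p) == rhs  # all 100 signatures checked at once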
My question is: to incentivise these gains for the network, should Schnorr sigs be assigned a lower weight than ECDSA sigs? It seems to make sense, given how much validation performance stands to be gained.
The eventual speedup from batching (and the speedup we achieved from caching in the non-catchup case) was part of the justification for giving witness data lower weight to begin with.
With the exception of batching, the other advantages you cite already result in lower weight (much lower, in the cross-input case). So they're naturally already rewarded.
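For a sense of scale in the cross-input case: if cross-input aggregation were deployed (it is not part of BIP-schnorr itself), one signature would cover every input, so the witness saving grows with the input count. A rough sketch:

    # Hypothetical cross-input aggregation: one 64-byte signature for the
    # whole transaction instead of one per input; witness bytes are 1 WU.
    SIG = 64
    for n_inputs in (2, 5, 10):
        separate = n_inputs * SIG  # one signature per input
        aggregated = SIG           # one signature for the transaction
        print(f"{n_inputs:2} inputs: {separate:3} WU -> {aggregated} WU")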
Different users experience different pain points: some are CPU limited, some are bandwidth limited, some are power limited, some are storage limited, and many are a mixture of several of these. Because of this, no single weight formula can be optimal. What really matters is that it sets the incentives in the right general direction, so that ties are broken in favour of the public interest.
Generally we can assume that in the long run most users will do whatever is most cost effective for them. If foobar signatures were a LOT better for the network, it would still be sufficient for them to be only slightly better for the end user, even if some cost model would justify making them much better: even a little better will get them made a default. Some users will have different motivations and make different choices, but a small number of exceptions is mostly irrelevant to overall network health.

This is important, because a perfect balance isn't possible. E.g. with weight, you could easily argue that an 8:1 ratio or a 16:1 ratio would have been better, but a higher ratio means a LOT worse worst-case bandwidth, and so wouldn't be a good trade-off for users who are bandwidth limited. The fact that only the direction of the incentive needs to be right, not so much the magnitude, means it's possible to make compromises that give good results for everyone without screwing over some cost models.
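The worst-case-bandwidth point is easy to see numerically. Holding the "typical" block at roughly 1 MB of base-equivalent data, the discount ratio directly sets the size of a block stuffed entirely with witness data. A sketch with round numbers:

    # Weight limit = ratio * 1,000,000; base bytes cost `ratio` WU each,
    # witness bytes cost 1 WU each, so an all-witness block is 1 byte/WU.
    for ratio in (4, 8, 16):
        weight_limit = ratio * 1_000_000
        worst_case_bytes = weight_limit
        print(f"{ratio:2}:1 ratio -> worst-case block "
              f"~{worst_case_bytes / 1e6:.0f} MB")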