Finally cleans up issues with TX signature malleability, makes fraud proofs viable for real SPV security, and incidentally frees up some capacity breathing room for the near term (2-3MB per block), so who could argue against it? Unless there is some truly objectionable security risk discovered it should be soft-forked in ASAP. There are a few niggles about the 'cleanest' way to do that, but hopefully that won't turn into too much slide-rule swinging.
One issue is that if the "effective max block size" with SW is 4 MB, then the maximum bandwidth that a full node will have to deal with is the same as if we had a hardfork to 4 MB blocks. With the current way that the network functions and is laid out, this might be too much bandwidth. Maybe this could be somewhat addressed with IBLT, weak blocks, and other tech, but that stuff doesn't exist yet.
I think that there's basically agreement that 2 MB would be safe, though.
So reduce the actual block limit to 500KByte? (effective max 2 MB).
4 MB effective is probably a tad too large for current bandwidth tech, but I'm skeptical how often it would be hit in the near term. It is a worst case assuming 1MB of TX data plus the maximum amount of associated signature data (lots of multi-sig, etc.) in a single block; of course, what effect such a nasty block would have on the rest of the system still needs to be tested for security implications.
no need to guess or estimate. actual data are available.
take a look at Pieter's tweet:
https://twitter.com/pwuille/status/673710939678445571?s=09
About 1.75x for normal txs, more for others e.g. multisig. Take into account that normal transactions represent more than 80% of the total.
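For intuition on where a per-type multiplier like that comes from, here is a rough Python sketch; the byte splits below are my own illustrative assumptions, not Pieter's data (the ~1.75x figure in the tweet reflects the real usage mix):

# Rough back-of-the-envelope sketch: how a per-type capacity multiplier
# falls out of the witness discount. Byte counts are illustrative
# assumptions, not measured values.

def capacity_multiplier(base_bytes, witness_bytes):
    # Old accounting: the full tx size counts against the 1MB limit.
    # SegWit accounting: base + witness/4 counts against the same limit.
    total = base_bytes + witness_bytes
    discounted = base_bytes + witness_bytes / 4.0
    return total / discounted

# hypothetical 1-input/2-output P2PKH tx: ~226 bytes, ~107 of them signature data
print(capacity_multiplier(119, 107))   # ~1.55x
# hypothetical multisig spend with a larger witness share
print(capacity_multiplier(150, 300))   # ~2.0x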
SW = quadruple the cap to get double the throughput
Is this formula correct?
I don't think so. It seems to me that to fully validate a block you still have
to download txs + witness. Maybe you could do some fancy thing parallelizing
download streams, but you still have to download all the data. Maybe I am missing
something obvious, though.
IMHO SegWit will lower full nodes' storage requirements because you could prune
the witness part once you have validated the block (the exact timing of witness pruning
will depend on how the feature is implemented). So yes, it will somewhat alleviate
the burden on full node operators, but only along one dimension, leaving bandwidth untouched.
I still don't have a clear idea of how SegWit will impact CPU and RAM usage.
That said, @pwuille's formula just gives you an idea of how much room we gain
in the txs part of the block as a result of moving the witness into a separate data structure.
AFAIU the size of a block under SegWit will be ~ base_size (where you store the txs)
plus witness_size.
Nonetheless witness_size depends on the transaction type, hence the actual block size
depends on the kind of txs that get included.
just to recap:
@pwuille's formula:
size = base_size * 4 + witness_size < 4MB
@aj (Anthony Towns) on the bitcoin-dev mailing list suggests that a more correct formulation is a combination of 2 constraints:
(base_size + witness_size/4 <= 1MB) and (base_size < 1MB)
Quoting the relevant part of @aj's email, which hopefully will give you an idea:
So if you have a 500B transaction and move 250B into the
witness, you're still using up 250B+250B/4 of the 1MB limit, rather than
just 250B of the 1MB limit.
In particular, if you use as many p2pkh transactions as possible, you'd
have 800kB of base data plus 800kB of witness data, and for a block
filled with 2-of-2 multisig p2sh transactions, you'd hit the limit at
670kB of base data and 1.33MB of witness data.
That would be 1.6MB and 2MB of total actual data if you hit the limits
with real transactions, so it's more like a 1.8x increase for real
transactions afaics, even with substantial use of multisig addresses.
The 4MB consensus limit could only be hit by having a single trivial
transaction using as little base data as possible, then a single huge
4MB witness. So people trying to abuse the system have 4x the blocksize
for 1 block's worth of fees, while people using it as intended only get
1.6x or 2x the blocksize... That seems kinda backwards.
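To make the arithmetic in the quote concrete, here is a small Python sketch that reproduces those numbers from the (base_size + witness_size/4 <= 1MB) constraint; the witness-to-base ratios used are my own rough assumptions about the tx types, not figures from aj's email:

# Given an assumed witness-to-base ratio for a transaction type, find the
# largest block satisfying base_size + witness_size/4 <= 1MB, and see what
# the total size comes to. Ratios are rough assumptions about how much of
# each tx type is witness data.

LIMIT_KB = 1000.0  # the 1MB "virtual" limit, in kB

def max_block(witness_per_base):
    # Largest base/witness/total (in kB) for txs whose witness data is
    # witness_per_base times their base size.
    base = LIMIT_KB / (1 + witness_per_base / 4.0)
    witness = base * witness_per_base
    return base, witness, base + witness

for label, ratio in [("p2pkh-ish (witness ~= base)", 1.0),
                     ("2-of-2 multisig (witness ~= 2x base)", 2.0),
                     ("abuse case (almost all witness)", 4000.0)]:
    base, wit, total = max_block(ratio)
    print("%-38s base=%6.0fkB  witness=%6.0fkB  total=%.2fMB"
          % (label, base, wit, total / 1000.0))

This reproduces the 1.6MB, 2MB, and ~4MB totals from the quote, and shows why the 4MB worst case only appears for pathological witness-heavy blocks rather than for ordinary transaction mixes.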