Great ebook by Andreas!
Yep, the UTXO set is the essence of the blockchain. However, IBLT is solely concerned with zero-conf tx which want to get into the blockchain.
I assume you mean high volumes of unconfirmed txs. If so, how do we make the transition from the existing standard low-volume method to IBLT?
Yes, and a proof of principle is already proposed: implementing it on top of the block relay service, which transmits lists of new-block tx hashes to subscribing nodes.
The reason is that right now the whole network could agree on the same 2000 unconfirmed tx, real-world business, but the next block mined can contain none of them. It could be full of previously unknown gambling dice-bot spam tx, a set which is 100% different. Because these tx validly spend UTXO, the block is accepted, and the 2000 unconfirmed tx have to wait for the next block.
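To make the relay-service idea concrete, here is a minimal Python sketch of the reconciliation step an IBLT would compress: a subscribing node receives the list of tx hashes in a new block and splits it into txs already sitting in its mempool and txs it still has to fetch (such as those private spam txs). All names here are hypothetical illustrations, not the actual relay-service protocol or API.

```python
# Hypothetical sketch: reconciling a new block's tx-hash list against
# the local mempool. Not an actual protocol or API.

def reconcile(block_tx_hashes, mempool):
    """Split a block's tx hashes into those we already hold and those
    we must request from peers before we can verify the block."""
    have = [h for h in block_tx_hashes if h in mempool]
    need = [h for h in block_tx_hashes if h not in mempool]
    return have, need

# Example: a mempool keyed by tx hash.
mempool = {"aa11": b"tx-bytes-1", "bb22": b"tx-bytes-2"}
have, need = reconcile(["aa11", "cc33"], mempool)
print(have)  # ['aa11'] -- already known, nothing to transfer
print(need)  # ['cc33'] -- e.g. a private, never-broadcast tx
```

An IBLT replaces that explicit hash list with a fixed-size structure encoding roughly the same difference, so the message stays the same size no matter how many txs the two sides already share.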
How is this possible? I thought the differences between unconf tx sets were supposed to be quite low at present, which is the reason for IBLT in the first place? (Sounds like you're saying we, in fact, don't have enough volume to make IBLT practical as of today.)
...
So how do miners know which unconf txs are known vs unknown, i.e., which to include in the IBLT to enhance block acceptance?
The differences between unconf tx sets are indeed quite low. Unknown txs are those which have not been broadcast to the network, and are known only to the miner, who might have got them directly from a spammer source (for a fee). I was only giving an example of how, under the existing paradigm, a new block can consist of secret or private txs not previously broadcast. So, although there is good consensus on unconf txs, that consensus can be ignored. Volumes are currently not high enough to make IBLT a noticeable improvement over what we have now.
How do you arrive at 500KB or 1MB? Or are you just using the current 1MB block limitation, into which you would fit the IBLT? So are you saying that a 1MB IBLT equals 1500 diffs, or an estimated 1% difference in unconf tx sets across network nodes, or an equivalent 150,000-tx block?
I was just using the current limitation, and also noting that 1MB blocks are occasionally happening already. The success of any node decoding an IBLT is probabilistic: the smaller an IBLT is, the higher the probability of decode failure. So it makes sense to start at a largish, workable size which can support a decent number of differences, such as 1500. The final block size, written to disk, could be smaller than 1MB, and might normally stay smaller for a while, but disk blocks still grow with ecosystem volume. It might take many years to hit 150,000 tx per block, which would* max out the 1MB IBLT.
*likely, but not necessarily
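For anyone curious why decoding is probabilistic, below is a toy Python sketch of an IBLT over tx hashes. Every parameter here (3 hash functions, 120 cells, 32-byte keys) is my own illustrative choice, not a figure from any actual proposal. Each key is XORed into a few cells; subtracting two nodes' tables cancels every shared tx exactly, leaving only the differences, which are recovered by repeatedly "peeling" cells that hold a single item. When the differences outnumber what the table can hold, peeling stalls and the decode fails, which is why starting at a roomy size makes sense.

```python
import hashlib

K = 3        # hash functions per item (my choice for this sketch)
CELLS = 120  # cell count; too few cells makes decode failure likely

def cells_for(key: bytes):
    """The K cell indices a key maps to (toy derivation)."""
    return [int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:4],
                           "big") % CELLS for i in range(K)]

def check(key: bytes) -> int:
    """Checksum used to recognise a 'pure' (single-item) cell."""
    return int.from_bytes(hashlib.sha256(b"chk" + key).digest()[:4], "big")

class IBLT:
    def __init__(self):
        self.count = [0] * CELLS
        self.keysum = [0] * CELLS  # XOR of 32-byte keys, held as ints
        self.chksum = [0] * CELLS  # XOR of per-key checksums

    def _toggle(self, key: bytes, d: int):
        k = int.from_bytes(key, "big")
        for p in cells_for(key):
            self.count[p] += d
            self.keysum[p] ^= k
            self.chksum[p] ^= check(key)

    def insert(self, key: bytes):
        self._toggle(key, +1)

    def subtract(self, other):
        # Cell-wise difference: shared items cancel out exactly.
        for p in range(CELLS):
            self.count[p] -= other.count[p]
            self.keysum[p] ^= other.keysum[p]
            self.chksum[p] ^= other.chksum[p]

    def decode(self):
        """Peel pure cells; returns (mine_only, theirs_only, success)."""
        mine, theirs = [], []
        progress = True
        while progress:
            progress = False
            for p in range(CELLS):
                if self.count[p] in (1, -1):
                    key = self.keysum[p].to_bytes(32, "big")
                    if check(key) == self.chksum[p]:  # genuinely pure
                        (mine if self.count[p] == 1 else theirs).append(key)
                        self._toggle(key, -self.count[p])
                        progress = True
        ok = not any(self.count) and not any(self.keysum)
        return mine, theirs, ok

# Demo: two mempools share 1000 txs; each also holds one private tx.
a, b = IBLT(), IBLT()
for i in range(1000):
    h = hashlib.sha256(i.to_bytes(2, "big")).digest()
    a.insert(h)
    b.insert(h)
a.insert(hashlib.sha256(b"only-A").digest())
b.insert(hashlib.sha256(b"only-B").digest())

a.subtract(b)                  # the 1000 shared txs cancel exactly
mine, theirs, ok = a.decode()  # recovers just the two differences
print(ok, len(mine), len(theirs))  # True 1 1 (with high probability)
```

Note how 1000 shared txs cancel completely even though the table has only 120 cells: the table must be sized for the differences, not for the block. Published analyses of IBLTs put the required table at very roughly 1.3 to 2 cells per difference, which is presumably where an estimate like "1MB supports ~1500 diffs" comes from once each cell carries actual transaction data rather than a bare 32-byte hash.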