
Topic: Ultimate blockchain compression w/ trust-free lite nodes - page 11. (Read 87937 times)

legendary
Activity: 1428
Merit: 1093
Core Armory Developer
My point was that for insertion/deletion, only the trie node being attached to/removed from requires an insertion/deletion in the leaves of its Merkle tree.  All of this node's parents simply have to update the value of one of the leaves in the Merkle trees, i.e. no insertions/deletions, and this requires only updating a single branch in each parent's Merkle tree.  The node being attached to/removed from does require an insertion/deletion in its Merkle tree, but this would usually happen lower down the trie, where nodes branch less, and Merkle tree rebalancing is faster.

Ahh, excellent point.  On that note, isn't it actually 15 hashes per full Merkle tree of 256 nodes?  Because you need not only the straight branch from root to leaf, but also the sibling of each of those nodes so you can confirm them.  Though it's still dramatically less than sending all 256, and it makes the bandwidth requirements of this structure much more pleasant.
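The inclusion proof being counted here can be sketched in a few lines (a sketch assuming SHA-256 and Bitcoin-style duplication of an odd trailing element; note that only the log2(256) = 8 sibling hashes plus the leaf strictly need to cross the wire, since the verifier can recompute the path hashes itself -- the 15-hash figure also counts those recomputable path nodes):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Root of a Merkle tree over a list of leaf hashes."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate odd trailing element
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hash at each level for the leaf at `index`."""
    proof, level = [], leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])           # sibling of the current node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    """Recompute the path from leaf to root using the supplied siblings."""
    acc = leaf
    for sibling in proof:
        acc = h(acc + sibling) if index % 2 == 0 else h(sibling + acc)
        index //= 2
    return acc == root

leaves = [h(bytes([i])) for i in range(256)]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 42)
assert len(proof) == 8                           # one sibling per level
assert verify(leaves[42], 42, proof, root)
```

Either way, the proof is logarithmic in the node's fan-out rather than linear, which is the point being made above.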


Edit: You should name this beautiful thing, cause "hybrid PATRICIA/de la Brandais trie with authenticating Merkle trees" doesn't exactly roll off the tongue.

How about the "Reiner Tree"?  It's funny you asked that, because the other day my dad was asking me if I have any part of Bitcoin named after me, yet...
sr. member
Activity: 461
Merit: 251
That's a very good idea, as I've been concerned about how to reduce the number of hashes that need to be transferred for a lite-node to get its balance.  I had spent some time looking for "associative" hashing functions.  If they existed, the transfers would be tiny:  if a node is known to have Hash(A|B|C|D|E|F|G|H|I|J), and you want to prove inclusion of G, you simply supply {Hash(A|B|C|D|E|F), G, Hash(H|I|J)}.  So you would need to transfer at most 3 hashes per level.  Unfortunately, the only associative hash function I found was based on matrix multiplications, and was most definitely not cryptographically secure.
Funny, I was looking for an associative hashing function as well.  I gave up after thinking they would seem to defeat the purpose of Merkle trees altogether, and thus likely don't exist (yet?).

Also, this paper, 'Implementation of an Authenticated Dictionary with Skip Lists and Commutative Hashing' seems to offer an alternative solution than trees/tries: http://www.cs.brown.edu/cgc/stms/papers/discex2001.pdf

That's an interesting paper, though I have trouble envisioning how they could be made deterministic for the purposes of authenticated structures.  Maybe I just need to read the paper a little closer.  Unfortunately, it's late so I will have to come back to this, later.
To make it deterministic, assuming the hash of each node's element is uniformly distributed, this hash value could be used as a "random" number for determining the tower height.  Though, this provides an avenue for attack by users purposely building lots of "tall towers".  To avoid this, each UTxO could also get randomness from the Merkle root of the tx tree that the UTxO showed up in, something a non-miner attacker has no control over.  It might also be hard enough for even a miner to perform the attack, since he has only one lever to control many degrees of freedom with.  This is just off the cuff, though, so I'm sure there are better ways to do it, or perhaps I'm not understanding it correctly.
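The off-the-cuff idea above -- deriving the "random" tower height from the element's hash salted with the Merkle root of the block that created the UTxO -- can be sketched like this (a sketch; the function name, the SHA-256 choice, and the 32-level cap are illustrative, not from the paper):

```python
import hashlib

def tower_height(utxo_id: bytes, block_merkle_root: bytes, max_height: int = 32) -> int:
    """Deterministic 'coin-flip' tower height for a skip-list node.

    Mixing in the Merkle root of the block that created the UTxO denies a
    non-miner attacker direct control over the outcome.
    """
    digest = hashlib.sha256(utxo_id + block_merkle_root).digest()
    bits = int.from_bytes(digest, "big")
    height = 1
    # Each set low bit counts as one won coin flip: P(height >= k) = 2^-(k-1).
    while bits & 1 and height < max_height:
        height += 1
        bits >>= 1
    return height

h1 = tower_height(b"txid:0", b"\x00" * 32)
assert h1 == tower_height(b"txid:0", b"\x00" * 32)   # same inputs, same height
assert 1 <= h1 <= 32
```

Every node derives the same height from the same inputs, so the skip-list shape (and hence its authenticated digest) is deterministic.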

I think the thing in this paper is patented, though.

Edit: You should name this beautiful thing, cause "hybrid PATRICIA/de la Brandais trie with authenticating Merkle trees" doesn't exactly roll off the tongue.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer

I was thinking that to reduce the number of hashes a lightweight client would need to download when proving inclusion, you could replace the concatenation of the children's values in the formula for the value of branch node B with their Merkle root.  Then a lightweight client would only need log2(256) = 8 hashes per node to prove inclusion in the worst case scenario, instead of 256.

The downside is having to store more hashes (twice as many, worst case), but it actually appears to make updates faster, I guess due to fewer hashes needing to be accessed from memory.  This is because all node values updated during insertion/deletion/leaf update, except the one being attached to or removed from, have only a single leaf in the Merkle tree updated => no unbalancing issues, and only a single Merkle branch needs to be updated.

That's a very good idea, as I've been concerned about how to reduce the number of hashes that need to be transferred for a lite-node to get its balance.  I had spent some time looking for "associative" hashing functions.  If they existed, the transfers would be tiny:  if a node is known to have Hash(A|B|C|D|E|F|G|H|I|J), and you want to prove inclusion of G, you simply supply {Hash(A|B|C|D|E|F), G, Hash(H|I|J)}.  So you would need to transfer at most 3 hashes per level.  Unfortunately, the only associative hash function I found was based on matrix multiplications, and was most definitely not cryptographically secure.

So, I welcome the idea of per-node merkle trees.  I wonder if the trees really need to be stored, though.  Hashing is so fast that recomputing the merkle tree on-demand may be fine, especially for the sparse, lower-level nodes.  Of course, the top couple levels could/should be cached since they'll get hit all the time and are pretty dense.

However, I'm not sure if I agree with/understand the point about updating only Merkle branches.  Because the linked lists at each node in the trie are sorted, the deletion/insertion of an element is likely to occur in the middle of the list and reverse the parity/pairings -- i.e. you started with {A,C, D,H, J,M, Q,Y} -- the bottom level pairs (A,C), (D,H), (J,M) and (Q,Y).  Now you insert E and the list becomes {A,C, D,E, H,J, M,Q, Y,Y}, which means that all branches to the right of the insertion or deletion need to be recalculated.
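The parity shift described above can be demonstrated in a few lines (a sketch; single characters stand in for sorted keys, and an odd trailing element is duplicated as in Bitcoin's Merkle trees):

```python
# Pairings at the bottom level of a Merkle tree over a sorted list,
# before and after inserting 'E'.
def pairs(items):
    items = list(items)
    if len(items) % 2:                 # duplicate odd trailing element
        items.append(items[-1])
    return [(items[i], items[i + 1]) for i in range(0, len(items), 2)]

before = pairs("ACDHJMQY")
after  = pairs("ACDEHJMQY")            # E inserted in sorted position
assert before == [("A","C"), ("D","H"), ("J","M"), ("Q","Y")]
assert after  == [("A","C"), ("D","E"), ("H","J"), ("M","Q"), ("Y","Y")]
# Every pair at or to the right of the insertion point changed, so every
# Merkle branch to the right of the insertion must be rehashed.
```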

On the other hand, you may be recomputing sparse nodes all the time anyway (meaning this recomputation will happen regardless), and the dense higher-level nodes could be batch-updated -- you know that a given block is going to modify 77 of your 256 branches at the top level, so you don't need to recompute the tree until all 77 children are complete.


Also, this paper, 'Implementation of an Authenticated Dictionary with Skip Lists and Commutative Hashing' seems to offer an alternative solution than trees/tries: http://www.cs.brown.edu/cgc/stms/papers/discex2001.pdf

That's an interesting paper, though I have trouble envisioning how they could be made deterministic for the purposes of authenticated structures.  Maybe I just need to read the paper a little closer.  Unfortunately, it's late so I will have to come back to this, later.
sr. member
Activity: 461
Merit: 251

The "value" of each leaf is just the root of its subtree, and the value of each branch is the skip-string concatenated with all of its children's values.





I was thinking that to reduce the number of hashes a lightweight client would need to download when proving inclusion, you could replace the concatenation of the children's values in the formula for the value of branch node B with their Merkle root.  Then a lightweight client would only need 8 + (8 - 1) = 15 hashes per node to prove inclusion in the worst case scenario, instead of 256.

The downside is having to store more hashes (twice as many, worst case), but it actually appears to make updates faster, I guess due to fewer hashes needing to be accessed from memory.  This is because all node values updated during insertion/deletion/leaf update, except the one being attached to or removed from, have only a single leaf in the Merkle tree updated => no unbalancing issues, and only a single Merkle branch needs to be updated.

Also, this paper, 'Implementation of an Authenticated Dictionary with Skip Lists and Commutative Hashing' seems to offer an alternative solution than trees/tries: http://www.cs.brown.edu/cgc/stms/papers/discex2001.pdf
legendary
Activity: 1596
Merit: 1100
I have a temporary solution.  Currently the satoshi client hard-codes a historical block hash (currently at about height 190000).  Could we also hard-code all the unspent outputs up to a certain block height into the satoshi client?

Yes.

Quote
For each output, only the transaction id, output order, script, and the block height are needed.

You just need a hash of the UTXO set at a given height, really.  Then later UTXO sets may be provably traced to the checkpoint hash.

legendary
Activity: 1792
Merit: 1111
I have a temporary solution.  Currently the satoshi client hard-codes a historical block hash (currently at about height 190000).  Could we also hard-code all the unspent outputs up to a certain block height into the satoshi client?  For each output, only the transaction id, output order, script, and the block height are needed.  That would be all the information needed for verifying any future transactions and spending the coins (block height is needed to calculate the priority; if these unspent outputs are deep enough in the chain, the block height may be omitted as well).

Since the users can verify the hard-coded data with the blockchain, they don't really need to trust the development team. If the hard-coded data is widely distributed and independently verified by many people, normal users may simply accept it as-is.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
Again, the claim "this is not possible with BSTs" about impossibility of parallelism in b-trees is false. I wonder what is going on here?

I should've stated that differently -- it's not impossible to parallelize general BSTs, etc, but since we are dependent on the underlying structure of the BST (which is not a normal requirement of BSTs in other applications), insert order must be strictly adhered to, and thus each insert operation must be completed in order before the next can be applied.   By definition, that is a serial process.  Along with that, sub-branches of the BST are not independent, so it's more complicated to even have the storage space split up, since rebalance operations may shift nodes from one thread's storage space to another.  It's not an impossible task, it's just an unnecessarily complex one.

And I have a tough time believing that no one in the future will ever benefit from sub-tree independence:  with the BST, lite nodes can't even maintain their own little sub-trees for their addresses, because the structure of that subtree could change due to unrelated inserts nearby.  Or, the subtree could induce a rebalance itself, and change the root of the subtree that has to be tracked which may include other nodes it doesn't care about.

With the trie structure, sorted first by address then by OutPoint, a lite node can maintain sub-trees of each of its own addresses, insert and delete elements in any order, and roll back its subtree on reorgs without having to care about anyone else's OutPoints.  And all using simple, standard insert and delete ops, found in any textbook on the subject.
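The order-independence claimed here is easy to illustrate (a sketch using a plain byte-trie of nested dicts rather than a compressed PATRICIA/de la Brandais structure, with children hashed in sorted key order; all names are mine):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def trie_insert(node, key, value):
    """Insert into a plain (non-compressed) byte-trie of nested dicts."""
    for byte in key:
        node = node.setdefault(byte, {})
    node[None] = value                   # terminal slot holds the value

def trie_root(node):
    """Hash a trie deterministically: children visited in sorted key order,
    so the digest depends only on contents, never on insertion history."""
    parts = []
    for k in sorted(node, key=lambda x: (x is None, x)):
        child = node[k]
        parts.append(bytes([k]) + trie_root(child) if k is not None else child)
    return h(b"".join(parts))

a, b = {}, {}
for key, val in [(b"cat", b"1"), (b"car", b"2"), (b"dog", b"3")]:
    trie_insert(a, key, val)
for key, val in [(b"dog", b"3"), (b"car", b"2"), (b"cat", b"1")]:
    trie_insert(b, key, val)
assert trie_root(a) == trie_root(b)      # root independent of insert order
```

A BST's root digest, by contrast, depends on the sequence of inserts and rebalances, which is exactly the dependence being objected to above.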

But I have to agree that the first implementer of this stuff is likely to set the standard.  I was hoping to persuade the folks who are working on it now that the trie structure is not only the simplest, but the most flexible and robust.  Apparently, I'm not succeeding.  I'd better get back to fixing Armory so that maybe one day soon I can work on this problem, too.
legendary
Activity: 1072
Merit: 1181
The 0.48 GB does include all data ever removed from the txout set in fact, as only a fraction of the block data is actual txouts - most is signatures which never make it to the txout set. However, you are right that the undo data does depend on the forward block data, not for the data itself but for the txids being spent. Compared to data being removed, this is "local" data: to disconnect a block N you need the block data for N, and the undo data for N (and especially not the block data of the transactions that were spent). If the txids were stored in the undo data as well (to make it fully standalone), it would be around 1.5 GB (yes, seriously, the majority of undo data is not the "values" of the UTXO map, but the "keys").

Still, I think we're moving away from the real discussion. We're not talking about today's storage requirements - if we were, nothing complex would be needed: the UTXO set is only 130 MB, and is completely scanned in seconds. I do foresee the UTXO set growing significantly over the next few years, and the problem is not how you store it: people should have the ability to store it in a small compact way, or a larger way, by choosing different priorities of cpu/bandwidth/storage.
vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)

I'm not sure what you mean. Sure, you only need O(log n) separate objects, while I need O(n) separate objects. However, in total you store way, way more data. Currently the "forward" block data is 3.75 GB, and the undo data all the way back to the genesis block is 0.48 GB.

The way I understand it, this 0.48 GB of undo data is useless without part of the 3.75 GB of block data to go with it.  I have assumed this because in order to roll back a block, you need to replace elements of the UTXO set that have been cut out during forward motion, which clearly don't all fit in 0.48 GB.  And in order to roll a block back, you need to download orphaned data from a peer, data that is now no longer part of the block chain, isn't required storage for anybody, which nobody on the main network has if it originated from an attacker or isolated network, and that can't be assumed to be available from any given peer.

Meanwhile, while my snapshots likewise depend on external data to properly roll things forward, it is data that at least a) some nodes on the main network are guaranteed to have, and b) worthwhile downloading, since it is actual block chain data that is relevant to the network and worth storing and propagating.

The UTXO set itself is 0.13 GB. So just 4 snapshots require more storage than all that is needed to store the entire history, without O(n) operations to make the copies.

That assumes all four snapshots are the same size.  How big was the block chain around block 0x10000?  Tiny compared to today is my guess, its size is certainly a fraction of 0.13 GB.  The earlier snapshots are likely to always be much smaller than the later ones.

That also assumes that they can't be stored on disk without any sort of differential coding.  Surely that won't help much for storing the snapshots for block 0x8000 and block 0x10000, but the snapshots between blocks in close succession (especially when no rebalance has been done) are easily written as "here's a snapshot" and "here's a snapshot-diff" (and "snapshot-diff" can easily just be defined as the block itself, something the client is probably already saving).
legendary
Activity: 1072
Merit: 1181
My main thought when weighing this idea versus my "snapshots" idea, is that this idea is asking for a storage burden of little objects that grows O(n) with the block chain and only supports a reorg as long as the record goes, and the one I suggest is of bigger objects that grows only O(log n) and supports a reorg all the way back to the genesis block, without requiring development, support, testing, or specifying any new functionality or requirements for storing and transferring orphan blocks among peers.

I'm not sure what you mean. Sure, you only need O(log n) separate objects, while I need O(n) separate objects. However, in total you store way, way more data. Currently the "forward" block data is 3.75 GB, and the undo data all the way back to the genesis block is 0.48 GB. The UTXO set itself is 0.13 GB. So just 4 snapshots require more storage than all that is needed to store the entire history, without O(n) operations to make the copies. Sorry, but full snapshots are out of the question in my opinion. Both explicit reverse-deltas (like I use) and multiple trees with subtree-sharing have significantly better complexity. And yes, sure it may be somewhat harder to develop, but if Bitcoin is still around when the UTXO set is several gigabytes large, we'll be very thankful that we don't need to make fast copies of it continuously.
vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
As casascius proposed, it only keeps the currently-active UTXO state, and no effort is done to keep older trees available by sharing subtrees. Keeping full snapshots of older trees is a bad idea in my opinion (and re-rolling the entire history even more so), but on-disk files with data to 'undo' the applications of blocks to the UTXO set are an easy solution. If you see blocks as patches to the UTXO structure, these undo files are the reverse patches. They are only necessary for reorganisations, and as that happens far less frequently than accessing the currently active tree state, they don't need super-fast access anyway (simple rule in optimizing for cache effects: if two data sets have different access patterns, don't mix them).

My main thought when weighing this idea versus my "snapshots" idea, is that this idea is asking for a storage burden of little objects that grows O(n) with the block chain and only supports a reorg as long as the record goes, and the one I suggest is of bigger objects that grows only O(log n) and supports a reorg all the way back to the genesis block, without requiring development, support, testing, or specifying any new functionality or requirements for storing and transferring orphan blocks among peers.

EDIT: another concern is that any rollback scheme that depends on the ability to fetch orphan blocks from peers is not only at risk of failing if those peers do not have those orphan blocks, but is also an avenue for attack.  If a node ends up bootstrapping from an isolated segment of the network that has a large number of orphan blocks (or is forced into this situation by an attacker), never seeing any full blocks or tree snapshots from the real network... and then suddenly connects to the "real" network, the real network will have no way to provide the orphan blocks needed to roll his tree out of fantasy land and back to earth.  He must start over from some point in the past, he has no choice.
legendary
Activity: 1072
Merit: 1181
Not sure how up-to-date you guys are with development of the reference client, but in the 0.8 validation engine ("ultraprune"), a first step towards ideas like those proposed in this thread was taken. It may be interesting in this discussion.

We now do keep an explicit set of unspent transaction outputs, but 1) indexed by txid (as that is necessary for block validation) instead of by address, and 2) without explicit tree structure or authentication. Still, some of it is relevant.

As casascius proposed, it only keeps the currently-active UTXO state, and no effort is done to keep older trees available by sharing subtrees. Keeping full snapshots of older trees is a bad idea in my opinion (and re-rolling the entire history even more so), but on-disk files with data to 'undo' the applications of blocks to the UTXO set are an easy solution. If you see blocks as patches to the UTXO structure, these undo files are the reverse patches. They are only necessary for reorganisations, and as that happens far less frequently than accessing the currently active tree state, they don't need super-fast access anyway (simple rule in optimizing for cache effects: if two data sets have different access patterns, don't mix them).
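The "blocks as patches, undo files as reverse patches" view can be sketched as follows (a hypothetical in-memory model; the real ultraprune undo files are per-block files on disk, and outpoints/txouts are of course not strings):

```python
def connect_block(utxo, block, undo_log):
    """Apply a block (a forward patch) to the UTXO set, recording a
    reverse patch sufficient to undo it later."""
    spent, created = block                       # spent: outpoints; created: {outpoint: txout}
    undo = {"spent": {}, "created": list(created)}
    for outpoint in spent:
        undo["spent"][outpoint] = utxo.pop(outpoint)   # remember removed entries
    utxo.update(created)
    undo_log.append(undo)

def disconnect_block(utxo, undo_log):
    """Roll back the most recent block using its undo record (reverse patch)."""
    undo = undo_log.pop()
    for outpoint in undo["created"]:
        del utxo[outpoint]                       # forget outputs the block created
    utxo.update(undo["spent"])                   # restore outputs the block spent

utxo = {("tx0", 0): "50 BTC -> A"}
undo_log = []
connect_block(utxo, ([("tx0", 0)], {("tx1", 0): "50 BTC -> B"}), undo_log)
assert ("tx0", 0) not in utxo and utxo[("tx1", 0)] == "50 BTC -> B"
disconnect_block(utxo, undo_log)
assert utxo == {("tx0", 0): "50 BTC -> A"}
```

Note the asymmetry mentioned above: the reverse patch must carry the full spent entries (they are gone from the set otherwise), while for created outputs only the keys are needed.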

If the block headers commit to (the hash of) these undo files as well as the merkle root of the current UTXO state, clients could ask their peers for backward movement data, which is enough to release a client stuck in a side chain without transferring the entire UTXO state. I see no reason for not doing this.

So, progressing to the full idea, the first step is adding an authentication/Merkle structure on top of the UTXO state. All that is needed is a hash in the coinbase that commits to the current state and to the undo-data hash for moving back to the previous block. In the case of not-fully-deterministic data structures (like balanced trees, as opposed to tries/patricia trees), the undo data perhaps needs to be extended with structural undo data, to have enough information to roll back to the exact same previous structure.

The last step is extending this to an address-to-txid index, or something equivalent. I don't think this will be hard, if you already have everything above.

PS: giving miners the choice to either rebalance or not is a bad idea, imho, as it leads to increased variation (and worse: variation under control of miners) in block validation. Especially since balancing after one addition is typically O(log n), and a full rebalancing of the entire tree is O(n).
vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
The problem here is that the full rebalancing operation requires everyone to run the rebalancing algorithm to even verify it was done correctly. This means it has to be optimized so that even weaker systems are able to do it. Otherwise, there's no point in including the simpler algorithm. However, if you do optimize it that way, then the point of having the simpler algorithm vanishes completely and the whole design ends up simpler by just having the full rebalance algorithm.


I can't imagine that the rebalancing algorithm is going to be costlier in CPU time than validating sets of ECDSA signatures on incoming blocks as is already required.

The most expensive operation in rebuilding the tree is SHA256 hashing.  We're doing quadrillions of these hashes network-wide every 10 minutes.  What's a few million more per node, once every 10 minutes?

I can see wanting to avoid making every miner node rebalance the tree in response to every incoming transaction (i.e. every roll of Satoshi Dice) - this would motivate miners to mine empty blocks, and is the reason for having the simpler option.  But a possibility of doing a rebalance, capped at a maximum of once per block, sounds plenty realistic.
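The hashing-cost claim above can be checked with back-of-envelope arithmetic (the UTXO count and single-core SHA-256 rate below are illustrative assumptions, not measurements):

```python
# Rough cost of fully rehashing an authenticated binary tree over the UTXO set.
n_utxos = 10_000_000                       # assumed number of unspent outputs
leaf_hashes = n_utxos
internal_hashes = n_utxos - 1              # a full binary tree has N-1 internal nodes
total_hashes = leaf_hashes + internal_hashes
hashes_per_second = 20_000_000             # assumed single-core SHA-256 throughput
rebuild_seconds = total_hashes / hashes_per_second
assert total_hashes == 19_999_999          # ~20M hashes: on the order of a second
```

So even a full rebuild once per block is on the order of seconds of hashing, which is small next to the ECDSA validation a node already performs.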
legendary
Activity: 2128
Merit: 1073
Otherwise, this discussion is about two different mechanisms to achieve the same end result.  My core argument is that the trie-based solution is much less complex overall, easier to implement and get right, and has numerous other benefits -- such as dead-simple rollbacks, and the whole thing is parallelizable (different threads/CPUs/servers can maintain different sub-branches of a patricia tree, and their results can easily be accumulated at the end -- this is not possible with BSTs).  If you want to contribute to this discussion, then please do.
Again, the claim "this is not possible with BSTs" about impossibility of parallelism in b-trees is false. I wonder what is going on here?
member
Activity: 97
Merit: 10
I will clarify.  For every block, given the set of transactions contained in that block, there are 2 potential hash values that are acceptable as the root hash.  One of them represents the tree with the transactions applied to it.  This case is checked first, because it's the least expensive for a client to do so.  The second one represents the tree after it has been completely rebalanced.  A client should have no problem determining which way it went simply by trying the first case, and then the second.

The problem here is that the full rebalancing operation requires everyone to run the rebalancing algorithm to even verify it was done correctly. This means it has to be optimized so that even weaker systems are able to do it. Otherwise, there's no point in including the simpler algorithm. However, if you do optimize it that way, then the point of having the simpler algorithm vanishes completely and the whole design ends up simpler by just having the full rebalance algorithm.
vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
This sort of a developer is dangerous to have developing Bitcoin's internal structures. I think it's better they stay away.
Which sort of developer?  The one who revels in complexity, as though complexity breeds integrity?  This guy is surely already busy on his first implementation of Bitcoin, in assembly language.  He'll be done by 2017, assuming the architecture he's developing for is still popular enough that people will be able to run it.

Or do you mean the one who walks away?  And this benefits bitcoin because the fewer clients, the better?

No, the developer you described clearly has no patience to test his code so that it works properly. We're better off without such developers.


The best piece of code is the one that implements a specification whose complexity and edge cases require no testing because they don't exist.  We're better off with such specifications.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
This sort of a developer is dangerous to have developing Bitcoin's internal structures. I think it's better they stay away.
Which sort of developer?  The one who revels in complexity, as though complexity breeds integrity?  This guy is surely already busy on his first implementation of Bitcoin, in assembly language.  He'll be done by 2017, assuming the architecture he's developing for is still popular enough that people will be able to run it.

Or do you mean the one who walks away?  And this benefits bitcoin because the fewer clients, the better?

No, the developer you described clearly has no patience to test his code so that it works properly. We're better off without such developers.


I'm not sure what you guys are talking about.  If you disagree with the meta-chain idea, then say so.

Otherwise, this discussion is about two different mechanisms to achieve the same end result.  My core argument is that the trie-based solution is much less complex overall, easier to implement and get right, and has numerous other benefits -- such as dead-simple rollbacks, and the whole thing is parallelizable (different threads/CPUs/servers can maintain different sub-branches of a patricia tree, and their results can easily be accumulated at the end -- this is not possible with BSTs).  If you want to contribute to this discussion, then please do.



vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
And as for the comment about "simulating numerous cases of rollbacks" -- This case is dramatically simpler with a patricia tree structure -- you just add and remove elements from the tree using standard insert & delete operations.  It doesn't get much simpler than that (besides maybe keeping around the last few blocks worth of deleted TxOuts, which is probably a few kB).  On the other hand, you may be talking about gigabytes of data to store "backups" or "snapshots" of the BST, just in order to accommodate the possibility of a rollback.  And how many copies do you need to store?  You can keep the last state of the tree, but what if there's a 2-block reorg?  Well, now you need two copies.  To handle arbitrary-sized rollbacks, you could really be thrashing your hard-drive, and in such a way that everything has changed while you were attempting to swap gigabytes of data around.

I don't believe the snapshots of the tree would be gigabytes.  I mentioned previously a simple scheme where each snapshot of the tree has a lifetime of 2^n blocks, where n is the number of binary zeroes the block height ends with.  So if the current block height is 0x12345, then you can expect to be storing the trees for 0x12344, 0x12340, 0x12300, 0x12200, 0x12000, 0x10000, and 0x0.  So you have a way to restore and roll forward any rollback simply by requesting blocks from peers as is already supported.

To address the specific case of "what about a 2-block reorg", at 0x12345, you'd use the backup at 0x12340 and move forward.

EDIT: Using 2^(n+1) might be better so that (for example) upon reaching block 0x20000, a two-block reorg does not require a fetch all the way back from 0x10000.

So, at 0x12345, using 2^(n+1) you would have: 0x12344, 0x12342, 0x12340, 0x12338, 0x12300, 0x12280, 0x12200, 0x12100, 0x12000, 0x11000, 0x10000, 0x8000, and 0x0.

At 0x20000 instead of just 0x10000 and 0x0, you would have: 0x1ffff, 0x1fffe, 0x1fffc, 0x1fff8, 0x1fff0, 0x1ffe0, 0x1ffc0, 0x1ff80, 0x1ff00, 0x1fe00, 0x1fc00, 0x1f800, 0x1f000, 0x1e000, 0x1c000, 0x18000, 0x10000, and 0x0.  This is sort of the scheme I had in mind when I originally scribbled down 2^n, but actually calculating it out made me realize I'd be storing far less than I was shooting for.
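The retention schedule under the original 2^n rule can be checked mechanically (a sketch; function names are mine): a snapshot taken at height h survives while h + 2^tz(h) exceeds the current height, where tz(h) is the number of trailing binary zeros of h, and the genesis snapshot is kept forever.

```python
def trailing_zeros(h: int) -> int:
    """Number of trailing binary zeros of h (h > 0)."""
    return (h & -h).bit_length() - 1

def retained_snapshots(height: int):
    """Snapshot heights still alive at `height` under the 2^n-lifetime rule:
    a snapshot at h lives for 2^(trailing binary zeros of h) blocks; the
    genesis snapshot (h = 0) is kept forever."""
    alive = [0]
    for h in range(1, height):
        if h + (1 << trailing_zeros(h)) > height:
            alive.append(h)
    return alive

# Reproduces the list given above for a current height of 0x12345.
assert retained_snapshots(0x12345) == [
    0x0, 0x10000, 0x12000, 0x12200, 0x12300, 0x12340, 0x12344
]
```

The 2^(n+1) variant doubles each lifetime, which yields the longer lists above at the cost of roughly twice as many live snapshots.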
member
Activity: 97
Merit: 10
This sort of a developer is dangerous to have developing Bitcoin's internal structures. I think it's better they stay away.
Which sort of developer?  The one who revels in complexity, as though complexity breeds integrity?  This guy is surely already busy on his first implementation of Bitcoin, in assembly language.  He'll be done by 2017, assuming the architecture he's developing for is still popular enough that people will be able to run it.

Or do you mean the one who walks away?  And this benefits bitcoin because the fewer clients, the better?

No, the developer you described clearly has no patience to test his code so that it works properly. We're better off without such developers.
vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
Either way, the developer has to get into the implementation details of the data structure.  They have to understand it.  And really, neither structure is particularly complicated.  Perhaps some devs are more familiar with BSTs.  But to say that a miner "has the option" to rebalance -- that doesn't make sense.  Any rebalancing operation on a BST will change the root hash.  It must be determined from the start exactly when and how rebalance ops will happen.  Or else everyone gets different answers.

I will clarify.  For every block, given the set of transactions contained in that block, there are 2 potential hash values that are acceptable as the root hash.  One of them represents the tree with the transactions applied to it.  This case is checked first, because it's the least expensive for a client to do so.  The second one represents the tree after it has been completely rebalanced.  A client should have no problem determining which way it went simply by trying the first case, and then the second.

And as for the comment about "simulating numerous cases of rollbacks" -- This case is dramatically simpler with a patricia tree structure -- you just add and remove elements from the tree using standard insert & delete operations.  It doesn't get much simpler than that (besides maybe keeping around the last few blocks worth of deleted TxOuts, which is probably a few kB).  On the other hand, you may be talking about gigabytes of data to store "backups" or "snapshots" of the BST, just in order to accommodate the possibility of a rollback.  And how many copies do you need to store?  You can keep the last state of the tree, but what if there's a 2-block reorg?  Well, now you need two copies.  To handle arbitrary-sized rollbacks, you could really be thrashing your hard-drive, and in such a way that everything has changed while you were attempting to swap gigabytes of data around.

How do you envision rolling the tree back in the case where you have just determined that all of the blocks you have on hand are now invalid, and getting the now-correct state of the Patricia meta tree requires you to ask peers for orphaned blocks you don't have?  Must future Bitcoin implementations be required to keep orphan blocks on hand and serve them to peers to support the ability of others to roll their tree backwards?

In perspective, it's not the idea of Patricia or any other kind of tree I am having a problem with, it's the added complexity of supporting this sort of roll back that goes well beyond understanding a new kind of tree.