Topic: Ultimate blockchain compression w/ trust-free lite nodes

legendary
Activity: 1232
Merit: 1094
Are transactions with 0 in 0 out allowed under the spec?

They sure are! The only thing a tx needs is one or more txins and one or more txouts. Both scriptSigs and scriptPubKeys are allowed to be empty, and the value can be zero. (although you can't spend an empty scriptPubKey with an empty scriptSig; something needs to push a true value to the stack)

So, either use a previous "open" one of these transactions, or create a new one.  Are transactions handled sequentially in a block?

Can you spend an output of transaction 1 in transaction 7?  If so, then the miner could add a zero-value output to the coinbase transaction.

Also, if the number of normal transactions was a power of 2, then the proof gets much less complex.

Assuming 4 "real" transactions and a 5th one, M:

Under the Merkle rules, you expand the number of transactions to a power of 2.

A B C D M M M M

A - D are normal

M contains the merge-mined signature.

To prove M is in the block, you just need to provide Merkle-Hash(A, B, C, D) and M.

The entire RHS of the tree is obtained by hashing M once for each level in the tree with the merkle rule.  This can be performed pretty fast.
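For example (a sketch of mine, assuming Bitcoin-style double SHA-256): with leaves A B C D M M M M, the whole right-hand subtree depends only on M, so the root needs just one hash of M per level plus the final combine.

Code:
import java.security.MessageDigest;

class PaddedMerkle {
    // hashABCD = Merkle-Hash(A, B, C, D); levels = height of the M-only subtree
    // (levels = 2 for the A B C D M M M M example)
    static byte[] root(byte[] hashABCD, byte[] m, int levels) throws Exception {
        byte[] right = m;
        for (int i = 0; i < levels; i++)
            right = hash(concat(right, right)); // H(M,M), then H(H(M,M),H(M,M)), ...
        return hash(concat(hashABCD, right));   // combine with the left subtree root
    }

    static byte[] hash(byte[] d) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        return sha.digest(sha.digest(d)); // Bitcoin-style double SHA-256
    }

    static byte[] concat(byte[] a, byte[] b) {
        byte[] out = new byte[a.length + b.length];
        System.arraycopy(a, 0, out, 0, a.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }
}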

I think the special transaction should have a magic number.  This would make it slightly larger, but if you added a 32-byte random "official" magic number, then it would be very unlikely to occur accidentally.
legendary
Activity: 1232
Merit: 1094
If it looks like the sparse, lower-level nodes have 256 pointers, you're looking at the wrong picture (or I didn't explain it well enough).  The Brandais part is where the list of pointers is compressed to a linked list and thus there are only as many pointers as there are nodes (well, a little more, for the linked-list forward-pointers).

I think we are discussing different trees.

I was effectively suggesting an improvement on the Brandais modification: use a hashmap instead of a linked list.

Basically, have a power-of-2 array of pointers.

Then you can see if there is a match with

Code:
Child c = arr[keyByte & (arr.length - 1)];

if (c != null && c.getKeyByte() == keyByte) {
    // child matches
}

If not, then it is a hash collision and you need to double the size of the array and re-hash.

The array could be reduced in size on removal, say when it drops below 25%.
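A rough sketch of that doubling rule (my code; Child and getKeyByte() as in the snippet above; shrinking on removal is omitted):

Code:
interface Child { byte getKeyByte(); } // assumed node type from the snippet above

class ChildMap {
    private Child[] slots = new Child[2];

    void insert(Child child) {
        int i = child.getKeyByte() & (slots.length - 1);
        Child existing = slots[i];
        if (existing != null && existing.getKeyByte() != child.getKeyByte()) {
            grow();        // two distinct key bytes collided: double and re-hash
            insert(child); // retry in the larger table
            return;
        }
        slots[i] = child;  // empty slot, or the same key byte being replaced
    }

    private void grow() {
        Child[] old = slots;
        slots = new Child[old.length * 2];
        for (Child c : old)
            if (c != null) insert(c); // may trigger further growth; terminates
    }                                 // by length 256, where every byte is distinct
}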

Quote
I don't think a hashmap works for this.  I guess it depends on the kind of hashmap you're talking about -- but if it's a "simple" one where there are lots of collisions, you end up with some non-determinism based on insert order, and removing elements is complicated.  But maybe I don't totally understand what you're proposing.

The only non-determinism is that some insertions will be instant (no collision) and some will require re-hashing.  This could slow things down.

If that is an issue, you could spread the re-hashing out over many insertions and deletions.  So, when you insert a key, it might do 2 re-hash steps per level, but the work is bounded.

Also, removal would need a rule for when to shrink the array.

Maybe increase when a collision occurs and decrease when it drops below 25% full.  However, that could oscillate.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
If it looks like the sparse, lower-level nodes have 256 pointers, you're looking at the wrong picture (or I didn't explain it well enough).  The Brandais part is where the list of pointers is compressed to a linked list and thus there are only as many pointers as there are nodes (well, a little more, for the linked-list forward-pointers).  This does increase search time, since you have to forward-traverse the linked list at each node, but the lists will be ordered, which means lookups can be further optimized and should be fast (in fact, they can easily be stored as simple lists, replacing the list on every update, since it's probably just as fast to do that with disk-based DB accesses).  The important part is that if a parent has 5 children, those 5 children are in lexicographical order, and only their 5 "fingerprints" will be used in the computation of the parent's "fingerprint."

I don't think a hashmap works for this.  I guess it depends on the kind of hashmap you're talking about -- but if it's a "simple" one where there are lots of collisions, you end up with some non-determinism based on insert order, and removing elements is complicated.  But maybe I don't totally understand what you're proposing.
legendary
Activity: 1232
Merit: 1094
I was thinking about proving that the alt chain is correct for the lightweight clients.

There could be an additional "falsification" link where you show that a link was not valid.

A falsification node basically jumps the chain back to before that node.

A -> B -> C -> D -> False(C)

Chain becomes

A -> B -> C -> D -> False(C) -> C* -> D*

The false node wouldn't require any difficulty.  Since C and D both met the difficulty requirements, this doesn't actually cause spam.  Also, the false node should ideally propagate as fast as possible and not be mined in.

The false node could be of the format

Hash of last node
Hash of false node
Hash of proof

The proof could be an "attachment" but isn't used by C* when forming the "parent node" hash.  That way light clients can download the headers only.

Light clients would just make sure that the alt chain forms a chain.  False links don't have to be merge-mined, since they can be directly proven.  The link would incorporate all the data required to verify the falsification.

Light nodes could do random checking as they download the chain.  If each checker checks 0.1% of the updates, then with 1000 users, most historical errors would be detected.  Nodes could also check 1% of all new updates.

Also, light nodes could check new links at random for the same reason.

The "expanded" data for the node would include all txs added and removed for that node.  A light node could then check that this causes the update as claimed.

I think the tree should include the block in which the TXO was created as part of its "hash".

The "signature" of an UTXO would be {hash(tx),var_int(block number}, var_int(transaction number), var_int(output number)}.  This would add 5 extra bytes to the 32 byte hash.

This allows the light node to query the main bitcoin chain to check the transaction inputs.
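A sketch of that serialization (mine), using Bitcoin-style var_ints; the exact overhead depends on the magnitudes, but small indices cost one byte each, keeping it near the 5-byte estimate for typical values:

Code:
import java.io.ByteArrayOutputStream;

class UtxoSignature {
    static byte[] serialize(byte[] txHash, long block, long txIndex, long outIndex) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(txHash, 0, 32);       // the 32-byte transaction hash
        writeVarInt(out, block);        // block number
        writeVarInt(out, txIndex);      // transaction number within the block
        writeVarInt(out, outIndex);     // output number within the transaction
        return out.toByteArray();
    }

    static void writeVarInt(ByteArrayOutputStream out, long v) {
        if (v < 0xfd) {                 // 1 byte
            out.write((int) v);
        } else if (v <= 0xffff) {       // 0xfd + uint16, little-endian
            out.write(0xfd);
            out.write((int) v);
            out.write((int) (v >>> 8));
        } else {                        // 0xfe + uint32; the 8-byte form is omitted
            out.write(0xfe);
            for (int i = 0; i < 4; i++)
                out.write((int) (v >>> (8 * i)));
        }
    }
}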

I wonder if an all-light-node network would be possible under this system.  Each node would only have to store some of the bitcoin blockchain data, as long as the total storage capacity was large enough to store it (with a large margin).

Random checking would pick up any double spend attempts very quickly.
legendary
Activity: 1232
Merit: 1094
As for the comment about node overhead:  I couldn't follow your math (it looks like you were mixing 256-way trie nodes with binary trie nodes),

I was just looking at the diagram :).  It shows that each node has a full-sized array.  With a binary tree, that is just 2 pointers, so almost no cost.

For a 256-way tree, this means that the 2nd-to-lowest nodes (which would be sparse) would be 256-element arrays with only a small number of children.  So, the cost per child is quite high.

With the way I suggested it, you get the benefits of a 256-way tree, but still have small nodes.  That is the difference between 256-pointer lookups and 32-pointer lookups.

The size of a binary tree is approximately double the number of children, which isn't that much larger than a 256-way one.

Under a HashMap scheme, the top-level nodes would have more than 128 children, so they would all end up as flat arrays.

Quote
I need to think a little harder about the benefits of using a binary-PATRICIA tree... it pretty much removes the necessity for the Brandais aspect of it (compressing node children into linked lists),

As I said, hashmaps should be used instead of the linked lists.

Quote
Unfortunately, I have too many super-high priority things on Armory in the next month or two

Yeah, days need more than 24 hours.

I think the easiest approach is to define the merkle tree in the way that is easiest to visualize and leave implementation details until later.  Of course, that assumes the merkle structure doesn't cause implementation problems.

If you define the merkle hash of a node with only one child as equal to that child's hash, then you can just define the tree as a full-depth tree and leave pruning to the implementation.

Hash(leaf-node) = leaf's key

Hash(parent with one child) = Hash(child)

Hash(parent with 2 children) = Merkle(Hash(child1), Hash(child2))
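A sketch of those three rules (my code; Node is a hypothetical interface). Because a one-child parent hashes to its child's hash, a pruned implementation and the conceptual full-depth tree produce the same root:

Code:
import java.security.MessageDigest;

interface Node {
    boolean isLeaf();
    int childCount();  // 1 or 2 for interior nodes
    byte[] key();      // the leaf's key (the tx hash)
    Node onlyChild();
    Node left();
    Node right();
}

class TreeHash {
    static byte[] nodeHash(Node n) throws Exception {
        if (n.isLeaf())
            return n.key();                     // Hash(leaf-node) = leaf's key
        if (n.childCount() == 1)
            return nodeHash(n.onlyChild());     // Hash(1-child parent) = Hash(child)
        return hash(concat(nodeHash(n.left()),  // Hash(2-child parent)
                           nodeHash(n.right()))); //   = Merkle(Hash(c1), Hash(c2))
    }

    static byte[] hash(byte[] d) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        return sha.digest(sha.digest(d)); // assuming Bitcoin-style double SHA-256
    }

    static byte[] concat(byte[] a, byte[] b) {
        byte[] out = new byte[a.length + b.length];
        System.arraycopy(a, 0, out, 0, a.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }
}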
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
The benefit of using a PATRICIA/Brandais hybrid is that it's a "standard" data structure.  And it's one that seems to fit this use case pretty perfectly.  I just followed your pseudocode and I think you are basically proposing the same thing as the PATRICIA tree, but using implicit skip strings instead of explicit ones.  It can be reasonably argued that the skip strings are not necessary to authenticate the structure, but I think you are introducing extra complexity into the algorithms for inserting and deleting nodes, in exchange for simplifying authentication.  It's very difficult to describe without a well-thought-out example, but you can probably see it yourself if you try to work out the worst-case logic complexity of an insert or delete operation.  It involves uncompressing, inserting/removing nodes, recompressing on both sides, and then rearranging a ton of pointers.  If the skip strings are not attached to the nodes, you have to recompute them from the children, which involves a lot of string compares and is probably a bit slower.  I had to program the insert function for a PATRICIA tree as a homework assignment in college once.  It was a mess... and kinda fun :)

As for the comment about node overhead:  I couldn't follow your math (it looks like you were mixing 256-way trie nodes with binary trie nodes), but I think something you're overlooking is the linked lists for the pointers at each node (assuming some power-of-2 branching factor more than 2^1).  This means that nodes with only one child only store one pointer.  It is wasteful at the top levels, where you have linked-list overhead for super-dense nodes, but the majority of data is near the leaves, where the optimization saves you a ton of space.  Plus, because the top nodes are constantly changing, you can optimize them in code pretty easily to be more pleasant for both query time and space consumption.

I need to think a little harder about the benefits of using a binary-PATRICIA tree... it pretty much removes the necessity for the Brandais aspect of it (compressing node children into linked lists), and certainly adds a bit of compute efficiency at the expense of taking more space (I think the binary trie will be faster at updating the authentication data, but will have more intermediate/branch nodes to be stored).

Unfortunately, I have too many super-high priority things on Armory in the next month or two, so there's no way I can get around to playing with this yet.  However, I may eventually be doing some kind of Electrum-style version of Armory, in which case I will split Armory both ways -- into an Armory supernode and an Armory litenode.  The supernode will do something like we're discussing here, and the lite nodes will depend on having access to a trusted supernode.  It sounds like a good time to prototype this idea...
legendary
Activity: 1232
Merit: 1094
Not much time to respond now, but I wanted to point out that the PATRICIA tree concept has no balancing issues -- if it looks like it does, it's strictly an illusion.

Right.  Also, since the keys are crypt hashes, it is even hard to push the tree much deeper than the expected depth, even if you want to.

Quote
Re: skip strings:  there's no way you can use a basic trie for this -- the space overhead of all the intermediate (and unnecessary) nodes would overwhelm the data that is being stored.

The skip strings are overly complex, since everything is "random".

I was thinking of having a "leaf" type node.  The skip system is just overly complex.

A parent node has pointers to its children.

A leaf node has the 256-bit hash.

If you have a 256-pointer array for every node, then you have too many of them.

The bottom level of parent nodes will likely have a small number of children.  If you implement it as a 256-pointer array, then for each leaf you are adding roughly 128 unneeded pointers (the 254 unused slots split between the 2 children).  On a 32-bit machine, that is about 128 * 4 = 512 bytes.

Your data is a 32-byte hash, so the overhead is massive.

However, that assumes that the 2nd to bottom nodes only have 2 sub-nodes.  Not sure what the actual odds are.

A more efficient structure would be:

Code:
abstract class TreeNode {
    protected final int level; // the depth of this node
    protected final byte key;  // the key byte = hash[level]

    TreeNode(int level, byte key) {
        this.level = level;
        this.key = key;
    }

    public boolean matchesKey(byte[] hash) {
        return hash[level] == this.key;
    }
}

class ParentNode extends TreeNode {

    private TreeNode[] children; // power-of-2 array of children

    ParentNode(int level, byte key, int capacity) {
        super(level, key);
        children = new TreeNode[capacity];
    }

    public TreeNode getMatchingChild(byte[] hash) {
        // mask, not %: bytes are signed in Java, so force into 0..255 first
        TreeNode child = children[(hash[level + 1] & 0xFF) & (children.length - 1)];
        if (child == null) {
            return null;
        }
        // a non-null slot may still be a collision with a different key byte
        return child.matchesKey(hash) ? child : null;
    }

}

class LeafNode extends TreeNode {

    byte[] data = new byte[32]; // the 256-bit hash stored at this leaf

    LeafNode(int level, byte key) {
        super(level, key);
    }

}

To add a node, you call root.getMatchingChild() until null is returned.  Then you have to add the hash under the last non-null node, as in the sketch below.
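For instance (addChild() is a hypothetical helper that wraps the hash in a new LeafNode, growing the power-of-2 array and re-hashing on collision):

Code:
static void insert(ParentNode root, byte[] hash) {
    ParentNode node = root;
    TreeNode child = node.getMatchingChild(hash);
    while (child instanceof ParentNode) {
        node = (ParentNode) child;
        child = node.getMatchingChild(hash);
    }
    // child is now null (or an existing leaf that must be split into a parent);
    // either way the new hash is attached under the last non-null node
    node.addChild(hash); // hypothetical helper, see lead-in
}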

Having said that, that is the data structure.

The merkle tree structure would also need to be specified.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
Not much time to respond now, but I wanted to point out that the PATRICIA tree concept has no balancing issues -- if it looks like it does, it's strictly an illusion.  The reason is this:

Take a standard, non-level-compressed trie, as shown in my previous post.  There is no question that this is an O(1) data structure:  if you use a 256-way trie, it always takes exactly 20 hops to get from root to the leaf of a 20-byte address string.   It has O(1) query, O(1) insertion, O(1) deletion.

All operations on a PATRICIA tree are strictly less than the equivalent implementation of a trie.  Basically, the upper bound of computation time for any PATRICIA tree operation is that of the non-level-compressed trie.  It's just that the number of hops that you shortcut as a benefit of level-compression is variable, depending on the data in the tree/trie.  The number of operations you get to skip can be altered by an "attacker", but the worst they can do to you is require the performance of a trie -- which is O(1).

Re: skip strings:  there's no way you can use a basic trie for this -- the space overhead of all the intermediate (and unnecessary) nodes would overwhelm the data that is being stored.  In fact, even with level compression and linked-list nodes, I'm still very concerned about the space overhead -- I suspect we may be storing something like 3x the size of the underlying UTXO set just to store this DB.  I just pulled that number out of nowhere (3x), because I haven't rigorously explored it for various branching factors ... but it was one of the downsides of the trie-based structures compared to the BSTs.

legendary
Activity: 1120
Merit: 1164
Quote
In a radix tree what you would do is first add each transaction to the tree, keeping track of the set of all modified nodes. Then once you had finished the update you would recalculate the hashes for each changed node, deepest first, in a single O(n) operation.

O(N log N)?

Each node may cause an update to a full path to the bottom of the tree.

No actually. Every operation in a radix tree takes O(k) time, where k is the maximum length of the key in the set. Since Bitcoin transactions have a fixed key length of 256 bits, that's O(1) time. Additionally since the number of intermediate nodes created for a transaction in the tree can't be more than k, a radix tree is O(k*n) space; again O(n) space for Bitcoin.

I mean, it's not so much that you are wrong, it's just that the log2(n) part is bounded by a fixed small number so it really is appropriate to just say O(n), and as I explained, updating a batch of n transactions, especially given that n << N (where N is the total size of the txout set) is an efficient operation. Note that the n in O(n) is the number of new transactions, not the number of existing ones.

It is 2^n work, not n^2, I assume a typo?

Oops, good catch.

Quote
Yeah, until this is a hard and fast network rule you'll have to check that the hashing power devoted to these UTXO proofs is sufficient.

I have to look up the details of merged mining, but I think you get the same protection as the main chain (at least for the blocks where it works)?

For determining if a block in an alt-chain can be reversed you could be correct under some rules, but in this case each valid PoW is really more like a vote that the UTXO set mined by that PoW is valid. Thus the protection is only the hashing power that mined the blocks containing the PoW's.

Quote
Speaking of... we should have every one of these UTXO things have a reference to the previous valid UTXO set.

Absolutely.  I assumed that was the plan.

You'd hope so, but I'll have to admit I somehow didn't realize that until today.

Right, a general way to start new chains would be a good idea.  There would need to be some way to "register" a genesis hash and then that chain would only be used for 1 purpose.

The key is keeping agreement on the rules for the parallel chains.

It's tricky though, because the PoW can mean totally different things for different types of chains. Not to mention how for any application but timestamping you need to write a whole set of chain rules too.

That said, following a single merge-mining standard is a good thing; I only proposed this new one because as far as I know namecoin is the only one using the existing multi-chain standard, and that standard sucks. However:




Quote
scriptSig: <32-byte digest>
scriptPubKey: (empty)

Are transactions with 0 in 0 out allowed under the spec?

They sure are! The only thing a tx needs is one or more txins and one or more txouts. Both scriptSigs and scriptPubKeys are allowed to be empty, and the value can be zero. (although you can't spend an empty scriptPubKey with an empty scriptSig; something needs to push a true value to the stack)

So, this basically creates a chain that has been stamped over and over with the output of one tx being the input to the next one.

What happens if you have something like

Root -> none -> none -> Valid1 (from root) -> Valid2 (from valid1) -> Invalid1 (from valid2) -> stamped (from Invalid1) -> ....

Not quite. The txin is just there because a Bitcoin transaction is only valid if it has a txin; what is actually in the txin is totally irrelevant. The link to the previous "considered good" block has to be within the UTXO header, and that nLockTime would best be used only as an auxiliary bit of data to allow nodes to reproduce the UTXO chain block header after they deterministically compute what the state of the UTXO tree would have been with that block's transactions included. It's just a way of avoiding the pain of implementing the P2P network that really should be holding that data, and getting something working sooner. It's a solution that uniquely applies to a UTXO merge-mined alt-chain; no other type of chain would allow a trick like that.
legendary
Activity: 1232
Merit: 1094
Take your example of adding a million transactions, which is unrealistic anyway as a 1MiB block can't have more than a few thousand.

You could do them over multiple blocks.  However, the max depth of the tree is only 256 levels, so you can't actually kill the receiving node, and getting a hash match would be almost impossible anyway.

Quote
In a radix tree what you would do is first add each transaction to the tree, keeping track of the set of all modified nodes. Then once you had finished the update you would recalculate the hashes for each changed node, deepest first, in a single O(n) operation.

O(N log N)?

Each node may cause an update to a full path to the bottom of the tree.

Quote
Anyway, thinking about this a bit more, on reflection you are right and I think this unbalanced tree stuff is a total non-issue in any radix tree or similar structure where depth is a function of common-prefix length. For the 1.6million tx's in the UTXO set you would expect a tree 20 branches deep. On the other hand to find a transaction with n bits in common with another transaction requires n^2 work, so right there you aren't going to see depths more than 64 or so even with highly unrealistic amounts of effort thrown at the problem. (bitcoin has done a 66bit zero-prefix hash IIRC)

It is 2^n work, not n^2, I assume a typo?

Quote
The issue with long depths is purely proof size, and 64*n isn't much worse than 20*n, so as you suggest, why make things complicated?

Right, and it only affects that transaction anyway.

Quote
Yeah, until this is a hard and fast network rule you'll have to check that the hashing power devoted to these UTXO proofs is sufficient.

I have to look up the details of merged mining, but I think you get the same protection as the main chain (at least for the blocks where it works)?

Quote
Speaking of... we should have every one of these UTXO things have a reference to the previous valid UTXO set.

Absolutely.  I assumed that was the plan.

Quote
Equally if we think in terms of merge-mining a chain, adding support for additional UTXO indexes, such as known scriptPubKeys and so forth, is just a matter of adding additional chains to be merge mined, and UTXO-only and UTXO+scriptPubKey can co-exist just fine in this scenario. A disadvantage is we'll need to add some stuff to the P2P network, but I have another idea...

Right, a general way to start new chains would be a good idea.  There would need to be some way to "register" a genesis hash and then that chain would only be used for 1 purpose. 

The key is keeping agreement on the rules for the parallel chains.

Quote
So right now, merge-mined chains have the problem that the proof of work goes through the coinbase transaction.

Yeah, need to look up the process.

Quote
The issue here is that to prove a path to the proof of work, you need the whole coinbase transaction, and it can be quite large, for instance in the case of P2Pool or Eligius. So I'm proposing a new standard, where one transaction of the following form is used to include an additional merge-mined digest, and that transaction will always contain exactly one txin, and one txout of value zero using the following:

scriptSig: <32-byte digest>
scriptPubKey: (empty)

Are transactions with 0 in 0 out allowed under the spec?

Quote
Any digest matching this form will be assumed to represent the miner's hashing power, thus miners should not allow such transactions into their blocks blindly. They are currently non-standard, so this will not happen by default, and the scriptSig has no legitimate use. The scriptPubKey is spendable by the scriptSig, so for the txin miners would usually use the txout created by a previous block following this standard. If none are available the miner can insert an additional zero-fee transaction creating a suitable txout (of zero value!) in the block.

So, this basically creates a chain that has been stamped over and over with the output of one tx being the input to the next one.

What happens if you have something like

Root -> none -> none -> Valid1 (from root) -> Valid2 (from valid1) -> Invalid1 (from valid2) -> stamped (from Invalid1) -> ....

There is no way to remove the stamped one.

Better would be including



- Anyway, I need to read up on merged mining :) -

Quote
The digest would of course represent the tip of a merkle tree. Every merge-mined digest in that tree will have a zero byte appended, and then the digests will be hashed together. What would go in the alt-chain is then the merkle path, that is, every leaf digest required to get to the transaction, and what side the leaves were on. Note how this is different from the merge-mining standard currently used by namecoin, and fixes the issues it has with conflicting slot numbers.

Appending the zero byte is critical, because that action means that to verify an alt-chain block hash was legitimately merge-mined, you simply check that every other digest in the path to the transaction has exactly 32 bytes, and that the transaction itself follows the above form. Note this also means the miner has the flexibility to use something other than a merkle tree to combine the alt-chain block hashes if they want; alt-chains should put reasonable limits on PoW size.

Alt-chains should also still support the same idea in the coinbase, for miners that don't plan on making large coinbase transactions. (the majority)

Now, the useful part is that we can easily add more than one of these transactions, and distinguish them by different sequence numbers in the txin. You would do that for the UTXO set, with one value defined for just the UTXO set itself, another for a scriptPubKey index, and whatever else gets added as the system is improved. The previous block where this miner considered the merge-mined UTXO digest to be valid can be specified with nLockTime. The advantage of this idea is that because the final UTXO digests are completely deterministic, we don't need to build a new P2P network to pass around the digests in the merkle path to the coinbase.

This also shows the advantage of using a separate transaction, because it keeps the UTXO proof size to a minimum, important for applications like fidelity bonded banks where UTXO proofs would become part of fraud proofs; we need to keep proof size to a minimum. Equally being able to prove the hashing power devoted to the UTXO set merge-mine chain is useful, and the only way to do that is provide a bunch of recent UTXO block headers and associated PoW's. Again, keeping the size down for this is desirable as the SPV node examining those headers may be on a low-bandwidth connection.
legendary
Activity: 1120
Merit: 1164
This requires the tree to be completely regenerated for each block, since the hash of every transaction would change.

Read the rest of the message - that's just a toy example to illustrate the general concept. My worked example does not require the whole tree to be recalculated nor does it require the block number for lookup.

The attack on the system I suggested would require having lots of transactions that have the same initial bits in their hash.

If you had 1 million of them and added them to the tree, then it would become quite deep.  Each time you add one, the nodes would have to recalculate the entire path from root to that node and redo all the hashing.  The effort per node would be proportional to N squared, where N is the number of collisions.

I think your misunderstanding stems from the idea that the datastructure and the authenticating hash are the same thing. They aren't, they're quite separate. The authenticating hash is an addition to the datastructure, and can be calculated independently.

Take your example of adding a million transactions, which is unrealistic anyway as a 1MiB block can't have more than a few thousand. In a radix tree what you would do is first add each transaction to the tree, keeping track of the set of all modified nodes. Then once you had finished the update you would recalculate the hashes for each changed node, deepest first, in a single O(n) operation.

A real Bitcoin node would basically have a limit on how much CPU time it was willing to use on adding transactions to the UTXO set, and collect incoming transactions into batches small enough to stay under that limit. Given we're only talking tens of updates/second chances are this optimization wouldn't even be required anyway.
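A sketch of that batch update (my code; RadixTree and RadixNode are hypothetical interfaces): insert everything first, remembering each touched node, then recompute hashes once, deepest first, so no interior hash is computed more than once per batch.

Code:
import java.util.*;

interface RadixNode { int depth(); void recomputeHash(); }
interface RadixTree { Collection<RadixNode> insert(byte[] txHash); }

class BatchUpdater {
    static void applyBatch(RadixTree tree, List<byte[]> txHashes) {
        Set<RadixNode> dirty = new HashSet<>();
        for (byte[] h : txHashes)
            dirty.addAll(tree.insert(h)); // insert() reports every node it touched

        List<RadixNode> deepestFirst = new ArrayList<>(dirty);
        deepestFirst.sort(Comparator.comparingInt(RadixNode::depth).reversed());
        for (RadixNode n : deepestFirst)
            n.recomputeHash(); // children are always re-hashed before parents
    }
}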


Anyway, thinking about this a bit more, on reflection you are right and I think this unbalanced tree stuff is a total non-issue in any radix tree or similar structure where depth is a function of common-prefix length. For the 1.6million tx's in the UTXO set you would expect a tree 20 branches deep. On the other hand to find a transaction with n bits in common with another transaction requires n^2 work, so right there you aren't going to see depths more than 64 or so even with highly unrealistic amounts of effort thrown at the problem. (bitcoin has done a 66bit zero-prefix hash IIRC)

The issue with long depths is purely proof size, and 64*n isn't much worse than 20*n, so as you suggest, why make things complicated?

Too bad, it was a clever idea. :P


Quote
If you really need to prove a tx was in the UTXO set as of the current best block, either wait for another block to be generated, or prove that it isn't in the UTXO set of the previous block, and prove that it is in the merkle tree of the current best block.

You will probably have to do something like that anyway.  The plan was to merge mine, so some bitcoin blocks won't have matching tree root node hashes.  So, prove it wasn't spent up to 10 blocks previous and then check the last 10 blocks manually.

Yeah, until this is a hard and fast network rule you'll have to check that the hashing power devoted to these UTXO proofs is sufficient.

Speaking of... we should have every one of these UTXO things have a reference to the previous valid UTXO set. This would turn the whole shebang into a proper chain and thus allow clients to figure out both which is the longest chain, and how much hashing power is being devoted to it.

Equally if we think in terms of merge-mining a chain, adding support for additional UTXO indexes, such as known scriptPubKeys and so forth, is just a matter of adding additional chains to be merge mined, and UTXO-only and UTXO+scriptPubKey can co-exist just fine in this scenario. A disadvantage is we'll need to add some stuff to the P2P network, but I have another idea...

So right now, merge-mined chains have the problem that the proof of work goes through the coinbase transaction. The issue here is that to prove a path to the proof of work, you need the whole coinbase transaction, and it can be quite large, for instance in the case of P2Pool or Eligius. So I'm proposing a new standard, where one transaction of the following form is used to include an additional merge-mined digest, and that transaction will always contain exactly one txin, and one txout of value zero using the following:

scriptSig: <32-byte digest>
scriptPubKey: (empty)

Any digest matching this form will be assumed to represent the miner's hashing power, thus miners should not allow such transactions into their blocks blindly. They are currently non-standard, so this will not happen by default, and the scriptSig has no legitimate use. The scriptPubKey is spendable by the scriptSig, so for the txin miners would usually use the txout created by a previous block following this standard. If none are available the miner can insert an additional zero-fee transaction creating a suitable txout (of zero value!) in the block.

The digest would of course represent the tip of a merkle tree. Every merge-mined digest in that tree will have a zero byte appended, and then the digests will be hashed together. What would go in the alt-chain is then the merkle path, that is, every leaf digest required to get to the transaction, and what side the leaves were on. Note how this is different from the merge-mining standard currently used by namecoin, and fixes the issues it has with conflicting slot numbers.

Appending the zero byte is critical, because that action means that to verify an alt-chain block hash was legitimately merge-mined, you simply check that every other digest in the path to the transaction has exactly 32 bytes, and that the transaction itself follows the above form. Note this also means the miner has the flexibility to use something other than a merkle tree to combine the alt-chain block hashes if they want; alt-chains should put reasonable limits on PoW size.
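One plausible reading of that check, as a sketch (my code; assuming Bitcoin-style double SHA-256 and that each path step records which side the sibling is on):

Code:
import java.security.MessageDigest;
import java.util.List;

class MergeMinePath {
    static byte[] climb(byte[] altChainHash, List<byte[]> siblings,
                        List<Boolean> siblingOnLeft) throws Exception {
        // leaf = alt-chain block hash with a zero byte appended (33 bytes)
        byte[] current = hash(concat(altChainHash, new byte[] { 0x00 }));
        for (int i = 0; i < siblings.size(); i++) {
            byte[] sib = siblings.get(i);
            if (sib.length != 32) // a 33-byte "digest + zero byte" can't be a sibling
                throw new IllegalArgumentException("sibling must be 32 bytes");
            current = siblingOnLeft.get(i) ? hash(concat(sib, current))
                                           : hash(concat(current, sib));
        }
        return current; // compare with the digest committed in the scriptSig
    }

    static byte[] hash(byte[] d) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        return sha.digest(sha.digest(d)); // assuming double SHA-256
    }

    static byte[] concat(byte[] a, byte[] b) {
        byte[] out = new byte[a.length + b.length];
        System.arraycopy(a, 0, out, 0, a.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }
}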

Alt-chains should also still support the same idea in the coinbase, for miners that don't plan on making large coinbase transactions. (the majority)

Now, the useful part is that we can easily add more than one of these transactions, and distinguish them by different sequence numbers in the txin. You would do that for the UTXO set, with one value defined for just the UTXO set itself, another for a scriptPubKey index, and whatever else gets added as the system is improved. The previous block where this miner considered the merge-mined UTXO digest to be valid can be specified with nLockTime. The advantage of this idea is that because the final UTXO digests are completely deterministic, we don't need to build a new P2P network to pass around the digests in the merkle path to the coinbase.

This also shows the advantage of using a separate transaction, because it keeps the UTXO proof size to a minimum, important for applications like fidelity bonded banks where UTXO proofs would become part of fraud proofs; we need to keep proof size to a minimum. Equally being able to prove the hashing power devoted to the UTXO set merge-mine chain is useful, and the only way to do that is provide a bunch of recent UTXO block headers and associated PoW's. Again, keeping the size down for this is desirable as the SPV node examining those headers may be on a low-bandwidth connection.
legendary
Activity: 1232
Merit: 1094
Actually, here is an idea that could work: rather than making the generated UTXO set be the state of the UTXO tree for the current block, make it the state of the UTXO tree for the previous block. Then for the purposes of putting the tx in the tree, essentially define the tx hash not as H(d) but as H(b | d), with b equal to the hash of the block where the tx was confirmed. This works because finding a block hash is difficult, and once you have found any valid block hash, not using it represents a huge financial hit.

This requires the tree to be completely regenerated for each block, since the hash of every transaction would change.

The attack on the system I suggested would require having lots of transactions that have the same initial bits in their hash.

If you had 1 million of them and added them to the tree, then it would become quite deep.  Each time you add one, the nodes would have to recalculate the entire path from root to that node and redo all the hashing.  The effort per node would be proportional to N squared, where N is the number of collisions.

A protection against that would be ignoring some transactions.

For example, if you have a binary tree, then the expected depth is log2(transactions).

With random hashes, it becomes very unlikely that any nodes would be much deeper than that.

A depth cutoff could be added.  If a branch is more than log2(transactions) * 4 deep, then it hashes to 0, so all nodes below it can be ignored.

This means that they cannot be verified as valid.

Quote
Of course, this doesn't actually work directly, because you usually don't know the block number when a txout was created. So we'll actually do something a bit different

You have to ask for proof anyway; the node could say in which block the tx was added to the tree.

So, the hash used would be hash(hashBlock, hashTransaction), where hashBlock is the hash of the block in which the tx was added to the chain.

All unspent txs are pairs anyway, since you need to scan back to the block to get the sig script to check for spending.

You won't know what the block hash is until you generate the transactions, so I think this protects it?

Quote
For every node in your radix tree, store the block number at which the tree was last changed.

Ideally, the tree should just depend on what transactions are contained.

Quote
If you really need to prove a tx was in the UTXO set as of the current best block, either wait for another block to be generated, or prove that it isn't in the UTXO set of the previous block, and prove that it is in the merkle tree of the current best block.

You will probably have to do something like that anyway.  The plan was to merge mine, so some bitcoin blocks won't have matching tree root node hashes.  So, prove it wasn't spent up to 10 blocks previous and then check the last 10 blocks manually.
legendary
Activity: 1120
Merit: 1164
Actually, here is an idea that could work: rather than making the generated UTXO set be the state of the UTXO tree for the current block, make it the state of the UTXO tree for the previous block. Then for the purposes of putting the tx in the tree, essentially define the tx hash not as H(d) but as H(b | d), with b equal to the hash of the block where the tx was confirmed. This works because finding a block hash is difficult, and once you have found any valid block hash, not using it represents a huge financial hit.

Of course, this doesn't actually work directly, because you usually don't know the block number when a txout was created. So we'll actually do something a bit different:

For every node in your radix tree, store the block number at which the tree was last changed. The 0th node, the empty prefix, gets block 0. For the purpose of the radix tree, define the radix hash of the transaction, rh, as rh = H(hn | h), where hn is the hash of that block number, and h is the transaction's actual hash. Ignoring the fact that the transaction hash prefix changed, we can treat the rest of the bytes of the radix tx hash normally to determine which prefix edge is matched, and thus what node to follow down the tree.
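A minimal sketch of that definition (mine; assuming H is Bitcoin's double SHA-256):

Code:
import java.security.MessageDigest;

class RadixHash {
    // hn = hash of the block number stored at the node, h = the tx hash;
    // only the prefix of rh changes when the node's block number changes,
    // and descent still uses the remaining bytes of rh as usual
    static byte[] rh(byte[] hn, byte[] h) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] cat = new byte[hn.length + h.length];
        System.arraycopy(hn, 0, cat, 0, hn.length);
        System.arraycopy(h, 0, cat, hn.length, h.length);
        return sha.digest(sha.digest(cat)); // assuming double SHA-256 for H
    }
}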

Other than having to provide the block hash, proving that a transaction is in the UTXO set is not really any different than before. the proof is still a merkle path, and the security is still based on the infeasibility of reversing a hash function. I think it would be a good idea to add in some of the additional ideas I outlined in https://bitcointalksearch.org/topic/m.1470730, but that is true of any UTXO idea.

If you really need to prove a tx was in the UTXO set as of the current best block, either wait for another block to be generated, or prove that it isn't in the UTXO set of the previous block, and prove that it is in the merkle tree of the current best block.
legendary
Activity: 1232
Merit: 1094
You just cannot assume that hashes calculated directly from data that an attacker has any control of will be uniformly distributed under any circumstance. Someone will get some GPUs together and write a program to find the right hashes to unbalance your tree just because they can.

The effect doesn't really matter though.  It will just slow down accesses for those particular transactions.
legendary
Activity: 1120
Merit: 1164
You can't make that assumption because an attacker might create a bunch of transactions by brute force search that just happen to create an unbalanced tree.

It would only affect the transactions in question.  Also, creating unbalancing transactions requires generating hashes with the same starts, which is hard to do.

There is nothing else in existence that has put as much effort towards finding statistically unlikely hashes as Bitcoin has! Heck, the Bitcoin network has found hashes with IIRC 66 zero bits.

You just cannot assume that hashes calculated directly from data that an attacker has any control of will be uniformly distributed under any circumstance. Someone will get some GPUs together and write a program to find the right hashes to unbalance your tree just because they can.
legendary
Activity: 1232
Merit: 1094
You can't make that assumption because an attacker might create a bunch of transactions by brute force search that just happen to create an unbalanced tree.

It would only affect the transactions in question.  Also, creating unbalancing transactions requires generating hashes with the same starts, which is hard to do.

Quote
Finally fixing an unbalanced tree has to always be cheap.

If you don't do balancing, then it doesn't require constant merkle tree rehashes.
legendary
Activity: 1120
Merit: 1164
Have you considered that, since the hashes are well distributed, you could mostly skip tree balancing?

You can't make that assumption because an attacker might create a bunch of transactions by brute force search that just happen to create an unbalanced tree. If you have a system where hash values determine the balance of the tree, you have no choice but to have some sort of way to measure the amount the tree is out of balance, which might be difficult if not all nodes know the full state of the tree, and to prohibit the creation of unbalanced trees. You also need to be careful to ensure that if a tree is at the threshold just prior to being too out-of-balance to be legal, there are legal operations that make the tree balanced again, so honest miners can fix the problem. Finally, fixing an unbalanced tree has to always be cheap.
legendary
Activity: 1232
Merit: 1094
Have you considered that, since the hashes are well distributed, you could mostly skip tree balancing?

Basically a PATRICIA-Brandais tree without the skip function.  The odds of it helping are pretty low, and it adds complexity.

In your tree example, the benefit is that the right-hand nodes end up with lower depth.  However, a collision on a sequence of bytes is pretty unlikely unless you are near the root, and it will be dense there anyway, so you don't get the benefit.

You could also drop the linked list and replace it with a hashmap.  However, most nodes (near the root) will be fully loaded, so maybe better to just use a PATRICIA tree directly.

A compromise would be to have 2 node types.  A node could use an (expanding) hashmap, until it gets to 50% full and then switch to a flat byte array.  However, that is an efficiency thing, and not part of the tree structure.

So, insert a node and then use its bits to determine where to place it, until it gets to a bit that doesn't match any other nodes.
sr. member
Activity: 461
Merit: 251
Am I correct in thinking that the skip string and child keys don't really need to be hashed in at each step running up a Patricia trie, since the hash stored at the end of a given branch is of the full word formed by that branch, and inclusion of that hash in the root is proved regardless?

This would simplify the authentication structure a fair bit, allowing us to avoid having to hash weird numbers of bits, and make it not depend on whether or not we're compressing strings of single child nodes with skip strings (I don't know if this would ever be an advantage).
sr. member
Activity: 461
Merit: 251
But I see the benefit that you don't even really need linked lists at each node, only two pointers.  But you end up with a lot more total overhead, since you have the trienode overhead for so many more nodes...
More specifically, it will need somewhat less than double the number of nodes, since every node in a 2-way trie has two children - i.e. the same number of extra nodes as the Merkle tree, which we were going to need to include anyway.

Quote
I'll have to think about this one.  I think there is clearly a benefit to a higher branching factor:  if you consider a 2^32-way trie, there is a single root node and just N trienodes -- one for each entry (so it's really just a lookup table, and lookup is fast).  If you have a 2-way (bitwise) trie, you still have N leaf nodes, but you have a ton of other intermediate nodes and all the data that comes with them.  And a lot more pointers and "hops" between nodes to get to your leaf.  It leads me to believe that you want a higher branching factor, but you need to balance against the fact that the branching costs some efficiency (i.e. in the case of using linked lists between entries, it would obviously be bad to have a branching factor of 2^32).
Wouldn't the bitwise trie actually require fewer hops, since it doesn't need to traverse through the linked list?  You seem to be saying this and at the same time saying it requires more :)

Assuming we were going to be serializing the original nodes and keying them individually in a database, going bitwise shouldn't affect this at all, since we would just prune off a bunch of crap to "zoom out" to a 256-way "macro-node", and conversely build in the details of a macro-node grabbed from storage to "zoom back in" to the bitwise subtrie.

Estimate of the amount of extra overhead this proposal will impose on existing clients

Since the network is switching to using a leveldb store of utxos now, we can easily make this estimate.  I'll ignore for now the fact that we're actually using a tree of tries.  Assuming we're actually storing 256-way macro-nodes, and that the trie is well-balanced (lots of hashes, so it should be), then for a set of N = 256^L utxos, a branch will have L macro-nodes plus the root to retrieve from/update to disk, instead of just the utxo like we do now.  But it makes sense to cache the root and first level of macro-nodes, and have a "write-back cache policy", where we only do periodic writes, so that the root and first-level writes are done only once per batch.  So for a batch size >> 256, we have more like L - 1 disk operations to do per utxo retrieval/update, or L - 2 more than we do now.  That's only one extra disk operation per retrieval/update with L = 3, or N = ~17M utxos total.  Due to the extreme sparsity, it probably doesn't make much sense to permanently cache beyond the first level of macro-nodes (16 levels of bitwise nodes = ~2*2^16 nodes at 2*32 + 2*8 bytes each -- 32 bytes per hash, 8 bytes per pointer -- or ~10MB of permanently cached data), since any macro-node in the next level can be expected to be accessed during a batch of, say, 10,000 writes only roughly 1/256^2 * 10,000 ~ 0.15 times.  So additional memory requirements should stay pretty low, depending on how often writes are done.
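A quick check of those figures (my arithmetic, same assumptions):

Code:
public class CacheEstimate {
    public static void main(String[] args) {
        long nodes = 2L * (1 << 16);        // ~2*2^16 bitwise nodes in 16 levels
        long bytesPerNode = 2 * 32 + 2 * 8; // 2 hashes @ 32B + 2 pointers @ 8B
        System.out.println(nodes * bytesPerNode + " bytes"); // 10485760 ~ 10MB

        double expectedHits = 10_000.0 / (256.0 * 256.0);
        System.out.println(expectedHits);   // ~0.15 accesses per 2nd-level
    }                                       // macro-node per 10,000-write batch
}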

I guess to do it properly and not assume the trie is perfectly balanced, we would have a "LFU cache policy", where we track the frequency with which macro-nodes are accessed, and throw out during writes the least frequently used ones, keeping some desired number of those most frequently used.

Disk operations are almost certainly the bottleneck in this proposal, but they aren't in the main client, so it's possible that this wouldn't add much in the way of noticeable overhead.

Is my math correct?  (There was a small mistake I fixed in an edit.)