
Topic: Ultimate blockchain compression w/ trust-free lite nodes - page 3. (Read 87868 times)

newbie
Activity: 58
Merit: 0
Could somebody here knowledgeable about the details of UTXO please confirm whether my understanding of how it would work in practice is correct?
legendary
Activity: 1372
Merit: 1002
Due to the hard fork requirements?

This would be a softfork, not a hardfork. Meaning that a majority of miners must update, but not all users.
legendary
Activity: 905
Merit: 1011
Because I don't assign BIP numbers. There's a process to getting a BIP number assigned, which I am still working towards.
donator
Activity: 1419
Merit: 1015
I have posted to the development mailing list the first draft of a BIP for a binary authenticated prefix tree derivative of the one described in this thread. This starts the process of getting it turned into an official BIP standard. The draft is available for viewing here:

https://github.com/maaku/bips/blob/master/drafts/auth-trie.mediawiki

All comments and criticisms are welcome.

I guess my only comment right now is: why doesn't this have a BIP number assigned to it? Due to the hard fork requirements?
newbie
Activity: 58
Merit: 0
Could etotheipi negativeone (I *just* realized what that says!) or someone knowledgeable comment on how the "rolling-root"/"ledger-solution" impacts this proposal (whether it enhances it, makes it unnecessary, or is wrong for reasons XYZ)?

Summary: keep the blockchain down to some finite size by moving unspent transactions out of old blocks and into new ones at the head via a "rolling-root" mechanism.

Relevant link: https://bitcointalksearch.org/topic/a-rolling-root-to-solve-bitcoins-scalability-problem-501039
legendary
Activity: 905
Merit: 1011
I have posted to the development mailing list the first draft of a BIP for a binary authenticated prefix tree derivative of the one described in this thread. This starts the process of getting it turned into an official BIP standard. The draft is available for viewing here:

https://github.com/maaku/bips/blob/master/drafts/auth-trie.mediawiki

All comments and criticisms are welcome.
legendary
Activity: 1470
Merit: 1005
Bringing Legendary Har® to you since 1952
I wonder, isn't it possible to copy the up-to-date blockchain in the configuration folder and paste it to another computer?
Yep, already done. Several times.

But (obviously) stop the Bitcoin client first.
sr. member
Activity: 359
Merit: 250
I wonder, isn't it possible to copy the up-to-date blockchain in the configuration folder and paste it to another computer?
erk
hero member
Activity: 826
Merit: 500
I just installed a fresh copy of Bitcoin-Qt v0.85-beta, and it's taken 2 full days so far to download the block chain with 10-20 peers most of the time, and it's still only up to August. Something seems wrong here: if I were running a pool or exchange it would be a disaster and I would have lots of unhappy users. Not only does the block chain size need to be looked into, but also the replication method. If I were downloading the same amount of data over torrent it would have finished in hours, not days.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
So this was "merged" with another topic... I'm not really sure what that means or why it happened.  But I'm not finding the most recent posts.   The last one I see is June 18.  Any idea what happened?  Any way to get all the posts since then?
legendary
Activity: 1372
Merit: 1002
Why can the miner not use only the block headers to verify that he's on the right chain? Is there a reason all the already-spent transactions of the past have to be stored somewhere?

Reading only the headers, you can tell which chain is the longest if you receive several of them, but you want the longest VALID chain.
Here "longest" means the chain with the most proof of work.
You have to validate all transactions to know that the chain follows the protocol rules.
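To make the point above concrete, here is a small Python sketch (illustrative only; the field names are made up, and real clients derive work from the compact `nBits` encoding) of picking the chain with the most cumulative proof of work rather than simply the most headers:

```python
# Illustrative sketch: "longest" chain selection compares cumulative
# proof of work, not header count. A real node would also validate
# every transaction before accepting the winner.

def work_from_target(target: int) -> int:
    """Work contributed by one block, proportional to 1/(target + 1):
    a lower target is harder to hit and so represents more work."""
    return (1 << 256) // (target + 1)

def best_chain(chains):
    """Pick the list of headers whose targets sum to the most work.
    Each header here is just a dict with an expanded 'target' field."""
    return max(chains,
               key=lambda headers: sum(work_from_target(h['target'])
                                       for h in headers))
```

Note that a shorter chain mined at a much lower (harder) target beats a longer chain of easy blocks, which is exactly why header count alone is not enough.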
sr. member
Activity: 350
Merit: 251
Dolphie Selfie
Possible? Yes. Desirable? No. It's important that miners verify that they haven't been duped onto a side chain. It is, however, okay for them to throw away those historical transactions once they have been verified and just keep the UTXO set.

Yeah, I did not mention the UTXO set because I thought it was obvious.

The reason I brought this up is that I believe a lot of us are willing to run a USB miner to secure the network without generating any noticeable revenue. Now that these devices are out and very power-efficient, the cost of keeping one running is nearly negligible, but if we have to download and store the rapidly growing full chain, the cost may grow significantly.

The miner could validate the entire history or synchronize with constant storage requirements, throwing away data as it is no longer required.

Why can the miner not use only the block headers to verify that he's on the right chain? Is there a reason all the already-spent transactions of the past have to be stored somewhere?
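As a rough illustration of the constant-storage idea mentioned above, the following Python sketch (transaction layout invented for the example, not any client's format) applies a block's transactions to a UTXO set, deleting spent outputs and inserting new ones, so only the set of unspent coins, not the full history, needs to be retained:

```python
# Sketch of constant-storage validation state: after checking a block,
# spent outputs are deleted and newly created outputs are inserted,
# so historical (fully spent) transactions need not be kept at all.

def apply_block(utxo: dict, block_txs) -> dict:
    """utxo maps (txid, n) -> output data. Each tx is a dict:
    {'txid': ..., 'inputs': [(txid, n), ...], 'outputs': [...]}."""
    for tx in block_txs:
        for outpoint in tx['inputs']:
            if outpoint not in utxo:
                # unknown or already-spent coin: the block is invalid
                raise ValueError(f"missing input {outpoint}")
            del utxo[outpoint]            # spent: drop from the set
        for n, out in enumerate(tx['outputs']):
            utxo[(tx['txid'], n)] = out   # newly created coins
    return utxo
```

The storage bound follows because the set only ever holds currently unspent outputs, regardless of how long the chain grows.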
legendary
Activity: 1372
Merit: 1002
About SCIP: if the UTXO tree is in each block, it can be done non-recursively.
Clients would download the whole header chain, not only with the proof of work but also with the "signature" that proves the transition from the previous UTXO set to the current one is correct.

But I don't know SCIP in detail either, nor their schedule. So, yes, it would be interesting to have someone who can explain this stuff applied to this first-hand.
Can anyone bring Ben to this conversation? I don't even know his nick on the forums...
sr. member
Activity: 461
Merit: 251
Also, was there a verdict on the 2-way (bitwise) trie vs. 256-way + Merkle trees in each node?  I've been thinking lately about sharding block creation/verification, and am noticing the advantages of the bitwise trie since its updates require a much more localized/smaller set of data.

I guess what really matters is the number of database operations.
If the hash of a node is stored at its parent (and each node stores the hashes of all its children),
then updating the hash of a node requires only one database read and one write, instead of reading all children (see my previous post).

I noted somewhere a while back in this thread that a group of 2-way nodes can be 'pruned' to produce a (e.g.) 256-way one for storage, and a 256-way one can be 'filled in' after retrieving it from storage to produce a group of 2-way ones.  Not sure what the optimal radix size is for storage, but database operations are definitely the primary concern.

If a write-back cache policy is used so that the 'trunk' of the tree isn't being constantly rewritten on the disk, then using a bitwise trie would mean not having to rebuild full Merkle trees from scratch for each of the upper nodes during every update.  Not sure if this is a big deal though.

Edit: Never mind about that last point.  Merkle tree leaves are rarely ever added or removed in the upper nodes, so updating the Merkle tree would usually only require changing a single path.
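The prune/fill equivalence discussed above can be sketched directly: if a 256-way node's hash is defined as the Merkle root over its 256 child slots, it equals the root of the 8-level group of binary nodes it collapses to. A small Python sketch (single SHA-256 and an all-zero placeholder digest for absent children are assumptions for illustration, not a spec):

```python
# Sketch: hash of a 256-way node computed as the Merkle root over its
# 256 child slots -- i.e. the root of the 8 levels of binary nodes
# that the wide node "prunes" away for storage.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

EMPTY = b'\x00' * 32  # placeholder digest for an absent child slot

def node256_hash(children) -> bytes:
    """children: list of 256 digests (EMPTY where no child exists)."""
    assert len(children) == 256
    level = list(children)
    while len(level) > 1:  # 8 rounds: 256 -> 128 -> ... -> 1
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Because the wide node's digest is exactly the binary group's root, either representation can be stored on disk and the other reconstructed, which is what makes the radix size a pure storage/IO tuning knob.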
sr. member
Activity: 461
Merit: 251
No, you need to be able to prove both, otherwise we're right back where we started from in terms of scalability and lightweight clients.

One data structure is needed for creating transactions, the other is required for validating transactions. It's rather silly and myopic to optimize one without the other, and I will accept no compromise - both will be authenticated and committed so long as I have any say in it.
I'm not sure I follow.  I understand that one of the main benefits of the idea is to be able to prove to a lightweight client that a coin is currently valid; whereas now, when given a Merkle path to a tx in some block, a lightweight client doesn't necessarily know if a txout in this tx was spent in another tx after that block.  That seems like a big improvement.  But the problem isn't that the malicious peer isn't serving data at all, it's that he's serving it in a misleading way.  Isn't it true that either keying ensures peers can't mislead lightweight clients in this way?

Quote
I'm simply reporting how bitcoin works: if you include transactions out of order, your block will not validate.
I appreciate that.  The distributed node idea assumed a couple significant protocol changes, and I just wanted to be sure they wouldn't break anything.
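For reference, the mechanism a lightweight client uses in the exchange above is Merkle path verification against a committed root. A minimal Python sketch (single SHA-256 instead of Bitcoin's double-SHA256, and an invented branch encoding, so this is illustrative only):

```python
# Sketch of SPV-style Merkle branch verification: a peer supplies the
# sibling digests along the path from a leaf to the root committed in
# a block header; the client recomputes the root and compares.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_branch(leaf: bytes, branch, root: bytes) -> bool:
    """branch: list of (sibling_digest, sibling_is_right) pairs,
    ordered from the leaf level up to just below the root."""
    acc = leaf
    for sibling, sibling_is_right in branch:
        acc = h(acc + sibling) if sibling_is_right else h(sibling + acc)
    return acc == root
```

A misleading peer can withhold data, but it cannot forge a branch that hashes to the committed root, which is the property the discussion above relies on.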
legendary
Activity: 1896
Merit: 1353
Also, was there a verdict on the 2-way (bitwise) trie vs. 256-way + Merkle trees in each node?  I've been thinking lately about sharding block creation/verification, and am noticing the advantages of the bitwise trie since its updates require a much more localized/smaller set of data.

I guess what really matters is the number of database operations.
If the hash of a node is stored at its parent (and each node stores the hashes of all its children),
then updating the hash of a node requires only one database read and one write, instead of reading all children (see my previous post).
legendary
Activity: 905
Merit: 1011
The downside to only having the (txid:n, script) one's digest committed in the block header is that you can't concisely prove a utxo with a given script doesn't exist.  But you can still concisely prove when one does, and that seems to be what's really important.

No, you need to be able to prove both, otherwise we're right back where we started from in terms of scalability and lightweight clients.

One data structure is needed for creating transactions, the other is required for validating transactions. It's rather silly and myopic to optimize one without the other, and I will accept no compromise - both will be authenticated and committed so long as I have any say in it.

Quote
Transactions within blocks are processed in-order, and so cannot depend on later transactions.
Okay, but I don't think an individual block really needs to encode an explicit ordering of transactions, as the tx DAG is the same for any valid ordering.  As long as a node can find some ordering that's consistent, then that's good enough.

I'm simply reporting how bitcoin works: if you include transactions out of order, your block will not validate.
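The in-order rule being reported here can be sketched in a few lines of Python (transaction layout invented for the example): scanning front to back, every input must refer to a coin that already exists, so a transaction may spend an output created earlier in the same block but never one created by a later transaction.

```python
# Sketch of the in-order validation rule: spends of outputs created
# by *later* transactions in the same block make the block invalid.

def check_block_order(txs, utxo) -> bool:
    """Return True iff every input refers to an existing coin at the
    moment its transaction is processed, scanning front to back."""
    available = set(utxo)  # outpoints spendable so far
    for tx in txs:
        for outpoint in tx['inputs']:
            if outpoint not in available:
                return False  # unknown, spent, or later-created output
            available.discard(outpoint)
        for n in range(len(tx['outputs'])):
            available.add((tx['txid'], n))
    return True
```

This also shows why any topological ordering of the block's tx DAG passes the check, which is the point being debated: the rule constrains ordering relative to dependencies, not one canonical sequence.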
sr. member
Activity: 461
Merit: 251
Completely off topic.  I was thinking that "Reiner-Friedenbach tree" is way too long and way too German.  What about the "Freiner tree"?  Too cheesy?  It does seem to be an elegant mixture of Reiner, Friedenbach, and Freicoin.

I really wanted to rename this thread to something more appropriate, but I'm not sure what, yet. 
How about "Authenticated dictionary of unspent coins"?

I like it because it's self-explanatory :)
sr. member
Activity: 461
Merit: 251
An index keyed by (txid:n) will have to be maintained for block validation anyway.
Right, I guess that's why it's the natural keying for distributed nodes.

Quote
My current plan is to have one index (hash(script), txid:n) -> balance for wallet operations, and another (txid:n) -> CCoins for validation.
The question then is which one's digest gets included in the block header?  Having both would be nice, but maintaining the (hash(script), txid:n) one seems to make distributing a node a lot more complex and expensive.  The downside to only having the (txid:n, script) one's digest committed in the block header is that you can't concisely prove a utxo with a given script doesn't exist.  But you can still concisely prove when one does, and that seems to be what's really important.  Also, if only one of the trees needs the authenticating structure, then this would be less overhead.

Quote
Transactions within blocks are processed in-order, and so cannot depend on later transactions.
Okay, but I don't think an individual block really needs to encode an explicit ordering of transactions, as the tx DAG is the same for any valid ordering.  As long as a node can find some ordering that's consistent, then that's good enough.
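As a toy illustration of running the two indexes side by side as described above (names and layout are invented for the example, not taken from any client), here is a Python sketch of the validation index keyed by outpoint and the wallet index keyed by script hash, kept in sync on every coin creation and spend:

```python
# Sketch of maintaining both indexes together:
#   by_outpoint: (txid, n) -> (script, value)   -- for validation
#   by_script:   sha256(script) -> balance      -- for wallet queries
import hashlib

def script_key(script: bytes) -> bytes:
    return hashlib.sha256(script).digest()

def add_coin(by_outpoint, by_script, outpoint, script, value):
    by_outpoint[outpoint] = (script, value)
    k = script_key(script)
    by_script[k] = by_script.get(k, 0) + value

def spend_coin(by_outpoint, by_script, outpoint):
    script, value = by_outpoint.pop(outpoint)
    by_script[script_key(script)] -= value
```

The overhead question in the thread is then just whether both structures, or only one, additionally carry the authenticating (hash) layer.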
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
Completely off topic.  I was thinking that "Reiner-Friedenbach tree" is way too long and way too German.  What about the "Freiner tree"?  Too cheesy?  It does seem to be an elegant mixture of Reiner, Friedenbach, and Freicoin.

I really wanted to rename this thread to something more appropriate, but I'm not sure what, yet. 