
Topic: Ultimate blockchain compression w/ trust-free lite nodes - page 10.

legendary
Activity: 1428
Merit: 1093
Core Armory Developer
I was thinking rather than use a Merkle tree at each trie node for verifying the presence of a child, we could use a bitwise trie instead, since it would have faster updates due to always being balanced and not having to sort.  This would also speed up lookups, since we wouldn't have to traverse through a linked list of pointers at each node.

But then we've just arrived in a roundabout sort of way at the whole trie being essentially a bitwise trie, since the nodes in the original trie with 256 bit keysize overlap exactly with subtries of the bitwise trie.

Was there a reason for not proposing a single bit keysize from the start?  I thought about it a while back, but disregarded it for some reason I can't recall.

I thought about the same thing, and disregarded it for some reason, as well.  I don't remember though.  Though, working with raw bits is a bit more complicated than bytes, especially when you have skip strings that are like 13 bits.  But I see the benefit that you don't even really need linked lists at each node, only two pointers.  But you end up with a lot more total overhead, since you have the trienode overhead for so many more nodes...

I'll have to think about this one.  I think there is clearly a benefit to a higher branching factor:  if you consider a 2^32-way Trie, there is a single root node and just N trienodes -- one for each entry (so it's really just a lookup table, and lookup is fast).  If you have a 2-way (bitwise) Trie, you still have N leaf nodes, but you have a ton of other intermediate nodes and all the data that comes with them.  And a lot more pointers and "hops" between nodes to get to your leaf.  It leads me to believe that you want a higher branching factor, but you need to balance against the fact that wide nodes add inefficiency of their own (i.e. in the case of using linked lists between entries, it would obviously be bad to have a branching factor of 2**32).
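To put rough numbers on that tradeoff, here is a quick, unscientific sketch (Python, invented purely for illustration): build a plain, uncompressed trie over random 256-bit keys at several branching factors and count the nodes.  A Patricia-compressed trie would collapse the long single-child tails, but the relative ordering should hold.

Code:
import os

def count_trie_nodes(keys, bits_per_level):
    """Insert 256-bit keys into a nested-dict trie, consuming
    bits_per_level bits at each node; return total nodes created."""
    root, nodes = {}, 1
    for key in keys:
        node, value = root, int.from_bytes(key, 'big')
        for level in range(256 // bits_per_level):
            shift = 256 - bits_per_level * (level + 1)
            branch = (value >> shift) & ((1 << bits_per_level) - 1)
            if branch not in node:
                node[branch] = {}
                nodes += 1
            node = node[branch]
    return nodes

keys = [os.urandom(32) for _ in range(1000)]
for bits in (1, 4, 8):   # branching factors 2, 16, 256
    print(f"branching factor {2**bits:>3}: {count_trie_nodes(keys, bits)} nodes")
# The bitwise trie creates roughly 8x as many nodes as the 256-way trie
# here, which is exactly the per-node overhead being weighed above.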

sr. member
Activity: 461
Merit: 251
I was thinking rather than use a Merkle tree at each trie node for verifying the presence of a child, we could use a bitwise trie instead, since it would have faster updates due to always being balanced and not having to sort.  This would also speed up lookups, since we wouldn't have to traverse through a linked list of pointers at each node.

But then we've just arrived in a roundabout sort of way at the whole trie being essentially a bitwise trie, since the nodes in the original trie with 256 bit keysize overlap exactly with subtries of the bitwise trie.

Was there a reason for not proposing a single bit keysize from the start?  I thought about it a while back, but disregarded it for some reason I can't recall.
full member
Activity: 225
Merit: 101
I may have missed you saying it, but in downloading the UTXO data, would it be better to download the entire transaction instead of just the OutPoint? That way, the client can verify the transaction actually matches the hash, and in combination with the header/PoW info and the Merkle branch, that fully authenticates the OutPoint.

Edit: never mind, I see it now. Under the assumption that the validity of the meta chain is enforced as part of the main chain's block validation rules, the meta chain's authentication is enough to verify the validity of the OutPoint.
sr. member
Activity: 461
Merit: 251
Am I neglecting anything?
Maybe just that for privacy reasons, instead of a request for txouts for a specific address, a lightweight client would probably want to submit a bloom filter to its peer that would yield enough false positives to obfuscate his ownership of the address.
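A minimal sketch of that idea (the class and parameters are invented purely for illustration, not a proposed wire format): the client deliberately undersizes the filter so that a meaningful fraction of random addresses also match, and the serving peer can't tell which of the matches the client actually owns.

Code:
import hashlib
import os

class BloomFilter:
    def __init__(self, num_bits, num_hashes):
        self.m, self.k = num_bits, num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, item):
        # k independent bit positions derived from salted SHA256
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, 'big') + item).digest()
            yield int.from_bytes(h[:8], 'big') % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# 100 wallet addresses in a filter sized for roughly a 10% false-positive
# rate, so ~1 in 10 random addresses "match" too and pad out the response.
wallet = [os.urandom(20) for _ in range(100)]    # stand-ins for hash160s
f = BloomFilter(num_bits=480, num_hashes=3)
for addr in wallet:
    f.add(addr)

hits = sum(os.urandom(20) in f for _ in range(10000))
print(all(a in f for a in wallet))                        # True: no false negatives
print(f"{hits / 10000:.1%} of random addresses match")    # ~10%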
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
Just a random update:  I was updating the top post and want a check on my math for the question:

"In a post-Reiner-Tree world, how much data must a lite-node download to know its balance and spend its coins, from scratch?"

Okay, so, I will assume a wallet has 100 addresses each holding 2 UTXOs and no information/history. 

Pre-Req: Need the headers for the main chain and meta-chain:  the meta-chain doesn't exist yet, but let's assume this is 2 years after implementation of the meta chain (assume starting now).  There will be about 24 MB of main chain headers, and 8 MB of meta-chain headers.  Total:  about 30 MB.  (NOTE: If this is eventually integrated into the main-chain coinbase headers (at the risk of invalid blocks if it's wrong), then there is no extra data to download.)

Also note, this data may simply be bundled with the app, so it doesn't really feel like a "download" to the user.  The app will take longer to download, but it will start up basically instantly.  For the remainder of this discussion, I assume that we've already downloaded the headers.

Balance verification info:  Before you even download your own balance and UTXO list, you need to collect and verify the tree information pointing to your addresses.  Once you have that, you can download the data itself and know you got the right thing.

Here I assume d'aniel's idea about Merkle trees at each level.  That means that for a single address, I need to download 15 hashes at each tree level to verify that tree-node:  480 bytes per node.  For efficiency, I'll always just download all 256 hashes at the top level (8 kB) and the 480 bytes at each subsequent level.

I assume that the network is "big", with trillions of UTXOs, thus having to go down 6 levels to get to my address.
I assume that all 100 addresses have a different starting byte, maximizing the data I have to download by not sharing any tree-nodes below the first level. 
So the amount of data I download is Top Level (8kB) + 5 more levels (480 bytes * 5 levels * 100 addresses).

Total verification data:  ~250 kB

UTXO data itself:  After you get the verification data, you need the actual UTXOs.  We assume 100 addr * 2 UTXOs = 200 UTXOs.  Each UTXO consists of its 36-byte OutPoint followed by its value (8 B) and TxOut script (~26 B).  So ~70 bytes/UTXO * 200 UTXOs = 14 kB.
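For anyone who wants to check the arithmetic, here are the same assumptions in a few lines of Python:

Code:
# Back-of-the-envelope check of the estimates above.
HASH   = 32                # bytes per hash
ADDRS  = 100               # wallet addresses
LEVELS = 6                 # tree depth for "trillions" of UTXOs
UTXOS  = 2 * ADDRS         # 2 UTXOs per address

top_level    = 256 * HASH                           # all 256 top-level hashes (8 kB)
lower_levels = 15 * HASH * (LEVELS - 1) * ADDRS     # 480 B/node, 5 levels, no sharing
utxo_bytes   = (36 + 8 + 26) * UTXOS                # OutPoint + value + script

print(f"verification data: {(top_level + lower_levels) / 1024:.0f} kB")  # ~242 kB
print(f"UTXO data:         {utxo_bytes / 1024:.1f} kB")                  # ~13.7 kB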

Total:  30 MB of header data, downloaded once.  After that, you can throw away all wallet information and re-download it on every application restart for ~250 kB per 100 addresses.

As I look at this, I realize that this is pretty darned conservative:  since the individual tree-nodes are compressed, the last 2-3 levels will require only 3-7 hashes each, not all 15.  So probably more like 150-200 kB for 200 UTXOs.

Am I neglecting anything?  This seems pretty darned reasonable!
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
I love your Inkscape graphics.  I downloaded it because of your mention :)

It's like VIM:  it's got a bit of a learning curve to be able to use it efficiently, but there are so many shortcuts and hotkeys that you can really fly once you have some experience (and yes, I do 100% of my code development in vim :))

Doesn't it make more sense to start downloading from the bottom of the tree instead of the top?  Say, partition the address space up, and request all UTxOs that lie in a given partition - along with the full bounding branches - and then compute the missing node hashes up to the root.  Inclusion of each partition in some known block is verified, and then we'd just have to catch up the delayed partitions separately using full tx data.  The deterministic property of the tree makes the partition syncing trivial, and I assume tx data will be available within some relatively large time window for reorgs and serving, etc.

My brain is fried right now too; I'll have a closer look at what you wrote after some sleep.  Maybe I'm oversimplifying it...

I think we're saying the same thing:  I showed partitioning at the second level, but it really would be any level low enough to meet some kind of criterion (though I'm not sure what that criterion is, if you don't have any of the data yet).  It was intended to be a "partitioning from the bottom" and then filling up to the root once you have it all.

I imagine there would be a P2P command that says "RequestHashes | HeaderHash | Prefix".  If you give it an empty prefix, that means start at root:  it will give you the root hash, followed by the 256 child hashes.  If you give it a prefix "\x01" it gives you the hash of the node starting at '\x01' and the hashes of its 256 children. This is important, because I think for this to work, you have to have a baseline for what the tree is going to look like for your particular target block.  I think it gets significantly more complicated if you are aiming for partitions that are from different blocks...

Then there'd be another command that says "RequestBranch | HeaderHash | Prefix | StartNode".  The header hash/height would be included only so that peers that are significantly detached from your state won't start feeding you their data -- e.g. because they don't recognize your hash, or because they are more than 100 blocks from the state you are requesting.  If the peer's state is within 100 blocks, they start feeding you that partition, ordered lexicographically.  The nodes will probably be transferred in chunks of 1000, and then you put in the next request using the 1000th node as the start node to get the next chunk.  Since we have branch independence and insert-order independence, the transfer should be stupid simple.
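To make those two commands concrete, here is one hypothetical way the requests could be laid out (the field layout is invented for illustration; nothing here is an actual wire format):

Code:
import struct
from dataclasses import dataclass

@dataclass
class RequestHashes:
    header_hash: bytes   # 32-byte header hash anchoring the tree state
    prefix: bytes        # b"" = root + its 256 children; b"\x01" = that subtree

    def serialize(self) -> bytes:
        return (b"REQHASHES" + self.header_hash +
                struct.pack("B", len(self.prefix)) + self.prefix)

@dataclass
class RequestBranch:
    header_hash: bytes   # ignored by peers >100 blocks from this state
    prefix: bytes        # which partition of the key space to stream
    start_node: bytes    # resume point: last key of the previous 1000-node chunk

    def serialize(self) -> bytes:
        return (b"REQBRANCH" + self.header_hash +
                struct.pack("B", len(self.prefix)) + self.prefix +
                struct.pack("B", len(self.start_node)) + self.start_node)

# First chunk of partition \x01; subsequent requests pass the last node
# received as start_node to pick up where the previous chunk left off.
req = RequestBranch(header_hash=b"\x00" * 32, prefix=b"\x01", start_node=b"")
print(req.serialize().hex())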

Also something to note:  I think that the raw TxOut script should be the key for this tree.  Sure, a lot of these will have a common prefix, but PATRICIA trees will compress those anyway.  What I'm concerned about is something like multiple variations of the same address, such as a TxOut using hash160 vs using full public key.  That can lead to stupid side-effects if you are only requesting by addr.  
sr. member
Activity: 461
Merit: 251
I love your Inkscape graphics.  I downloaded it because of your mention :)

Doesn't it make more sense to start downloading from the bottom of the tree instead of the top?  Say, partition the address space up, and request all UTxOs that lie in a given partition - along with the full bounding branches - and then compute the missing node hashes up to the root.  Inclusion of each partition in some known block is verified, and then we'd just have to catch up the delayed partitions separately using full tx data.  The deterministic property of the tree makes the partition syncing trivial, and I assume tx data will be available within some relatively large time window for reorgs and serving, etc.

My brain is fried right now too; I'll have a closer look at what you wrote after some sleep.  Maybe I'm oversimplifying it...
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
How about this: download whatever non-synchronized UTxO set you can from peers, then start downloading blocks backwards, adding any missing new txouts and removing any that were spent during the download.  Then once you're a few blocks before the time you started the download you could build the tree and make sure it hashes properly.

@d'aniel:  You might be right that it's possible to reconstruct the tree from an amalgamation of closely related states.  Though, I'm concerned that there are too many ways for that to go wrong.  Let's start a thought-experiment:  I have a fresh slate, with no Tx and no UTXO.  I then execute 65,536 requests for data, and download each one from a different peer (each request is for a different 2-byte prefix branch).  I will assume for a moment that all requests execute successfully, and we end up with something like the following:

[diagram: the trie as assembled from 65,536 subtree downloads served by peers at varying block heights (993-1002)]

A couple notes/assumptions about my drawing:  

  • (1) I have drawn this as a raw trie, but the discussion is the same (or very close to the same) when you transition to the Patricia/Brandais hybrid.  Let me know if you think that's a bad assumption.
  • (2) We have headers up to block 1000.  So we ask one peer that is at block 1000 for all 65,536 trie-node hashes.  We verify it against the meta-chain header.
  • (3) We make attempts to download all 65,536 subtrees from a bunch of peers, and end up mostly with those for block 1000, but a few for 995-999, and a couple have given us block 1001-1002 because that was what they had by the time we asked them to send us that branch.  We assume that peers tell us what block they are serving from.
  • (4) Some branches don't exist.  Even though on the main network the second layer will always be at 100% density, there may be various optimization-related reasons to do this operation at a lower level of the tree where density is not 100%.
  • (4a) I've used green to highlight four situations that I don't think are difficult, but that we need to be aware of.  Branch \x0202 is where the node hashes at block 1000 say it's an empty node, but it is reported as having data by the peer serving us from block 1001.  \x0203 is the same, but with a peer serving block 993 telling us there is data there.  \x0302 and \x0303 are the inverse:  block 1000 has hashes for those trie-nodes, but when requested from peers serving at other points in time, they report empty.
  • (5) Downloading the transactions-with-any-unspent-txouts from sources at different blocks also needs to be looked at.  We do eventually need to end up with a complete list of tx for the tree at block 1000 (or 1002?).  I'm expecting that any gaps can be filled with subsequent requests to other nodes.

So, as a starter algorithm, we acquire all this data and an almost-full UTXO tree.  We also acquire all of the blocks between 993 and 1002.  One branch at a time, we fast-forward or rewind that branch based on the tx in blocks 993-1002.  It is possible we will be missing needed block data (due to #5), but I assume we will be able to acquire that info from someone -- perhaps this warrants keeping tx in the node's database for some number of blocks after they are depleted, to make sure they can still be served to other nodes catching up (among other reasons).
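As a sketch of that per-branch step (all types hypothetical: `branch` is a dict of outpoint to UTXO for one prefix, and `blocks` maps height to the spent/created UTXOs touching that prefix):

Code:
def reconcile_branch(branch, served_at, target, blocks):
    """Bring one downloaded branch from the height its peer served it at
    to the common target height, by replaying or unwinding blocks.

    Note that rewinding needs the full data of the spent outputs, which is
    one reason to keep depleted tx around for a while after they're spent.
    """
    if served_at < target:                       # fast-forward
        for h in range(served_at + 1, target + 1):
            spent, created = blocks[h]
            for op in spent:
                branch.pop(op, None)
            branch.update(created)
    else:                                        # rewind (no-op if equal)
        for h in range(served_at, target, -1):
            spent, created = blocks[h]
            for op in created:
                branch.pop(op, None)
            branch.update(spent)                 # re-insert what the block spent
    return branch

# e.g. a branch served at block 1000, fast-forwarded to 1001:
branch = {("tx0", 0): "utxo-A"}
blocks = {1001: ({("tx0", 0): "utxo-A"}, {("tx1", 0): "utxo-B"})}
print(reconcile_branch(branch, 1000, 1001, blocks))   # {('tx1', 0): 'utxo-B'}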

On the surface, this looks workable and actually not terribly complicated.  And no snapshots required!  Just ask peers for their data, and make sure you know what block their UTXO tree is at.  But my brain is at saturation, and I'm going to have to look at this with a fresh set of eyes later this week, to make sure I'm not neglecting something stupid.
sr. member
Activity: 461
Merit: 251
How about this: download whatever non-synchronized UTxO set you can from peers, then start downloading blocks backwards, adding any missing new txouts and removing any that were spent during the download.  Then once you're a few blocks before the time you started the download you could build the tree and make sure it hashes properly.
vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
Of course, if snapshots end up not being the best solution, then I'm all for that as well.

Well, I am not seeing a way around using snapshots.  I was hoping someone more insightful than myself would point out something simpler, but it hasn't happened yet...

Also, as mentioned earlier, I think snapshots are wildly expensive to store.  I think if a node wants block X and the nearest snapshot is for block X+/-100, then he can get that snapshot and the up-to-100 blocks in between, and rewind or fast-forward the UTXO tree on his own.  The rewinding and fast-forwarding should be extremely fast once you have the block data.

Although this does open the question of how nodes intend to use this data.  If it turns out they will want to understand how the blockchain looked at multiple points in time, then perhaps it's worth the effort to store all these snapshots.  If it never happens, then the fast-forward/rewind would be better.  My thoughts on this are:

(1) The gap between snapshots should be considered relative to the size of a snapshot.  My guess is that 100 blocks of block data is smaller than a snapshot, and thus you never need snapshots spaced more closely than that.
(2) Snapshots at various points in time actually won't be that useful, other than helping other nodes download.  These kinda-full-nodes only care about the latest state of the UTXO tree, nothing else.  If you think there's other reasons, please point them out.



The operators of the other nodes may care about holding on to a certain amount of history just as greater assurance that they're really dealing with the real block chain.  One could argue that one hour or one day worth of history is enough, but if I'm about to start mining and want to do my part to help the network, I might feel more comfortable if I made it a month or a year, especially if I had the bandwidth to spare.  The more information I hold, the more history I'm "seeding" just in case it's needed in the event of a reorg.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
Of course, if snapshots end up not being the best solution, then I'm all for that as well.

Well, I am not seeing a way around using snapshots.  I was hoping someone more insightful than myself would point out something simpler, but it hasn't happened yet...

Also, as mentioned earlier, I think snapshots are wildly expensive to store.  I think if a node wants block X and the nearest snapshot is for block X+/-100, then he can get that snapshot and the up-to-100 blocks in between, and rewind or fast-forward the UTXO tree on his own.  The rewinding and fast-forwarding should be extremely fast once you have the block data.

Although this does open the question of how nodes intend to use this data.  If it turns out they will want to understand how the blockchain looked at multiple points in time, then perhaps it's worth the effort to store all these snapshots.  If it never happens, then the fast-forward/rewind would be better.  My thoughts on this are:

(1) The gap between snapshots should be considered relative to the size of a snapshot.  My guess is that 100 blocks of block data is smaller than a snapshot, and thus you never need snapshots spaced more closely than that.
(2) Snapshots at various points in time actually won't be that useful, other than helping other nodes download.  These kinda-full-nodes only care about the latest state of the UTXO tree, nothing else.  If you think there's other reasons, please point them out.

vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
As I said, the simplest is probably to have nodes just spend the space on a snapshot at every retarget, and let nodes synchronize with that (or perhaps every 500 blocks or something, as long as all pick the same frequency so that you can download from lots of sources simultaneously).  After that, they can download the few remaining blocks to update their own tree, appropriately.

I had come up with a scheme for deciding how long to keep each snapshot that I thought would balance space and usefulness well.

If the block height (in binary) ends in 0, keep it for 4 blocks.
If 00, keep for 8 blocks.
If 000, keep for 16 blocks.
If 0000, keep for 32 blocks.
If 00000, keep for 64 blocks... etc. all the way to the genesis block.

That's a generalization of what I proposed:  if(blockheight mod 2016 == 0) {storeSnapshotFor2016Blocks}.  Clearly, the modulus needs to be calibrated...  The problem is these snapshots are very expensive, so we would prefer not to do snapshots at all.  But one may be necessary.  Hopefully not more than that.  Although it would be great if I just overlooked something and we could do this without snapshots at all.

Yes, other than that what you're proposing stores snapshots that are spaced evenly across the entirety of the blockchain, while I'm proposing one that is biased towards having more options that are all recent.  My reason for having done so is that it seems plausible that someone using a client that tracks only the UTXO set might be given a choice as to how far back in the blockchain history they want to request from peers (e.g. 1 hour? 8 hours? 1 day? 1 week? 1 month? 3 months? 6 months? 1 year? 2 years? 4 years? everything?), and it would make sense to put the resources into being able to accommodate selections that look like that, versus ones like 2 weeks? 4 weeks? 6 weeks? ... 1028 weeks? 1030 weeks? 1032 weeks? etc.

Of course, if snapshots end up not being the best solution, then I'm all for that as well.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
On that note, isn't it actually 15 hashes per full merkle tree of 256 nodes?
Yeah, whoops.

Regarding the issue of synching one's Reiner tree :) -- is it really a problem this proposal needs to solve?  Couldn't the client just wait to build/update it 'til after he's caught up with the network in the usual way?

Well, I'm hoping that it will be possible to not need to "catch up with the network" in the current sense.  Certain types of nodes will only care about having the final UTXO set, not replaying 100 GB of blockchain history just to get their 2 GB of UTXO data.  I'd like it if such nodes had a way of sharing these UTXO trees without using too many resources, and without too much complication around the fact that the tree is changing as you are downloading.

One core benefit of the trie structure is that nodes can simply send a raw list of UTXOs, since insertion order doesn't matter (and thus deleted UTXOs don't need to be transferred).  Sipa tells me there's currently about 3 million UTXOs, so at 36 bytes each, that's about 100 MB to transfer.  There is, of course, the raw transactions with any remaining UTXO that need to be transferred, too -- currently 1.3 million out of about 10 million total tx in the blockchain.  So that's probably another few hundred MB.  But still only a fraction of the 4.5 GB blockchain.


As I said, the simplest is probably to have nodes just spend the space on a snapshot at every retarget, and let nodes synchronize with that (or perhaps every 500 blocks or something, as long as all pick the same frequency so that you can download from lots of sources simultaneously).  After that, they can download the few remaining blocks to update their own tree, appropriately.

I had come up with a scheme for deciding how long to keep each snapshot that I thought would balance space and usefulness well.

If the block height (in binary) ends in 0, keep it for 4 blocks.
If 00, keep for 8 blocks.
If 000, keep for 16 blocks.
If 0000, keep for 32 blocks.
If 00000, keep for 64 blocks... etc. all the way to the genesis block.

That's a generalization of what I proposed:  if(blockheight mod 2016 == 0) {storeSnapshotFor2016Blocks}.  Clearly, the modulus needs to be calibrated...  The problem is these snapshots are very expensive, so we would prefer not to do snapshots at all.  But one may be necessary.  Hopefully not more than that.  Although it would be great if I just overlooked something and we could do this without snapshots at all.
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
Is it possible to read somewhere exactly what is stored in a block in pseudocode?

A block, as it is stored on disk, is very straightforward and easily parsed:

Code:
[MagicBytes(4) || BlockSize(4) || RawHeader(80) || NumTx(var_int) || RawTx0 || RawTx1 || ... || RawTxN] 

The blk*.dat files are just a list of binary sequences like this.
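A minimal Python sketch that walks a blk*.dat file using exactly that layout (the var_int tx count and the raw transactions sit inside the size-prefixed payload):

Code:
import struct

MAGIC = bytes.fromhex("f9beb4d9")   # main-network magic bytes

def read_blocks(path):
    """Yield (header, rest) for each block record in a blk*.dat file."""
    with open(path, "rb") as f:
        while True:
            magic = f.read(4)
            if len(magic) < 4:
                return                      # clean end of file
            assert magic == MAGIC, "misaligned read or wrong network"
            (size,) = struct.unpack("<I", f.read(4))   # BlockSize, little-endian
            raw = f.read(size)
            # RawHeader(80), then NumTx(var_int) + RawTx0..RawTxN
            yield raw[:80], raw[80:]

# e.g. print the version field of the first few blocks in a local file:
for i, (header, _) in enumerate(read_blocks("blk0001.dat")):
    if i >= 3:
        break
    print(header[:4].hex())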
vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
As I said, the simplest is probably to have nodes just spend the space on a snapshot at every retarget, and let nodes synchronize with that (or perhaps every 500 blocks or something, as long as all pick the same frequency so that you can download from lots of sources simultaneously).  After that, they can download the few remaining blocks to update their own tree, appropriately.

I had come up with a scheme for deciding how long to keep each snapshot that I thought would balance space and usefulness well.

If the block height (in binary) ends in 0, keep it for 4 blocks.
If 00, keep for 8 blocks.
If 000, keep for 16 blocks.
If 0000, keep for 32 blocks.
If 00000, keep for 64 blocks... etc. all the way to the genesis block.
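In code, that rule works out to keeping the snapshot taken at height h for 2^(z+1) blocks, where z is the number of trailing zero bits of h.  A sketch (function names invented here):

Code:
def retention_blocks(height: int) -> int:
    """How long to keep the snapshot taken at `height` (0 = no snapshot)."""
    if height <= 0 or height % 2:
        return 0                                  # heights ending in 1: skip
    z = (height & -height).bit_length() - 1       # count trailing zero bits
    return 2 ** (z + 1)                           # z=1 -> 4, z=2 -> 8, ...

def snapshots_alive(tip: int):
    """Snapshot heights still retained when the chain tip is `tip`."""
    return [h for h in range(2, tip + 1, 2) if h + retention_blocks(h) >= tip]

print(retention_blocks(0b10110))    # ends in one 0  -> kept 4 blocks
print(retention_blocks(0b10100))    # ends in 00     -> kept 8 blocks
print(snapshots_alive(1000))        # dense near the tip, sparser with age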
sr. member
Activity: 461
Merit: 251
On that note, isn't it actually 15 hashes per full merkle tree of 256 nodes?
Yeah, whoops.

Regarding the issue of synching one's Reiner tree :) -- is it really a problem this proposal needs to solve?  Couldn't the client just wait to build/update it 'til after he's caught up with the network in the usual way?
legendary
Activity: 1428
Merit: 1093
Core Armory Developer
I just realized something that will need to be addressed with any such scheme that is used:  how will a soon-to-be-full-validation-but-pruned node bootstrap off the network?  Sure, we can say that at block X, the entire set of UTXOs is well-defined and compact (in whatever tree structure we're talking about).  But any new node that jumps onto the network will have to download that entire UTXO set, and surely peers will have updated their internal UTXO set/tree at least once while the node is downloading.

The prevailing idea is that you download the block chain from "archive nodes", which are nodes that retain the full blockchain.

I'm not talking about downloading the entire blockchain.  I'm talking about downloading just the pruned UTXO-tree.  Every pruned-full node knows what the UTXO-tree looks like at a given time, but before it can finish telling you what it looks like, it's updating itself with new tx and blocks and changing it.  I'm not seeing a clean way to transfer pruned-blockchain data from a node that is constantly updating its own meta tree.  Ultimately, it's because knowing what the network looked like at block X (from the pruned blockchain perspective) does not mean it's easy/possible to tell me what it looked like at block X-30.  And it's probably a lot of data and complexity to accommodate taking a "snapshot" just for helping other full nodes catch up.

You could say:  well they should just download the entire transaction history and build the UTXO set themselves.  I'm sure some users will do that, either out of paranoia, or for historic reasons, etc.  But it most definitely shouldn't be a requirement to download 100 GB of history just to get 2 GB worth of UTXOs.   The data needed to become a pruned-but-still-full-validation node is a fraction of the entire-and-ever-growing history, and it would be a pretty significant waste of resources to not be able to download the raw UTXO list to get up and running.

As I said, the simplest is probably to have nodes just spend the space on a snapshot at every retarget, and let nodes synchronize with that (or perhaps every 500 blocks or something, as long as all pick the same frequency so that you can download from lots of sources simultaneously).  After that, they can download the few remaining blocks to update their own tree, appropriately.

This could affect pruned-lite nodes, too.  If there are updated meta-chain hashes every block, then even transferring small "address branches" to lite nodes could be "interrupted" by new tx/blocks coming in that change the seeder's UTXO-tree before the transfer is finished.

legendary
Activity: 1596
Merit: 1100
I just realized something that will need to be addressed with any such scheme that is used:  how will a soon-to-be-full-validation-but-pruned node bootstrap off the network?  Sure, we can say that at block X, the entire set of UTXOs is well-defined and compact (in whatever tree structure we're talking about).  But any new node that jumps onto the network will have to download that entire UTXO set, and surely peers will have updated their internal UTXO set/tree at least once while the node is downloading.

The prevailing idea is that you download the block chain from "archive nodes", which are nodes that retain the full blockchain.

legendary
Activity: 1428
Merit: 1093
Core Armory Developer
I just realized something that will need to be addressed with any such scheme that is used:  how will a soon-to-be-full-validation-but-pruned node bootstrap off the network?  Sure, we can say that at block X, the entire set of UTXOs is well-defined and compact (in whatever tree structure we're talking about).  But any new node that jumps onto the network will have to download that entire UTXO set, and surely peers will have updated their internal UTXO set/tree at least once while the node is downloading.  This means that even if a node can start supplying the UTXO set at block X, within an hour that node will be on X+3, or something like that.  Branches and leaves of the tree will have changed, and that node will not recognize the branches and leaves of the previous state (well, most will be the same, but you get the point).

This is resolved in the main network by having persistent, authenticated information that can be downloaded in chunks (i.e. blocks in the blockchain), which are still valid, no matter how long it's been since that block was created.  Each block can be downloaded quickly, and checked directly against the header chain.  However, in this case, the entire tree is to be authenticated against a block header, and you pretty much have to download the entire tree before you can confirm any part of it.  Seems like this could be a problem...

One idea, which doesn't seem ideal, is that the node simply stores a "snapshot" at every retarget event.  Every new node will have a two-week window to download the UTXO set from the latest retarget, and then download the block history that has occurred since then.  If the new node also has a wallet, it can use the absolute latest meta-header to get its own balance and UTXO list and let the user manage their wallet using the still-secure address branches; it just won't be able to fully sync and start full validation until it's done downloading the latest retarget tree.

That's not an ideal solution, but it's food for thought...
vip
Activity: 1386
Merit: 1140
The Casascius 1oz 10BTC Silver Round (w/ Gold B)
How about the "Reiner Tree"?  :P  It's funny you asked that, because the other day my dad was asking me if I have any part of Bitcoin named after me yet...

I get stuff like this too, for being the ultimate bitcoin freak in my circles.

I talk about Bitcoins all the time, am everybody's Bitcoin broker, give away Casascius Coins and paper bitcoins for every occasion, and I now have the Utah "BITCOIN" vanity license plate.  People think I'm Satoshi and just not telling, and the gifts I receive for birthdays and Christmas are often bitcoin-themed: one included a book from the bookstore about "the history of money" spoofed with my coins being part of history; content from my website was mocked up in the style of the book and facetiously glued into the middle of it.