
Topic: Blockchain Compression - page 2. (Read 8657 times)

legendary
Activity: 1120
Merit: 1152
July 04, 2013, 06:31:34 AM
#48
One catch is that UTXO commitments are unfortunately very dangerous in themselves, in that they allow you to safely mine without actually validating the chain at all.

You still need the UTXO tree though.  Updating the tree requires that you process the entire block.

That's not true, unfortunately. You'll need to do some extra UTXO queries, strategically, to get the right information on the branches adjacent to the transaction being changed, but with that information you have enough to calculate the next UTXO tree state without holding any of the data. This is inherent to having a fast UTXO commitment system in the first place: part of making those commitments cheap to provide is keeping the data changed for every new block to a minimum, which is exactly opposite to the need to have miners prove they actually have the UTXO set at all.
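To illustrate: given just the sibling hashes along a changed leaf's path (the "extra UTXO queries" above), anyone can compute the next root without holding the tree. A minimal sketch; the double-SHA256 hash and binary tree layout are illustrative assumptions, not the actual commitment scheme under discussion:
Code:
import hashlib

def H(data: bytes) -> bytes:
    # Bitcoin-style double SHA256 (an assumption for this sketch).
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def update_root(new_leaf: bytes, index: int, siblings: list[bytes]) -> bytes:
    # Recompute the root after changing one leaf, given only the sibling
    # hashes along its path; no other part of the tree is needed.
    node = H(new_leaf)
    for sibling in siblings:
        node = H(sibling + node) if index & 1 else H(node + sibling)
        index >>= 1
    return node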

Ideally, there would be some way to merge multiple sub-blocks into a higher level block.

For example, someone publishes a merkle root containing only transactions which start with some prefix.  A combiner could take 16 of those and produce a higher level merkle root.  The prefix for the combined root would be 4 bits shorter.

Effectively, you would have 16 sub-chains, where each sub-chain deals with transactions that start with a particular prefix.  Each of those chains could have sub-chains too.

The only way to make it work with Bitcoin would be to have a defined number of transactions per block.  You can only combine two merkle trees into one merkle tree if both children have the same number of transactions.

There would also need to be some kind of trust system for the combiners.  If you have to verify, then it defeats the purpose.

A "trusted-identity" would certify the combination (and get fees).  If proof is given that the claim is false, then that id is eliminated.

You're missing a key point: transactions can touch multiple parts of the UTXO set, so once the UTXO set is split into subsets, participants must validate (at minimum) pairs of those subsets.
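A tiny sketch of why, assuming the 4-bit-prefix partitioning from above and a hypothetical transaction structure: the inputs of one transaction routinely land in different prefix subsets, so whoever validates it needs data from each of them.
Code:
def subsets_touched(tx, prefix_bits: int = 4):
    # Hypothetical tx structure. A transaction's inputs are scattered
    # across the keyspace, so a single tx routinely touches several
    # prefix subsets; its validator needs data from all of them.
    return {outpoint.txid[0] >> (8 - prefix_bits) for outpoint in tx.inputs}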
legendary
Activity: 1232
Merit: 1094
July 04, 2013, 05:44:11 AM
#47
One catch is that UTXO commitments are unfortunately very dangerous in themselves, in that they allow you to safely mine without actually validating the chain at all.

You still need the UTXO tree though.  Updating the tree requires that you process the entire block.

Quote
That said, I'm pretty sure it's possible for those possession proofs to be done in a way where participants maintain only part of the UTXO set, and thus carry only part of the P2P relay bandwidth, allowing low-bandwidth true mining + larger blocksizes without decreasing the 51% attack threshold, helping solve the censorship problem, and thus keeping Bitcoin decentralized. The devil is in the details, but at worst it can be done as a merge-mined alt-coin.

Ideally, there would be some way to merge multiple sub-blocks into a higher level block.

For example, someone publishes a merkle root containing only transactions which start with some prefix.  A combiner could take 16 of those and produce a higher level merkle root.  The prefix for the combined root would be 4 bits shorter.

Effectively, you would have 16 sub-chains, where each sub-chain deals with transactions that start with a particular prefix.  Each of those chains could have sub-chains too.

The only way to make it work with Bitcoin would be to have a defined number of transactions per block.  You can only combine two merkle trees into one merkle tree if both children have the same number of transactions.
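A sketch of the combiner step under that equal-size assumption (the hash function is hypothetical). It only yields a well-formed balanced tree because every child commits to the same number of transactions, hence every sub-tree has the same depth:
Code:
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def combine_roots(child_roots: list[bytes]) -> bytes:
    # Merge 16 sub-chain roots (one per 4-bit prefix) into a single
    # higher-level root, shortening the shared prefix by 4 bits.
    assert len(child_roots) == 16
    level = child_roots
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]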

There would also need to be some kind of trust system for the combiners.  If you have to verify, then it defeats the purpose.

A "trusted-identity" would certify the combination (and get fees).  If proof is given that the claim is false, then that id is eliminated.
legendary
Activity: 1232
Merit: 1094
July 04, 2013, 04:49:02 AM
#46
The second problem is that you don't need to make these changes to have immediate initial startup. SPV wallets can already do the initial sync in a few seconds. If you want to run a full node too, just run both MultiBit and bitcoind in parallel until the latter is ready. If you want it to be seamless, just bundle bitcoind with MultiBit and make it connect to your new local node once it's finished syncing. SPV + local full node = full node security, via a very simple implementation, with the advantage that you can seamlessly switch back and forth afterwards.

The reference client should operate that way anyway.

- download headers
- verify by scanning backwards [ * ]

The client could give the length of the history that has been confirmed.

It could say that it has verified all blocks back to the inputs of all your transactions, and how many blocks in total have been scanned.

Each transaction could go through different states:
0) not synced (RED)
1) scanned back to earliest input and 1k additional blocks (ORANGE)
2) scanned back to a checkpoint (YELLOW)
3) synced back to genesis block (GREEN)

[ * ] The client would have to keep a hashmap of all transactions that have occurred in later blocks, so it can detect a double spend.  Effectively, it would be an "unfunded input set" rather than an "unspent output set".

Even level 1 security would be pretty rock solid.  Violating it would require a 1,000-block re-org.
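A sketch of that backward scan and the "unfunded input set"; the block and transaction objects are hypothetical stand-ins:
Code:
class DoubleSpendError(Exception):
    pass

def scan_backwards(blocks_newest_first, depth):
    # Walk the chain from the tip backwards, maintaining the "unfunded
    # input set": outpoints spent by already-scanned (later) blocks
    # whose funding transaction hasn't been seen yet.
    unfunded = set()
    for block in blocks_newest_first[:depth]:
        for tx in block.transactions:
            # Outputs of this tx fund any later spends we recorded.
            for n in range(len(tx.outputs)):
                unfunded.discard((tx.txid, n))
            # Record this tx's spends; seeing the same outpoint spent
            # twice means a double spend.
            for outpoint in tx.inputs:
                if outpoint in unfunded:
                    raise DoubleSpendError(outpoint)
                unfunded.add(outpoint)
    return unfunded  # anything left needs deeper scanning to verify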

Quote
I'm not actually convinced it's a good idea though. The user experience of having a full node randomly start syncing in the background because something decided you had enough cpu/disk space is very poor.

You could add CPU limits, so that it never uses more than 10% of any CPU (do 5 ms of work, then sleep for 50 ms).  Ideally, with a distributed verification engine, 10% of everyone's CPUs would be more than enough to verify the chain.
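A minimal sketch of that throttling loop; the work items are hypothetical:
Code:
import time

def throttled_verify(work_items, duty_cycle=0.10):
    # Keep verification at roughly `duty_cycle` of one CPU by sleeping
    # in proportion to how long each burst of work took (e.g. 5 ms of
    # work followed by 45 ms of sleep gives a 10% duty cycle).
    for item in work_items:
        start = time.monotonic()
        item.verify()                      # hypothetical work unit
        busy = time.monotonic() - start
        time.sleep(busy * (1 - duty_cycle) / duty_cycle)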
sr. member
Activity: 461
Merit: 251
July 04, 2013, 01:20:07 AM
#45
That sounds a bit obnoxious, sure, but is it really that big a problem?
Yes: it allows someone to claim they have funds that they have since spent.
So if you're connecting to a network that the person spending coins to you controls, or whose controller they're colluding with, you can get screwed.  I agree, that is a sore spot.  But wouldn't it look suspicious if, on some new network, you weren't able to find any recognizable peers to connect to?

Quote
Also look at it in the broader sense: if you do have UTXO proofs, an SPV node can pay for a full-node connection and SPV-related services, either with real funds or an anti-DoS proof-of-work, and be sure that the node is being honest and that it's getting accurate data, with nothing more than a source of block header info. (Relaying of block headers between SPV nodes is something I'm also planning on implementing.)
To be fair, all the data is provably accurate; you just don't know if you're being told the whole story.

There are definitely some benefits to this, but there are also costs, and I'm just wondering if there are perhaps other cheaper ways to get practically the same benefits.  Thankfully maaku will be providing us with a sense of the costs pretty soon.
legendary
Activity: 1120
Merit: 1152
July 04, 2013, 12:16:22 AM
#44
I don't follow.  How could an SPV node function with only SPV peers?  How would it get the tx data that it needs?

Payment protocols.

Anyway, the whole idea of just assuming SPV nodes are going to be able to get the tx data they need from peers for free is really flawed. Running bloom filters against the blockchain data costs resources, and at some point people aren't going to be willing to do that for free.

I've got writing "tit-for-tat" peering on my todo list actually: i.e. silently fail to relay transactions to peers if they don't relay enough new and valid transactions to you, since they're being leeches on the network. Pure SPV clients, like Android clients, can pay for the resources they consume via micropayment channels, or at least proof-of-work. It'd also prevent attackers from DoSing the network by making large numbers of connections to large numbers of nodes to fill the incoming peer slots of nodes on the network; you need very little in the way of resources to pull off that attack right now.
Ways to reduce the load on full nodes and pay for their efforts seem like good ideas.  But do you expect some SPV nodes to be unable to connect to any full nodes at all at some point in the future?  If there's a market for the service, then surely they will always be able to find some willing to provide it, especially if their security depends on it.

Among other things, there is the problem that without a way to prove that a txout doesn't exist, the network operator can prevent an SPV node from ever knowing that it has been paid, and there is nothing the SPV node can do about it.
That sounds a bit obnoxious, sure, but is it really that big a problem?

Yes: it allows someone to claim they have funds that they have since spent.

Also look at it in the broader sense: if you do have UTXO proofs, an SPV node can pay for a full-node connection and SPV-related services, either with real funds or an anti-DoS proof-of-work, and be sure that the node is being honest and that it's getting accurate data, with nothing more than a source of block header info. (Relaying of block headers between SPV nodes is something I'm also planning on implementing.)
sr. member
Activity: 461
Merit: 251
July 03, 2013, 11:49:10 PM
#43
I don't follow.  How could an SPV node function with only SPV peers?  How would it get the tx data that it needs?

Payment protocols.

Anyway, the whole idea of just assuming SPV nodes are going to be able to get the tx data they need from peers for free is really flawed. Running bloom filters against the blockchain data costs resources, and at some point people aren't going to be willing to do that for free.

I've got writing "tit-for-tat" peering on my todo list actually: i.e. silently fail to relay transactions to peers if they don't relay enough new and valid transactions to you, since they're being leeches on the network. Pure SPV clients, like Android clients, can pay for the resources they consume via micropayment channels, or at least proof-of-work. It'd also prevent attackers from DoSing the network by making large numbers of connections to large numbers of nodes to fill the incoming peer slots of nodes on the network; you need very little in the way of resources to pull off that attack right now.
Ways to reduce the load on full nodes and pay for their efforts seem like good ideas.  But do you expect some SPV nodes to be unable to connect to any full nodes at all at some point in the future?  If there's a market for the service, then surely they will always be able to find some willing to provide it, especially if their security depends on it.

Among other things, there is the problem that without a way to prove that a txout doesn't exist, the network operator can prevent an SPV node from ever knowing that it has been paid, and there is nothing the SPV node can do about it.
That sounds a bit obnoxious, sure, but is it really that big a problem?
legendary
Activity: 1120
Merit: 1152
July 03, 2013, 11:30:10 PM
#42
Okay, so the network operator could mislead a node onto his invalid chain by handing it fake fraud challenges.  Can't he do this regardless, by simply refusing to relay valid block headers?

Among other things, there is the problem that without a way to prove that a txout doesn't exist, the network operator can prevent an SPV node from ever knowing that it has been paid, and there is nothing the SPV node can do about it.
legendary
Activity: 1120
Merit: 1152
July 03, 2013, 11:26:48 PM
#41
I don't follow.  How could an SPV node function with only SPV peers?  How would it get the tx data that it needs?

Payment protocols.

Anyway, the whole idea of just assuming SPV nodes are going to be able to get the tx data they need from peers for free is really flawed. Running bloom filters against the blockchain data costs resources, and at some point people aren't going to be willing to do that for free.

I've got writing "tit-for-tat" peering on my todo list actually: i.e. silently fail to relay transactions to peers if they don't relay enough new and valid transactions to you, since they're being leeches on the network. Pure SPV clients, like Android clients, can pay for the resources they consume via micropayment channels, or at least proof-of-work. It'd also prevent attackers from DoSing the network by making large numbers of connections to large numbers of nodes to fill the incoming peer slots of nodes on the network; you need very little in the way of resources to pull off that attack right now.
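A sketch of the tit-for-tat bookkeeping described above; the counters, grace period, ratio threshold, and payment credit are illustrative assumptions, not a worked-out design:
Code:
class PeerScore:
    # Hypothetical per-peer bookkeeping for tit-for-tat relaying.
    def __init__(self):
        self.useful_txs_received = 0   # new, valid txs this peer gave us
        self.txs_relayed_to = 0        # txs we relayed to this peer
        self.credit = 0                # earned via micropayments / proof-of-work

def should_relay(score: PeerScore, min_ratio: float = 0.1) -> bool:
    # Silently stop relaying to leeches: peers that consume far more
    # than they contribute, unless they've paid for the service.
    if score.txs_relayed_to < 100:     # grace period for new peers
        return True
    contributed = score.useful_txs_received + score.credit
    return contributed >= min_ratio * score.txs_relayed_to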

I addressed this as well in that thread: https://bitcointalksearch.org/topic/m.1407971.  The Merkle tree of transactions is used to authenticate the maximum fee reward calculation in each block.  It didn't seem to require utxo set commitments.

That's a good point: a changed merkle tree can achieve that too. However, maaku is quite correct that network operator attacks are trivial without UTXO set commitments, whereas with them the worst a network operator can do is prevent you from getting a fraud proof message; creating fake confirmations is extremely expensive.
sr. member
Activity: 461
Merit: 251
July 03, 2013, 11:19:29 PM
#40
Actually, none of the fraud proofs/challenges I mentioned in that thread relied on utxo set commitments.  Transactions with nonexistent txins would benefit from them, since then there could be concise proofs of nonexistent txins, but the way I described it, a peer would issue a challenge to find a valid Merkle branch to an allegedly nonexistent txin from some other peer.  If at least one peer is honest and the network operator is honest (which Bitcoin generally assumes anyway), then they will find the branch if the challenger turned out to be lying, and can ignore him going forward.

Fixed that for you. Without nonexistence proofs, network operator attacks are trivial.
Okay, so the network operator could mislead a node onto his invalid chain by handing it fake fraud challenges.  Can't he do this regardless, by simply refusing to relay valid block headers?
sr. member
Activity: 461
Merit: 251
July 03, 2013, 11:12:09 PM
#39
Yeah, you can go very far without UTXO set commitments, but without them your scenario only works if you assume your peers are full-nodes. If you are an SPV node with SPV peers - a completely valid scenario that we will need in the future and one that is useful with payment protocols - you're stuck and can't do anything with the fraud proofs.
I don't follow.  How could an SPV node function with only SPV peers?  How would it get the tx data that it needs?

Quote
The other issue is that only UTXO set commitments can prove inflation fraud without a copy of the blockchain, i.e. miners deciding to change the subsidy and fee rules and create coins out of thin air.
I addressed this as well in that thread: https://bitcointalksearch.org/topic/m.1407971.  The Merkle tree of transactions is used to authenticate the maximum fee reward calculation in each block.  It didn't seem to require utxo set commitments.
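One common construction for authenticating a fee total through the transaction tree is a Merkle sum tree. A minimal sketch of that general idea, not necessarily the exact scheme from the linked post:
Code:
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def fee_sum_root(leaves):
    # leaves: [(tx_hash, fee_in_satoshi), ...]; power-of-two count
    # assumed for brevity. Each inner node commits to its children's
    # hashes AND their fee sum, so the block's total fees, and hence
    # the maximum allowed coinbase reward, can be checked against the
    # root with log-sized branches.
    level = list(leaves)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            (lh, lf), (rh, rf) = level[i], level[i + 1]
            s = lf + rf
            nxt.append((H(lh + rh + s.to_bytes(8, "big")), s))
        level = nxt
    return level[0]  # (root_hash, total_fees)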

legendary
Activity: 905
Merit: 1012
July 03, 2013, 11:08:09 PM
#38
Actually, none of the fraud proofs/challenges I mentioned in that thread relied on utxo set commitments.  Transactions with nonexistent txins would benefit from them, since then there could be concise proofs of nonexistent txins, but the way I described it, a peer would issue a challenge to find a valid Merkle branch to an allegedly nonexistent txin from some other peer.  If at least one peer is honest and the network operator is honest (which Bitcoin generally assumes anyway), then they will find the branch if the challenger turned out to be lying, and can ignore him going forward.

Fixed that for you. Without nonexistence proofs, network operator attacks are trivial.
legendary
Activity: 1120
Merit: 1152
July 03, 2013, 10:03:10 PM
#37
For fraud proofs, I think d'aniel raised that before, I'm afraid I don't remember which proofs require the commitments. Double spends don't. At any rate, whilst those proofs would indeed be useful they weren't the rationale given for "ultimate blockchain compression" originally. A full design doc for different kinds of fraud proofs would be useful.
Actually, none of the fraud proofs/challenges I mentioned in that thread relied on utxo set commitments.  Transactions with nonexistent txins would benefit from them, since then there could be concise proofs of nonexistent txins, but the way I described it, a peer would issue a challenge to find a valid Merkle branch to an allegedly nonexistent txin from some other peer.  If at least one peer is honest (which Bitcoin generally assumes anyway), then they will find the branch if the peer turned out to be lying, and can ignore that peer going forward.

Yeah, you can go very far without UTXO set commitments, but without them your scenario only works if you assume your peers are full-nodes. If you are an SPV node with SPV peers - a completely valid scenario that we will need in the future and one that is useful with payment protocols - you're stuck and can't do anything with the fraud proofs.

The other issue is that only UTXO set commitments can prove inflation fraud without a copy of the blockchain, i.e. miners deciding to change the subsidy and fee rules and create coins out of thin air.
sr. member
Activity: 461
Merit: 251
July 03, 2013, 09:24:42 PM
#36
For fraud proofs, I think d'aniel raised that before, I'm afraid I don't remember which proofs require the commitments. Double spends don't. At any rate, whilst those proofs would indeed be useful they weren't the rationale given for "ultimate blockchain compression" originally. A full design doc for different kinds of fraud proofs would be useful.
Actually, none of the fraud proofs/challenges I mentioned in that thread relied on utxo set commitments.  Transactions with nonexistent txins would benefit from them, since then there could be concise proofs of nonexistent txins, but the way I described it, a peer would issue a challenge to find a valid Merkle branch to an allegedly nonexistent txin from some other peer.  If at least one peer is honest (which Bitcoin generally assumes anyway), then they will find the branch if the challenger turned out to be lying, and can ignore him going forward.
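A sketch of that challenge game, with hypothetical peer methods; one honest peer is enough to expose a lying challenger:
Code:
def resolve_challenge(disputed_txin, peers, known_headers):
    # A challenger claims the tx funding `disputed_txin` doesn't exist.
    # Any honest peer can refute this by producing a Merkle branch
    # linking that tx to a block header we already trust.
    for peer in peers:
        branch = peer.find_merkle_branch(disputed_txin)   # hypothetical RPC
        if branch is not None and branch.verifies_against(known_headers):
            return "challenger lied; ignore him going forward"
    return "challenge stands; treat the spending tx as invalid"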

Partially validating nodes are what I ended up being interested in utxo set commitments for, and they seem really useful if/when we get efficient verifiable computing.  Though there definitely isn't any near-term need for this.
legendary
Activity: 1120
Merit: 1152
July 03, 2013, 08:51:52 PM
#35
The final issue I have is that you've raised a fair bit of money to work on it, but I didn't see where you got consensus that it should be merged into existing codebases. Perhaps I missed it, but I don't think Gavin said he  agreed with the necessary protocol changes. He may well have accepted your design in some thread I didn't see though.

My conversations with Gavin in the past were that UTXO commitments and UTXO fraud proofs are a necessary precondition to raising the blocksize, because without them you have no way of knowing if miners are committing inflation fraud and no way of proving that to others. Other core devs like Gregory Maxwell share that opinion. If anyone succeeds in creating a robust implementation, doing a PR campaign to educate people about the risks of not making it part of a blocksize increase is something I would be more than happy to be involved with, and I'm sure you realize it'll be an even easier message to get across than saying limiting the blocksize is a good thing.

Who doesn't want auditing, fraud prevention and fresh home-made all-American apple pie?

One catch is that UTXO commitments are unfortunately very dangerous in themselves, in that they allow you to safely mine without actually validating the chain at all. What's worse, there is a strong incentive to build the code to do so, because that capability can be turned into a way to do distributed mining, AKA P2Pool, where every participant has low bandwidth requirements even if the overall blockchain bandwidth requirement exceeds what the participants can keep up with. If miners start doing this, perhaps because mining has become a regulated activity and people want to mine behind Tor, it'll reduce the threshold for what constitutes a 51% attack by whatever % of hashing power is mining without validation; we're going to have to require miners to prove they actually possess the UTXO set as part of the UTXO-commitment system. That said, I'm pretty sure it's possible for those possession proofs to be done in a way where participants maintain only part of the UTXO set, and thus carry only part of the P2P relay bandwidth, allowing low-bandwidth true mining + larger blocksizes without decreasing the 51% attack threshold, helping solve the censorship problem, and thus keeping Bitcoin decentralized. The devil is in the details, but at worst it can be done as a merge-mined alt-coin.
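To make the possession-proof idea concrete, here is a hypothetical sketch; this exact scheme is not proposed anywhere in the thread. The challenged UTXO positions are derived from the previous block hash, so the challenge can't be precomputed, and a miner can't answer without actually holding (its share of) the set:
Code:
import hashlib

def possession_challenge(prev_block_hash: bytes, utxo_count: int, k: int = 32):
    # Hypothetical scheme: derive k pseudorandom UTXO positions from the
    # previous block hash. The miner must include those entries, or
    # branches to them, in its block as proof of possession.
    indices = []
    for counter in range(k):
        digest = hashlib.sha256(prev_block_hash + counter.to_bytes(4, "big")).digest()
        indices.append(int.from_bytes(digest[:8], "big") % utxo_count)
    return indices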

Of course that still doesn't solve the fundamental economic problem that there needs to be an incentive to mine in the first place, but an alt-coin with sane long-term economic incentives (like a minimum inflation rate) can always emerge from Bitcoin's ashes after Bitcoin is destroyed by 51% attacks. Such an alt-coin can be bootstrapped by having participants prove their possession or sacrifice of Bitcoin UTXO's to create coins in the "Bitcoin 2" alt-coin, providing a reasonable migration path and avoiding arguments about early adopters. Similarly, adding proof-of-stake to the proof-of-work can be done that way. (Note that proof-of-stake may be fundamentally incompatible with SPV nodes; right now that's an open research question.) None of this is an issue in the short term anyway, as subsidy inflation won't hit 1% until around 2032 at the very earliest, even later than that in real terms depending on how many coins turn out to be lost. Applications like fidelity bonds and merge-mined alt-coins that all subsidize mining will help extend the economic incentives to mine longer as well.

Yes, I'm less worried about the blocksize question these days because I'm fairly confident there can exist a decentralized cryptocurrency in the future, but I don't know and don't particularly care if Bitcoin ends up being that currency. I'll also point out that if Bitcoin isn't that currency, the alt-coin that is can be created in a way that destroys Bitcoin itself, for instance by making the PoW not only compatible with Bitcoin's, but by requiring miners to create invalid, or better yet, valid-but-useless Bitcoin blocks, so that increased adoption automatically damages Bitcoin. For instance, a useless block can be empty, or just filled with low- or zero-fee transactions for which the miner can prove they possess the private keys to; blocking such non-empty blocks would require nodes to not only synchronize their mempools but also adopt a uniform algorithm for what transactions must be included in a block for it to be relayed, and any deviation there can be exploited to fork Bitcoin. (Another argument for requiring miners to prove UTXO set possession.) Also note how a 51% attack by Bitcoin miners on the alt-coin becomes an x% attack on Bitcoin. Sure, you can mess with the Bitcoin rules to respond, but it becomes a nasty cat-and-mouse game...
legendary
Activity: 2128
Merit: 1073
July 03, 2013, 06:52:28 PM
#34
If you go ahead and rebuild the same database in parallel, that's fine, but unless pruning is implemented you'd eventually end up with the same disk space usage as today (well, more as you need space for the extra indexes).
I don't know if Mike Hearn doesn't know it or just pretends not to know. But it clearly fits his pattern of producing misdirecting arguments. He applied nearly the same misdirection almost a year ago in the "[POLL] Multi-sig or scalability--which is more pressing?" thread.

https://bitcointalksearch.org/topic/m.1046473

Full undo/redo logs for a UTxO database will indeed consume the same (or a slightly higher) amount of storage as raw blockchain storage. But unlike raw blockchain storage, undo/redo logs are temporally clustered: older and older transaction logs can be moved to slower and slower storage. All good DBMS-es have a way of efficiently backing up old transaction logs and restoring them on demand.
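A sketch of that tiering, with a hypothetical archival backend; the hot-window size is an illustrative assumption:
Code:
from collections import deque

class UndoLog:
    # Per-block undo records (the outputs a block spent) stay in fast
    # storage while recent and migrate to slow archival storage as they
    # age, as any DBMS does with old transaction logs.
    def __init__(self, cold_store, hot_window: int = 288):  # ~2 days of blocks
        self.cold_store = cold_store      # hypothetical archival backend
        self.hot = deque()
        self.hot_window = hot_window

    def append(self, height: int, spent_outputs: list):
        self.hot.append((height, spent_outputs))
        while len(self.hot) > self.hot_window:
            self.cold_store.archive(*self.hot.popleft())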

This isn't the place to repeat the same old set of arguments that led to the creation of database management systems as an efficient way to exploit temporal clustering in data sets. Any decent DBMS textbook will have them.
legendary
Activity: 1526
Merit: 1134
July 03, 2013, 05:54:17 PM
#33
I guess I have a few concerns with your utxo commitments project.

The first problem is that it still comes with the ultimate blockchain compression name. That's the reason I'm "insisting" on the original rationale. You've said yourself you know that's wrong and decided to keep it anyway - now look at the first posts in this thread. The name is misleading people into thinking it has something to do with reducing the disk space usage of running a full node. But because you don't actually have a full node if you start from a miner-consensus utxo set, that's not what it does. If you go ahead and rebuild the same database in parallel, that's fine, but unless pruning is implemented you'd eventually end up with the same disk space usage as today (well, more, as you need space for the extra indexes).

The second problem is that you don't need to make these changes to have immediate initial startup. SPV wallets can already do the initial sync in a few seconds. If you want to run a full node too, just run both MultiBit and bitcoind in parallel until the latter is ready. If you want it to be seamless, just bundle bitcoind with MultiBit and make it connect to your new local node once it's finished syncing. SPV + local full node = full node security, via a very simple implementation, with the advantage that you can seamlessly switch back and forth afterwards.

I'm not actually convinced it's a good idea though. The user experience of having a full node randomly start syncing in the background because something decided you had enough cpu/disk space is very poor. If the user ever shuts down their app they'll be surprised to discover that next time they start it up, it's a long way behind and takes ages to catch up. If the user is going to make an explicit decision to run a full node, they may as well just download Bitcoin-Qt and run it themselves - that's really a tiny action compared to the ongoing cost of being a full node.

You say, start from a miner-majority commitment to a UTXO set and do full verification from that point on whilst bringing up a full node in parallel. But is this small window of time where you get marginally better security for the small number of users who would run a full node worth all the extra overhead? It doesn't feel like it. Building the full indexes needed to calculate all those merkle trees has quite some overhead and every node would have to do it, or at least every mining node. That would raise the cost of running a node and require us to redo all the scalability calculations that were already done.

The final issue I have is that you've raised a fair bit of money to work on it, but I didn't see where you got consensus that it should be merged into existing codebases. Perhaps I missed it, but I don't think Gavin said he agreed with the necessary protocol changes. He may well have accepted your design in some thread I didn't see though.
legendary
Activity: 1078
Merit: 1003
July 03, 2013, 05:33:38 PM
#32
Pieter has a full time job these days so it's harder for him to find the time, but even so, he's done amazing work on scalability and I'm sure he'll be able to finish off the pruning work some time this year

I just want it done already, and not to potentially see it run into a huge problem that could blow up in all of our faces when we'd most need the solution finished, which is why I think it should have priority.

Quite often people on this forum claim that because I work for Google, I must have some incentive to try and centralise Bitcoin

I don't think that at all. What I do think is that working for Google has affected you. I'm sure you'll admit it changed how you view certain things (how could it not?), and I think it affected you in such a way that your vision and priorities for Bitcoin differ significantly from mine. That's all. And of course I could be completely off base here, something I'd be really happy to be corrected on.
legendary
Activity: 905
Merit: 1012
July 03, 2013, 05:19:17 PM
#31
Mike, you're putting up strawmen with your insistence on Alan's original rationale(s). Authenticated, committed UTXO index structures provide a variety of new capabilities, some of which Alan et al. may not have anticipated. You seem to think these new capabilities are not useful, or at least do not justify their cost, for unstated reasons that you think are obvious. I disagree.

For example, it does matter how long it takes to bootstrap when you're talking about initial user experience, or about a user who would prefer to run a full node but not halt operations while syncing. There's a wide variety of reasons people run bitcoin nodes, and I don't see any reason to expect the associated security requirements to neatly fall into one or two different models.

As for a selective DoS on Bloom filtering, no, I haven't observed such a thing, nor have I been looking either. But that's completely beside the point: in the security business it is our job to act preemptively and close attack vectors, ideally before the black-hat work is done to exploit them. As to querying multiple nodes, you are ignoring network operators, who could filter or replace protocol messages. The index structure, on the other hand, would allow authenticated negative responses as well, so the querying node won't be satisfied until it receives a proof of presence OR absence.
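A sketch of such an authenticated negative response, assuming a key-ordered authenticated tree and hypothetical proof helpers: absence of a key is proven positively, by exhibiting two adjacent committed leaves that bracket it.
Code:
def verify_absence(key, left_leaf, right_leaf, proof, root):
    # "key is absent" is proven by two leaves that are adjacent in the
    # tree and bracket the key, both anchored to the committed root.
    # No prover can produce such a pair for a key that is present.
    if not (left_leaf.key < key < right_leaf.key):
        return False
    if not proof.leaves_are_adjacent(left_leaf, right_leaf):
        return False
    return (proof.verify_branch(left_leaf, root) and
            proof.verify_branch(right_leaf, root))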

No one's arguing for UTXO indices over bloom filters. They each have their uses.
legendary
Activity: 1232
Merit: 1094
July 03, 2013, 04:47:47 PM
#30
It'll be written up in BIP form soon. Until then, this link is the first result on Google for "bitcoin payment protocol":

https://github.com/bitcoin/bitcoin/pull/2539

Heh, thanks, I guess I should have Googled.
legendary
Activity: 1526
Merit: 1134
July 03, 2013, 04:41:46 PM
#29
Sure, you can start with SPV security and then convert yourself to full security, but that could be done today with no protocol changes. Pieter calls this "headers first mode". Over time I became less convinced it was a good idea to convert between them like that though. I used to think it was obvious, but eventually concluded users should make an explicit choice to run a full node. Once they've made that choice, it doesn't really matter how long it takes to bootstrap; they're in it for the long haul anyway.

For a node that does a selective DoS on Bloom filtering, have you actually observed anyone do such a thing? What would be the motive and why is querying multiple nodes not enough to fix that?

I can see that with a properly constructed set of Merkle trees a remote node could provide a proof that it hasn't excluded any transactions, assuming you trust the last block. But most real wallets want to know about pending transactions too, and you still need the Bloom filtering for that. So the cost/benefit analysis for using address index commitments to avoid malicious Bloom filter drops feels rather questionable to me.

For fraud proofs, I think d'aniel raised that before, I'm afraid I don't remember which proofs require the commitments. Double spends don't. At any rate, whilst those proofs would indeed be useful they weren't the rationale given for "ultimate blockchain compression" originally. A full design doc for different kinds of fraud proofs would be useful.