
Topic: New paper: Accelerating Bitcoin's Transaction Processing - page 4. (Read 36368 times)

sr. member
Activity: 360
Merit: 251
I think the point of being zero knowledge is that you don't have to download the blockchain, no?

Maybe you're confusing zero-trust with zero-knowledge?
legendary
Activity: 905
Merit: 1012
I think the point of being zero knowledge is that you don't have to download the blockchain, no?
legendary
Activity: 1372
Merit: 1002
Guys, what you're talking about doesn't benefit from zero-knowledge; please google "succinct non-interactive argument of knowledge" or "probabilistically checkable proof". I.e., you'd like to have a SNARK, and not necessarily a zk-SNARK, here.

Sorry, iddo, but I'm just starting to learn about ZKPs/SNARKs/SCIP.
I thought SNARK was just a technique to generalize ZKPs; I didn't know about SNARK vs zk-SNARK.
What I mean is that you need this kind of cryptography if you want all nodes to trust a node's validation without repeating it themselves.

Is it not the case that checking zero-knowledge proofs is more complex than actually doing the calculations directly?  

For example, would checking a zero knowledge proof that a signature is valid be faster than just checking if the signature was valid?

If that were the case for both ZKPs and SNARKs, then they would be useless for distributing the validation workload.
But I think it's not the case. Maybe someone who knows more about this can answer you better.

Anyway, why not just have different nodes validate a subset of the transactions?
...

Yes, that's what "distributing the validation workload" is about.
I'm not sure I understand your proposal or why some nodes can trust others, but this is definitely getting off-topic.
sr. member
Activity: 360
Merit: 251
Guys, what you're talking about doesn't benefit from zero-knowledge; please google "succinct non-interactive argument of knowledge" or "probabilistically checkable proof". I.e., you'd like to have a SNARK, and not necessarily a zk-SNARK, here.
legendary
Activity: 1232
Merit: 1094
Maybe there are other ways to distribute the validation workload, but that's what initially comes to mind.

Is it not the case that checking zero-knowledge proofs is more complex than actually doing the calculations directly? 

For example, would checking a zero knowledge proof that a signature is valid be faster than just checking if the signature was valid?

Anyway, why not just have different nodes validate a subset of the transactions?

Each node picks a random byte and only validates transaction inputs that start with that byte.

This could be combined with a system where a broadcast can be made if a block contains an invalid transaction.

Nodes could still check the entire block, as now, except that only 1/256 of the signatures would be checked.

In addition, nodes would only need to store 1/256 of the transactions in each block.  That reduces disk usage and CPU load.
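A minimal sketch of one reading of that scheme (all helper names below are hypothetical placeholders, not bitcoind code), sharding transactions by the first byte of their txid:

```python
import os
import hashlib

# Each node commits to one random byte and fully validates only the
# ~1/256 of transactions whose txid begins with that byte.
MY_SHARD_BYTE = os.urandom(1)

def txid(raw_tx: bytes) -> bytes:
    # Bitcoin txids are the double SHA-256 of the serialized transaction.
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

def in_my_shard(raw_tx: bytes) -> bool:
    return txid(raw_tx)[:1] == MY_SHARD_BYTE

def check_block(raw_txs):
    """Cheap checks run on every tx; expensive ones only on our shard."""
    for raw in raw_txs:
        check_fees_and_minting(raw)            # cheap, done by every node
        if in_my_shard(raw):
            check_scripts_and_signatures(raw)  # costly, only 1/256 of txs

# Hypothetical placeholders standing in for the real validation logic:
def check_fees_and_minting(raw_tx):
    pass

def check_scripts_and_signatures(raw_tx):
    pass
```

A node that finds an invalid transaction in its shard would then use the broadcast mechanism described above to alert the rest of the network.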

Checking that minting/tx fees are correct doesn't use much CPU.  The main cost is bandwidth.

Maybe the Bloom filter system could be used rather than a prefix.  A node specifies a Bloom filter and only ever receives matching transactions.  A reduced-size block packet could be sent which just has the hashes of the transactions.
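For illustration, here is a toy version of such a filter; this is only a sketch, and a real node would use BIP 37-style filterload messages rather than anything like this:

```python
import hashlib

class ToyBloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, m_bits: int = 1024, k_hashes: int = 3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k indices by hashing the item with a counter prefix.
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def matches(self, item: bytes) -> bool:
        # May return false positives, never false negatives.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# A relaying peer would only forward transactions whose txid matches:
f = ToyBloomFilter()
f.add(b"txid-i-care-about")
assert f.matches(b"txid-i-care-about")
```

The false positives are what would give the requesting node some privacy, at the cost of occasionally receiving transactions it didn't ask for.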

Transaction inputs that point to non-existent transactions are a problem though.  There would need to be some DoS protection for messages claiming that a transaction input is invalid because it doesn't point to an actual transaction output.  You can't prove non-existence.
legendary
Activity: 1372
Merit: 1002
jtimon, why zero-knowledge? And relying on a single validator is bad, because that validator could deny txns.

I don't mean a single validator. But if validator A validates tx1 and you don't want all the other validators to also validate tx1, they could use a ZKP provided by validator A.
If validator A doesn't want to validate tx1, validator B can do it, no problem.

Maybe there are other ways to distribute the validation workload, but that's what initially comes to mind.
Another approach could be to not validate anything, just timestamp everything and leave validation to non-miner nodes using the sequence of transactions, but that looks less SPV-friendly.
I was just thinking out loud, but this is kind of off-topic.
sr. member
Activity: 360
Merit: 251
jtimon, why zero-knowledge? And relying on a single validator is bad, because that validator could deny txns.
legendary
Activity: 1372
Merit: 1002
Nobody said that propagation delays are the *only* obstacle. We said they are currently the *primary* obstacle, because currently you will run into trouble at much lower tps.

That's what I understood here:

Surely you must agree that if there were no block propagation delays, there would be no centralization associated with scaling up. Blocks go instantly to everyone, and there are no problems with miners getting more than their share of the blocks.

I agree with you that it is quite likely that BTC is less scalable than the dollar and other centralized systems, but it is worth it to try and get as close as possible. Plus, you still can't completely rule out the chance that someone will find a way to distribute the workload and not just distribute trust among the many nodes.

That's where we disagree, I think. Getting as close as possible may mean sacrificing too much decentralization.
As I said, this doesn't mean that the proposal isn't interesting for increasing scalability to a "decentralized-enough" level, whatever that is.

Plus, you still can't completely rule out the chance that someone will find a way to distribute the workload and not just distribute trust among the many nodes.

Distributing the validation workload would be ideal. But I think that such a system (if possible) would be very different from bitcoin in several ways.
You would need some zero-knowledge proof (ZKP) system so that the rest of the nodes could trust a single validator. I'm also very interested in that kind of proposal.
sr. member
Activity: 266
Merit: 250
Is this another solution to the Byzantine generals problem to prevent double spending?  If so, can you rewrite the solution in terms of generals taking over a fort so a layman could understand it?  Thanks in advance.

They propose a different way to choose the correct chain at forks, which would reduce confirmation times and allow for a greater number of transactions per second.
legendary
Activity: 1304
Merit: 1015
Is this another solution to the Byzantine generals problem to prevent double spending?  If so, can you rewrite the solution in terms of generals taking over a fort so a layman could understand it?  Thanks in advance.
newbie
Activity: 21
Merit: 0
maaku,

That's very interesting. Can you pinpoint the reason for this? Can you elaborate on your setup?

In theory(*), there shouldn't be a very high networking load at the moment.
Blocks are currently under 1MB, and there is less than one transaction per second. This implies sending 1MB once every 10 minutes, plus around 0.5 KB per second for streaming transactions. All of this should be multiplied by at most the number of connections your client has, but in practice most nodes should receive flooded messages and not have to forward them to others (Edit: in expectation, every node receives a message once, and so every node should in expectation only send each message out once). This shouldn't be too prohibitive...
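As a sanity check on that arithmetic, a quick sketch (the ~500-byte average transaction size is an assumption implied by the ~0.5 KB/s figure, not a measured number):

```python
# Back-of-the-envelope steady-state bandwidth per connection.
block_bytes = 1_000_000        # blocks are currently under 1 MB
block_interval_s = 600         # one block every ~10 minutes
tps = 1                        # under one transaction per second
tx_bytes = 500                 # assumed average transaction size (~0.5 KB)

block_stream = block_bytes / block_interval_s   # ~1.7 KB/s for blocks
tx_stream = tps * tx_bytes                      # ~0.5 KB/s for transactions

print(f"blocks: {block_stream:.0f} B/s, txs: {tx_stream:.0f} B/s")
# Roughly 2 KB/s per connection, ignoring inv messages, protocol
# overhead, and the burst load of nodes catching up after being offline.
```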

There is additional load due to new nodes that come online and need to catch up and so may download many blocks at the same time. Perhaps this is the source of the trouble?
(Edit: I'm also ignoring inv messages and other small protocol messages above)



(*) as they say: in theory, there is no difference between theory and practice, in practice, there is.
legendary
Activity: 905
Merit: 1012
I'm not talking about mining nodes - I'm talking about regular old transaction-processing instances of bitcoind. These have declined significantly, according to statistics I've seen shared on #bitcoin-dev. I admit to being a contributing statistic here - I used to run an instance of bitcoind behind my home router, but no longer do because my family complains about the terrible impact it has on their browsing experience. At somewhere under 1 Mbps, plus some combination of other factors, I'm already removed from the set of people able to run a full node (although personally I care enough to buy a VPS instead).

Bitcoin derives extensive value from its decentralization. If you increase transaction processing to the point that even the average techie can no longer run a fully validating relay node with resources under their immediate control, then you've drastically changed the nature of the system. If you push further, such that datacenter-like hardware is required to run a fully validating node, then what remains of the point? You'd have created a supposedly decentralized system that only the elite can afford to audit.

The question is not "how do we run the bitcoin protocol at a higher tps?"; it is "how do we scale up the tps while keeping it decentralized?" That is a much harder problem.
newbie
Activity: 21
Merit: 0
Nobody said that propagation delays are the *only* obstacle. We said they are currently the *primary* obstacle, because currently you will run into trouble at much lower tps.

That is incorrect. We are already seeing centralization due to the enormous costs of running a full node. The number of full nodes in operation is declining, despite increased awareness of bitcoin. And currently the network propagation time is too small to have any meaningful impact. The trouble spots in scaling bitcoin are elsewhere at this time, I'm afraid.


maaku, how is this related to scalability? You could have 0 transactions going through Bitcoin and the cost of running a node would still be high. The reason for that is the high competition in mining, brought on by the costs of specialized hardware. I don't see the connection.
legendary
Activity: 905
Merit: 1012
Nobody said that propagation delays are the *only* obstacle. We said they are currently the *primary* obstacle, because currently you will run into trouble at much lower tps.

That is incorrect. We are already seeing centralization due to the enormous costs of running a full node. The number of full nodes in operation is declining, despite increased awareness of bitcoin. And currently the network propagation time is too small to have any meaningful impact. The trouble spots in scaling bitcoin are elsewhere at this time, I'm afraid.
newbie
Activity: 21
Merit: 0
Hello avivz..

If a heaviest subtree determines current network head, then two things will be possible with a 50% attacker that aren't currently possible now:

1) You can actually strip off head and the chain length can move backwards. (This includes rewinding back past a difficulty retarget, which is currently impossible.[1])

2) You can create a scenario where two mining factions create divergent forks which simply add to their own subtree weight, and the main chain length goes ~nowhere.

With the way bitcoin is currently built, neither of these two possibilities is.. er.. possible. All 50% attacks must, even though they rewrite history, move head forwards.

Whether this has any meaning in terms of risk to bitcoin users is, I suppose, another matter, separate from the point of my post. Did you address these possibilities in your paper? I ask because I read a good chunk of it, but not all of it, for which I apologise. I am extremely happy to see academic work being done on bitcoin.

At the very least, I'd like to encourage you (as I am a random internet nobody :)) to create an experimental fork with this idea built in as a testnet alternative.

Hrm.. I suppose in terms of user software, all current infrastructure which makes assumptions about the current rules would be exposed to risk by a switch to a GHOST-enabled bitcoind..

[1] maaku pointed out people may be confused by my terminology: a replacement of work can "rewind" a chain in the sense that the history is substituted with another history of greater work in a non-GHOST mainline. "Rewinding" in the sense I mean it here is to strip off the numerically topmost block so that the mainline head is actually prior to the retarget point: using GHOST orphans it is possible to erase the retarget and make it historically as though it never happened.


I don't think I understood your first point. I'll be happy if you could elaborate: are you referring to something regarding the re-targeting moment?

As for number 2: it seems that if there are two factions insisting on building split subtrees, then one of these trees eventually grows larger than the other (even if they are evenly matched! See our analysis of "collapse" in the paper for this exact scenario). In that case, one of those factions is not abiding by the protocol if it doesn't switch to the other tree -- that is a 50% attacker.
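A minimal sketch of the heaviest-subtree fork-choice rule being discussed may help readers follow this exchange. It assumes unit block weights (constant difficulty) and is only an illustration, not the paper's implementation:

```python
from collections import defaultdict

def ghost_tip(parents, genesis="genesis"):
    """Walk from genesis, at each fork descending into the heaviest subtree.

    `parents` maps block_id -> parent_id (the genesis block maps to None).
    Simplification: every block has unit weight, i.e. constant difficulty.
    """
    children = defaultdict(list)
    for blk, parent in parents.items():
        if parent is not None:
            children[parent].append(blk)

    memo = {}
    def subtree_size(blk):
        # Total number of blocks in the subtree rooted at blk; this is
        # where off-chain ("orphan") siblings still count, unlike the
        # longest-chain rule, which only counts blocks along one path.
        if blk not in memo:
            memo[blk] = 1 + sum(subtree_size(c) for c in children[blk])
        return memo[blk]

    blk = genesis
    while children[blk]:
        blk = max(children[blk], key=subtree_size)  # ties broken arbitrarily
    return blk

# Toy tree: genesis -> A -> B, and genesis -> C -> {D, E}.
# Both branches have the same chain length, but C's subtree (C, D, E)
# outweighs A's (A, B), so GHOST picks a tip under C.
blocks = {"genesis": None, "A": "genesis", "B": "A",
          "C": "genesis", "D": "C", "E": "C"}
print(ghost_tip(blocks))  # -> "D" (or "E", depending on tie-breaking)
```

Because sibling blocks add to their subtree's weight, the selected tip can move to a different branch without the chain getting longer, which is what makes point (1) above possible.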
newbie
Activity: 21
Merit: 0

No, I bet he won't agree on that.
Let's say we want the network to validate 1 million tps. Even if nodes communicate with each other using the ansible (zero block propagation delays), you still need every full node to validate 1 million tps. How many full nodes do you think we would have?

If people stupidly complain about ASIC miners "causing centralization" or "SHA256 being less democratic than scrypt", wait to see what they have to say about needing a room-sized super-computer (I heard NASDAQ's market execution computer is that big) alongside their ASIC machines for PoW. Well, they would need that computer even for full nodes that don't mine.

Nobody said that propagation delays are the *only* obstacle. We said they are currently the *primary* obstacle, because currently you will run into the trouble at much lower tps.



I guess the difference in our perspectives is that I don't think these are optional tradeoffs. If at some point bitcoin becomes mainstream, there will be increased pressure to allow more transactions through (or alternatively, fees will soar higher than those of the money transmitters Bitcoin is trying to replace). You then run into several problems. One is a security problem against attackers. The other is a pull towards centralization. All I'm saying is that we claim to solve the first problem. The second clearly remains.

Bitcoin will never be able to process the tps of the full dollar system (with its whole banking hierarchy) on its own.
We need off-chain transactions, micro-transactions between two parties with transaction replacement, etc.
p2p currencies are a trust-less extreme, but there are other monies that beautifully leverage trust, and we need them to complement each other (I'm talking about LETS and other local currencies, ripple-like credit networks, simple IOUs, etc).
Technologies like Freimarkets (the private chains part), two-phase-commit Ripple or Open Transactions are not as trust-less as bitcoin, but they scale much better.
So, yes, this is a tradeoff and we know bitcoin can't scale to be a global near-monopoly money like the USD is today.
Luckily there are monetary innovations out there that will complement bitcoin much better than national currencies do today.

EDIT: I still find the proposal interesting, just wanted to explain that propagation times are not the only obstacle for scalability.

I agree with you that it is quite likely that BTC is less scalable than the dollar and other centralized systems, but it is worth it to try and get as close as possible. Plus, you still can't completely rule out the chance that someone will find a way to distribute the workload and not just distribute trust among the many nodes.
member
Activity: 88
Merit: 37
Also, in the event this paper doesn't become an accepted norm for bitcoin itself, please consider applying the ideas to a p2pool-like mining sharechain. Perhaps it could be applied in some way I am currently unable to think of, such that DOA shares and orphans in general just *go away*.

p2pool in some future incarnation is currently the most bitcoin-philosophically-compatible method of mining, and Gavin has stated in the past that he would champion getting some form of it, written in C++, into the bitcoind mainline in order to enable long-tail (my word, not his) mining by random people.
member
Activity: 88
Merit: 37
Hello avivz..

If a heaviest subtree determines current network head, then two things will be possible with a 50% attacker that aren't currently possible now:

1) You can actually strip off head and the chain length can move backwards. (This includes rewinding back past a difficulty retarget, which is currently impossible.[1])

2) You can create a scenario where two mining factions create divergent forks which simply add to their own subtree weight, and the main chain length goes ~nowhere.

With the way bitcoin is currently built, neither of these two possibilities is.. er.. possible. All 50% attacks must, even though they rewrite history, move head forwards.

Whether this has any meaning in terms of risk to bitcoin users is, I suppose, another matter, separate from the point of my post. Did you address these possibilities in your paper? I ask because I read a good chunk of it, but not all of it, for which I apologise. I am extremely happy to see academic work being done on bitcoin.

At the very least, I'd like to encourage you (as I am a random internet nobody :)) to create an experimental fork with this idea built in as a testnet alternative.

Hrm.. I suppose in terms of user software, all current infrastructure which makes assumptions about the current rules would be exposed to risk by a switch to a GHOST-enabled bitcoind..

[1] maaku pointed out people may be confused by my terminology: a replacement of work can "rewind" a chain in the sense that the history is substituted with another history of greater work in a non-GHOST mainline. "Rewinding" in the sense I mean it here is to strip off the numerically topmost block so that the mainline head is actually prior to the retarget point: using GHOST orphans it is possible to erase the retarget and make it historically as though it never happened.
legendary
Activity: 1372
Merit: 1002
Going back to your original post:

Quote
We note that block propagation times are the primary obstacle for scalability.

The obstacle to scalability is keeping Bitcoin decentralized while scaling up; we know that Bitcoin can scale if we sacrifice decentralization - Visa and Mastercard are doing just fine. Ultimately you're proposing something that solves one problem - the high granularity of confirmations and the long wait associated with them - at the expense of scalability with decentralization. So don't claim you've done anything other than present an interesting academic analysis of a specific trade-off possible in the system.

Peter,

I keep trying to agree with you.

One last try before I give up:

Surely you must agree that if there were no block propagation delays, there would be no centralization associated with scaling up. Blocks go instantly to everyone, and there are no problems with miners getting more than their share of the blocks.

No, I bet he won't agree on that.
Let's say we want the network to validate 1 million tps. Even if nodes communicate with each other using the ansible (zero block propagation delays), you still need every full node to validate 1 million tps. How many full nodes do you think we would have?

If people stupidly complain about ASIC miners "causing centralization" or "SHA256 being less democratic than scrypt", wait to see what they have to say about needing a room-sized super-computer (I heard NASDAQ's market execution computer is that big) alongside their ASIC machines for PoW. Well, they would need that computer even for full nodes that don't mine.
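To make that concrete, a rough sketch of what 1 million tps would demand of every full node (every figure here is an assumption: ~500-byte transactions and on the order of 5,000 signature verifications per second per CPU core):

```python
# Back-of-the-envelope load on a single full node at 1,000,000 tps.
tps = 1_000_000
tx_bytes = 500               # assumed average transaction size
sig_checks_per_core = 5_000  # assumed ECDSA verification rate per core

bandwidth_mb_s = tps * tx_bytes / 1e6            # ~500 MB/s of incoming txs
cores_for_sigs = tps / sig_checks_per_core       # ~200 cores just for signatures
storage_tb_per_day = tps * tx_bytes * 86_400 / 1e12  # ~43 TB of new data daily

print(f"{bandwidth_mb_s:.0f} MB/s, {cores_for_sigs:.0f} cores, "
      f"{storage_tb_per_day:.0f} TB/day")
```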

I guess the difference in our perspectives is that I don't think these are optional tradeoffs. If at some point bitcoin becomes mainstream, there will be increased pressure to allow more transactions through (or alternatively, fees will soar higher than those of the money transmitters Bitcoin is trying to replace). You then run into several problems. One is a security problem against attackers. The other is a pull towards centralization. All I'm saying is that we claim to solve the first problem. The second clearly remains.

Bitcoin will never be able to process the tps of the full dollar system (with its whole banking hierarchy) on its own.
We need off-chain transactions, micro-transactions between two parties with transaction replacement, etc.
p2p currencies are a trust-less extreme, but there are other monies that beautifully leverage trust, and we need them to complement each other (I'm talking about LETS and other local currencies, ripple-like credit networks, simple IOUs, etc).
Technologies like Freimarkets (the private chains part), two-phase-commit Ripple or Open Transactions are not as trust-less as bitcoin, but they scale much better.
So, yes, this is a tradeoff and we know bitcoin can't scale to be a global near-monopoly money like the USD is today.
Luckily there are monetary innovations out there that will complement bitcoin much better than national currencies do today.

EDIT: I still find the proposal interesting, just wanted to explain that propagation times are not the only obstacle for scalability.
full member
Activity: 141
Merit: 100
Quote
Aren't you mixing up bytes and bits there? bps normally means bits per second, not bytes. And 8 bits only for the hashes seems too low: you only get 256 different values, and with 200 tps you'd have lots of collisions. 8 bytes, on the other hand, is even more than the poster you replied to was talking about (he mentioned 32 bits, so 4 bytes).

My mistake, I was thinking he meant 32 bytes.
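For reference, the collision arithmetic behind that exchange is easy to check (assuming uniformly random truncated identifiers):

```python
# Chance that 200 transactions all get distinct identifiers, for 8-bit
# (256-value) versus 32-bit identifier spaces.
def p_all_distinct(n_items: int, space: int) -> float:
    p = 1.0
    for i in range(n_items):
        p *= (space - i) / space
    return p

print(p_all_distinct(200, 2**8))   # ~1e-50: a collision is essentially certain
print(p_all_distinct(200, 2**32))  # ~0.999995: collisions are rare
```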