
Topic: New paper: Accelerating Bitcoin's Transaction Processing - page 5.

legendary
Activity: 1106
Merit: 1004
If you are connected to 125 nodes, then for 200 TPS only the hashes already are 0.8 MBps:

Why would you need to be connected to that many full nodes? I don't see the need for so many connections. And SPV nodes only care about the transactions that concern them, which are always a very small set compared to the total that passes by on the network.

At maximum, hashes would only need to be the first 16 or 8 bits, likely even less.

125 * 8 * 200 = 200000 = 0.2 MBps

Aren't you mixing bytes and bits there? A lowercase 'b' (as in Mbps) normally means bits per second, while a capital 'B' means bytes. And 8 bits only for the hashes seems too low: you only get 256 different values, so with 200 TPS you'd have lots of collisions. 8 bytes, on the other hand, is even more than the poster you replied to was talking about (he mentioned 32 bits, so 4 bytes).
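For illustration, a minimal Python sketch of the arithmetic being debated in this exchange. It only multiplies out peers x hash size x TPS; the 125-peer and 8-unit figures come from the posts themselves, the 8-connection case is an added assumption for a typical non-listening node, and reading the per-hash figure as bits instead of bytes simply divides the result by 8.

Code:
def announce_bandwidth(peers, bytes_per_hash, tps):
    """Bytes per second spent announcing transaction hashes to all peers.
    Upper bound: assumes every hash is sent to every peer exactly once."""
    return peers * bytes_per_hash * tps

print(announce_bandwidth(125, 32, 200))  # 800000 bytes/s = 0.8 MB/s (full 32-byte hashes)
print(announce_bandwidth(125, 8, 200))   # 200000 bytes/s = 0.2 MB/s (8-byte prefixes)
print(announce_bandwidth(8, 32, 200))    # 51200 bytes/s for a node with only 8 connections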
full member
Activity: 141
Merit: 100
If you are connected to 125 nodes, then for 200 TPS only the hashes already are 0.8 MBps:

125 * 32 * 200 = 800000 = 0.8 MBps

People often forget that.

Even if they could teleport the blocks and the transactions, the hashes alone are > 0.5 MBps.

At maximum, hashes would only need to be the first 16 or 8 bits, likely even less.

125 * 8 * 200 = 200000 = 0.2 MBps
newbie
Activity: 45
Merit: 0
If you are connected to 125 nodes, then for 200 TPS only the hashes already are 0.8 MBps:

125 * 32 * 200 = 800000 = 0.8 MBps

People often forget that.

Even if they could teleport the blocks and the transactions, the hashes alone are > 0.5 MBps.
legendary
Activity: 1176
Merit: 1005
Hi all,

I'm a researcher at the Hebrew University in Israel, and I've been working with a student (Yonatan Sompolinsky) on a Bitcoin related paper. We have really exciting results that we are eager to share with the Bitcoin community. We would really like to get constructive feedback on our work from the many smart people in the community, which is why I am posting here.

Thank you for what looks like an excellent solution to a future problem that has troubled me for a while about BTC.  I honestly don't have the brains to analyze it, but just wanted to say thanks.  This kind of work is the road to the future of cryptocurrencies, and it is great that people are doing it.
hero member
Activity: 714
Merit: 500
Martijn Meijering
This is the root of the concern: there is no way to increase the TPS above a certain level unless the propagation time for a large volume of transactions greatly improves (meaning worldwide upgrades to the IP backbone and ISP infrastructure).

How much potential is there for a more efficient wire encoding / data compression?
legendary
Activity: 924
Merit: 1122
I agree.  That is NOT a good future for Bitcoin, and Bitcoin as an investment is far less stable than Bitcoin as a currency.

With an investment that is widely held but traded in small volume, a single trade involving a significant amount can make the market swing wildly. If most of the coins are in the market most of the time, the same single trade has little effect on the market.

Also, I consider it more important to undercut competitors for money movement than it is to charge according to the blockspace occupied - so I'd be all for making fees proportional to the amount transferred rather than proportional to the space occupied.  The miners might still prioritize tx by fee per kilobyte, but IMO that ought to mean that transactions in larger amounts naturally have priority to go through faster. 
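For illustration only (this is not how the client works today): a toy Python sketch of the idea above, assuming a hypothetical value-proportional fee. If the fee is a fixed fraction of the amount transferred, then miners ranking by fee per kilobyte, as they already do, automatically put larger transfers first.

Code:
from collections import namedtuple

# Hypothetical transaction record: amount moved and serialized size.
Tx = namedtuple("Tx", "value_btc size_kb")

FEE_RATE = 0.001  # illustrative 0.1% fee on the amount transferred (an assumption)

def fee(tx):
    return FEE_RATE * tx.value_btc

def priority(tx):
    return fee(tx) / tx.size_kb  # miners still rank by fee per kilobyte

txs = [Tx(2, 0.5), Tx(1000, 0.5), Tx(50, 1.0)]
for tx in sorted(txs, key=priority, reverse=True):
    print(tx)  # the 1000 BTC transfer comes first, the 2 BTC transfer last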
sr. member
Activity: 266
Merit: 250
Quote
We note that block propagation times are the primary obstacle for scalability.

This is the root of the concern: there is no way to increase the TPS above a certain level unless the propagation time for a large volume of transactions greatly improves (meaning worldwide upgrades to the IP backbone and ISP infrastructure).

However, as discussed many times before, people normally don't change banks just because theirs is closed during the weekend; they wait until the next Monday to do business instead.

Similarly, if people realize the Bitcoin network's worldwide transaction capacity is limited, they will change their behavior accordingly. When fees get very high, people will plan their transactions more carefully, periodically making large transactions instead of many small ones. And this is good for Bitcoin, since it will make more people treat it as a long-term investment and reduce its volatility.


Please don't take us to such a future. High fees pose a barrier to entry for under-developed countries.

A few cents of a dollar may not be much for US or EU citizens, but they're worth 10x here in Argentina, and a lot more in Africa.

Just for reference: currently, if I'd like to sell a 1 Peso (ARG) item using BTC, I would have to pay 0.68 Pesos in fees (using the bare minimum fee of 0.0001 BTC).
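Spelling out the arithmetic behind that figure, using only the numbers quoted in the post:

Code:
min_fee_btc = 0.0001              # the "bare minimum" fee quoted above
fee_in_ars  = 0.68                # what that fee cost in pesos at the time of the post

implied_rate = fee_in_ars / min_fee_btc   # -> 6800 pesos per BTC implied by those two numbers
fee_fraction = fee_in_ars / 1.0           # -> 0.68, i.e. a 68% fee on a 1-peso sale

print(implied_rate, fee_fraction)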
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
Quote
We note that block propagation times are the primary obstacle for scalability.

This is the root of the concern: there is no way to increase the TPS above a certain level unless the propagation time for a large volume of transactions greatly improves (meaning worldwide upgrades to the IP backbone and ISP infrastructure).

However, as discussed many times before, people normally don't change banks just because theirs is closed during the weekend; they wait until the next Monday to do business instead.

Similarly, if people realize the Bitcoin network's worldwide transaction capacity is limited, they will change their behavior accordingly. When fees get very high, people will plan their transactions more carefully, periodically making large transactions instead of many small ones. And this is good for Bitcoin, since it will make more people treat it as a long-term investment and reduce its volatility.




sr. member
Activity: 360
Merit: 251
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two fractions of the network work on different trees, and each fraction uses an anti-DoS rule that rejects blocks from the other fraction's tree unless they arrive in a batch that proves the competing tree's PoW wins. But because of propagation lag, each fraction solves more PoW blocks in its own tree, so by the time the batch arrives from the other tree, that batch is already losing and is therefore rejected. No?

Iddo, a collapse has to occur. This is exactly the same reasoning as in Theorem 8.6 in the paper: block construction is stochastic, so the two forks are not synchronized perfectly even if they have exactly the same amount of hash power supporting them.  The differences in block creation times drift apart until they grow larger than the time it takes to send the message about the total weight in the branch. Thus, a collapse eventually occurs.

Right, apologies, I agree that this will eventually re-converge. But there still could be an interesting question here regarding whether it takes a long time for the two fractions to re-converge, depending on the anti-DoS rules that they deploy.

With anti-DoS rules it's highly unlikely that you get into these sorts of forks in the first place: you accept all blocks created, say, in the past month or so (the difficulty is high enough that they are not really a DoS attack). A fork that breaks the DoS rules will only form if 1) an attacker manages to keep up with the network's rate for a whole month (which has to be a 50% attack), or 2) the network is disconnected for over a month. Even in these cases, the forks resolve rather quickly.

Besides, another heuristic can also help here: accept alternative branches into the tree even if they have only 80% of the total weight of your current branch. This is still enough to stop a DoS, and certainly enough to make DoS forks collapse instantly.

Yes, those can be sensible anti-DoS rules. But if the node accepts all blocks in the past month, we may also need a rule to delete the old blocks in the losing subtrees and thereby reclaim disk space. As Andrew Miller said in #bitcoin-wizards, in terms of incentives there's a tradeoff here since keeping some orphans around will potentially save on future bandwidth, but at the cost of storage now.
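For concreteness, a minimal Python sketch of the relaxed acceptance rule discussed in this exchange. The one-month window and the 80% factor are the values suggested above; the branch records with a tip timestamp and a total-work field are hypothetical, and the sketch ignores the storage-reclamation question entirely.

Code:
import time

RECENT_WINDOW = 30 * 24 * 3600   # "accept all blocks created in the past month or so"
ACCEPT_FACTOR = 0.80             # "accept branches even if they have only 80% of our weight"

def should_accept(alt_branch, our_branch, now):
    """alt_branch/our_branch: dicts with the branch tip's timestamp and the
    branch's total proof-of-work (hypothetical fields, for illustration)."""
    if alt_branch["tip_time"] > now - RECENT_WINDOW:
        return True  # recent blocks are expensive to mine, so not a cheap DoS vector
    return alt_branch["total_work"] >= ACCEPT_FACTOR * our_branch["total_work"]

# Example: an old competing branch with 85% of our total work is still accepted.
now = time.time()
ours = {"tip_time": now, "total_work": 1000.0}
theirs = {"tip_time": now - 90 * 24 * 3600, "total_work": 850.0}
print(should_accept(theirs, ours, now))  # True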
newbie
Activity: 21
Merit: 0
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two fractions of the network work on different trees, and each fraction uses an anti-DoS rule that rejects blocks from the other fraction's tree unless they arrive in a batch that proves the competing tree's PoW wins. But because of propagation lag, each fraction solves more PoW blocks in its own tree, so by the time the batch arrives from the other tree, that batch is already losing and is therefore rejected. No?

Iddo, a collapse has to occur. This is exactly the same reasoning as in Theorem 8.6 in the paper: block construction is stochastic, so the two forks are not synchronized perfectly even if they have exactly the same amount of hash power supporting them.  The differences in block creation times drift apart until they grow larger than the time it takes to send the message about the total weight in the branch. Thus, a collapse eventually occurs.

With anti-DoS rules it's highly unlikely that you get into these sorts of forks in the first place: you accept all blocks created, say, in the past month or so (the difficulty is high enough that they are not really a DoS attack). A fork that breaks the DoS rules will only form if 1) an attacker manages to keep up with the network's rate for a whole month (which has to be a 50% attack), or 2) the network is disconnected for over a month. Even in these cases, the forks resolve rather quickly.

Besides, another heuristic can also help here: accept alternative branches into the tree even if they have only 80% of the total weight of your current branch. This is still enough to stop a DoS, and certainly enough to make DoS forks collapse instantly.
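To see why the collapse argument in this post works, here is a small Python simulation (an illustration of the intuition, not code from the paper): two forks with exactly equal hash power create blocks at random times, the difference in their accumulated blocks performs a random walk, and the split collapses once that lead grows larger than what the other side could dismiss as propagation lag. Treating the threshold as the number of blocks producible within the message delay is a simplifying assumption.

Code:
import random

def time_to_collapse(blocks_per_sec=1.0, msg_delay=10.0, seed=1):
    """Two forks with equal hash power. Their lead over one another is a random
    walk; once the lead exceeds the blocks producible within the message delay,
    the losing side can no longer dismiss the winning batch and the fork collapses."""
    random.seed(seed)
    t, lead = 0.0, 0
    threshold = max(1, int(blocks_per_sec * msg_delay))
    while abs(lead) <= threshold:
        t += random.expovariate(2 * blocks_per_sec)  # next block, on either fork
        lead += 1 if random.random() < 0.5 else -1   # equal hash power on both sides
    return t

print(time_to_collapse())  # finite with probability 1: the random walk always exits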
sr. member
Activity: 360
Merit: 251
2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it a "local-only change", because as we are discussing here, it appears that there's a risk of netsplits that don't re-converge, especially if different nodes follow different rules.
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two fractions of the network work on different trees, and each fraction uses an anti-DoS rule that rejects blocks from the other fraction's tree unless they arrive in a batch that proves the competing tree's PoW wins. But because of propagation lag, each fraction solves more PoW blocks in its own tree, so by the time the batch arrives from the other tree, that batch is already losing and is therefore rejected. No?
newbie
Activity: 21
Merit: 0
Some questions for the devs, please: what are the chances of seeing this implemented in Bitcoin? Would it take very long? What's the biggest obstacle?

It would need to be post-pruning to prevent DoS attacks.

EDIT: But to be clear, it gives ~0 benefit until you hard-fork to significantly increase the transaction rate.

maaku, can you provide a link to how Bitcoin would prevent similar DoS attacks (without checkpoints that is)?

I'd like to address that critique, but I don't know how it's solved in Bitcoin.
newbie
Activity: 21
Merit: 0
Going back to your original post:

Quote
We note that block propagation times are the primary obstacle for scalability.

The obstacle to scalability is keeping Bitcoin decentralized while scaling up; we know that Bitcoin can scale if we sacrifice decentralization - Visa and Mastercard are doing just fine. Ultimately you're proposing something that solves one problem - the high granularity of confirmations and the long wait associated with them - at the expense of scalability with decentralization. So don't claim you've done anything other than present an interesting academic analysis of a specific trade-off possible in the system.

Peter,

I keep trying to agree with you.

One last try before I give up:

Surely you must agree that if there were no block propagation delays, there would be no centralization associated with scaling up. Blocks would go instantly to everyone, and there would be no problems with miners getting more than their share of the blocks.

I guess the difference in our perspectives is that I don't think these are optional tradeoffs. If at some point Bitcoin becomes mainstream, there will be increased pressure to allow more transactions through (or alternatively, fees will soar higher than those of the money transmitters Bitcoin is trying to replace). You then run into several problems. One is a security problem against attackers. The other is a pull towards centralization. All I'm saying is that we claim to solve the first problem. The second clearly remains.
legendary
Activity: 905
Merit: 1011
2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it a "local-only change", because as we are discussing here, it appears that there's a risk of netsplits that don't re-converge, especially if different nodes follow different rules.
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.
sr. member
Activity: 360
Merit: 251
iddo, I think there are a few things to point out with respect to that wizards' conversation (where we didn't really reach consensus):

This is true; so far no one has claimed that this chain selection rule cannot withstand DoS (when checkpoints are removed), only that it's unclear whether it can.

1) The DoS danger can be avoided with some trivial heuristics, for example: drop or ignore orphaned blocks with less than factor X of the difficulty of the currently accepted best block, and drop the blocks furthest from the current best block when orphan storage exceeds Y megabytes/gigabytes. In reality there is a spectrum of possibilities between nodes having omniscient knowledge about all forks (pure GHOST) and nodes blindly following the most-work chain (Bitcoin's current behavior). Even a heuristic-limited, DoS-safe version somewhere in the middle of that spectrum would be an improvement over today.

Those heuristics are trivial; whether they work is less trivial Smiley. It seems that when you apply such heuristics to the GHOST chain selection rule, it's more sensitive (compared to Bitcoin) to netsplits/convergence issues, and/or to communication blowup, as nodes might need to negotiate with their peers about which missing solved blocks should be re-transmitted. This will of course be easier to discuss with a specific proposal for anti-DoS rules in hand, but it's probably difficult to design the best rules on paper and have a theoretical proof that they work; therefore (as mentioned in #bitcoin-wizards) what we really need is simulations.

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it a "local-only change", because as we are discussing here, it appears that there's a risk of netsplits that don't re-converge, especially if different nodes follow different rules.

3) What this patch actually accomplishes is a little obscured by the hyperbole of the OP. There were altcoins which had a constant difficulty, and ones which had incredibly low block intervals. I'm speaking in the past tense: they basically imploded when their network size increased and the propagation time approached the block interval. This patch wouldn't fix that problem (because eventually you run out of bandwidth), but it does let you get a lot closer to the network's block propagation time before witnessing catastrophic consequences. This is important because when we increase the maximum transaction processing rate of Bitcoin, it will be by either dropping the block interval or increasing the maximum block size (both of which are hard forks), and how far we can safely do that is bounded by this limit, among others.

Agreed.
legendary
Activity: 1120
Merit: 1149
Peter, I agree with you that there is a big problem with high orphan rates, but this is not a symptom of GHOST, but rather a symptom of high transaction rates.

Whether you use GHOST or longest-chain, at high rates you must resort either to large blocks that propagate slowly or to high block creation rates. There is no getting around that. Both cause a high orphan rate (or perhaps the right term is a high rate of off-chain blocks). We do not pretend GHOST solves this problem. The only thing we claim is that GHOST makes the protocol secure at high rates -- the 50% attack can no longer be executed with less than 50% of the hash rate.

Going back to your original post:

Quote
We note that block propagation times are the primary obstacle for scalability.

The obstacle to scalability is keeping Bitcoin decentralized while scaling up; we know that Bitcoin can scale if we sacrifice decentralization - Visa and Mastercard are doing just fine. Ultimately you're proposing something that solves one problem - the high granularity of confirmations and the long wait associated with them - at the expense of scalability with decentralization. So don't claim you've done anything other than present an interesting academic analysis of a specific trade-off possible in the system.

This also suggests why the Bitcoin community has talked about the underlying idea in your paper repeatedly (1) among themselves, but no one has ever bothered to investigate it more fully - it's obviously a bad trade-off in the context of proof-of-work (with the possible exception of applying the idea to the p2pool share-chain).

1) I myself proposed it for my zookeyv key-value global consensus system proposal - #bitcoin-wizards 2013-05-31 - though in the context of proof-of-sacrifice.
legendary
Activity: 905
Merit: 1011
Some questions for the devs, please: what are the chances of seeing this implemented in Bitcoin? Would it take very long? What's the biggest obstacle?

It would need to be post-pruning to prevent DoS attacks.

EDIT: But to be clear, it gives ~0 benefit until you hard-fork to significantly increase the transaction rate.
newbie
Activity: 21
Merit: 0
Some questions for the devs, please: what are the chances of seeing this implemented in Bitcoin? Would it take very long? What's the biggest obstacle?
newbie
Activity: 21
Merit: 0
Quote
1) Advantage of nodes with low latency. Nodes that can reach the rest of the network quickly have more blocks on the main chain and fewer orphans.

Answer: This is already going on in the current bitcoin protocol. We don't improve things, but our modification doesn't hurt either.

Actually, it makes the problem significantly worse, as more orphans lead to more opportunities for a larger miner to gain an advantage over a smaller one through orphans.

Note the equations for P(Q,L) in the paper I'm working on analyzing centralization incentives; they all depend on the block interval in such a way that a smaller block interval makes larger hashing power more significant.

I suggest you correct your post.


Peter, I agree with you that there is a big problem with high orphan rates, but this is not a symptom of GHOST, but rather a symptom of high transaction rates.

Whether you use GHOST or longest-chain, at high rates you must resort either to large blocks that propagate slowly or to high block creation rates. There is no getting around that. Both cause a high orphan rate (or perhaps the right term is a high rate of off-chain blocks). We do not pretend GHOST solves this problem. The only thing we claim is that GHOST makes the protocol secure at high rates -- the 50% attack can no longer be executed with less than 50% of the hash rate.
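For readers unfamiliar with the rule being defended here, a minimal Python sketch of GHOST's fork choice, contrasted with longest-chain. It simplifies the paper's rule by counting blocks in each subtree (i.e. it assumes constant difficulty) and ignores validation entirely.

Code:
def subtree_size(block, children):
    """Number of blocks in the subtree rooted at `block` (the block plus its descendants)."""
    return 1 + sum(subtree_size(c, children) for c in children.get(block, []))

def ghost_tip(genesis, children):
    """Walk down from genesis, always stepping into the child with the heaviest subtree."""
    block = genesis
    while children.get(block):
        block = max(children[block], key=lambda c: subtree_size(c, children))
    return block

# A three-block chain A1-A2-A3 competes with a bushier branch under B1.
children = {"G": ["A1", "B1"], "A1": ["A2"], "A2": ["A3"],
            "B1": ["B2a", "B2b", "B2c"]}
print(ghost_tip("G", children))  # a block on the B side: 4 blocks of work beat a length-3 chain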
legendary
Activity: 905
Merit: 1011
iddo, I think there are a few things to point out with respect to that wizards' conversation (where we didn't really reach consensus):

1) The DoS danger can be avoided with some trivial heuristics, for example: drop or ignore orphaned blocks with less than factor X of the difficulty of the currently accepted best block, and drop the blocks furthest from the current best block when orphan storage exceeds Y megabytes/gigabytes (see the sketch after point 3 below). In reality there is a spectrum of possibilities between nodes having omniscient knowledge about all forks (pure GHOST) and nodes blindly following the most-work chain (Bitcoin's current behavior). Even a heuristic-limited, DoS-safe version somewhere in the middle of that spectrum would be an improvement over today.

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

3) What this patch actually accomplishes is a little obscured by the hyperbole of the OP. There were altcoins which had a constant difficulty, and ones which had incredibly low block intervals. I'm speaking in the past tense: they basically imploded when their network size increased and the propagation time approached the block interval. This patch wouldn't fix that problem (because eventually you run out of bandwidth), but it does let you get a lot closer to the network's block propagation time before witnessing catastrophic consequences. This is important because when we increase the maximum transaction processing rate of Bitcoin, it will be by either dropping the block interval or increasing the maximum block size (both of which are hard forks), and how far we can safely do that is bounded by this limit, among others.
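For concreteness, a small Python sketch of the two heuristics in point 1. The block fields (difficulty, size_bytes, dist_from_best) are hypothetical, and the factor and storage cap are placeholders, not recommended values.

Code:
from collections import namedtuple

Block = namedtuple("Block", "difficulty size_bytes dist_from_best")  # hypothetical fields

MIN_DIFFICULTY_FACTOR = 0.5      # "factor X" of the current best block's difficulty
MAX_ORPHAN_BYTES = 100 * 2**20   # "Y megabytes" of orphan storage

def keep_orphan(block, best_block):
    """Heuristic 1: ignore orphans whose difficulty is far below the active chain's."""
    return block.difficulty >= MIN_DIFFICULTY_FACTOR * best_block.difficulty

def prune_orphans(orphans):
    """Heuristic 2: when storage exceeds the cap, drop the blocks furthest from the best block."""
    orphans = sorted(orphans, key=lambda b: b.dist_from_best)
    while orphans and sum(b.size_bytes for b in orphans) > MAX_ORPHAN_BYTES:
        orphans.pop()  # furthest-away orphan goes first
    return orphans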