
Topic: New paper: Accelerating Bitcoin's Transaction Processing

full member
Activity: 126
Merit: 100
Quote
Well, you could assign channels to geographic regions or industries, but each blockchain is shared by two channels.  So if a client in Europe (channel 3, let's say) wants to transmit money to a client in North America (channel 2), they'd need to use blockchain 2.3, which is distributed on both continents.  For example, with 6 channels you might have:

channel 1 blockchains: 1.2, 1.3, 1.4, 1.5, 1.6  (Asia)
channel 2 blockchains: 1.2, 2.3, 2.4, 2.5, 2.6  (North America)
channel 3 blockchains: 1.3, 2.3, 3.4, 3.5, 3.6  (Europe)
channel 4 blockchains: 1.4, 2.4, 3.4, 4.5, 4.6  (South America)
channel 5 blockchains: 1.5, 2.5, 3.5, 4.5, 5.6  (Africa)
channel 6 blockchains: 1.6, 2.6, 3.6, 4.6, 5.6  (Australia)

and all channels share blockchain zero.

(which is a total of 15 blockchains, not 36)

But no blockchain would be restricted to just one continent, because every channel has one blockchain (besides zero) in common with every other channel.  The idea is that because most transactions involve just two people, you will very rarely need a blockchain that spans more than two channels.  Blockchain zero is reserved for transactions involving more than two people.

Still, your idea is valid; someone in channel 3 could give you a bitcoin.europe.Gux8854.... payment link to click on. But there's no way to enforce such a breakdown; the guy with the channel 3 address might live in Australia.

I think it makes more sense to have the chains you described, plus six additional chains, one per channel, for transactions that take place within a single channel. So you would have blockchain 0 and 21 subchains (15 pair chains plus 6 intra-channel chains):
channel 1 blockchains: 1, 1.2, 1.3, 1.4, 1.5, 1.6  (Asia)
channel 2 blockchains: 2, 1.2, 2.3, 2.4, 2.5, 2.6  (North America)
channel 3 blockchains: 3, 1.3, 2.3, 3.4, 3.5, 3.6  (Europe)
channel 4 blockchains: 4, 1.4, 2.4, 3.4, 4.5, 4.6  (South America)
channel 5 blockchains: 5, 1.5, 2.5, 3.5, 4.5, 5.6  (Africa)
channel 6 blockchains: 6, 1.6, 2.6, 3.6, 4.6, 5.6  (Australia)

A transaction from channel Europe to channel Asia will be recorded only in blockchain 0 and blockchain 1.3.
A transaction from channel Europe to channel Europe will be recorded only in blockchain 0 and blockchain 3.

The header of subchain Europe-Asia will include a field stating the total transaction volume between channels Europe and Asia, in either direction.
Blockchain 0 will then include the headers of the subchains, so it directly knows how many coins are within each individual channel.

Compared to your proposal, this has the advantage that someone in channel Asia does not need to see a transaction that takes place from channel Europe to channel Europe. As I read your proposal, everyone would have to record transactions that take place entirely within another channel. Therefore I think it makes sense to have one chain per channel, in addition to the chain per pair of channels that you proposed.

You can then repeat the procedure and create channels /europe/spain and /europe/italy; the channel /europe would then act as the parent chain, recording only the coin flow between its subchannels and not within them. With this tree structure it would be possible to have a lot of channels while the number of chains grows only linearly with the number of channels. For example, if each of the channels above had another 6 subchannels, you would have a total of 1 + 6 + 36 = 43 channels (top, second, and third level) and only 1 + 21 + 6×21 = 148 chains.
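To sanity-check that count, here's a quick back-of-the-envelope script (just a sketch I put together for illustration; the function names are mine, not from any proposal):

Code:
# Count unique chains in the channel tree described above.
# A group of b sibling channels needs b intra-channel chains plus
# b*(b-1)/2 pair chains, i.e. b*(b+1)/2 chains per group.

def chains_per_group(b):
    return b + b * (b - 1) // 2   # intra-channel chains + one per pair

def total_chains(branching, levels):
    total = 1    # blockchain 0 at the root
    groups = 1   # the root's children form the first group
    for _ in range(levels):
        total += groups * chains_per_group(branching)
        groups *= branching   # every channel spawns its own subgroup
    return total

print(total_chains(6, 1))   # 1 + 21 = 22 chains for 1 + 6 channels
print(total_chains(6, 2))   # 1 + 21 + 6*21 = 148 chains for 1 + 6 + 36 channels

The chains-per-channel ratio stays at about (b+1)/2 for branching factor b, which is where the linear growth comes from.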
legendary
Activity: 924
Merit: 1132
Well, you could assign channels to geographic regions or industries, but each blockchain is shared by two channels.  So if a client in Europe (channel 3, let's say) wants to transmit money to a client in North America (channel 2), they'd need to use blockchain 2.3, which is distributed on both continents.  For example, with 6 channels you might have:

channel 1 blockchains: 1.2, 1.3, 1.4, 1.5, 1.6  (Asia)
channel 2 blockchains: 1.2, 2.3, 2.4, 2.5, 2.6  (North America)
channel 3 blockchains: 1.3, 2.3, 3.4, 3.5, 3.6  (Europe)
channel 4 blockchains: 1.4, 2.4, 3.4, 4.5, 4.6  (South America)
channel 5 blockchains: 1.5, 2.5, 3.5, 4.5, 5.6  (Africa)
channel 6 blockchains: 1.6, 2.6, 3.6, 4.6, 5.6  (Australia)

and all channels share blockchain zero.

(which is a total of 15 blockchains, not 36)

But no blockchain would be restricted to just one continent, because every channel has one blockchain (besides zero) in common with every other channel.  The idea is that because most transactions involve just two people, you will very rarely need a blockchain that spans more than two channels.  Blockchain zero is reserved for transactions involving more than two people.

Still, your idea is valid; someone in channel 3 could give you a bitcoin.europe.Gux8854.... payment link to click on. But there's no way to enforce such a breakdown; the guy with the channel 3 address might live in Australia.
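If it helps, the chain-selection rule above fits in a few lines of code. This is only a sketch under my reading of the post (the function name is mine, and the intra-channel fallback is my own assumption, since this scheme has no dedicated intra-channel chains):

Code:
# Which blockchain records a transaction, per the scheme above.
# Assumption: an intra-channel tx just uses one of that channel's
# pair chains, as the scheme has no dedicated intra-channel chain.

def blockchain_for(channels):
    distinct = sorted(set(channels))
    if len(distinct) == 1:                  # intra-channel transaction
        c = distinct[0]
        neighbour = 2 if c == 1 else 1      # any pair chain containing c
        distinct = sorted([c, neighbour])
    if len(distinct) == 2:                  # the common two-party case
        return "%d.%d" % (distinct[0], distinct[1])
    return "0"                              # rare multi-channel tx

print(blockchain_for([3, 2]))      # Europe <-> North America -> "2.3"
print(blockchain_for([3]))         # intra-Europe -> "1.3"
print(blockchain_for([1, 3, 5]))   # three channels -> blockchain "0"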



legendary
Activity: 924
Merit: 1132
How about FOUR THOUSAND (or even FORTY THOUSAND) transactions per second?

This is so cool!  I had come up with a scalability solution of my own earlier this week, and this one MULTIPLIES by it!  

A fundamental change in protocol design is exactly what Bitcoin needs to continue to be viable as it grows.  This will do it.  Actually, either of these will do it.  But combining them is AWESOME!

By combining these two ideas we could get THOUSANDS of transactions per second.  

My scalability solution was to split wallets into multiple "channels" and have one blockchain for each pair of channels.  So when 6 channels (15 blockchains) are having trouble coping with transaction volume, you can shift to 7 channels (21 blockchains).  Every client would be listening on one channel (5 blockchains before the additional channel is added; 6 blockchains afterward), and each channel would carry 1/6 (or 1/7) of the total tx volume.  By the time you get up to 20 channels (190 blockchains) you could handle a thousand tx per second even at Bitcoin's current per-chain speed.

You'd need one additional blockchain (probably with an extremely low volume of transactions) which would be in all channels, to handle the rare tx that involves wallets in *more* than 2 different channels (all of them much more complicated than the standard pay-to-script-hash). So each single-channel client would actually be listening to 6 (or 7, or 20) blockchains - but the last one would be nearly negligible in bandwidth and volume.

These blockchains would back each other up; every time a new block appears in any blockchain, it would list the hashes of the most recent block found so far in each of the other blockchains.  The mining awards would simply be split between blockchains.  Thus all blockchains would be "merge mined" together.

One downside is that any particular transaction would only be checked by the clients on two channels rather than by all clients.  I.e., all users would be relying on tx checked by 1/3 (or 2/7, or 1/10) of total users, so it's no longer a completely trustless model; you'd be trusting a few thousand other users not to collude to deceive you, rather than trusting that ALL of them would not collude to deceive you.

Upsides:

It would be easier to "throttle" bandwidth without refusing altogether to participate, simply by limiting the number of blockchains in which you advertise full node (or "download this blockchain from me") service.  Bandwidth costs could even be made much less onerous than that; see the second point below.

It would be easier to run a fully participating node for one channel.  It would use only 1/6 or 1/7 the total bandwidth and tx-checking CPU load, and only 1/6 or 1/7 (or far less; see the next point) the storage, of Bitcoin's single-blockchain architecture, spread across all blockchains in your channel.

There is a huge privacy and bandwidth enhancement: the length of a particular blockchain could be limited. You could at any time transfer all of a blockchain's UTXO to a different blockchain (and everybody could check that you did it correctly), shut down the now-empty blockchain, and start a new one to replace it. Effectively this would amount to a very aggressive "pruning" strategy for releasing transaction history -- which would also be a huge contribution to financial privacy and a huge savings in bandwidth and storage. At any given time, the blockchains in use might average only a week or so old. Only one blockchain, probably the "all-channels" chain, which would have a very light transaction history anyway, would need to stretch all the way back to the Genesis Block. Of course this wouldn't help against an opponent who keeps copies of all historical blockchains, but it would preclude downloading all transactions in history from any full node once the blockchain those tx were on has been pruned. (A rough sketch of this rollover follows the list below.)

Any user who wants to could support the protocol more fully by subscribing to multiple (or all) channels, given enough bandwidth and storage to do so.
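Here's the promised rough sketch of the chain rollover, under my own assumptions about how blocks and transactions would be represented (nothing here is from an actual implementation):

Code:
# Sketch of the aggressive pruning described above: a chain retires by
# sweeping its entire UTXO set into a successor chain, and anyone who
# holds the old chain can verify the sweep spends exactly that set.
# The dict/tuple layout is an assumption for illustration only.

def utxo_set(blocks):
    utxos = set()
    for block in blocks:                     # each block: a list of txs
        for tx in block:
            for outpoint in tx["inputs"]:    # (txid, index) pairs
                utxos.discard(outpoint)
            for i in range(len(tx["outputs"])):
                utxos.add((tx["txid"], i))
    return utxos

def sweep_is_valid(old_chain_blocks, sweep_tx):
    # The sweep must spend exactly the old chain's unspent outputs.
    return set(sweep_tx["inputs"]) == utxo_set(old_chain_blocks)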

Now, what's awesome about this is that these two enhancements multiply by each other. If we can get *EACH BLOCKCHAIN* up to 200 tx per second, then multiplying the number of blockchains as outlined above would be many times as effective.  

With 7 channels, or 21 blockchains (22 counting the all-channels chain), we'd be up to over 4,000 tx/second, rather than the 150 or so I had been envisioning. With 20 channels (190 blockchains) you could be approaching 40,000 tx/second rather than the 1,400 or so I had been envisioning.
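For what it's worth, the arithmetic behind those figures (the per-chain rates are the assumptions stated above):

Code:
# Each tx lands on one pair chain, so aggregate throughput is roughly
# (number of pair chains) x (per-chain tx rate).

def pair_chains(channels):
    return channels * (channels - 1) // 2    # one blockchain per pair

for channels, per_chain_tps in [(20, 7), (7, 200), (20, 200)]:
    chains = pair_chains(channels)
    print("%2d channels = %3d chains -> %6d tx/s at %d tps per chain"
          % (channels, chains, chains * per_chain_tps, per_chain_tps))

# 20 channels = 190 chains ->   1330 tx/s at 7 tps (Bitcoin's current speed)
#  7 channels =  21 chains ->   4200 tx/s at 200 tps per chain
# 20 channels = 190 chains ->  38000 tx/s at 200 tps per chain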
legendary
Activity: 1792
Merit: 1111
I am not sure if I've got it right, but I think we can implement it like this:

1. Let's say there are two miners, Alice and Bob, finding blocks of the same height at almost the same time.

2. All miners see both blocks. However, some see Alice's block first, some see Bob's block first.

3. Those seeing Alice's block first will by default mine on top of Alice's block. However, they will also include Bob's block header in the coinbase. (The hash of the previous block could be omitted to save some space.)

4. Those seeing Bob's block first will similarly put Alice's block header in the coinbase.

5. No matter what the next block is building on, the header of the orphaned block is permanently and irreversibly recorded in the blockchain.

6. The total PoW of the chain should also count the orphaned blocks, provided they satisfy the difficulty. The actual content of an orphaned block is irrelevant, since it is only used as a confirmation of the previous block.

There is a drawback: in case there is a fork, SPV nodes have to read the coinbases in order to determine the longest chain. With 1-second blocks that would mean tons of branches, and SPV nodes may have some trouble.

I think it is quite workable. We may even implement it without changing the rules first, and see how it works out. It would help strengthen the network even if we keep the 10-minute block.
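If I understand the proposal, counting the total chain work (step 6) could look roughly like this; the data layout here is my own assumption for illustration, not actual Bitcoin code:

Code:
# Total PoW counts orphan headers embedded in coinbases (step 6 above),
# provided each embedded header meets its difficulty target.

def work_from_target(target):
    # Bitcoin's standard work estimate for one header: 2^256 / (target + 1)
    return (1 << 256) // (target + 1)

def total_chain_work(chain):
    total = 0
    for block in chain:                      # blocks as dicts (illustrative)
        total += work_from_target(block["target"])
        for orphan in block["orphan_headers"]:
            if orphan["pow_hash"] <= orphan["target"]:   # meets difficulty
                total += work_from_target(orphan["target"])
    return total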
newbie
Activity: 27
Merit: 0
Is anybody interested in a notary-based system with temporary pay-to-script-hash (P2SH) wallets using multisig transactions?

My scheme requires no Bitcoin protocol change, and requires only very limited trust in notaries, which can be replaced quickly and whose malice is easy to prove (just show two conflicting transactions that both got signed; these can be distributed globally in seconds to everybody who has set their client to trust that notary). This means that notaries have practically nothing at all to gain from an attack, and it still makes 0-confirmation transactions trustworthy.

http://roamingaroundatrandom.wordpress.com/2013/11/30/bitcoin-idea-temporary-notarized-wallets-secure-zero-confirmation-payments-using-temporary-notarized-p2sh-multisignature-wallets/
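The fraud proof described there is mechanically checkable. A minimal sketch, assuming some signature-verification routine and a simple tx representation (both are placeholders of mine, not from the linked write-up):

Code:
# A notary is provably malicious if it signed two different transactions
# that spend the same outpoint. verify_sig() is a placeholder callable.

def is_fraud_proof(tx_a, tx_b, notary_pubkey, verify_sig):
    if tx_a["txid"] == tx_b["txid"]:
        return False                          # same tx, not a conflict
    conflicting = bool(set(tx_a["inputs"]) & set(tx_b["inputs"]))
    both_signed = (verify_sig(notary_pubkey, tx_a) and
                   verify_sig(notary_pubkey, tx_b))
    return conflicting and both_signed        # broadcastable proof of malice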
full member
Activity: 126
Merit: 100
Quote
Would it be possible to implement this directly from the Bitcoin blockchain, creating a split, where the current holders of bitcoins are also the holders of Bitcoin 2.0? And then people will be vested in both, with a market between the two eventually finding out which one "wins."
This is how I would go about it when making an altcoin, too. But for most altcoiners the appeal is that they can be early adopters with their new coin. You don't have that with the bitcoin 2.0 approach.

Still, I'm not so sure how good it would be to fragment the Bitcoin economy.

Just fragment it once, on say Feb 1, 2014, to test it out and see how everything is working, with the complete understanding of everyone that, if the 'treecoin' implementation works, you will 'fragment' again on March 1st, 2014, with the intent of hard-forking THAT into new Bitcoin.

Therefore, passive users can coordinate to use Bitcoin Chain, knowing that their BTC will convert to Bitcoin Tree on March 1st if all is well, and not worry about any messy stuff in between. Those using Treecoin between Feb and March will get them for free but know that they will eventually lose them, making for a perfect test environment and no financial instability. Best of both worlds.

This is a very good suggestion for how a hard fork should be implemented.
member
Activity: 115
Merit: 10
Quote
Would it be possible to implement this directly from the Bitcoin blockchain, creating a split, where the current holders of bitcoins are also the holders of Bitcoin 2.0? And then people will be vested in both, with a market between the two eventually finding out which one "wins."
This is how I would go about it when making an altcoin, too. But for most altcoiners the appeal is that they can be early adopters with their new coin. You don't have that with the bitcoin 2.0 approach.

Still, I'm not so sure how good it would be to fragment the Bitcoin economy.

Just fragment it once, on say Feb 1, 2014, to test it out and see how everything is working, with the complete understanding of everyone that, if the 'treecoin' implementation works, you will 'fragment' again on March 1st, 2014, with the intent of hard-forking THAT into new Bitcoin.

Therefore, passive users can coordinate to use Bitcoin Chain, knowing that their BTC will convert to Bitcoin Tree on March 1st if all is well, and not worry about any messy stuff in between. Those using Treecoin between Feb and March will get them for free but know that they will eventually lose them, making for a perfect test environment and no financial instability. Best of both worlds.
member
Activity: 64
Merit: 10
How about splitting the reward equally among all blocks at level h? The exact tx paying the miners would just be included x levels later.
legendary
Activity: 2156
Merit: 1131
Yes, please do something.
member
Activity: 115
Merit: 10
Quote
The basic observation behind the protocol modification that we suggest is that blocks that are off the main chain can still contribute to a chain's irreversibility. Consider for example a block B, and two blocks that were created on top of it, C1 and C2, i.e., parent(C1) = parent(C2) = B. The Bitcoin protocol, in its current form, will eventually adopt only one of the sub-chains rooted at C1 and C2, and will discard the other. Note, however, that both blocks were created by nodes that have accepted block B and its entire history as correct. The heaviest sub-tree protocol we suggest makes use of this fact, and adds additional weight to block B, helping to ensure that it will be part of the main chain.

Cool!
legendary
Activity: 1232
Merit: 1094
Quote
Since high transaction rates imply many conflicting blocks are created, it would be quite useful if these blocks were not really lost. In fact, each block can be seen as supporting not just the transactions inside it, but also those embedded in previous blocks. Even if a block is not in the main chain, we can still count the confirmations it gives previous blocks as valid. This is the basis of our proposed modification, which we call the "Greedy Heaviest-Observed Sub-Tree" chain selection rule.

Roughly speaking, since each block contains a hash of its predecessor, all blocks form a tree structure, rooted at the Genesis Block. Bitcoin currently selects the accepted history as the longest (or rather heaviest) chain in the tree. We suggest another approach: At each fork, pick the sub-tree containing the most blocks (or rather the blocks with greatest combined difficulty).

What is nice about this is that it is backwards compatible.

For blocks that are off the main chain, you only need to send the headers.  This could be made part of the headers-first system.  Each node would download the entire header tree first and then download the full blocks.
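The quoted rule is compact enough to state in code. A toy sketch of the greedy heaviest-subtree walk, counting blocks rather than combined difficulty for simplicity (all names are mine):

Code:
# GHOST chain selection: at each fork, descend into the child whose
# subtree holds the most blocks (the paper uses combined difficulty).

def subtree_size(block, children):
    return 1 + sum(subtree_size(c, children) for c in children.get(block, []))

def ghost_tip(genesis, children):
    block = genesis
    while children.get(block):
        block = max(children[block], key=lambda c: subtree_size(c, children))
    return block

# children maps each block to the blocks extending it, e.g.:
children = {"G": ["A", "B"], "B": ["C", "D"], "D": ["E"]}
print(ghost_tip("G", children))   # "E": B's subtree outweighs A's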

Quote
At high transaction rates, GHOST builds slightly fewer blocks in its main chain (because it doesn't always extend the longest chain), thus slightly lowering the number of transactions accepted per second, but it does so more securely!

The difficulty update would have to be based on the main-chain length, so the block rate would remain the same.

Quote
In fact, we estimate that 1 second blocks can be easily combined with rates of over 200 TPS.

A faster block rate mainly helps with confirmation time rather than transactions per second.  Your node still needs to process the entire block to make sure all the transactions are valid.

A much higher block rate (with lots of blocks thrown away) would mean that nodes would end up verifying multiple blocks.

Quote
This implies quite fast authorization times, but much more importantly, an increased granularity in confirmations. Even 1 confirmation gives some level of certainty regarding irreversibility, and it comes almost instantly when blocks are generated every second.

Since Bitcoin's security relies primarily on the number of confirmations received instead of on elapsed time, we end up getting irreversibility of transactions with very high probability in far less than 10 minutes.  

It depends on the value of the hashing power, too.  Your transaction needs to have hashing power built on top of it that is worth much more than the transaction itself.  The block rate determines protection against random reversal.
hero member
Activity: 633
Merit: 500
Fair enough.  People fear protocol changes though, so this may be a political issue.
hero member
Activity: 772
Merit: 501
^ This is trust-based, whereas what's needed is a protocol rule that makes this determination automatic.
hero member
Activity: 633
Merit: 500
For merchants wishing for nearly instant security, why not simply have all the large pools sign each valid transaction and send the batch of signatures out to the merchant?

The pools would be verifying that they will include the transaction in their next block, should they find it.  Suppose the merchant is basically OK with 0-confirm purchases for small things, but would like a little more security if he could get it.  He signs up with what we'll call "The Service", and The Service has contracted with the 10 largest pool operators.  For all incoming BTC to the merchant's addresses, The Service sends a confirmation request to all the pools.

Will you be accepting this transaction in your next block?

They all say "YES" and, depending on the threshold the merchant is willing to live with (perhaps he wants 8 of 10 to immediately say YES), he gets his verification in the form of signatures or no-replies.  Double spends are prevented because all the pools already said YES to one or the other, so one merchant is getting perhaps 8 Yes, 1 No, and 1 No Reply.  The other merchant is getting 8 No, 1 Yes, and 1 No Reply.

All this happens before the next block is found.

If a pool says Yes but fails to include the transaction, or says No but includes it anyway, the pool is either on the hook for the money OR it loses credibility and market share.
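A sketch of the merchant-side check being described; the names are mine, and "The Service" would presumably run this on the merchant's behalf:

Code:
# Accept a 0-conf payment only if enough pools promise to include it.
# replies: pool name -> "YES", "NO", or None for no reply.

def accept_zero_conf(replies, threshold=8):
    yes_votes = sum(1 for r in replies.values() if r == "YES")
    return yes_votes >= threshold

merchant_a = dict({"pool%d" % i: "YES" for i in range(8)},
                  pool8="NO", pool9=None)
print(accept_zero_conf(merchant_a))   # True: 8 of 10 said YES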
legendary
Activity: 1708
Merit: 1020
Great idea. We need an altcoin implementing it. Bounty: I pledge 10BTM

edit: "Heaviest branch selection"
newbie
Activity: 21
Merit: 0
I love you!!!
sr. member
Activity: 333
Merit: 252
There's something in the paper about SPV clients: "These nodes will face a larger number of block headers to download."
Can you give an estimate of how much larger this would be?
legendary
Activity: 1526
Merit: 1134
Copying my response from an email thread:


I really like this paper. It's nice to see a strong theoretical grounding for what some of the rather arbitrary choices in Bitcoin could be.

One thing I note is that there are lots of ways to optimise block wire representations. Sending blocks as lists of hashes, for example, would use 32-byte hashes in our current code just because it's lazily reusing the Bloom filtering path. But 32 bytes is massive overkill for this. You only need to distinguish between transactions in the mempool. You could easily drop to 16, 8 or even 4 byte hashes (hash prefixes) and still be able to disambiguate what the peer's block message contains very reliably. Indeed, they could be varint-encoded so the hash is only as long as it needs to be given current traffic loads. Malicious actors might try to create collisions to force block propagation times up, but if peers negotiate random byte subsets upon connect for relaying of block contents, then such an attack becomes impossible.

Given the way the maths works out, fairly simple optimisations like that can keep block messages small even with very large amounts of traffic and a 10 minute propagation time. So the findings in this paper seem pretty reassuring to me. There is so much low hanging fruit for optimising block propagation.
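A sketch of the prefix idea as I read it (the shortest prefix that is still unique within the mempool; the varint framing and the negotiated byte subsets are omitted):

Code:
# Relay a block as the shortest hash prefixes that remain unambiguous
# among current mempool transactions.

def shortest_unique_prefix(tx_hash, mempool_hashes):
    for n in range(4, 33):                    # start at 4 bytes, as suggested
        prefix = tx_hash[:n]
        matches = [h for h in mempool_hashes if h.startswith(prefix)]
        if len(matches) == 1:                 # only this tx matches
            return prefix
    return tx_hash                            # worst case: full 32 bytes

def encode_block(block_tx_hashes, mempool_hashes):
    return [shortest_unique_prefix(h, mempool_hashes)
            for h in block_tx_hashes]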

Re: SPV wallets. I'm not sure what kind of probabilistic verification you have in mind. The cost of processing headers is primarily in downloading and storing them. Checking that they are correct and that the PoWs are correct isn't that expensive. So it's not really obvious to me how to do probabilistic verification. In fact, it may be that, again, smarter encodings just make this problem go away. The prev block hash does not need to be written to the wire as 32 bytes. It can be simply however many bytes are required to disambiguate from the set of possible chain heads. When seeking forward using getheaders there is only ever one possible previous block header, so the largest component of the 80 bytes can be eliminated entirely. Likewise, given the rarity of block header version transitions, the redundant repetition of the version bytes can also be excluded in getheaders responses. Those two optimisations together can nearly halve the header size for the most common wire cases, allowing a doubling of the block rate with no disk/bandwidth impact on SPV clients.
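For reference, the size arithmetic works out as follows (the 80-byte layout is the standard Bitcoin header; the elisions are my reading of the suggestion above):

Code:
# Standard header: version(4) + prev(32) + merkle(32) + time(4) +
# bits(4) + nonce(4) = 80 bytes. In a getheaders response the prev
# hash is implied and the version rarely changes, so elide both.
full_header = 4 + 32 + 32 + 4 + 4 + 4         # 80 bytes
compressed  = full_header - 32 - 4            # 44 bytes
print(full_header, "->", compressed, "bytes per header")
print("saving: %.0f%%" % (100.0 * (32 + 4) / full_header))   # 45%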
member
Activity: 60
Merit: 10
A typo in the paper caught my eye:
"As of November 2014, it is valued.."

You're obviously referring to 2013.