
Topic: Scalability - because it's good to have stretch goals

sr. member
Activity: 476
Merit: 250
Tangible Cryptography LLC
Other than needing storage space which grows by 35 TB per day, you are all set.  While storage capacity likely will grow, I doubt anyone is going to have a drive with room for 12.6 PB of new data per year anytime soon.
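
For reference, the back-of-envelope math behind those figures; the ~400-byte average transaction size is an assumption, and the exact numbers shift with it:

Code:
# Storage growth at 1 million tps, assuming ~400-byte transactions.
TPS = 1_000_000
TX_BYTES = 400               # assumed average transaction size
SECONDS_PER_DAY = 86_400

bytes_per_day = TPS * TX_BYTES * SECONDS_PER_DAY
print(bytes_per_day / 1e12)        # ~34.6 TB per day
print(bytes_per_day * 365 / 1e15)  # ~12.6 PB per year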

On edit: you updated storage ... not storing locally simply trades local storage for even more bandwidth. I say we get to 100 tps, then worry about 10,000 tps.  Wondering whether Bitcoin can scale to three or four orders of magnitude larger than the largest global payment network is just silly.
legendary
Activity: 1400
Merit: 1013
I think the wiki article for Scalability is not ambitious enough; Bitcoin should be capable of processing 1 million transactions per second by 2030. That rate would allow a population of 10 billion people to each initiate roughly 10 transactions per day.
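
A quick sanity check on the per-person figure:

Code:
# What does 1 million tps work out to per person for 10 billion people?
TPS = 1_000_000
SECONDS_PER_DAY = 86_400
POPULATION = 10_000_000_000

tx_per_day = TPS * SECONDS_PER_DAY   # 86.4 billion transactions/day
print(tx_per_day / POPULATION)       # ~8.6 transactions per person per day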

Can it be done on hardware expected to be available to average users in 2030? What changes would need to be implemented to allow Bitcoin to scale to this level?

Using the ratio from the Bitcoin wiki, 1 million transactions per second requires about 4 gigabits per second. If Nielsen's Law of Internet Bandwidth holds until 2030, this amount of bandwidth will start to become available to home users by 2022, and should be a small fraction of the average 2030 user's connection. Broadcasting 1 million tps through the network would not appear to be a problem by 2030.
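
Here is a sketch of that projection; the 2012 baseline of 50 Mbit/s for a high-end home connection is an assumption, and the crossover year moves a year or so in either direction with it:

Code:
# Project home bandwidth under Nielsen's Law (~50% growth per year)
# until it crosses the ~4 Gbit/s needed to broadcast 1 million tps.
BASELINE_MBPS = 50        # assumed high-end home connection, 2012
GROWTH = 1.5              # Nielsen's Law: +50% per year
TARGET_MBPS = 4_000       # ~4 Gbit/s for 1 million tps

year, mbps = 2012, BASELINE_MBPS
while mbps < TARGET_MBPS:
    year += 1
    mbps *= GROWTH
print(year, mbps / 1000)  # -> 2023, ~4.3 Gbit/s with this baseline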

Transmitting blocks as they are currently constructed would be a problem, because right now they include a complete copy of every transaction, so distributing a block in a timely fashion requires burst bandwidth much higher than the average. There's no reason this must remain the case, however. Assuming that all nodes have a method of retrieving transactions which are not in their memory pool, a block could consist of just a header and a list of transaction hashes. This reduces the size of a block by a factor of 16 (a 512-byte transaction replaced by a 256-bit hash). A miner could reduce the burst bandwidth requirement further by pre-announcing the transactions which will be in their block, so that only the nonce and final hash would need to be broadcast when they solve a block.
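
To make the size argument concrete, here is a toy comparison; the transaction count and sizes are illustrative, not a protocol proposal:

Code:
# Block size: full transactions vs. a header plus transaction hashes.
TX_SIZE = 512            # assumed average transaction size (bytes)
HASH_SIZE = 32           # 256-bit transaction hash (bytes)
N_TX = 1_000_000 * 600   # transactions in one 10-minute block at 1M tps

full_block = N_TX * TX_SIZE    # ~307 GB of transaction data
hash_block = N_TX * HASH_SIZE  # ~19 GB of hashes
print(full_block / hash_block) # 16.0, the factor quoted above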

CPU: If a common 2012 CPU can process 4000 tps, then by Moore's Law a 2030 CPU should be able to process somewhere between roughly 2 million and 16 million tps, depending on whether performance doubles every two years or every 18 months; either way, comfortably above the 1 million tps target.
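
The doubling period does most of the work in that estimate; here is the extrapolation for two common readings of Moore's Law:

Code:
# Extrapolate 2012 CPU throughput to 2030 under Moore's Law.
BASE_TPS = 4_000          # assumed 2012 verification rate
YEARS = 2030 - 2012

for months_per_doubling in (18, 24):
    doublings = YEARS * 12 / months_per_doubling
    print(months_per_doubling, BASE_TPS * 2 ** doublings)
# 18-month doubling: ~16.4 million tps; 24-month: ~2.0 million tps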

Storage: Requiring each node to store a complete copy of the entire blockchain back to the genesis block is excessive. A distributed, redundant, content-addressable data store could serve the function of storing history and broadcasting transactions. We already have an example of such a data store in Freenet. Freenet's data store is useful for this application because nodes automatically specialize without any explicit configuration, and the inter-node routing adjusts to demand in real time in order to converge on an optimal configuration. Commonly requested data is intelligently cached, and the store has a provision for pruning unneeded keys when a node runs out of storage space. The P2P aspect of Freenet is slow because it is optimized for privacy, but without that requirement other performance enhancements become possible.
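
As a toy illustration of the content-addressable idea (a sketch, not Freenet's actual API): data is keyed by its own hash, so any node holding a copy can serve it, the requester can verify it, and cold entries can be pruned freely since they can always be re-fetched:

Code:
import hashlib
from collections import OrderedDict

class ContentStore:
    """Toy content-addressable store: keys are the SHA-256 of the
    data, so entries are self-verifying; the least recently
    requested entries are pruned first when space runs out."""
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.data = OrderedDict()

    def put(self, blob):
        key = hashlib.sha256(blob).digest()
        self.data[key] = blob
        self.data.move_to_end(key)         # mark as recently used
        while len(self.data) > self.max_entries:
            self.data.popitem(last=False)  # prune the coldest entry
        return key

    def get(self, key):
        blob = self.data.get(key)
        if blob is not None:
            # Self-verifying: the key is the hash of the content.
            assert hashlib.sha256(blob).digest() == key
            self.data.move_to_end(key)
        return blob

store = ContentStore(max_entries=1000)
key = store.put(b"raw transaction bytes")
print(store.get(key) is not None)  # True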