Topic: Bitcoin 9000: a long-term scaling plan (Read 5975 times)

full member
Activity: 200
Merit: 104
Software design and user experience.
May 03, 2016, 01:44:37 AM
#6
Seems like Github deleted the repo, so I moved the PDF here:
https://github.com/oleganza/bitcoin-papers/blob/master/Bitcoin9000.pdf


Quote
So, what is this 9000 exactly?

It's a reference to the "It's Over 9000" meme: http://knowyourmeme.com/memes/its-over-9000



legendary
Activity: 1623
Merit: 1608
March 14, 2016, 04:01:20 PM
#5
The document just says: "Safely scale Bitcoin to process over 9000 transactions", but later on it says: "This sequence would yield a total capacity of over 9000 Mb as required."

So, what is this 9000 exactly?
hv_
legendary
Activity: 2534
Merit: 1055
Clean Code and Scale
March 06, 2016, 01:21:46 PM
#4
Under present circumstances it might be realized in the year 9001... Grin
newbie
Activity: 1
Merit: 0
March 04, 2016, 03:23:52 AM
#3
Love the quote on the front page  Cheesy

Very intriguing paper. A few questions come to mind:

1. Part 3 can be a soft fork. Do Parts 1 and 2 require a hard fork?

2. Have you tested this? If so, could you tell us about it?

--
pxR




legendary
Activity: 1176
Merit: 1134
March 01, 2016, 04:53:48 PM
#2
Quote
We propose a strategy to scale Bitcoin to a far greater throughput and performance than available today while keeping the risk of centralization and costs to a minimum. To achieve this we decrease block validation latency with diff blocks, parallelize transaction validation, enable UTXO sharding with transaction input block height annotations, and deploy a series of extension blocks for sustainable capacity increases.

Download whitepaper:
https://github.com/goodsamaritan9000/scalingbitcoin/raw/master/Bitcoin9000.pdf
Nice!

The iguana bitcoin core implements a parallel download where the vast majority of data goes into read-only files. This avoids needing a DB and also allows them to be put into a compressed filesystem via mksquashfs. By processing the data in several stages, it is possible to stream data in at bandwidth-saturation levels; I am not seeing any bottlenecks until it exceeds 500 mbps. The parallel download is able to get 70 to 120 megabytes/sec, which is about 12 minutes for the entire 60GB blockchain.
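To give a rough idea of the staged approach, here is an illustrative Python sketch (not the actual iguana code; the bundle size, file layout and fetch_bundle helper are made up):

Code:
# Illustrative sketch of a staged, parallel bundle sync: the chain is split
# into fixed-size bundles, each worker fetches one bundle and writes it once
# to a read-only file, so later stages can map it directly with no database.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

BUNDLE_SIZE = 2000                       # blocks per bundle (illustrative)
OUT_DIR = Path("bundles")

def fetch_bundle(start_height: int) -> bytes:
    """Placeholder for pulling raw blocks [start, start + BUNDLE_SIZE) from peers."""
    return b""                           # real code would saturate bandwidth here

def write_readonly_bundle(start_height: int) -> Path:
    path = OUT_DIR / f"bundle_{start_height:07d}.dat"
    path.write_bytes(fetch_bundle(start_height))
    path.chmod(0o444)                    # written once, then treated as read-only
    return path

def parallel_sync(tip_height: int, workers: int = 8) -> list[Path]:
    OUT_DIR.mkdir(exist_ok=True)
    starts = range(0, tip_height, BUNDLE_SIZE)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(write_readonly_bundle, starts))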

Using 8 cores, all of the data structures are created in parallel, with hash tables and bloom filters built into the read-only files. I am seeing about half an hour to get to where things are ready for the last pass, which does the final processing that is needed.

So, parallel processing somewhat similar to what you describe is already working in a functioning project, and it does remove the bottlenecks that DB-oriented approaches incur. The only thing that changes retroactively is the state of the unspents, but this is encoded into 6 bytes per unspent by assigning a deterministic 32-bit integer to each of the high-entropy hashes, so the net result is a relatively compact UTXO set. Even the spends data can be processed in parallel once all the blocks are loaded, creating vectors of updates to the unspents; OR'ing these vectors together produces the current set of unspents relatively quickly.
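In sketch form, that looks roughly like this (just an illustration of the bitmap OR'ing, not the actual 6-byte encoding or iguana data structures):

Code:
# Sketch of the spends idea: every output ever created gets a deterministic
# 32-bit index (first-seen order), each bundle builds a bitmap of the indices
# it spends, and OR'ing the per-bundle bitmaps gives the spent set; the
# current unspents are whatever remains unmarked.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def spend_bitmap(spend_indices: list[int], total_outputs: int) -> bytearray:
    """Bitmap for one bundle, marking the 32-bit output indices it spends."""
    bits = bytearray((total_outputs + 7) // 8)
    for idx in spend_indices:
        bits[idx >> 3] |= 1 << (idx & 7)
    return bits

def or_bitmaps(a: bytearray, b: bytearray) -> bytearray:
    return bytearray(x | y for x, y in zip(a, b))

def current_unspent_indices(per_bundle_spends: list[list[int]],
                            total_outputs: int) -> list[int]:
    # Bundles are independent, so their bitmaps can be built in parallel.
    with ThreadPoolExecutor() as pool:
        bitmaps = pool.map(spend_bitmap, per_bundle_spends,
                           [total_outputs] * len(per_bundle_spends))
        spent = reduce(or_bitmaps, bitmaps,
                       bytearray((total_outputs + 7) // 8))
    return [i for i in range(total_outputs)
            if not (spent[i >> 3] & (1 << (i & 7)))]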

Searches using the read-only bundles can also be done in parallel, but I am already seeing times of about 2 milliseconds for the equivalent of an importprivkey operation on a 1.4GHz i5 laptop just processing the parallel files serially.
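The shape of such a search is roughly the following (placeholder helpers, not the real on-disk hash tables or bloom filters):

Code:
# Illustrative importprivkey-style scan: each read-only bundle file carries a
# precomputed filter of the script hashes it touches, so most bundles are
# skipped without being read; the rest are checked, serially or in parallel.
from pathlib import Path

def load_bundle_filter(path: Path) -> set[bytes]:
    """Placeholder for the bundle's precomputed filter of script hashes."""
    return set()

def scan_bundle(path: Path, script_hash: bytes) -> list[tuple[bytes, int]]:
    """Placeholder for walking one bundle's index and returning (txid, vout) hits."""
    return []

def find_outputs(script_hash: bytes,
                 bundle_dir: Path = Path("bundles")) -> list[tuple[bytes, int]]:
    hits: list[tuple[bytes, int]] = []
    for path in sorted(bundle_dir.glob("bundle_*.dat")):
        if script_hash not in load_bundle_filter(path):
            continue                     # filter says this bundle cannot match
        hits.extend(scan_bundle(path, script_hash))
    return hits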

James
newbie
Activity: 1
Merit: 0
March 01, 2016, 04:32:53 PM
#1
We propose a strategy to scale Bitcoin to a far greater throughput and performance than available today while keeping the risk of centralization and costs to a minimum. To achieve this we decrease block validation latency with diff blocks, parallelize transaction validation, enable UTXO sharding with transaction input block height annotations, and deploy a series of extension blocks for sustainable capacity increases.

Download whitepaper:
https://github.com/goodsamaritan9000/scalingbitcoin/raw/master/Bitcoin9000.pdf
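
As a rough illustration of the UTXO sharding idea in the abstract (my own sketch, not code from the paper; the shard span and data structures are invented): each input's block height annotation tells a validator which height-range shard holds the referenced output, so shards can be stored and checked independently.

Code:
# Sketch only: if every transaction input is annotated with the height at
# which the output it spends was created, the UTXO set can be split into
# height-range shards and each input routed straight to one shard, letting
# shards be stored and validated independently (and in parallel).
from collections import defaultdict

SHARD_SPAN = 50_000                      # block heights per shard (illustrative)

class ShardedUTXOSet:
    def __init__(self) -> None:
        # shard id -> {(txid, vout): value}
        self.shards: dict[int, dict[tuple[bytes, int], int]] = defaultdict(dict)

    def add_output(self, txid: bytes, vout: int, height: int, value: int) -> None:
        self.shards[height // SHARD_SPAN][(txid, vout)] = value

    def spend_input(self, txid: bytes, vout: int, creation_height: int) -> int:
        # The annotation names the shard; no global lookup is needed.
        return self.shards[creation_height // SHARD_SPAN].pop((txid, vout))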