
Topic: Blockchain size (Read 723 times)

hero member
Activity: 619
Merit: 500
January 05, 2014, 05:16:27 PM
#11
Quote
Thank you.

I wonder: what can be more urgent than this, apart from critical bug fixes?

For example, BIP 70 (Payment Protocol), which should ship with the next major release of the reference client, Bitcoin-Qt 0.9.

You can read Gavin Andresen's core development update #5 for more on what is coming in the near future.
There are also plans to make the initial download faster.
newbie
Activity: 36
Merit: 0
January 05, 2014, 04:01:11 PM
#10
Thank you.

I wonder: what can be more urgent than this, apart from critical bug fixes?
hero member
Activity: 619
Merit: 500
January 05, 2014, 03:22:13 PM
#9
Quote
Any idea when pruning would be implemented, and if that is the agreed way forward?
I don't think there is an ETA for this.
As I said earlier, it is not the highest priority right now.

It is actually not downloading the blockchain that is so time consuming.
It is verifying the blocks in it that makes it slow, which means you can speed it up with a faster PC.
newbie
Activity: 36
Merit: 0
January 05, 2014, 12:47:11 PM
#8
About SPV, the wiki states:

'... Satoshi acknowledges this implicitly when he writes that "the verification is reliable as long as honest nodes control the network" -- to be completely pedantic, this means that the verification is reliable as long as honest nodes control the part of the network that the SPV client is able to communicate with. In an attack-by-ISP scenario this may not be a sufficiently strong security property. The attacker would not need to overpower "the rest of the network" because the client is unable to communicate with it...'
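For context, the "verification" an SPV client performs is checking that a transaction is committed to by a block header's Merkle root, given a branch of sibling hashes supplied by a peer. A minimal Python sketch (toy byte strings, not Bitcoin's real transaction serialization):

```python
import hashlib

def dhash(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(tx_hash: bytes, branch: list, index: int,
                         merkle_root: bytes) -> bool:
    """Recompute the Merkle root from a leaf hash and its sibling branch.
    An SPV client needs only this check plus the proof-of-work-validated
    block headers -- which is exactly why its security rests on the honest
    part of the network it can actually reach."""
    h = tx_hash
    for sibling in branch:
        if index % 2 == 0:          # our node is a left child
            h = dhash(h + sibling)
        else:                       # our node is a right child
            h = dhash(sibling + h)
        index //= 2
    return h == merkle_root
```

A malicious peer that controls all of a client's connections can withhold proofs, but it cannot forge a branch that hashes to a root it does not control.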

I therefore believe that it is important for everyone (ie not only those that can afford >1TB drives) to be able to enjoy the full security possible offered by the protocol.

Any idea when pruning would be implemented and if that is the agreed way forward?
full member
Activity: 125
Merit: 100
January 05, 2014, 12:42:31 PM
#7
Quote
Thanks, this is useful information.

I don't think buying 1 TB or 10 TB drives will solve the matter, which is not only related to storage but also, and foremost, to download times (try starting Bitcoin-Qt now from scratch and you will see how long it takes to become functional). Furthermore, the investment in a 1 TB or larger HDD is significant and would form a barrier to entry for those who cannot afford it but nevertheless deserve the full security possible with the protocol.

Does anyone know when block pruning is going to be implemented, or whether anyone is working on it?


Running a full node does not mean more security; light clients are secure as well.
newbie
Activity: 36
Merit: 0
January 05, 2014, 12:35:45 PM
#6
Thanks, this is useful information.

I don't think buying 1 TB or 10 TB drives will solve the matter, which is not only related to storage but also, and foremost, to download times (try starting Bitcoin-Qt now from scratch and you will see how long it takes to become functional). Furthermore, the investment in a 1 TB or larger HDD is significant and would form a barrier to entry for those who cannot afford it but nevertheless deserve the full security possible with the protocol.

Does anyone know when block pruning is going to be implemented, or whether anyone is working on it?

full member
Activity: 125
Merit: 100
January 05, 2014, 12:28:08 PM
#5
Well, you can use clients like Electrum if you don't want to run a full node. Hopefully in the future Bitcoin-Qt will have this option as well. Another possibility is a blockchain pruned of spent outputs, but full nodes need the full blockchain.

But today you can easily buy a 1 TB hard drive, and the blockchain is just about 0.015 TB.
hero member
Activity: 619
Merit: 500
January 05, 2014, 12:05:29 PM
#4
One idea is to prune spent (now unspendable) outputs.

Quote
At very high transaction rates each block can be over half a gigabyte in size.
It is not required for most fully validating nodes to store the entire chain. In Satoshi's paper he describes "pruning", a way to delete unnecessary data about transactions that are fully spent. This reduces the amount of data that is needed for a fully validating node to be only the size of the current unspent output size, plus some additional data that is needed to handle re-orgs. As of October 2012 (block 203258) there have been 7,979,231 transactions, however the size of the unspent output set is less than 100MiB, which is small enough to easily fit in RAM for even quite old computers.
Only a small number of archival nodes need to store the full chain going back to the genesis block. These nodes can be used to bootstrap new fully validating nodes from scratch but are otherwise unnecessary.
The primary limiting factor in Bitcoin's performance is disk seeks once the unspent transaction output set stops fitting in memory. It is quite possible that the set will always fit in memory on dedicated server class machines, if hardware advances faster than Bitcoin usage does.

Quoted from here: https://en.bitcoin.it/wiki/Scalability#Storage
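The pruning described in the quote can be sketched concretely: a validating node only needs the set of unspent outputs, and can drop an output the moment it is spent. A toy Python sketch (simplified dict-based transaction structures, no coinbase or reorg handling):

```python
def apply_block(utxo, block):
    """Apply a block's transactions to the UTXO set, pruning spent outputs.

    `utxo` maps (txid, output_index) -> output value; `block` is a dict
    with a "txs" list -- toy structures, not Bitcoin's real serialization.
    After processing, the node's working state is only the unspent set."""
    for tx in block["txs"]:
        for prev_txid, vout in tx["inputs"]:
            del utxo[(prev_txid, vout)]        # spent: prune immediately
        for i, out in enumerate(tx["outputs"]):
            utxo[(tx["txid"], i)] = out        # newly created, still unspent
    return utxo
```

This is why the working set stays near the ~100 MiB unspent-output size the wiki mentions, rather than growing with the full chain.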
newbie
Activity: 36
Merit: 0
January 05, 2014, 10:25:43 AM
#3
Care to elaborate on which solution the developers settled on?

It is difficult to understand why this is not considered a priority. If Bitcoin usage took off, the blockchain would explode in size and there would be too little time to agree on and implement an alternative solution without significant disruption. My understanding is that SPV does not offer the security that full nodes provide, so SPV clients are not a viable alternative.

There would seem to be room for clients that are in-between 'Full' and 'SPV'. Hybrid clients could agree from time to time (in a peer-to-peer manner) on a new baseline for address balances. They could start from the genesis block and advance the baseline until, say, a few days before the latest block. This way, hybrid clients would only need to store the last agreed balances plus a few days' worth of transactions.
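The hybrid scheme above amounts to folding all history up to a cutoff into a balance baseline and keeping only the newer transactions. A toy Python sketch using a simple account model (hypothetical field names; Bitcoin itself tracks outputs, not account balances, so this is purely illustrative):

```python
def make_baseline(txs, cutoff_height):
    """Split transactions into an agreed balance baseline (height <= cutoff)
    and the recent tail a hybrid client would keep in full."""
    balances, recent = {}, []
    for tx in txs:
        if tx["height"] <= cutoff_height:
            # Fold old transactions into net balances and forget them.
            balances[tx["src"]] = balances.get(tx["src"], 0) - tx["amount"]
            balances[tx["dst"]] = balances.get(tx["dst"], 0) + tx["amount"]
        else:
            # Keep recent transactions verbatim for verification and reorgs.
            recent.append(tx)
    return balances, recent
```

The open question such a scheme leaves, as the thread notes, is how clients would agree that a given baseline is correct without replaying the history behind it.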

Has this been considered?
hero member
Activity: 619
Merit: 500
January 05, 2014, 07:45:18 AM
#2
There are solutions for this that have been discussed, but it is not a high priority.
newbie
Activity: 36
Merit: 0
January 05, 2014, 06:49:32 AM
#1
The fast-increasing size of the blockchain appears to be an issue.
Have the developers considered starting from an agreed known state once in a while?
For example, the balances of all non-empty addresses at the beginning of the year could be a starting point agreed by all.
As of 1 January 2014, the compressed file would be about 60 MB (2.5M addresses), which is easily manageable.
The balance file would need to be created by at least three different entities to be reasonably sure it is correct.
It would be digitally signed by at least three different entities as well.
Of course, insofar as possible, the full blockchain would be kept by those interested and would be the definitive record in case of doubt.
Assuming 2.5B non-empty addresses, a compressed balance file for those addresses would be about 60 GB, which is still manageable and probably close to the upper limit of what would ever be required.
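The multi-entity check proposed above hinges on the balance file being byte-for-byte reproducible, so that independent parties sign the same bytes. A minimal Python sketch of a canonical digest (hypothetical file format; SHA-256 chosen for illustration, and the actual signing scheme is left open):

```python
import hashlib
import json

def balance_digest(balances):
    """Deterministic digest of an address -> balance snapshot.

    Sorting the entries gives a canonical serialization, so entities that
    independently build the snapshot get the same digest regardless of the
    order in which they encountered the addresses. Each entity would then
    sign this digest rather than the raw file."""
    canonical = json.dumps(sorted(balances.items())).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()
```

Two entities comparing (or signing) equal digests are attesting to exactly the same state.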
