
Topic: Scalability and blockchain size solution? (Read 1061 times)

sr. member
Activity: 392
Merit: 268
Tips welcomed: 1CF4GhXX1RhCaGzWztgE1YZZUcSpoqTbsJ
August 15, 2015, 02:56:04 PM
#6
Every node should still have and sync a full set of block headers, and any other essentials for verifying that a block connects to genesis and for calculating the amount of work for a given chain.
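A rough Python sketch of what that headers-only check involves (the 80-byte header layout and the compact nBits encoding follow Bitcoin's actual format; the check that nBits matches the retarget rules, which a real node also performs, is omitted here):
Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def bits_to_target(bits: int) -> int:
    """Expand the compact nBits encoding into the full 256-bit target."""
    exponent, mantissa = bits >> 24, bits & 0x007FFFFF
    return mantissa << (8 * (exponent - 3))

def block_work(bits: int) -> int:
    """Expected number of hashes needed to find a block at this difficulty."""
    return 2**256 // (bits_to_target(bits) + 1)

def verify_header_chain(headers: list[bytes], genesis_hash: bytes) -> int:
    """Check that each 80-byte header links to its parent and meets its
    own target, so the chain connects back to genesis; return total work."""
    prev_hash, total_work = genesis_hash, 0
    for raw in headers:
        assert len(raw) == 80
        assert raw[4:36] == prev_hash            # hashPrevBlock field
        bits = int.from_bytes(raw[72:76], "little")
        block_hash = dsha256(raw)
        assert int.from_bytes(block_hash, "little") <= bits_to_target(bits)
        total_work += block_work(bits)
        prev_hash = block_hash
    return total_work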
full member
Activity: 219
Merit: 102
August 15, 2015, 02:43:31 PM
#5
I like that these ideas are being entertained and scrutinised. I want to see a distributed block chain but probably don't have the expertise to actually implement what is required. I do hope that someone picks up some of these ideas for reducing the on-disk size and runs with them, or fans the spark into an inferno.

My vague idea is that blocks are distributed throughout the clients. No one has all of them (but you could pull them all if you wanted). There is some replication, so a 50 GB chain might be 70 GB in the cloud. There may also be some overhead from erasure coding, taking it up to 100 GB. But that is distributed amongst many, many thousands of clients.

To reconstitute a start point (this master block, maybe?), a client queries some other clients for some random blocks (say genesis, #10 and #100). It makes a hash of them, then queries some other clients for the hash of their genesis, #10 and #100 blocks. If the hashes agree, then the blocks it received are true and can be used to generate your master block. It then repeats the process with #200, #300 and #500, say, and continues to progress towards the master block (whatever that is) by querying clients for blocks they have and comparing hashes with others.
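Very roughly, that sampling step might look like the sketch below (every call here -- fetch_blocks, query_hash, the peer objects -- is hypothetical; no such protocol exists):
Code:
import hashlib
import random

def sample_check(peers, heights, quorum=3):
    """Hypothetical sketch: fetch a few random blocks from one peer,
    hash them together, and ask other peers whether they compute the
    same hash over the same heights."""
    source = random.choice(peers)
    blocks = source.fetch_blocks(heights)            # hypothetical RPC
    digest = hashlib.sha256(b"".join(blocks)).hexdigest()
    others = [p for p in peers if p is not source]
    agree = sum(
        1 for p in random.sample(others, quorum)
        if p.query_hash(heights) == digest           # hypothetical RPC
    )
    if agree == quorum:
        return blocks    # treat the sampled blocks as trustworthy
    raise ValueError("peers disagree; resample from different peers")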

Every so often there are checkpoint blocks. These are distributed in places like websites, Usenet and other known places, and are occasionally included when generating a check hash. Once you are up to the latest transactions, you then become a provider for others by having some blocks for them to query.

So the idea is that you don't need to keep all the blocks, only the master block (if that's how that part works) and the last N transactions. If a particular block is required to spend or verify a signature, it can be requested from the cloud, combined with a couple of other blocks and/or checkpoint blocks, hashed, and the hash confirmed by a few nodes to verify that the block you want is true.

Not sure if that's useful as is, but I think a distributed blockchain is the way forward.
sr. member
Activity: 392
Merit: 268
Tips welcomed: 1CF4GhXX1RhCaGzWztgE1YZZUcSpoqTbsJ
August 15, 2015, 11:39:12 AM
#4
Quote
The master-block can be designed as a dynamic file which compares its values with the majority of the network.
If it “knows” how many old blocks it has already consumed, the comparison can be reduced to the difference in hypothetical block height.
Assume the same data comes in nine times and different data once: the outlier will be rejected and not applied.
The genesis block can be changed as simply as the master-block, but the security still rests on reliance on the majority.

You're missing a very important point of Bitcoin. Validity isn't determined by the majority of users, but by the majority of cryptographic work done from genesis to the tip of the blockchain. Simple comparison cannot be made reliably secure in all cases.
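The difference fits in a few lines of Python (the peer and header objects here are purely illustrative; header.work stands for the per-block work, 2^256 / (target + 1)):
Code:
from collections import Counter

def best_chain_by_votes(peers):
    """What the quoted scheme amounts to: trust whichever answer most
    peers repeat. Trivially gamed by spinning up fake (Sybil) peers."""
    return Counter(p.claimed_tip() for p in peers).most_common(1)[0][0]

def best_chain_by_work(candidate_chains):
    """What Bitcoin does: follow the valid chain embodying the most
    cumulative proof-of-work, regardless of how many peers relay it."""
    return max(candidate_chains,
               key=lambda chain: sum(header.work for header in chain))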
sr. member
Activity: 392
Merit: 268
Tips welcomed: 1CF4GhXX1RhCaGzWztgE1YZZUcSpoqTbsJ
August 15, 2015, 10:22:37 AM
#3
Quote
1. Since the master-block is changed at the same time a new block is found, it cannot be secured by mining. The changes in the master-block depend entirely on the block that ages out of the 10,000-block window, which was secured before. So the master-block is as secure as today's blockchain and has to be downloaded the same way the first time.
If the UTXO set stores only the balances of every wallet, then yes, such a master-block would just be a manifestation of it. (I'm not a pro, so I don't know exactly what is in the UTXO set.)

2. No one needs the entire chain anymore. After the update, the wallet balances of blocks 1 to 360,000 (today) will be consolidated in the new master-block. New users only need to download the master-block instead of the entire chain.
Then we have two blocks to process every time a new block comes in:
- Incoming (classic) blocks are added to the remaining “10,000-chain” the old way.
- The changed balances from the “10,001st block” are added to the master-block. After that, the “10,001st block” is simply deleted.

3. Let the master-block become 10 GB! Still better than terabytes of blockchain in every node.

In regard to number two, you may have misunderstood my question. Say I'm a brand-new node without a blockchain, and I want to sync and verify. What data will my node receive, and how will it verify both that the current 10,000-block chain is derived from the genesis block and that it is a valid chain? How will I verify the master block's validity as well? As I understand it, the master block is constructed by processing blocks that exit the 10,000-block window. That means nodes that need to recreate or verify it will need all the old blocks that nobody has anymore in order to actually form their own copy of the master block.
sr. member
Activity: 392
Merit: 268
Tips welcomed: 1CF4GhXX1RhCaGzWztgE1YZZUcSpoqTbsJ
August 15, 2015, 09:20:25 AM
#2
This is an interesting proposal. However, I have a few questions about it:
1. How is the master block secured? Do miners mine on it? Or is it just a manifestation of the UTXO set that is created as old blocks are deleted?
2. I'm assuming that this master block is constructed as nodes process incoming blocks. That means new nodes will need to fetch the entire block chain to build their copy of the master block. Who will have the entire chain?
3. What happens if the data for the master block exceeds 1 GB in size?
newbie
Activity: 2
Merit: 0
August 15, 2015, 09:09:06 AM
#1
Hey guys,
surely this is not the first time such a proposal has been made, but I found no reasonable explanation of why it isn't solved the following way:

Every full node only saves the newest (around) 10,000 blocks.
All older transactions will be deleted.
Instead, the balances of older wallets will be stored in a kind of “master-block”, which will hold at most 1 GB of data. No big deal! The origin of the wallet balances will no longer be reconstructable in a classic block explorer, but the balances remain spendable.
After every newly found block, the changed balances in the 10,001st-oldest block (the one about to be deleted) are read out and absorbed into the master-block.
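Very roughly, that fold-in step could look like the following sketch (the master-block here is just an address-to-balance map, and the block/transaction structure is made up for illustration):
Code:
from collections import deque

WINDOW = 10_000

master_block = {}   # address -> balance folded out of aged blocks
recent = deque()    # the ~10,000 full blocks every node still keeps

def accept_block(block):
    """Append the new block; once the window overflows, fold the oldest
    block's balance changes into the master-block and drop its data."""
    recent.append(block)
    if len(recent) > WINDOW:
        aged = recent.popleft()
        for tx in aged.transactions:              # illustrative structure
            for addr, delta in tx.balance_changes():
                new_balance = master_block.get(addr, 0) + delta
                if new_balance == 0:
                    master_block.pop(addr, None)  # keep the file compact
                else:
                    master_block[addr] = new_balance
        # the aged block's full contents can now be deleted from disk
At roughly ten minutes per block, a block would sit in the window for about 70 days before being folded in.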

The blockchain size then depends more on the number of wallets in use than on the number of transactions.

Positive consequences:
- block size can be raised easily to handle more txs per second
- more anonymity, because the origin of balances cannot be traced as it can today
- more full nodes, higher security

Why not?

Thanks for replying to my first post here...