I think there are a few contradictions and shortcomings in the model you have proposed in the GitHub document.
Let's begin with the following argument of yours:
Now that the transactions are built up into larger groups of data, say hundreds or thousands of transactions, nodes can first check if they need to transmit data to their peers by checking a hash. This enables a significant increase in bandwidth efficiency. Transmitting a hash would only take tens of bytes, but would allow the nodes to determine if they need to transmit several kilobytes of transaction data.
{hence} Most importantly, the amount of bandwidth used to create 1,000 TPS blocks is significantly smaller than that of current blockchains
It is based on a false understanding of how the Bitcoin networking layer works. For quite a while now there has been no unnecessary transmission of raw transaction data in Bitcoin. If a node is already aware of a transaction, it will never ask for it to be re-transmitted, and no peer will 'push' it again. Nodes simply check the txids included in blocks by querying the Merkle path, and if they ever encounter an unknown txid they can respond to the peer's inv announcement with a getdata message to fetch the raw data.
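To make the point concrete, here is a minimal sketch of that announce-then-request pattern. This is an illustration in Python, not Bitcoin Core's actual code; `Node`, `on_inv`, and `on_tx` are hypothetical names standing in for the real message handlers:

```python
import hashlib

def txid(raw_tx: bytes) -> str:
    # Bitcoin identifies transactions by a double SHA-256 hash
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).hexdigest()

class Node:
    def __init__(self):
        self.mempool = {}  # txid -> raw transaction bytes

    def on_inv(self, announced):
        # 'inv' carries only ids; reply ('getdata') with the ids we still need
        return [t for t in announced if t not in self.mempool]

    def on_tx(self, raw_tx: bytes):
        self.mempool[txid(raw_tx)] = raw_tx

node = Node()
node.on_tx(b"tx-a")  # already known to this node
wanted = node.on_inv([txid(b"tx-a"), txid(b"tx-b")])
# Only the unknown txid is requested; tx-a is never re-transmitted.
```

The dedupe happens at the id level, which is exactly why re-transmission of known raw data does not occur.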
So, in your so-called Prime chain we gain no communication efficiency over Bitcoin as long as the nodes on this chain have to validate the transactions they are committing to.
But nodes do have to validate the transactions, don't they? Otherwise, how could they ever commit to their sub-trees (regions/zones)? I think there is a big misunderstanding here about how blockchains work: we don't commit to the hash of anything (a block, a transaction, ...) unless we have examined and validated it, and we cannot validate anything without access to its raw data.
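The validate-before-commit rule can be sketched as follows. This is a toy Python illustration, not any production implementation; `commit` and `validate` are hypothetical names, and the odd-leaf duplication mirrors Bitcoin's Merkle convention:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(raw_txs):
    """Build a Merkle root over a list of raw transactions."""
    level = [h(tx) for tx in raw_txs] or [h(b"")]
    while len(level) > 1:
        if len(level) % 2:            # Bitcoin duplicates the last hash on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def commit(raw_txs, validate):
    # A node may only commit to transactions it has fully validated,
    # which requires having the raw transaction data in hand.
    assert all(validate(tx) for tx in raw_txs), "cannot commit to unvalidated data"
    return merkle_root(raw_txs)
```

The point of the sketch: the commitment (the Merkle root) is computed *from* the raw data, so there is no way to commit honestly without first obtaining and checking it.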
This leads us to another serious issue: state segmentation. Officially, you describe the proposal as a combination of state sharding with other techniques:
BlockReduce combines the ideas of incremental work and sharding of state with merge-mining to form a tightly coupled PoW-managed hierarchy of blockchains which satisfies all of the proposed scaling requirements.
Well, that is not completely right, I'm afraid.
For the prime chain to be legitimate in committing to region blocks (and so on), its nodes not only need full access to all transaction data of the whole network, they also need to keep the whole state preserved and up to date. There can be no sharding of state.
There is more to discuss but for now I suppose we have enough material to work on.
P.S.
Please note that I'm a fan and I'm not denouncing your proposal as a whole. I'm now thinking of hierarchical sharding as a promising field of investigation, thanks to your work, well done.
Thank you for your comments and feedback. On your first point, I am aware that only hashes of transactions are transmitted before the full transaction data. There was a lack of precision in how I described transaction propagation in the GitHub BIP, and I have updated it accordingly. However, it remains true that a significant volume of traffic is spent passing transaction hashes around. If you look at this research paper by Bitfury from a few years ago, you will see that, although slightly out of date, it gives a good idea of bandwidth efficiency. One-time transmission of data in a peer-to-peer network is highly efficient; the inefficiencies measured have to do with creating a consistent set of transactions. Therefore, if I can incrementally create this set and represent it by a hash as I share it, I will decrease my bandwidth usage significantly, because one hash will represent 100 or 1,000 hashes.
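To illustrate what "represent the set by a hash" buys, here is a rough Python sketch. `set_digest` and `bytes_to_sync` are illustrative names of my own, and a real reconciliation protocol would need to handle partial overlaps far more gracefully:

```python
import hashlib

def set_digest(txids):
    """One 32-byte hash summarizing a whole set of transaction ids."""
    digest = hashlib.sha256()
    for t in sorted(txids):
        digest.update(t)
    return digest.digest()

def bytes_to_sync(my_set, peer_digest, peer_set):
    # If the digests match, a single 32-byte hash settles the comparison;
    # otherwise fall back to exchanging the missing transaction ids.
    if set_digest(my_set) == peer_digest:
        return 32
    missing = peer_set - my_set
    return 32 + len(missing) * 32

mine = {hashlib.sha256(str(i).encode()).digest() for i in range(1000)}
theirs = set(mine)
# Two synchronized nodes settle 1,000 transactions with one 32-byte hash.
in_sync_cost = bytes_to_sync(mine, set_digest(theirs), theirs)
```

When the sets agree, the comparison costs one hash instead of a thousand, which is the bandwidth saving the proposal relies on.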
In terms of sharding state: for simplicity of the initial discussion, let's just assume all nodes keep all state and validate all transactions. BlockReduce would reduce the amount of bandwidth needed to propagate a higher number of TPS efficiently. It would increase the need for RAM in proportion to the UTXO set, permanent data storage in proportion to chain size, and CPU power in proportion to TPS. However, none of these is currently the limiting factor for scaling. The present limiting factor is bandwidth, and addressing it would allow a significant increase in TPS.
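As a back-of-envelope illustration of the announcement-traffic claim (the 8-peer fan-out here is an assumption chosen for illustration, not a measurement):

```python
HASH_BYTES = 32   # size of a SHA-256 hash
PEERS = 8         # assumed outbound peer count

def announce_rate(tps, group_size=1):
    """Bytes/s of hash-announcement traffic when transactions are
    advertised to every peer, one hash per group of transactions."""
    groups = -(-tps // group_size)   # ceiling division
    return groups * HASH_BYTES * PEERS

naive = announce_rate(1000)           # one hash per transaction
grouped = announce_rate(1000, 1000)   # one hash per 1,000-tx group
# naive is 256,000 B/s of announcements; grouped is 256 B/s.
```

The raw transaction bytes still have to cross the wire once either way; it is the repeated per-transaction announcement overhead that grouping removes.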
As for the other limiting factors, these can be addressed with other mechanisms, but again, let's initially discuss the proposal in terms of all nodes keeping and validating all state.
Also, for reference, I presented this at the McCombs Blockchain Conference. The video gives a quick overview of BlockReduce if anyone is interested in a summary that is easier than reading the paper.