
Topic: Suppose block size were a non-issue. Would it solve the scaling problem? (Read 1103 times)

legendary
Activity: 924
Merit: 1132
NOTE: Moderated topic.

Block size increase activation is not a technical issue. It is a governance problem, a political one.

I think you forgot to hit that self moderate button.

Aw, crap, you're exactly right.  I hit it when I made the topic, but then I went back to edit the topic (ironically, to add the moderation note) and didn't realize I needed to hit the button again.  So ...  Hmm, we've already got some nontechnical crap in this thread and I can't delete the post.  I guess I'm going to lock the thread so it doesn't contribute to the flaming that people can't seem to resist.  Besides, both of the people who've responded on-topic have pointed out the problem; miners would have an incentive to publish bogus blocks in order to force their competitors to waste time.  So, really, the discussion is over.  It floated as an idea, and it got a proper rebuttal. 

It would be relatively easy to make the blocks contain hashes of off-chain bundles that record additional transactions.
...
Nodes getting just the blocks could then easily verify that a block chain has grown from the genesis block and see how much proof-of-work it contains, allowing them to pick valid longest-chains without tracking the bulk of transactions.

I don't quite understand this. Are you talking about full nodes here? If so, then the full node would still have to download all of those bundles, verify all of the transactions, make sure that they hash to the hashes in the block, and check that that hashes to the merkle root.

The bandwidth requirement is still the same and the CPU overhead is slightly higher due to more hashing. A full node has to do this otherwise a malicious miner could be producing malicious blocks or just adding in arbitrary hashes.

Yes, you're right about that.  I was talking about a new node type somewhere between full and lightweight.  It could verify the whole chain of blocks goes to the genesis block, and show that it has more PoW than a bogus chain could have.  And it could verify the tx in whatever bundles it happens to download (because it's getting a payment or verifying that one it sent got into the bundle).  But it wouldn't attempt to check the entire transaction record.

It would be a little less useless than the usual lightweight client, in having the ability to do "spot checking" of the transaction bundles, but not as good for security as a full node.

Spending a txOut would require transmitting both the merkle branch of the txOut in the current txOut set (to show that it hasn't been spent) and the bundle containing the tx record where that txOut originates (so that the client can check the old transaction).  The receiving client could then check the validity of the txOut.  

What is the "merkle branch of the txOut"?
Miners put a merkle root - the root hash of a tree whose leaves are the unspent txOuts - into each block.  The merkle branch is the set of sibling hashes needed to recompute the path from the txOut up to that root.  Demonstrating that the txOut is part of that tree means demonstrating that it hasn't been spent yet.
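To make that concrete, here is a minimal sketch of checking such a merkle branch.  The branch format (a list of sibling-hash/side pairs) and function names are illustrative assumptions, not an actual wire format; the double SHA-256 is what Bitcoin uses for its merkle trees.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double SHA-256, as Bitcoin uses in its merkle trees."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_branch(leaf: bytes, branch: list, root: bytes) -> bool:
    """Walk from the leaf up to the root.

    `branch` is a list of (sibling_hash, sibling_is_left) pairs,
    one per tree level.  If the recomputed root matches the root
    committed in the block, the txOut is in the unspent set.
    """
    node = h(leaf)
    for sibling, sibling_is_left in branch:
        if sibling_is_left:
            node = h(sibling + node)
        else:
            node = h(node + sibling)
    return node == root
```

The proof is O(log n) hashes for n leaves, which is why a client can check membership without holding the whole txOut set.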

Anyway, thanks to me forgetting to hit the self-moderate button, and then 2code doing exactly the thing I intended to moderate against, I'm going to lock this topic.  It was a bad idea anyway, as you've pointed out.

staff
Activity: 3458
Merit: 6793
Just writing some code
NOTE: Moderated topic.
I think you forgot to hit that self moderate button.

It would be relatively easy to make the blocks contain hashes of off-chain bundles that record additional transactions.  These bundles could then be whatever size, or they could be one-megabyte and there could be dozens or hundreds as needed.

Nodes getting just the blocks could then easily verify that a block chain has grown from the genesis block and see how much proof-of-work it contains, allowing them to pick valid longest-chains without tracking the bulk of transactions.
I don't quite understand this. Are you talking about full nodes here? If so, then the full node would still have to download all of those bundles, verify all of the transactions, make sure that they hash to the hashes in the block, and check that that hashes to the merkle root. The bandwidth requirement is still the same and the CPU overhead is slightly higher due to more hashing. A full node has to do this otherwise a malicious miner could be producing malicious blocks or just adding in arbitrary hashes.
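The full-node obligation described here can be sketched in a few lines.  This assumes the hypothetical scheme where a block carries a flat list of bundle hashes; the point is that every bundle must be fetched and re-hashed, so bandwidth is unchanged versus simply making blocks bigger.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double SHA-256, as Bitcoin uses for its commitments."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def check_bundles(block_bundle_hashes: list, downloaded_bundles: list) -> bool:
    """A full node must download every bundle and confirm each one
    hashes to the value committed in the block; otherwise a miner
    could commit to arbitrary garbage hashes.  Same bandwidth as a
    bigger block, plus one extra hash pass per bundle."""
    if len(downloaded_bundles) != len(block_bundle_hashes):
        return False
    return all(h(bundle) == expected
               for bundle, expected in zip(downloaded_bundles,
                                           block_bundle_hashes))
```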

Spending a txOut would require transmitting both the merkle branch of the txOut in the current txOut set (to show that it hasn't been spent) and the bundle containing the tx record where that txOut originates (so that the client can check the old transaction).  The receiving client could then check the validity of the txOut.  
What is the "merkle branch of the txOut"?

And, poof.  You create another level of "lightweight client" that checks the block chain itself but doesn't check individual transactions except for those transactions that directly affect it.  
It isn't very lightweight if you are still downloading 60+ GB of blockchain, although I suppose it is lighter than the requirements of a full node.

And the block size no longer limits the transaction rate.

So it would scale better, or at least it wouldn't fail with a hard limit when transaction rates increase.  
Right, but then what happens when someone decides to spam the network, as we have seen in the past? This brings back the old argument against having an unlimited block size, which is what you have essentially proposed.

But would it scale better *enough*?  Regardless of how it's done, lifting the tx rate limit means increasing the bandwidth/storage limit for anybody who's downloading and checking the full transaction record - by the same amount as if you had increased the block size limit itself.  Because, ultimately, they are the same limit.
It would allow more transactions, but you would eventually run into hardware limitations and potentially stop many users from running full nodes due to the bandwidth and storage requirements. This is a centralizing factor.

One advantage to miners over just increasing the block size: you'd only need to download the block itself to get the ability to form a new valid block, so you still get the propagation times etc of one-megabyte maximum-size blocks.  You aren't particularly penalized for bandwidth, provided you can use your bandwidth FIRST to get the block and THEN start downloading the bundle.  

The disadvantage is that miners who haven't yet finished downloading the transaction bundle would risk orphan blocks if they include any transactions that were available before the previous bundle because they wouldn't know yet whether those tx were in the bundle.  So if a tx didn't make it into the very first block it could have been in, it might be a long-ish time before anybody would risk including it in a new block.  
As sdp said, a miner would still have to download the bundles and verify them before mining, otherwise the block he is building on could be invalid. However, many miners are SPV mining, so they aren't validating the block anyway, or are doing so in parallel. It won't affect them, but it is bad practice to do so.

I would also like to point out that changing the block structure to do this would require a hard fork anyways.
full member
Activity: 138
Merit: 102
Block size increase activation is not a technical issue. It is a governance problem, a political one. The current climate can be characterized as a lack of consensus, a civil war within the Bitcoin community. Never has the Bitcoin community suffered such a serious divide as it does today. For this very reason it's critical that we get our ducks in a row, or as a prominent technologist would say, reach a consensus.
legendary
Activity: 924
Merit: 1132
Ow.  Yes, that is true.  And miners would have an incentive to make their competitors waste time mining on bogus blocks.

So.  No improvement for the miners, at all. 
sdp
sr. member
Activity: 470
Merit: 281
I would like to point out that in this scenario, a miner who accepts a block with a bundle risks mining on top of an invalid block (and having his own block orphaned) unless he first also downloads and verifies the bundle.   Perhaps this is not a great problem with only dollar millionaires in the game.

legendary
Activity: 924
Merit: 1132

NOTE: Moderated topic.  Inflammatory or inflamed posts arguing for or against block size increases, or being butthurt about the action/inaction taken on that issue, or insulting any on the basis of taking either side of that debate, or alleging or accusing about conspiracies on that topic, or trolling or baiting people who might go nonlinear about the topic, will be deleted.  This is to be a purely TECHNICAL discussion, not political.  Capisce?

It would be relatively easy to make the blocks contain hashes of off-chain bundles that record additional transactions.  These bundles could then be whatever size, or they could be one-megabyte and there could be dozens or hundreds as needed.

Nodes getting just the blocks could then easily verify that a block chain has grown from the genesis block and see how much proof-of-work it contains, allowing them to pick valid longest-chains without tracking the bulk of transactions.
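A minimal sketch of that blocks-only check, with illustrative field names (the real header layout and difficulty encoding differ): the node verifies each block links to its parent, meets its own PoW target, and accumulates total work so it can pick the most-work chain without ever seeing a bundle.

```python
import hashlib
from dataclasses import dataclass

def h(data: bytes) -> bytes:
    """Double SHA-256, as Bitcoin hashes its block headers."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

@dataclass
class Header:
    prev_hash: bytes
    merkle_root: bytes   # commits to the off-chain bundles
    nonce: int
    target: int          # PoW target; lower target = more work

    def hash(self) -> bytes:
        return h(self.prev_hash + self.merkle_root +
                 self.nonce.to_bytes(8, "big") +
                 self.target.to_bytes(32, "big"))

def chain_work(headers: list, genesis_hash: bytes):
    """Return the chain's total work if it validly extends the
    genesis block, else None.  Work per block is 2**256 // (target + 1),
    the same accounting Bitcoin uses."""
    prev, work = genesis_hash, 0
    for hd in headers:
        if hd.prev_hash != prev:
            return None          # broken link back toward genesis
        block_hash = hd.hash()
        if int.from_bytes(block_hash, "big") > hd.target:
            return None          # header fails its own PoW target
        work += 2**256 // (hd.target + 1)
        prev = block_hash
    return work
```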

Spending a txOut would require transmitting both the merkle branch of the txOut in the current txOut set (to show that it hasn't been spent) and the bundle containing the tx record where that txOut originates (so that the client can check the old transaction).  The receiving client could then check the validity of the txOut.  
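The receiving client's two checks might look roughly like this.  The bundle serialization (plain concatenation) and the substring test for the originating tx are stand-in assumptions to keep the sketch short; a real design would define both precisely.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double SHA-256, as elsewhere in the scheme."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_unspent(txout: bytes, branch: list, utxo_root: bytes) -> bool:
    """Check 1: the merkle branch shows the txOut is still in the
    unspent set committed by the latest block, i.e. not yet spent."""
    node = h(txout)
    for sibling, sibling_is_left in branch:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == utxo_root

def verify_origin(txout: bytes, bundle: list, bundle_hash: bytes) -> bool:
    """Check 2: the bundle hashes to the value committed on-chain,
    and it contains the transaction that created this txOut."""
    serialized = b"".join(bundle)  # illustrative serialization
    return h(serialized) == bundle_hash and any(txout in tx for tx in bundle)
```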

And, poof.  You create another level of "lightweight client" that checks the block chain itself but doesn't check individual transactions except for those transactions that directly affect it.  And the block size no longer limits the transaction rate.

So it would scale better, or at least it wouldn't fail with a hard limit when transaction rates increase.  

But would it scale better *enough*?  Regardless of how it's done, lifting the tx rate limit means increasing the bandwidth/storage limit for anybody who's downloading and checking the full transaction record - by the same amount as if you had increased the block size limit itself.  Because, ultimately, they are the same limit.

One advantage to miners over just increasing the block size: you'd only need to download the block itself to get the ability to form a new valid block, so you still get the propagation times etc of one-megabyte maximum-size blocks.  You aren't particularly penalized for bandwidth, provided you can use your bandwidth FIRST to get the block and THEN start downloading the bundle.  

The disadvantage is that miners who haven't yet finished downloading the transaction bundle would risk orphan blocks if they include any transactions that were available before the previous bundle because they wouldn't know yet whether those tx were in the bundle.  So if a tx didn't make it into the very first block it could have been in, it might be a long-ish time before anybody would risk including it in a new block.  
