
Topic: How a floating blocksize limit inevitably leads towards centralization (Read 71610 times)

full member
Activity: 182
Merit: 100
Maybe I'm missing something here: why aren't blocks downloaded in the background as current blocks are being worked on? Why is this bandwidth issue even an issue?

You can't build a block without knowing the previous block.  You don't know that until the new block is finished.

Oh, I see. That's why everyone keeps suggesting parallelizing the blockchain, and of course that would end up creating two separate networks where one would eventually win out anyway. Turing wins again.
legendary
Activity: 1232
Merit: 1094
Maybe I'm missing something here: why aren't blocks downloaded in the background as current blocks are being worked on? Why is this bandwidth issue even an issue?

You can't build a block without knowing the previous block.  You don't know that until the new block is finished.
full member
Activity: 182
Merit: 100
If we want to cap the download overhead for the latest block at, say, 1%, we need to be able to download MAX_BLOCKSIZE worth of data within 6 seconds on average, so that 99% of the time is spent hashing.

At 1MB, you would need a ~1.7Mbps  connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high-speed connection.
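
To make the arithmetic concrete, here is a minimal sketch of the calculation above (the 6-second target is from the post; the overhead factor is an assumption added to account for protocol overhead on top of the raw payload rate, which is what brings the 1MB figure up to roughly the quoted ~1.7Mbps):

Code:
# Sketch of the bandwidth arithmetic quoted above (assumptions noted in comments).
# target_seconds: 1% of the 10-minute block interval; overhead: assumed protocol overhead.
def required_mbps(block_size_mb, target_seconds=6.0, overhead=1.25):
    """Connection speed (Mbps) needed to download a block within target_seconds."""
    bits = block_size_mb * 8_000_000        # treating 1 MB as 10^6 bytes
    return bits * overhead / target_seconds / 1_000_000

for size_mb in (1, 10, 100):
    print(f"{size_mb:>3} MB block -> ~{required_mbps(size_mb):.1f} Mbps")
# Prints ~1.7, ~16.7 and ~166.7 Mbps, in line with the ballpark figures above.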

Thank you. This is the clearest explanation yet of how an increase in the maximum block size raises the minimum bandwidth requirements for mining nodes.
Hmm.  The header can be downloaded in parallel with / separately from the block body, and hashing can start after receiving just the header, a matter of milliseconds.  Perhaps a "quick" list of outputs spent by the block would be useful for building non-trivial blocks that don't include double-spends, but that would be ~5% of the block size?  Plenty of room for "optimization" here were it ever an issue.

Fake headers / tx lists that don't match the actual body?  That's a black mark against the dude who gave them to you for being untrustworthy.  Too many black marks and you ignore future "headers" from him as a proven time-waster.

Build up trust with your peers, just like in real life.

Maybe I'm missing something here: why aren't blocks downloaded in the background as current blocks are being worked on? Why is this bandwidth issue even an issue?
hero member
Activity: 1008
Merit: 531
Bitcoin's niche is that you have control of your money.  It is a gold replacement/complement, not a PayPal replacement.  Anyone who does not understand this is simply not going to be able to produce a solution that works.

In order for bitcoin to survive the blockchain needs to fit comfortably onto the hard drive of a 5-year-old desktop.  So in 2030 the blockchain needs to fit on a computer built in 2025.  It doesn't need to fit onto phones, xboxes, microwaves, etc... but the requirement that it fit onto a mediocre desktop is non-negotiable.  Failure to meet this requirement means that the chain can no longer be audited.  This is the #1 requirement.  Nothing else matters if this is not true.  All confidence will be lost, and furthermore bitcoin will have no competitive advantage over its rivals.  Any competitive advantage due to ease of use will either be destroyed by regulation or adopted by rivals.

So basically... miner-manipulatable limits are out of the question.
legendary
Activity: 2940
Merit: 1090
It is easy to throw around many more transactions than usual, though, and maybe some people figure that the more they put out there, the more of a block-size increase they will end up causing. If everyone who wants larger blocks starts moving coins between wallets all day, they can get a bunch of mixing done and maybe help their lobbying efforts too.

Any manipulable decision-making system is liable to be manipulated.

-MarkM-
legendary
Activity: 1708
Merit: 1010
Things might be getting interesting already.  The vast majority of blocks for the last 3 hours have been around 240k, the soft limit...  one receipt of mine took 1.5 hours to confirm, which I've never had happen before.  And this is while the hash rate is way above the difficulty...

Well, the transaction traffic volume is also way above the norm.  It could just be sellers trying to get bitcoins into their MtGox accounts and buyers trying to get them out.
donator
Activity: 668
Merit: 500
Things might be getting interesting already.  The vast majority of blocks for the last 3 hours have been around 240k, the soft limit...  one receipt of mine took 1.5 hours to confirm, which I've never had happen before.  And this is while the hash rate is way above the difficulty...
full member
Activity: 154
Merit: 100
Another method for allowing an increase to the max block size would be to have clients reluctantly accept larger blocks.
For example, when comparing 2 chains, you will accept a longer chain, even if it exceeds max block size, as long as the offending block is buried deep enough.
When working out the size of a block, for MAX_BLOCK_SIZE purposes, you use (block_size) / pow(1.1, depth).  So, a 10MB block that is 100 blocks from the end of the chain has an effective size of 760 bytes.  Depth in this context should probably be proof of work based, rather than number of blocks, but probably doesn't matter much, i.e. (proof of work in the chain since the block) / (proof of work per block at current difficulty).
This could be combined with users being able to set their max_block_size target in a config file.
Miners who mine very large blocks would find that users won't accept them for a while and they would end up with a higher orphan rate.  However, it prevents a permanent hard fork, if the majority of the hashing power accepts the higher block size.  Other clients will eventually accept the new chain.
The point is always: if one chain's growth (in terms of difficulty) is larger than the other's, let's say there is a 10MB block in it that you do not like, you cannot just ignore it; you have to estimate whether you will lose more by ignoring it or by accepting it. It is an economic decision.

This may work, but large-bandwidth, well-connected mining hubs would still have a large impact (maybe legitimately so). Besides, we are just at an early stage; once bitcoin really picks up steam, there will be all kinds of gaming / tweaking of client behavior to gain an edge.
legendary
Activity: 1232
Merit: 1094
Another method for allowing an increase to the max block size would be to have clients reluctantly accept larger blocks.

For example, when comparing 2 chains, you will accept a longer chain, even if it exceeds max block size, as long as the offending block is buried deep enough.

When working out the size of a block, for MAX_BLOCK_SIZE purposes, you use (block_size) / pow(1.1, depth).  So, a 10MB block that is 100 blocks from the end of the chain has an effective size of 760 bytes.  Depth in this context should probably be proof of work based, rather than number of blocks, but probably doesn't matter much, i.e. (proof of work in the chain since the block) / (proof of work per block at current difficulty).

This could be combined with users being able to set their max_block_size target in a config file.

Miners who mine very large blocks would find that users won't accept them for a while and they would end up with a higher orphan rate.  However, it prevents a permanent hard fork, if the majority of the hashing power accepts the higher block size.  Other clients will eventually accept the new chain.
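
For illustration, here is a minimal sketch of the depth-discounted size check described above (the names are hypothetical, and depth is taken as a simple block count rather than the proof-of-work-based measure suggested):

Code:
# Sketch of the "reluctant acceptance" rule above (hypothetical names).
# A block's effective size shrinks by a factor of 1.1 per block built on top of it,
# so an oversized block is tolerated once it is buried deeply enough.
MAX_BLOCK_SIZE = 1_000_000  # bytes (the current 1MB limit)

def effective_size(block_size_bytes, depth):
    """Size used for the MAX_BLOCK_SIZE check, discounted by depth in the chain."""
    return block_size_bytes / (1.1 ** depth)

def acceptable(block_size_bytes, depth):
    return effective_size(block_size_bytes, depth) <= MAX_BLOCK_SIZE

print(effective_size(10 * 2**20, 100))  # a 10MB block at depth 100 -> ~760 bytes
print(acceptable(10 * 2**20, 0))        # False: rejected at the chain tip
print(acceptable(10 * 2**20, 100))      # True: accepted once buried 100 blocks deep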
legendary
Activity: 1232
Merit: 1094
You can't have anything such as "orphan rate" in a rule, as there is no consensus on it.  Nothing makes my idea of orphans, and hence orphan rate, match yours.

I made a suggestion earlier in the thread about how to do it.

You could add an additional field to the block header.  You would have two links: the previous block and also, optionally, an orphan block.  The orphan block's previous-block link has to point to a block in the current difficulty region.  Also, the check is a header-only check; there is no need to store the orphan block's details.  An orphan with invalid txs would still be a valid orphan.

Since a change to MAX_BLOCK_SIZE is already a fork, you could add this to the header at that time.

The orphan rate is equal to the fraction of blocks whose headers link to a unique orphan block, i.e. if 2 blocks link to the same orphan, that counts as only 1 orphan.
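
As a sketch only, the "unique orphan" counting rule above might look something like this (the header fields and names here are hypothetical, purely to illustrate the idea):

Code:
# Hypothetical header with an optional orphan link, and the orphan-rate count above:
# orphans cited by more than one block are counted only once.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Header:
    block_hash: str
    prev_hash: str
    orphan_hash: Optional[str] = None   # optional link to an orphaned header

def orphan_rate(headers):
    """Distinct cited orphans as a fraction of the blocks in the window."""
    unique_orphans = {h.orphan_hash for h in headers if h.orphan_hash is not None}
    return len(unique_orphans) / len(headers)

# Example: two of four blocks cite the same orphan -> rate of 0.25.
window = [Header("b1", "b0"),
          Header("b2", "b1", orphan_hash="o1"),
          Header("b3", "b2", orphan_hash="o1"),
          Header("b4", "b3")]
print(orphan_rate(window))  # 0.25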
donator
Activity: 668
Merit: 500
So we need a maximum block size that is high enough that the vast majority of nodes are comfortable with it, but isn't so big that it can be used to manipulate the difficulty by artificially slowing propagation across the network with massive blocks. With the help of the propagation window maintained through the difficulty, we may be able to determine whether block propagation is slowing and whether the max_blocksize should be adjusted down to ensure the propagation window remains stable.

A measure of how fast blocks are propagating is the number of orphans.  If it takes 1 minute for all miners to be notified of a new block, then on average the orphan rate would be around 10%, since with a 10-minute target interval a competing block is found during that 1-minute window roughly 1 time in 10.

However, a core of miners on high-speed connections could keep that number down, and orphans are by definition not part of the block chain.

Maybe add an orphan link as part of the header.  If included, the block links back to two previous blocks, the "real" parent and the orphan (this has no effect other than proving the link).  This would allow orphans to be counted.  Only orphans one block off the main chain would be acceptable.  Also, the header of the orphan block is sufficient; the actual block itself can be discarded.

Only allowing max_block_size upward modification if the difficulty increases seems like a good idea too.

A 5% orphan rate probably wouldn't knock small miners out of things.  Economies of scale are likely to be more than that anyway.

Capping the change at 10% per two-week interval gives a potential growth of 10X per year, which is likely to be at least as fast as the network can scale.

So, something like

Code:
if ((median of last 2016 blocks < 1/3 of the max size && difficulty_decreased) || orphan_rate > 5%)
    max_block_size /= 8th root of 2
else if (median of last 2016 blocks > 2/3 of the max size && difficulty_increased)
    max_block_size *= 8th root of 2   // 8th root of 2 = ~1.09
The issue is that if you knock out small miners, a cartel could keep the orphan rate low, and thus prevent the size from being reduced.
You can't have anything such as "orphan rate" in a rule, as there is no consensus on it.  Nothing makes my idea of orphans, and hence orphan rate, match yours.

Just like there is no consensus on the set of unconfirmed transactions on the network.
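
For what it's worth, here is a runnable rendering of the pseudocode quoted above (a sketch only: the 1/3 and 2/3 thresholds, the 5% orphan cutoff, and the 8th-root-of-2 step come from the post, while the function signature and inputs are assumptions, including however the orphan rate would actually be measured):

Code:
# Sketch of the quoted max_block_size adjustment rule (assumed inputs).
# Evaluated once per 2016-block retarget window.
STEP = 2 ** (1 / 8)   # 8th root of 2, ~1.09 per period (~10x potential growth per year)

def adjust_max_block_size(max_block_size, median_block_size,
                          difficulty_increased, difficulty_decreased, orphan_rate):
    if (median_block_size < max_block_size / 3 and difficulty_decreased) or orphan_rate > 0.05:
        return max_block_size / STEP
    if median_block_size > 2 * max_block_size / 3 and difficulty_increased:
        return max_block_size * STEP
    return max_block_size

# Example: blocks mostly full, difficulty rose, orphan rate low -> limit grows ~9%.
print(adjust_max_block_size(1_000_000, 800_000, True, False, 0.02))  # ~1090508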
legendary
Activity: 1176
Merit: 1020
No, it wouldn't happen. As soon as a fork occurred, the fx rate of new coins mined on the "weaker" chain would collapse to a few cents, as all the major businesses, websites, miners, and exchanges would automatically side with the "stronger" chain. If anyone thinks that they can double-spend bitcoins on different websites after a few hours (some accepting one fork, some the other), then they are living in dreamland.

So you think there is room for one and only one crypto-currency in the world?  I disagree.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code

Before, I supported changing the protocol in a carefully planned way to improve the end-user experience, but recently I discovered that you can double-spend on both the original chain and the new chain after a hard fork. That means the promise of preventing double-spending and of a limited supply is broken, which is much more severe than I thought.


That simply means that, after a very short period of chaos post-fork, simple economic incentives would VERY quickly force a consensus on one of the chains.  The chaos would not be permitted to continue, by anyone, whichever side they personally want to "win", as it would cost them too much.

Or there could be two chains, each with its own pros and cons.  While all of us early investors would be able to spend on each chain, it should function like a stock split: though we would have twice as many 'shares', each would be worth only 50% of the original value.  It could be 90/10 or 80/20 though, or any two percentages summing to 100%.  If you wanted to favor one chain over the other, you could sell your coins on one and buy coins on your preferred chain.

No, it wouldn't happen. As soon as a fork occurred, the fx rate of new coins mined on the "weaker" chain would collapse to a few cents, as all the major businesses, websites, miners, and exchanges would automatically side with the "stronger" chain. If anyone thinks that they can double-spend bitcoins on different websites after a few hours (some accepting one fork, some the other), then they are living in dreamland.
legendary
Activity: 1176
Merit: 1020

Before, I supported changing the protocol in a carefully planned way to improve the end-user experience, but recently I discovered that you can double-spend on both the original chain and the new chain after a hard fork. That means the promise of preventing double-spending and of a limited supply is broken, which is much more severe than I thought.


That simply means that, after a very short period of chaos post-fork, simple economic incentives would VERY quickly force a consensus on one of the chains.  The chaos would not be permitted to continue, by anyone, whichever side they personally want to "win", as it would cost them too much.

Or there could be two chains, each with its own pros and cons.  While all of us early investors would be able to spend on each chain, it should function like a stock split: though we would have twice as many 'shares', each would be worth only 50% of the original value.  It could be 90/10 or 80/20 though, or any two percentages summing to 100%.  If you wanted to favor one chain over the other, you could sell your coins on one and buy coins on your preferred chain.
donator
Activity: 668
Merit: 500
If we want to cap the download overhead for the latest block at, say, 1%, we need to be able to download MAX_BLOCKSIZE worth of data within 6 seconds on average, so that 99% of the time is spent hashing.

At 1MB, you would need a ~1.7Mbps  connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high-speed connection.

Thank you. This is the clearest explanation yet of how an increase in the maximum block size raises the minimum bandwidth requirements for mining nodes.
Hmm.  The header can be downloaded in parallel with / separately from the block body, and hashing can start after receiving just the header, a matter of milliseconds.  Perhaps a "quick" list of outputs spent by the block would be useful for building non-trivial blocks that don't include double-spends, but that would be ~5% of the block size?  Plenty of room for "optimization" here were it ever an issue.

Fake headers / tx lists that don't match the actual body?  That's a black mark against the dude who gave them to you for being untrustworthy.  Too many black marks and you ignore future "headers" from him as a proven time-waster.

Build up trust with your peers, just like in real life.
legendary
Activity: 1064
Merit: 1001
That's why I asked the question that way, because I didn't think that it could be done, and was highlighting the root problem with this method.

Yep, I don't think it can be done either. At least, not in a way that can't be gamed. And any system which can be gamed is really no different from a voting system. So we might as well just make it a voting system and let each miner decide the criteria for how to vote.
legendary
Activity: 1708
Merit: 1010
...how do we collect accurate data upon propagation time?  And then how do we utilize said data ...

Quite simply, you don't. There is no obvious way to collect these statistics in a way that is not vulnerable to spoofing or gaming by miners. That's why I advocate the voting method in my other post.

Ah, yeah.  That's why I asked the question that way, because I didn't think that it could be done, and was highlighting the root problem with this method.
legendary
Activity: 1064
Merit: 1001
...how do we collect accurate data upon propagation time?  And then how do we utilize said data ...

Quite simply, you don't. There is no obvious way to collect these statistics in a way that is not vulnerable to spoofing or gaming by miners. That's why I advocate the voting method in my other post.
legendary
Activity: 4760
Merit: 1283
...

Pruning would also put a cap on the running blockchain size, and doesn't require a hard fork of the code.  It has also been the purpose of the Merkle tree from the beginning.  Satoshi thought about that, too.


It strikes me that Satoshi seemed more sensitive to system footprint than many of those who came after.  Both in design and in configuration, he seems to have left Bitcoin in a condition suitable more as a reliable backing and clearing solution than as a competitive replacement for centralized systems such as PayPal.

By this I mean that the latency inherent in the Bitcoin-like family of crypto-currencies is always going to be a sore point for Joe Sixpack using it in its native, rigorous form for daily purchases.  And the current block size is a lingering artifact of the time period of his involvement (actually a guess on my part, without having looked through the repository).

I was disappointed that the (now) early development focus was on wallet encryption, prettying up the GUI, and the multi-sig work, if this came at the expense of Merkle-tree pruning.  I personally decided to make lemonade out of lemons to some extent, noting that although I thought the priorities and direction were a bit off, the chosen course would probably balloon the market cap more quickly, and I could try to make a buck off it no matter what the end result for Bitcoin might be.

legendary
Activity: 1708
Merit: 1010

Do any of you guys remember my "swarm client" idea? It would move Bitcoin from being O(n*m) to O(n), and the network would share the load of both storage and processing.


Searching the forum for "swarm client" begets nothing.  Link?
https://bitcointalksearch.org/topic/the-swarm-client-proposal-reminder-15-btc-pledged-so-far-now-worth-3255-87763
(Second search link.)

Quote
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verifying blocks.  That, or I simply didn't understand it.
The details are a little hairy, but the idea is actually very simple: it is difficult to validate a block, BUT easy to show a flaw in one.

To show a block is invalid, just one S-client needs to share with the rest of the network that it has a double spend. This accusation can be proved by sending along the transaction history for the address in question.
This history cannot be faked, due to the nature of the blocks' tree data structure.


Not true.  A double spend would occur at nearly the same time.  Due to the propagation rules that apply to loose transactions, it's very unlikely that any single node (swarm or otherwise) will actually see both transactions.  And what if it did?  If it could sound an alarm about it, which one is the valid one?  The nodes cannot tell.  And even responding to an alarm implies some degree of trust in the sender, which opens up an attack vector if an attacker can spoof nodes and flood the network with false alarms.


Furthermore, a double spend can't get into a block even if that miner doesn't bother to validate it first, since that would imply the miner is participating in an attack on the network himself; he shouldn't be able to see both competing transactions.
Quote
Even if the S-clients keep a full history of each address they watch, and exchange it in cases of accusations, the computing power saved should still be substantial, despite many addresses being tangled together.

This would serve little purpose, since addresses are created and abandoned at such a rapid rate.

Quote
There was also talk of combining this with a 5-10 year ledger system which would put a cap on the running blockchain size.

Pruning would also put a cap on the running blockchain size, and doesn't require a hard fork of the code.  It has also been the purpose of the Merkle tree from the beginning.  Satoshi thought about that, too.