
Topic: How a floating blocksize limit inevitably leads towards centralization - page 11. (Read 71590 times)

full member
Activity: 166
Merit: 101
50 BTC per block works out at 12.5% of the monetary base per annum once all coins are created.

Expressing the cost of fees as a percentage of the monetary base is a nice way to quantify the cost of fees. Although I should point out that all of the baked-in constants in my proposal are examples and subject to tuning before implementation.

Every BTC user has a point where he isn't willing to pay even higher fees to get a transaction processed. Once this point is reached, it will be more profitable to allow more transactions.

Some points regarding fees and block size:

1. If the block size is too large, fees will drop from an absence of scarcity
2. If the block size is too small, users at the margins will leave the system (fees too high)
3. Smaller block sizes are preferred to larger ones (more independent miners possible)

The ideal block size is the smallest block size that drives fees up to the threshold of what users are willing to pay.

How do we determine what users are willing to pay?

Quote
So my question is: apart from waving fingers in the air, are there any good ways to estimate what percentage of the monetary base should be spent by users of the system as a whole, per annum, in order to adequately ensure security of the transaction log?

It's a difficult question.

Glad we are thinking along the same lines, though.  So 3% p.a. (12 BTC per block) it is then!  (joking)
legendary
Activity: 1064
Merit: 1001
50 BTC per block works out at 12.5% of the monetary base per annum once all coins are created.

Expressing the cost of fees as a percentage of the monetary base is a nice way to quantify the cost of fees. Although I should point out that all of the baked-in constants in my proposal are examples and subject to tuning before implementation.

Every BTC user has a point where he isn't willing to pay even higher fees to get a transaction processed. Once this point is reached, it will be more profitable to allow more transactions.

Some points regarding fees and block size:

1. If the block size is too large, fees will drop from an absence of scarcity
2. If the block size is too small, users at the margins will leave the system (fees too high)
3. Smaller block sizes are preferred to larger ones (more independent miners possible)

The ideal block size is the smallest block size that drives fees up to the threshold of what users are willing to pay.

How do we determine what users are willing to pay?

Quote
So my question is: apart from waving fingers in the air, are there any good ways to estimate what percentage of the monetary base should be spent by users of the system as a whole, per annum, in order to adequately ensure security of the transaction log?

It's a difficult question.
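
To make that equilibrium concrete, here is a toy sketch. Every number in it (the willingness-to-pay threshold, the demand curve, the average transaction size) is invented purely for illustration; the only point is the shape of the search: fees fall as block space becomes less scarce, and we want the smallest block size at which the clearing fee no longer exceeds what users will pay.

Code:
# Toy model (all numbers invented) of the fee/block-size equilibrium:
# scarcity pushes fees up, users drop out above their willingness to pay,
# and we search for the smallest block size whose clearing fee does not
# exceed that threshold.

WILLINGNESS_TO_PAY = 0.003   # BTC per transaction users will tolerate (assumed)
AVG_TX_BYTES = 250           # rough average transaction size (assumed)
DEMAND_AT_ZERO_FEE = 20_000  # hypothetical tx demand per block at zero fees

def clearing_fee(max_block_bytes):
    """Fee at which demand just fits into the block (toy linear demand curve)."""
    capacity = max_block_bytes / AVG_TX_BYTES
    if capacity >= DEMAND_AT_ZERO_FEE:
        return 0.0  # no scarcity, fees collapse
    # toy assumption: each 0.001 BTC of fee prices out 25% of zero-fee demand
    return 0.001 * (1 - capacity / DEMAND_AT_ZERO_FEE) / 0.25

def smallest_viable_block_size(step=100_000):
    size = step
    while clearing_fee(size) > WILLINGNESS_TO_PAY:
        size += step  # grow until the fee no longer exceeds the threshold
    return size

print(smallest_viable_block_size(), "bytes")  # ~1.3 MB with these toy numbers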

full member
Activity: 166
Merit: 101
Interesting to see the various proposals for an adaptive protocol level maximum block size.

It seems clear that adaptation should occur based on transaction fees, since they are supposed to take over as the main incentive for securing the transaction log once initial distribution winds down further.  This means that this is the closest so far to achieving an equilibrium based on incentives which optimise for the properties I, as a bitcoin user, want: https://bitcointalksearch.org/topic/m.1507328.  That is: first and foremost I want the (transaction log of the) network to be really well secured.  Once that is achieved, I want more transactions to be possible, so long as doing so doesn't destroy incentives for those securing the network.

That said, I think the proposed rate is too high.  We need to budget what *transactors* in the system should need to pay in order to ensure robust security of the transaction log, and not step too far over that level when designing the equilibrium point.  50 BTC per block works out at 12.5% of the monetary base per annum once all coins are created.  This seems excessive, though admittedly it is what *holders* of bitcoins are currently paying via the inflation schedule. 

Although it is difficult to estimate, the level of transaction fees required, long term, to maintain the security of the transaction log should be the starting point when designing the equilibrium via which an adaptive maximum block size will be set (assuming one doesn't buy Mike's optimism about those incentives being solved by other means).

Suppose the system needs 3% of the monetary base to be spent per annum on securing the transaction log.  Then, in the long term, that works out at pretty much 12 BTC per block.  Could just call it 12 BTC per block from the start to keep it simple.  So once the scheme is in place and max block size is still 1 MiB, the mean transaction fee over the last N blocks will need to be 0.0029 BTC to provoke an increase in max block size.  That seems pretty doable via market forces.  Then, block size increases, and mean transaction fee decreases, but total transaction fees remain around the same, until an equilibrium is reached where either block space is no longer scarce, or enough miners, for other reasons, decide to limit transaction rate softly.

So my question is: apart from waving fingers in the air, are there any good ways to estimate what percentage of the monetary base should be spent by users of the system as a whole, per annum, in order to adequately ensure security of the transaction log?  It's really a risk management question.  As is most of the rest of the design of Bitcoin.
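
For anyone who wants to check the arithmetic, here is the back-of-the-envelope calculation. The ~250-byte average transaction size is my own assumption; the rest follows from the numbers above.

Code:
# Express a security budget as a percentage of the monetary base, convert it
# to BTC per block, and then to a mean per-transaction fee for a full 1 MiB
# block (assuming ~250-byte transactions).

MONETARY_BASE = 21_000_000          # BTC, once all coins are created
BLOCKS_PER_YEAR = 6 * 24 * 365      # one block every 10 minutes

def btc_per_block(pct_of_base_per_annum):
    return MONETARY_BASE * pct_of_base_per_annum / BLOCKS_PER_YEAR

# 50 BTC/block expressed as a fraction of the monetary base per annum
print(50 * BLOCKS_PER_YEAR / MONETARY_BASE)        # ~0.125, i.e. 12.5% p.a.

# 3% p.a. expressed per block
per_block = btc_per_block(0.03)
print(per_block)                                   # ~12 BTC per block

# mean fee per transaction needed to fill that budget with full 1 MiB blocks
txs_per_block = (1024 * 1024) / 250                # ~4194 transactions
print(per_block / txs_per_block)                   # ~0.0029 BTC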
legendary
Activity: 1708
Merit: 1010
If we want to cap the download overhead for the latest block at, say, 1% of the block interval, we need to be able to download MAX_BLOCKSIZE worth of data within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps  connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high speed connection.

Also of importance is the fact that local and international bandwidth can vary by large amounts. A 1Gbps connection in Singapore (http://www.starhub.com/broadband/plan/maxinfinitysupreme.html) only gives you 100Mbps of international bandwidth, meaning you only have 100Mbps available for receiving new blocks.

Since a couple people have thanked the author for posting this, I thought I should mention that only transaction hashes need to be sent in bursts.  So a block of 1000 transactions (roughly 1MB) only requires 30KB of data to be sent in a burst, requiring a ~43Kbps connection to keep downloading time to 6s.  100MB blocks require ~4.3Mbps.  The continuous downloading of transaction data is below these limits.

Which I did mention in my next few posts.

Satoshi was not an experienced programmer.
Are you freaking kidding me? He programmed the entire Bitcoin client, with all the protocol rules/scripting/validation working almost bug-free from day 1. Four years later, with a market cap of $300 million, I don't even know of one other full client, even though implementers have the full source code of the Satoshi client to refer to. That was a feat for a single person!
And you think he would not be able to implement something as simple as changing the block format to only contain hashes?

Satoshi's genius was in his unique ability to see the big picture & predict the problems.  Programming was likely not a professional skill for him.  Ask Gavin about that.  Satoshi was more likely a professional in the field of economics, perhaps a professor, though probably not, since Austrian economic theory doesn't tend to get much academic respect.  He also had a great understanding of cryptographic theory, so he likely had a strong mathematics background, but he didn't use any novel crypto; he just used existing primitives in a novel way.  Satoshi deserves respect for what he started, but the current vanilla client is mostly not Satoshi's code.  And there were bugs; I was there.
full member
Activity: 150
Merit: 100
If we want to cap the download overhead for the latest block at, say, 1% of the block interval, we need to be able to download MAX_BLOCKSIZE worth of data within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps  connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high speed connection.

Also of importance is the fact that local and international bandwidth can vary by large amounts. A 1Gbps connection in Singapore (http://www.starhub.com/broadband/plan/maxinfinitysupreme.html) only gives you 100Mbps of international bandwidth, meaning you only have 100Mbps available for receiving new blocks.

Since a couple people have thanked the author for posting this, I thought I should mention that only transaction hashes need to be sent in bursts.  So a block of 1000 transactions (roughly 1MB) only requires 30KB of data to be sent in a burst, requiring a ~43Kbps connection to keep downloading time to 6s.  100MB blocks require ~4.3Mbps.  The continuous downloading of transaction data is below these limits.

Which I did mention in my next few posts.

Satoshi was not an experienced programmer.
Are you freaking kidding me? He programmed the entire Bitcoin client, with all the protocol rules/scripting/validation working almost bug-free from day 1. Four years later, with a market cap of $300 million, I don't even know of one other full client, even though implementers have the full source code of the Satoshi client to refer to. That was a feat for a single person!
And you think he would not be able to implement something as simple as changing the block format to only contain hashes?
legendary
Activity: 1708
Merit: 1010
Several larger pools are running 0.8 or almost-0.8.  Largely stock software (with maybe a patch to filter out SatoshiDICE transactions here and there).


Hmmm, not really the answer to my question. When a block is found, don't you have to download the whole block to see which transactions it includes, so that you can build the merkle tree?


No, as transactions are uniquely identifiable by their hash.  The block report need only contain the block header and the merkle tree of hashes.

Quote

The removal of redundancy that Jutarul mentioned, is that how 0.8 works?

No, but mostly because it was much simpler and more reliable to treat the block as a single data object.  Using this reduced block report to save bandwidth & propagation time has been considered for a long time, but it's not an easy fix.  It requires professionals like Gavin to make it work on the testnet.  Satoshi was not an experienced programmer.
hero member
Activity: 504
Merit: 500
WTF???
Several larger pools are running 0.8 or almost-0.8.  Largely stock software (with maybe a patch to filter out SatoshiDICE transactions here and there).


Hmmm, not really the answer to my question. When a block is found, don't you have to download the whole block to see which transactions it includes, so that you can build the merkle tree?

The removal of redundancy that Jutarul mentioned, is that how 0.8 works?
legendary
Activity: 1596
Merit: 1100
But that is not currently how the Satoshi client operates, right? I know we don't have too many people running stock software and huge pools.

Several larger pools are running 0.8 or almost-0.8.  Largely stock software (with maybe a patch to filter out SatoshiDICE transactions here and there).

hero member
Activity: 504
Merit: 500
WTF???
The full block download and verification isn't needed to start hashing the next block?
The idea is to submit only the transaction hashes which go into the merkle tree, instead of the full transaction data, because it is likely that you have already received and validated the transactions before you receive a block containing them. This technique removes redundancy from the communication between the node and the network and significantly reduces the time to propagate a valid block.

But that is not currently how the Satoshi client operates, right? I know we don't have too many people running stock software and huge pools.
donator
Activity: 994
Merit: 1000
The full block download and verification isn't needed to start hashing the next block?
The idea is to submit only the transaction hashes which go into the merkle tree, instead of the full transaction data, because it is likely that you have already received and validated the transactions before you receive a block containing them. This technique removes redundancy from the communication between the node and the network and significantly reduces the time to propagate a valid block.
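
For illustration, here is a minimal sketch of the merkle construction involved (not code from any client): given only the txids, a node that already holds the matching transactions can rebuild the merkle root and check it against the block header, so the announcement does not need to carry the transaction data itself.

Code:
# Minimal sketch of a Bitcoin-style merkle root built from txids alone.
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """Fold a list of 32-byte txids up to the merkle root (duplicate last if odd)."""
    assert txids, "a block always has at least the coinbase transaction"
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level = level + [level[-1]]            # Bitcoin duplicates the odd leaf
        level = [dsha256(level[i] + level[i + 1])  # hash concatenated pairs
                 for i in range(0, len(level), 2)]
    return level[0]

# toy usage with fake txids
fake_txids = [dsha256(bytes([i])) for i in range(5)]
print(merkle_root(fake_txids).hex())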
hero member
Activity: 504
Merit: 500
WTF???
Ten times the block size seems like scarcity is banished far into the future in one huge jump.

Even just doubling it is a massive increase, especially while blocks are typically still far from full.

Thus to me it seems better never to more than double it in any one jump.

If relating those doublings to the halvings of the block subsidy is too slow a rate of increase, then maybe use Moore's Law or thereabouts, increasing by 50% yearly or by 100% every eighteen months.

It is hard to feel like there is anywhere close to being a "need" for more space when I have never yet ever had to pay a fee to transfer bitcoins.

The rationale for the 10MB cap is that it would allow us to scale to PayPal's tx level right away, and it's arguable that Bitcoin might not actually need more than that. The second rationale is that it would still allow regular people to run full nodes, thus retaining decentralization. The third rationale is that the issue of scarcity can be postponed because it won't matter for a long time. We're still in the era of a large fixed block reward and are only very slowly moving into the "small fixed reward" era.

I have sort of started liking the idea that we would double the block size on each block halving, though. The only problem with that is that if the number of Bitcoin transactions stops growing for some unrelated reason, while there is still very high (even growing) value in the blockchain, the block size would keep rising without an increase in transactions. That would lead to lessened protection for the network even though the value in the blockchain might still be very large or even growing.

This is a potential issue with a 10MB limit as well, but I have a hard time believing it will materialize. Bitcoin only needs to grow roughly 20-fold to start pushing the 10MB limit. Pushing it wouldn't be bad either; 70 tx/s should be enough for a lot of things. We could just let free and super-low-fee transactions not get (fast) confirmations at that point. That is okay, I think. The 7 tx/s cap that we have now is simply not going to be enough, that is pretty clear. It's too limiting.

However, I do agree that this whole issue is not something that we need to resolve now. The blocks do not currently have scarcity to speak of. This is all about creating a plan for what we're going to do in the future. The actual hard fork will happen a year from now at the earliest.


I'm not saying that 10x is the magical number. I'm saying that both mining and running a full client are still easily done at 10 meg blocks.

If we want to cap the download overhead for the latest block at, say, 1% of the block interval, we need to be able to download MAX_BLOCKSIZE worth of data within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps  connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high speed connection.

Also of importance is the fact that local and international bandwidth can vary by large amounts. A 1Gbps connection in Singapore (http://www.starhub.com/broadband/plan/maxinfinitysupreme.html) only gives you 100Mbps of international bandwidth, meaning you only have 100Mbps available for receiving new blocks.

Since a couple people have thanked the author for posting this, I thought I should mention that only transaction hashes need to be sent in bursts.  So a block of 1000 transactions (roughly 1MB) only requires 30KB of data to be sent in a burst, requiring a ~43Kbps connection to keep downloading time to 6s.  100MB blocks require ~4.3Mbps.  The continuous downloading of transaction data is below these limits.

The full block download and verification isn't needed to start hashing the next block?
sr. member
Activity: 461
Merit: 251
If we want to cap the download overhead for the latest block at, say, 1% of the block interval, we need to be able to download MAX_BLOCKSIZE worth of data within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps  connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high speed connection.

Also of importance is the fact that local and international bandwidth can vary by large amounts. A 1Gbps connection in Singapore (http://www.starhub.com/broadband/plan/maxinfinitysupreme.html) only gives you 100Mbps of international bandwidth, meaning you only have 100Mbps available for receiving new blocks.

Since a couple people have thanked the author for posting this, I thought I should mention that only transaction hashes need to be sent in bursts.  So a block of 1000 transactions (roughly 1MB) only requires 30KB of data to be sent in a burst, requiring a ~43Kbps connection to keep downloading time to 6s.  100MB blocks require ~4.3Mbps.  The continuous downloading of transaction data is below these limits.
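
As a rough sanity check of those figures, here's the back-of-the-envelope arithmetic. It ignores protocol overhead and assumes 32-byte txids and ~1000 transactions per 1MB block, so the full-block numbers come out a bit below the ~1.7Mbps / 17Mbps quoted above (which presumably include some overhead).

Code:
# Back-of-the-envelope bandwidth check (no protocol overhead assumed).

BURST_SECONDS = 6      # download the block in 6 s, ~1% of a 10 minute interval
TXID_BYTES = 32
AVG_TX_BYTES = 1000    # the post above assumes ~1000 txs per 1MB block

def mbps(n_bytes, seconds=BURST_SECONDS):
    return n_bytes * 8 / seconds / 1e6

for block_mb in (1, 10, 100):
    block_bytes = block_mb * 1_000_000
    n_tx = block_bytes // AVG_TX_BYTES
    hashes_only_bytes = 80 + n_tx * TXID_BYTES   # block header + txids
    print(f"{block_mb:>3}MB block: full ~{mbps(block_bytes):.2f} Mbps, "
          f"hashes only ~{mbps(hashes_only_bytes):.3f} Mbps")

# 1MB   -> full ~1.33 Mbps, hashes only ~0.043 Mbps (~43 Kbps)
# 100MB -> full ~133 Mbps,  hashes only ~4.3 Mbps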
legendary
Activity: 2184
Merit: 1056
Affordable Physical Bitcoins - Denarium.com
Ten times the block size seems like scarcity is banished far into the future in one huge jump.

Even just doubling it is a massive increase, especially while blocks are typically still far from full.

Thus to me it seems better never to more than double it in any one jump.

If relating those doublings to the halvings of the block subsidy is too slow a rate of increase, then maybe use Moore's Law or thereabouts, increasing by 50% yearly or by 100% every eighteen months.

It is hard to feel like there is anywhere close to being a "need" for more space when I have never yet ever had to pay a fee to transfer bitcoins.

The rationale for the 10MB cap is that it would allow us to scale to PayPal's tx level right away, and it's arguable that Bitcoin might not actually need more than that. The second rationale is that it would still allow regular people to run full nodes, thus retaining decentralization. The third rationale is that the issue of scarcity can be postponed because it won't matter for a long time. We're still in the era of a large fixed block reward and are only very slowly moving into the "small fixed reward" era.

I have sort of started liking the idea that we would double the block size on each block halving, though. The only problem with that is that if the number of Bitcoin transactions stops growing for some unrelated reason, while there is still very high (even growing) value in the blockchain, the block size would keep rising without an increase in transactions. That would lead to lessened protection for the network even though the value in the blockchain might still be very large or even growing.

This is a potential issue with a 10MB limit as well, but I have a hard time believing it will materialize. Bitcoin only needs to grow roughly 20-fold to start pushing the 10MB limit. Pushing it wouldn't be bad either; 70 tx/s should be enough for a lot of things. We could just let free and super-low-fee transactions not get (fast) confirmations at that point. That is okay, I think. The 7 tx/s cap that we have now is simply not going to be enough, that is pretty clear. It's too limiting.

However, I do agree that this whole issue is not something that we need to resolve now. The blocks do not currently have scarcity to speak of. This is all about creating a plan for what we're going to do in the future. The actual hard fork will happen a year from now at the earliest.
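
For comparison, here's a quick sketch of how the two growth schedules mentioned above diverge over time. The 1MB starting size, the 20-year horizon and the assumption of a halving roughly every four years are just illustrative choices.

Code:
# Compare "double the max block size at every halving" with a Moore's-law-style
# +50% per year schedule. All parameters are illustrative.

START_MB = 1.0
YEARS = 20
HALVING_INTERVAL_YEARS = 4   # roughly 210,000 blocks

for year in range(0, YEARS + 1, 4):
    double_on_halving = START_MB * 2 ** (year // HALVING_INTERVAL_YEARS)
    moores_law = START_MB * 1.5 ** year
    print(f"year {year:>2}: double-on-halving {double_on_halving:>6.1f} MB, "
          f"+50%/year {moores_law:>8.1f} MB")

# After 20 years: 32 MB on the halving schedule vs ~3300 MB at +50%/year.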
hero member
Activity: 504
Merit: 500
WTF???
If we want to cap the download overhead for the latest block at, say, 1% of the block interval, we need to be able to download MAX_BLOCKSIZE worth of data within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps  connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high speed connection.

Thank you for that so much. That is definitely the clearest explanation I've seen yet. Even if miners sacrificed a few more seconds so the bandwidth requirements weren't as high, you would still only halve the required connection speed, which makes a 10 meg block potentially realistic today.

Something to remember too is that we are not currently packing every block to a full meg. Most are a fraction of that. The maximum block size would only be needed at peak TPS, at least until transaction volume grows into it.

Someone please check my math though; I'll work it out for just the 10 meg block.

At a 0.0005 BTC minimum transaction fee, on blockchain.info I'm seeing about 0.5 BTC per 250 KB of block size. That would be an additional 20 BTC per block with a fully loaded 10 meg block. Is that not a decent 600 USD reason to upgrade your internet connection?

And for those like Hazek who simply want to verify the rules of Bitcoin, the bandwidth requirements for a 10 meg block are met by any first-world DSL connection with plenty to spare.
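
Here's the math spelled out. The 0.5 BTC per 250 KB figure is the one quoted above from blockchain.info; the ~$30/BTC exchange rate is my assumption of roughly the current price.

Code:
# Fee revenue for a fully loaded 10 MB block, using the figures in the post.

FEE_BTC_PER_250KB = 0.5
BLOCK_KB = 10 * 1024            # a fully loaded 10 MB block
ASSUMED_USD_PER_BTC = 30        # assumption, roughly the current exchange rate

fees_per_block = FEE_BTC_PER_250KB * BLOCK_KB / 250
print(fees_per_block)                        # ~20.5 BTC per block
print(fees_per_block * ASSUMED_USD_PER_BTC)  # ~$600 per block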
legendary
Activity: 1470
Merit: 1006
Bringing Legendary Har® to you since 1952
legendary
Activity: 924
Merit: 1004
Firstbits: 1pirata
One of the reasons why high bandwidth is required is that you need it in bursts every 10 minutes (on average).

Sending blocks with only the hashes of transactions is a start.

Any other optimisations which reduce the download size within that 6s window, either by pre-downloading known information or by downloading unimportant data (data not needed to begin mining the next block) later, once mining has commenced, will help to drastically reduce bandwidth requirements.

After all, the only real discovery in a new block is the nonce value.

Indeed.  If the block propagation can omit the transaction data, and simply expect all mining clients to already have that data in their queue, then the actual block transmitted can be reduced to the 80 byte header and the merkle tree of hashes.  That should make the 6 second baseline trivial for any common broadband connection well into the future.

If you think about this solution for long enough, you realize it can get messy. Why? Because nodes are independent and listen for transactions on their own, so you can only really mine a block and be sure everybody has its transactions in their memory pools once those transactions have reached a certain age, more than 10 minutes for example; otherwise the orphan rate would be very high, with clients all around trying to match new blocks up with the corresponding transactions from memory. Not a nice picture...
legendary
Activity: 1708
Merit: 1010

The other miners aren't robots; they can anticipate such a problem just like retep did, and take pains to ensure it does not happen. They could ostracize pools that allow unreasonable blocksizes, etc. It feels like the dynamic human factor is being ignored.

Thank you!  Finally, someone who understands!
legendary
Activity: 1708
Merit: 1010
One of the reasons why high bandwidth is required is that you need it in bursts every 10 minutes (on average).

Sending blocks with only the hashes of transactions is a start.

Any other optimisations which reduce the download size within that 6s window, either by pre-downloading known information or by downloading unimportant data (data not needed to begin mining the next block) later, once mining has commenced, will help to drastically reduce bandwidth requirements.

After all, the only real discovery in a new block is the nonce value.

Indeed.  If the block propagation can omit the transaction data, and simply expect all mining clients to already have that data in their queue, then the actual block transmitted can be reduced to the 80 byte header and the merkle tree of hashes.  That should make the 6 second baseline trivial for any common broadband connection well into the future.
sr. member
Activity: 461
Merit: 251
All the important protocol rules can be enforced by SPV clients if support for "error messages" is added to the network.  This is described here: https://bitcointalksearch.org/topic/way-for-spv-clients-to-cheaply-prove-when-a-miner-has-cheated-on-his-fee-reward-131493

The trust model relies on information being hard to suppress, which is the same as the trust model nearly everyone running a full node is subscribing to in practice anyway by not personally vetting the source code.

Of course, with little expenditure most people will still be able to run full nodes even at massive scale, once all the proposed optimizations are implemented.  But it's at least nice to know that even the smartphone clients can "have a vote".

If the transaction rate does reach such huge levels, then it strikes me that the hashing power funding problem has been solved automatically - all those default but technically optional half-cent transaction fees sure would add up.

It also strikes me as unlikely that a block size limit would actually achieve an optimal amount of hashing power.  Even in the case where most users have been driven off the blockchain - and some off of Bitcoin entirely - why should it?  Why shouldn't we just expect Ripple-like trust networks to form between the Chaum banks, and blockchain clearing to happen infrequently enough to provide an inconsequential amount of fees to miners?  What if, no matter what kind of fake scarcity is built into the blockchain, transaction fees are driven down to somewhere around the marginal transaction fee of all possible alternatives?
legendary
Activity: 924
Merit: 1004
Firstbits: 1pirata
So let me see if I have the math right according to the scalability article on Bitcoin.

Visa does 2000 tps; we can do 7....

Why can't people stop comparing Bitcoin with whatever payment system they can come up with?

It will never be used as the only payment system in existence, so the number of transactions per block will always balance itself out depending on the fees charged at any given time. Btw, increasing the block size over the 1MB hard limit will have a negative impact on fees. Miners will not be able to increase them in a natural way by defending their self-interest and block space scarcity. I really think Satoshi saw this coming when thinking about how to implement the halving block reward; the 1MB block limit will allow almost 90% of the network to stay in sync while opening a new market for transaction inclusion in the blockchain. Genius!