
Topic: How a floating blocksize limit inevitably leads towards centralization - page 14.

legendary
Activity: 1064
Merit: 1001
I think that this might be a great way to move the soft limit, but I still think we need to have a (much higher) hard limit.

Did the earlier posts not sink in? The soft limit is not a proper limit, it is merely a thin road block designed to function as an early warning system. There's no "moving the soft limit." See my earlier post about analyzing the block chain to see the effect of the soft limit on miner behavior. As far as you are concerned, you should pretend as if the soft limit doesn't exist.

As for having a much higher hard limit, again did you not read the earlier posts? A 10 megabyte hard limit (which isn't even "much higher" in your terms) would raise the minimum bandwidth requirements to 17Mbps. Do you have any idea how many miners would be knocked off the network? In an earlier post you were just explaining the limitations of international bandwidth and also that solo miners would like to actually use their internet connections while they are mining. How can you then claim that we need a much higher hard limit, with its accompanying much higher minimum bandwidth requirements? Go back and re-read the chapter and then try answering the test questions again.
legendary
Activity: 1708
Merit: 1010
I don't know that I'd agree that a 1% change per difficulty adjustment is going to be enough.

All of the baked in constants are of course subject to tuning before final implementation. I used 90% and 1% as examples.

Granted.

Upon further consideration, I think that this might be a great way to move the soft limit, but I still think we need to have a (much higher) hard limit.  For the reasons that I stated before, and others, a hard limit removes many of the incentives for big players to engage in anti-competitive activities, since it puts a very real limit on the long-term effectiveness of such underhanded methods.
legendary
Activity: 1064
Merit: 1001
I don't know that I'd agree that a 1% change per difficulty adjustment is going to be enough.

All of the baked in constants are of course subject to tuning before final implementation. I used 90% and 1% as examples.

But remember that an increase in the block size translates directly into an increase in the minimum bandwidth requirements. Do you really want the minimum bandwidth requirement to double in one year? Even at 1%, a steady stream of successful votes compounds over the roughly 26 difficulty adjustments in a year to about a 30% increase in the minimum bandwidth requirement. That's a lot.

Even a one time adjustment of the maximum block size to 2 megabytes would raise the bandwidth requirements to 3.4 Mbps. You're okay with this? If implemented today (assuming there was sufficient transaction volume) it would certainly preclude a large body of independent miners. Myself included (if the Jalapeno ever ships).

The penalty for increasing the maximum block size too slowly is that transaction fees stay high for a while. The penalty for increasing it too quickly is a loss of network hashing power and a centralization of mining power. Better to be conservative and adjust slowly.

...
This is such a contrived situation that I have to reject it as a strawman argument.

It is not contrived; it is precisely what would happen if the minimum bandwidth requirements were jacked up by a factor of 100 (which is what happens when you increase the maximum block size to 100 megabytes). I think he's just explaining it in terms that a non-technical person might understand.
legendary
Activity: 1708
Merit: 1010
Okay, but if you have somewhere in the neighborhood of three megaminers, of a scale about like one would get by Google buying deepbit, Paypal buying another, Amazon buying another, etcetera, with Eligius either heavily sponsored by whatever coalition of bible-thumpers can add up to such scales or marginalised due to being actually trivially small in the new larger-scale scheme of things, then the only propagation that matters is the direct high-bandwidth pipes between these superpowers. The 49% or less - the whole rest of humanity - gets a crumb from the superpowers' table less than 50% of the time. Even within that demographic, the bigger fish in that smaller pond can hook up directly to each other, and maybe even try to co-operate in finding sneaky ways to get more peeks faster at what the superpowers are actually working on in a given one-block period, so quite possibly half or more of the 49% are also unaffected by the relaying decisions of mere non-mining nodes.

Let's think for a moment of all those Jalapenos that once upon a time sounded like they could decentralise hashing power into every home. Are all those coffee-warmers going to have to be moved out from under living-room or home-office coffee cups and co-located in data centres somewhere, if they are to be able to keep up with actually validating? If they didn't actually validate, they would merely be rubber-stamping on behalf of someone else...

-MarkM-


This is such a contrived situation that I have to reject it as a strawman argument.
legendary
Activity: 1708
Merit: 1010
If we want to cap the overhead of downloading the latest block at, say, 1%, we need to be able to download MAX_BLOCKSIZE bytes within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps connection to keep the download time to 6s.
At 10MB, 17Mbps.
At 100MB, 170Mbps.

You start to see why even a 100MB block size would render 90% of the world population unable to participate in mining. Even at 10MB, it requires investing in a relatively high-speed connection.

Thank you. This is the clearest explanation yet of how an increase in the maximum block size raises the minimum bandwidth requirements for mining nodes.

Given this bandwidth limitation, here's a new proposal for the adjustment of maximum block size:

1) A boolean flag is added to each block. The flag represents the block solver's yes or no vote for increasing the block size. The independent miner or mining pool sets this flag according to their preference for an increase.

2) Every time the difficulty is adjusted, the yes votes since the last adjustment are counted. If more than 90% of the votes are yes, then the maximum block size is increased by 1%. Both percentages are baked-in constants, requiring a hard fork to change.



Interesting, an integrated voting method.  I can't say which way I'd consider it more likely that most miners would vote, which is a good sign that it's a viable solution.  However, I don't know that I'd agree that a 1% change per difficulty adjustment is going to be enough.  After all, there will only be roughly 26 such adjustments in one year.  Taking into account the compounding of prior adjustments, off the top of my head I'd guess that it's not possible for the blocksize to increase by more than 35% in one calendar year, and only then if all the vote adjustments move in the same direction.  I'd say a better number would be 5% per adjustment if the vote was a supermajority (i.e. 80% of votes in one direction) or 2.5% (or so) for a simple majority.  This would, at least, make it possible for the blocksize to double in a calendar year, even if it doesn't make such an outcome likely.
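For reference, the compounding works out as follows; a quick back-of-the-envelope check (26 retargets per year is itself an approximation):

Code:
# Compound the per-retarget growth rates discussed above over one year.
# 2016 blocks at ~10 minutes each is roughly 14 days, so ~26 retargets/year.
RETARGETS_PER_YEAR = 26

for rate in (0.01, 0.025, 0.05):  # 1%, 2.5%, 5% per retarget
    annual = (1 + rate) ** RETARGETS_PER_YEAR
    print(f"{rate:.1%} per retarget -> x{annual:.2f} per year (+{annual - 1:.0%})")

# 1.0% per retarget -> x1.30 per year (+30%)
# 2.5% per retarget -> x1.90 per year (+90%)
# 5.0% per retarget -> x3.56 per year (+256%)

So 1% per retarget caps growth at about 30% per year, 2.5% allows roughly a doubling, and 5% would allow the blocksize to more than triple.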
legendary
Activity: 1708
Merit: 1010
If we want to cap the overhead of downloading the latest block at, say, 1%, we need to be able to download MAX_BLOCKSIZE bytes within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps connection to keep the download time to 6s.
At 10MB, 17Mbps.
At 100MB, 170Mbps.

You start to see why even a 100MB block size would render 90% of the world population unable to participate in mining. Even at 10MB, it requires investing in a relatively high-speed connection.


And practically speaking, most people would like to be able to use their broadband connection for other things.  Officially, my cable service is a 10Mbit/sec connection, but practically I can't sustain more than 1.5Mbps to any single IP address.  If I give my bittorrent client full access, it peaks around 1.5Mbps but averages about half that.  Your bandwidth needs are further complicated by the fact that once you have downloaded and verified a block, you're expected to upload that block to your connected peers.  Since most broadband connections are half as fast upstream as downstream, this limits your connected peers' download to half of what you're officially capable of.  In conclusion, anything beyond a 1MB limit is too resource-intensive for the at-home full client to keep up with at present; but with advancements in Internet tech and infrastructure, we can assume that an at-home connection will be able to manage a 10-20 MB block within a decade.  We would want to aim for the hard limit to be at least this high, while continuing to depend upon soft limits in the meantime.

Quote

Also of importance is the fact that local bandwidth and international bandwidth can vary by large amounts. A 1Gbps connection in Singapore (http://www.starhub.com/broadband/plan/maxinfinitysupreme.html) only gives you 100Mbps international bandwidth, meaning you only have 100Mbps available for receiving mining blocks.

International bandwidth is much more complicated than this with respect to a p2p network such as Bitcoin.  Much like bittorrent, the bandwidth into Singapore is functionally replicated across all of the Bitcoin clients in Singapore, so it's not true that the international connection is divided among the many peers all trying to download the same block; it's effectively shared data, as if there were a locally available seed on bittorrent.  Granted, it's not the same as saying that all of Singapore could be regarded as one node, with the block replicating across the entire nation-state from one copy; but once one copy of the block has made it across the bottleneck, local bandwidth effectively becomes the limiting factor in propagation.
legendary
Activity: 1064
Merit: 1001
If we want to cap the overhead of downloading the latest block at, say, 1%, we need to be able to download MAX_BLOCKSIZE bytes within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps connection to keep the download time to 6s.
At 10MB, 17Mbps.
At 100MB, 170Mbps.

You start to see why even a 100MB block size would render 90% of the world population unable to participate in mining. Even at 10MB, it requires investing in a relatively high-speed connection.

Thank you. This is the clearest explanation yet of how an increase in the maximum block size raises the minimum bandwidth requirements for mining nodes.

Given this bandwidth limitation, here's a new proposal for the adjustment of maximum block size:

1) A boolean flag is added to each block. The flag represents the block solver's yes or no vote for increasing the block size. The independent miner or mining pool sets this flag according to their preference for an increase.

2) Every time the difficulty is adjusted, the yes votes since the last adjustment are counted. If more than 90% of the votes are yes, then the maximum block size is increased by 1%. Both percentages are baked-in constants, requiring a hard fork to change.
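In code, the rule might look something like this minimal sketch; the constants, names, and signature are illustrative placeholders, not a concrete implementation:

Code:
VOTE_THRESHOLD = 0.90   # fraction of yes votes required (baked-in constant)
GROWTH_FACTOR  = 1.01   # 1% increase per successful vote (baked-in constant)

def adjust_max_block_size(current_max, votes):
    """Called at each difficulty adjustment with one yes/no flag per block."""
    yes_fraction = sum(votes) / len(votes)
    if yes_fraction > VOTE_THRESHOLD:
        return int(current_max * GROWTH_FACTOR)
    return current_max

# Example: 2016 blocks in the window, 1900 of them voting yes (~94%).
new_max = adjust_max_block_size(1_000_000, [True] * 1900 + [False] * 116)
print(new_max)  # 1010000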

full member
Activity: 150
Merit: 100
If we want to cap the overhead of downloading the latest block at, say, 1%, we need to be able to download MAX_BLOCKSIZE bytes within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps connection to keep the download time to 6s.
At 10MB, 17Mbps.
At 100MB, 170Mbps.

You start to see why even a 100MB block size would render 90% of the world population unable to participate in mining. Even at 10MB, it requires investing in a relatively high-speed connection.

Also of importance is the fact that local bandwidth and international bandwidth can vary by large amounts. A 1Gbps connection in Singapore (http://www.starhub.com/broadband/plan/maxinfinitysupreme.html) only gives you 100Mbps international bandwidth, meaning you only have 100Mbps available for receiving mining blocks.
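Here is the arithmetic behind those figures as a small script. The raw requirement for 1 MB in 6 s is about 1.3 Mbps; the ~1.7 Mbps figure above appears to include an allowance for protocol overhead, modeled here as an assumed 25% (my assumption, not a measured value), and the results round to the 1.7/17/170 figures quoted:

Code:
BLOCK_INTERVAL_S  = 600    # target seconds per block
OVERHEAD_BUDGET   = 0.01   # spend at most 1% of the interval downloading
PROTOCOL_OVERHEAD = 1.25   # assumed factor to match the quoted figures

def min_bandwidth_mbps(block_size_mb):
    download_window = BLOCK_INTERVAL_S * OVERHEAD_BUDGET   # 6 seconds
    raw_mbps = block_size_mb * 8 / download_window         # megabytes -> megabits
    return raw_mbps * PROTOCOL_OVERHEAD

for size in (1, 10, 100):
    print(f"{size:>3} MB block -> {min_bandwidth_mbps(size):.1f} Mbps minimum")

#   1 MB block -> 1.7 Mbps minimum
#  10 MB block -> 16.7 Mbps minimum
# 100 MB block -> 166.7 Mbps minimum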
legendary
Activity: 2940
Merit: 1090
Okay, but if you have somewhere in the neighborhood of three megaminers, of a scale about like one would get by Google buying deepbit, Paypal buying another, Amazon buying another, etcetera, with Eligius either heavily sponsored by whatever coalition of bible-thumpers can add up to such scales or marginalised due to being actually trivially small in the new larger-scale scheme of things, then the only propagation that matters is the direct high-bandwidth pipes between these superpowers. The 49% or less - the whole rest of humanity - gets a crumb from the superpowers' table less than 50% of the time. Even within that demographic, the bigger fish in that smaller pond can hook up directly to each other, and maybe even try to co-operate in finding sneaky ways to get more peeks faster at what the superpowers are actually working on in a given one-block period, so quite possibly half or more of the 49% are also unaffected by the relaying decisions of mere non-mining nodes.

Let's think for a moment of all those Jalapenos that once upon a time sounded like they could decentralise hashing power into every home. Are all those coffee-warmers going to have to be moved out from under living-room or home-office coffee cups and co-located in data centres somewhere, if they are to be able to keep up with actually validating? If they didn't actually validate, they would merely be rubber-stamping on behalf of someone else...

-MarkM-
full member
Activity: 150
Merit: 100
So let me see if I have the math right according to the scalability article on Bitcoin.

Visa does 2000 tps; we can do 7. If Bitcoin scaled up to Visa, the wiki says we would need around an 8 megabits/second connection. That would create a block size of approximately 500 megs each. Am I hitting that number right? A home DSL connection would choke on that, and bandwidth caps at home would easily be exceeded. Nothing that serious miners or colocation couldn't handle.

CPU isn't an issue. At 72 GB a day, serious pruning would have to be discussed.

At the minimum fee of 0.0005, that's over 600 BTC per block in fees if we increase the limit to 500 megs per block. But the position under discussion is that we shouldn't make big blocks, so that fees increase for the miners? It seems like if you have a million transactions in a block, there could potentially be more total fees. Of course each fee is smaller, but there are many more of them.

As of today, there should be no reason that we could not move the maximum block size to 10x what it currently is, and we would probably be good for a very long time, able to process more than Paypal can. When that wall is approaching, we can discuss it again, like Satoshi suggested.

No, that is incorrect. With an 8 Mbit/s connection, you will merely be able to download blocks at the rate they are created; it will be impossible for you to mine.

Simplified:
500MB / 800KB/s = 625s ~ 10min
By the time you are done downloading the latest block and begin working on the next block, someone else has already found the next block.
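As a sanity check in code (using the post's own assumption of ~800 KB/s of effective throughput on a nominal 8 Mbit/s link):

Code:
BLOCK_INTERVAL_S = 600   # average seconds between blocks

def download_time_s(block_size_mb, effective_kb_per_s):
    return block_size_mb * 1000 / effective_kb_per_s

t = download_time_s(500, 800)   # 500 MB block at 800 KB/s
print(f"{t:.0f}s to download vs {BLOCK_INTERVAL_S}s average block interval")
# 625s to download vs 600s average block interval

A node in that position spends the entire block interval downloading and never gets to hash on the current tip.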
legendary
Activity: 1708
Merit: 1010
I don't understand. XMiningPool Inc. mines a block at 600kb. You ignore it. So in your block chain you go from 250,000 to 250,002?

No, I'm not saying that I actually reject an otherwise valid block because it's bigger than I like.  That would certainly force a hard fork.  I'm saying that, as a client, I have a rule that does not forward blocks over my chosen limit to my other peers.  If I'm alone, or otherwise in the minority on the issue, I've caused zero harm to anyone while managing to save myself some bandwidth.  However, if any significant minority of the total number of full clients that participate in the network, whether or not they mine themselves, also adopt this rule, the effective result is a delay in the propagation of overlarge blocks, effectively granting a bandwidth advantage to miners who choose to stay within the soft limit.  Thus, miners who create blocks over the limit effectively have their hashrate handicapped by delays in propagation.  As for myself, even if the block is overlarge, if it passes the standard "hard" block validity rules, it still does not get rejected from my own blockchain, as that would prevent me from participating in the network in the event that said overlarge block is still the one that is built upon.  I'm doing nothing with my soft limit rule other than choosing not to forward the block.

My highlighted sentence above implies that as the bandwidth requirements increase for full nodes, someone is going to implement something functionally similar anyway, in order to save bandwidth.  Worse would be for some programmer to take the easy route and simply modify the code so that a full client quietly never forwards a valid block to its peers, and its peers never notice.
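In code terms, the rule amounts to something like this sketch (all names are illustrative stand-ins, with a boolean standing in for the full "hard" validity checks; this is not actual client code):

Code:
from dataclasses import dataclass

MY_SOFT_LIMIT = 250_000  # bytes; each node operator picks their own

@dataclass
class Block:
    size: int
    valid: bool  # stand-in for the standard "hard" block validity rules

def handle_block(block, chain, peers):
    if not block.valid:
        return                  # invalid blocks are rejected outright
    chain.append(block)         # always accept valid blocks locally,
                                # so we never fork ourselves off
    if block.size <= MY_SOFT_LIMIT:
        for peer in peers:
            peer.append(block)  # relay only blocks within my limit
    # else: quietly decline to relay; the block still counts if built upon

# A 600 kB block is kept but not relayed; a 200 kB block is kept and relayed.
chain, peer_queue = [], []
handle_block(Block(600_000, True), chain, [peer_queue])
handle_block(Block(200_000, True), chain, [peer_queue])
print(len(chain), len(peer_queue))  # prints: 2 1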
hero member
Activity: 504
Merit: 500
WTF???
I don't understand. XMiningPool Inc. mines a block at 600kb. You ignore it. So in your block chain you go from 250,000 to 250,002?
legendary
Activity: 1708
Merit: 1010
I've done nothing of the sort.  If I add a rule to my client that it quietly drops blocks that are over 250Kb, what have I done to you?

Nothing, but you've kicked yourself off the network (until a majority of mining power plus a significant number of full nodes on the network implement your rule too).

No, I haven't.  You don't even know that I dropped your block.  I can participate as a node just fine, just not as a miner.  It's a balance of forces.
hero member
Activity: 504
Merit: 500
WTF???
So let me see if I have the math right according to the scalability article on Bitcoin.

Visa does 2000 tps; we can do 7. If Bitcoin scaled up to Visa, the wiki says we would need around an 8 megabits/second connection. That would create a block size of approximately 500 megs each. Am I hitting that number right? A home DSL connection would choke on that, and bandwidth caps at home would easily be exceeded. Nothing that serious miners or colocation couldn't handle.

CPU isn't an issue. At 72 GB a day, serious pruning would have to be discussed.

At the minimum fee of 0.0005, that's over 600 BTC per block in fees if we increase the limit to 500 megs per block. But the position under discussion is that we shouldn't make big blocks, so that fees increase for the miners? It seems like if you have a million transactions in a block, there could potentially be more total fees. Of course each fee is smaller, but there are many more of them.

As of today, there should be no reason that we could not move the maximum block size to 10x what it currently is, and we would probably be good for a very long time, able to process more than Paypal can. When that wall is approaching, we can discuss it again, like Satoshi suggested.
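For what it's worth, the fee figure roughly checks out under an assumed average transaction size (the ~400 bytes here is my assumption, purely for illustration):

Code:
BLOCK_SIZE_BYTES = 500_000_000   # 500 MB blocks
AVG_TX_BYTES     = 400           # assumed average transaction size
MIN_FEE_BTC      = 0.0005

txs  = BLOCK_SIZE_BYTES // AVG_TX_BYTES
fees = txs * MIN_FEE_BTC
print(f"{txs:,} transactions -> {fees:,.0f} BTC in minimum fees per block")
print(f"{500 * 144 / 1000:.0f} GB of block data per day")  # 144 blocks/day

# 1,250,000 transactions -> 625 BTC in minimum fees per block
# 72 GB of block data per day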
legendary
Activity: 1064
Merit: 1001
...Leave bitcoin alone or I'm leaving. That's hardly constructive....

Actually, it is VERY constructive! Voluntary usage of Bitcoin is one of the core values. The success of any forking change will be determined by the fraction of users who adopt it. Anything that might cause a significant fraction of users, especially miners, to refuse to adopt the new system is most likely a non-starter.
hero member
Activity: 504
Merit: 500
WTF???
Halving the block reward could lead to issues. Simply having it based on time is a totally arbitrary decision.

That's right, and there is already a discussion of the issue that the block subsidy schedule might cause. Specifically, if the transaction fees and/or the value of Bitcoin do not rise to maintain the profitability of mining, the network hash rate will permanently decrease instead of increase.

misterbigg, if I've offended you, I'm sorry. Your posts in here have been among the best of those tilting towards not increasing it, and the ones I've most enjoyed. That's a clumsy way of putting it, but I am grateful for your input on everything. It was actually your posts that I was reading where I found the Ms. Lawrence. Seemed like an excellent response to someone coming in and saying leave it alone with no explanation. Not too far off from what I feel like Havock is saying. Leave bitcoin alone or I'm leaving. That's hardly constructive.

Regardless of the outcome, there will surely be people upset and probably some that will abandon bitcoin based on the decision.
legendary
Activity: 1064
Merit: 1001
Halving the block reward could lead to issues. Simply having it based on time is a totally arbitrary decision.

That's right, and there is already a discussion of the issue that the block subsidy schedule might cause. Specifically, if the transaction fees and/or the value of Bitcoin do not rise to maintain the profitability of mining, the network hash rate will permanently decrease instead of increase.
hero member
Activity: 504
Merit: 500
WTF???
I don't like that solution either. It's not linked to actual usage in any way and could lead to issues. Simply doubling it based on time is a totally arbitrary decision.

Halving the block reward could lead to issues. Simply having it based on time is a totally arbitrary decision.
legendary
Activity: 1064
Merit: 1001
We have no way of knowing how much protection is enough. We can't define it in a fixed fashion.

If by protection you mean hash rate, I think there's only one right answer: "as much as possible." Someone said earlier that Bitcoin should be thought of as the block chain with the largest amount of hashing power of any block chain, and I think that's the right way of looking at it. Any less, and it is vulnerable to a competing block chain that offers more security, or to an attack. I believe that any increase in the maximum block size is going to reduce the maximum hashing power that can be reached at any given time, because increasing the block size will inevitably lead to a decrease in fees. Can anyone provide a counterargument?

This having been said, if the community decides that Bitcoin is best served by trading away some of that maximum hashing power in exchange for an increase in the rate of transactions, the maximum block size must be increased. I said earlier that these guidelines should apply for increasing the block size (both schemes' trigger rules are sketched in code after the second list):

1) Block size adjustments happen at the same time that network difficulty adjusts (every 2016 blocks)
2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.
3) The block size is increased if more than 50% of the blocks in the previous interval have a sum of transaction fees greater than 50BTC minus the block subsidy. The 50BTC constant and the threshold percentage are baked in.

If the idea of having a constant block reward forever bothers you, then here's another scheme:

1) Block size adjustments happen at the same time that network difficulty adjusts
2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.
3) The block size is increased if more than 50% of the blocks in the previous interval have a size greater than or equal to 90% of the max block size. Both of the percentage thresholds are baked in.
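
A rough illustration of both triggers, with placeholder constants; the dictionary fields stand in for per-block data read out of the blockchain:

Code:
GROWTH = 1.10   # 10% increase per adjustment (baked-in constant)

def fee_trigger(blocks, threshold=0.50, target_btc=50.0):
    """Scheme 1: grow if >50% of blocks earned fees above 50 BTC minus subsidy."""
    hits = sum(1 for b in blocks if b["fees"] > target_btc - b["subsidy"])
    return hits / len(blocks) > threshold

def fullness_trigger(blocks, max_size, threshold=0.50, fill=0.90):
    """Scheme 2: grow if >50% of blocks are at least 90% of the max block size."""
    hits = sum(1 for b in blocks if b["size"] >= fill * max_size)
    return hits / len(blocks) > threshold

def adjust(max_size, triggered):
    return int(max_size * GROWTH) if triggered else max_size

# Example: a window where 60% of blocks were at least 90% full.
window = ([{"size": 950_000, "fees": 2.0, "subsidy": 25.0}] * 60
          + [{"size": 300_000, "fees": 1.0, "subsidy": 25.0}] * 40)
print(adjust(1_000_000, fullness_trigger(window, 1_000_000)))  # 1100000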

As I said, these are just building blocks to stimulate ideas. Both of them directly respond to scarcity, and only look at information in the blockchain (easy consensus).

Can anyone improve on these ideas or offer a better alternative?

Does anyone think that letting miners produce blocks of arbitrary size and letting the network battle it out for them is a good idea? Will this produce more orphans and waste more of that precious bandwidth that everyone is all hopped up about? Is this better than either of the two schemes that I described above? If so, how?
legendary
Activity: 2184
Merit: 1056
Affordable Physical Bitcoins - Denarium.com
I don't like that solution either. It's not linked to actual usage in any way and could lead to issues. Simply doubling it based on time is a totally arbitrary decision.