
Topic: How a floating blocksize limit inevitably leads towards centralization

legendary
Activity: 1708
Merit: 1010

Was this earlier post insufficient?

If we want to cap the download-time overhead of the latest block at, say, 1%, we need to be able to download a MAX_BLOCKSIZE block within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps connection to keep the download time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, mining requires investing in a relatively high-speed connection.


No, because it only addresses the effects of one variable resource, thus assuming that all other variables remain either static or sufficiently independent of this one to be ignored.  This might be a valid assumption, but I cannot accept it as a given.  The core purpose of economic analysis is to predict the effects of changes to all the variables, not just those you assume are dominant.  The unseen is usually of greater net effect on the outcome than the seen.
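
For reference, the quoted figures can be reproduced with a few lines. This is a sketch of the raw payload rate only; the quoted numbers run about 1.3x higher, presumably allowing for protocol overhead, which is not modeled here:

```python
BLOCK_INTERVAL_S = 600                # one block every ten minutes on average
OVERHEAD_TARGET = 0.01                # spend at most 1% of the time downloading
WINDOW_S = BLOCK_INTERVAL_S * OVERHEAD_TARGET   # = 6 seconds

def required_mbps(block_size_mb: float) -> float:
    """Raw link rate needed to fetch one full block within the window."""
    return block_size_mb * 8 / WINDOW_S    # MB -> megabits, spread over 6 s

for size_mb in (1, 10, 100):
    print(f"{size_mb:>4} MB block -> {required_mbps(size_mb):6.1f} Mbps raw")
# -> ~1.3, ~13, ~133 Mbps; add overhead and you land near the quoted figures
```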
legendary
Activity: 2618
Merit: 1007
I don't see why 90% would be required rather than just a majority among the 3 options (1% larger, same, 1% smaller). To block a change in the 90% model you need "only" 10% of the net hash rate. To get something you want to see into a ">=33.4%" version, you need more than 3 times that hash power, and others can still force the opposite change by not voting uniformly.

To abstain, you'd just create blocks with alternating options so it evens out, or you could add "abstention" as a 4th option.
hero member
Activity: 501
Merit: 500
Given this bandwidth limitation, here's a new proposal for the adjustment of the maximum block size:

1) A boolean flag is added to each block. The flag represents the block solver's yes or no vote for increasing the block size. The independent miner or mining pool sets this flag according to their preference for an increase.

2) Every time the difficulty is adjusted, the yes votes since the last adjustment are counted. If more than 90% of the blocks in that period voted yes, then the maximum block size is increased by 1%. Both percentages are baked-in constants, requiring a hard fork to change.


I like this proposal. If we want to waste another bit per block for the vote, we could also have the options "decrease max block size" and "ignore my vote" (the other options being "increase" and "keep"). I think having a decrease option would be an elegant addition, in case of unexpected dynamic effects in the relatively distant future.

The thing I like most about this proposal is that it would only need one hard fork, and it could actually be implemented in a way that is relatively soft as hard forks go. Just count blocks without the vote field as "keep" votes. The fork can only happen once there are over 90% "increase" votes in the last adjustment period. Hm, maybe it should actually take effect with a lag of one adjustment period (or a fixed number of blocks), in case of a chain reorg event.
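
To show how small the consensus change is, here is the counting rule as a minimal Python sketch, including the suggested "decrease" and "abstain" options. All names and constants are illustrative, not from any client; treating a missing vote field as "keep" follows the soft-upgrade idea above:

```python
from collections import Counter

VOTE_THRESHOLD = 0.90     # baked-in supermajority
STEP = 0.01               # baked-in 1% adjustment per retarget period

def adjust_max_block_size(current_max: int, votes: list) -> int:
    """votes: one entry per block since the last difficulty adjustment.
    None (no vote field, i.e. an old client) counts as a 'keep' vote;
    explicit 'abstain' votes are excluded from the denominator."""
    tally = Counter("keep" if v is None else v for v in votes)
    counted = tally["increase"] + tally["decrease"] + tally["keep"]
    if counted == 0:
        return current_max
    if tally["increase"] / counted > VOTE_THRESHOLD:
        return int(current_max * (1 + STEP))
    if tally["decrease"] / counted > VOTE_THRESHOLD:
        return int(current_max * (1 - STEP))
    return current_max    # no supermajority: the size stays where it is
```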
legendary
Activity: 1078
Merit: 1003
Technomage, I just wonder: if block space isn't scarce, causing fees to stay puny, just how are miners going to get funded once the block reward gets very small or goes to zero?

Indeed. That is why I've advocated retaining some form of scarcity for the block size. A model with no scarcity could be a disaster. Keeping it as is, and letting Bitcoin slide into a system where users' only function is to keep miners very rich and pay $20 per transaction, is quite unacceptable though. There has to be a middle ground. I feel a large majority of the userbase will support a middle ground, because they want to continue to use Bitcoin more or less as they do now. Perhaps not exactly like now, but more or less.

I already said in this thread, somewhere on page 5 or 6, that I'm not opposed to a compromise. But it has to be a compromise that isn't at the expense of what I call my Bitcoin sovereignty. I don't care if you increase the block size limit, as long as this won't, immediately or down the road, mean that I can no longer personally validate the rules miners validate, and as long as I must give my explicit consent to a rule change.

I think it would be a good exercise to further explore the likely gameplay scenarios if the block size limit were doubled (or increased by some higher factor) every time the block reward is halved, and how long it would take for Bitcoin to be able to handle PayPal's transaction volume while still keeping block space scarce and the blockchain at a reasonable size.
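
A back-of-the-envelope version of that exercise, assuming ~250-byte transactions, a 1MB starting limit, and PayPal at roughly 60 tx/s (all three figures are my assumptions, not from this thread):

```python
AVG_TX_BYTES = 250       # assumed average transaction size
BLOCK_INTERVAL_S = 600
HALVING_YEARS = 4        # the reward halves roughly every four years
PAYPAL_TPS = 60          # assumed order of magnitude for PayPal

def max_tps(limit_mb: float) -> float:
    return limit_mb * 1e6 / AVG_TX_BYTES / BLOCK_INTERVAL_S

limit_mb, years = 1, 0                  # start from today's 1 MB limit
while max_tps(limit_mb) < PAYPAL_TPS:
    limit_mb *= 2                       # one doubling per halving
    years += HALVING_YEARS
print(f"~{years} years: {limit_mb} MB limit, {max_tps(limit_mb):.0f} tx/s")
# -> ~16 years and a 16 MB limit under these assumptions
```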
legendary
Activity: 2940
Merit: 1090
Ten times the block size seems like banishing scarcity far into the future in one huge jump.

Even just doubling it is a massive increase, especially while blocks are typically still far from full.

Thus to me it seems better never to more than double it in any one jump.

If tying those doublings to the halvings of the block subsidy is too slow a rate of increase, then maybe use Moore's Law or thereabouts: increase by 50% yearly, or by 100% every eighteen months.
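
Those schedules diverge quickly. A quick comparison from a 1MB start (the horizon and starting point are my choices, purely illustrative):

```python
for years in (4, 8, 12):
    per_halving = 2 ** (years / 4)     # double once per 4-year halving
    moore_50 = 1.5 ** years            # +50% yearly
    moore_18mo = 2 ** (years / 1.5)    # +100% every eighteen months
    print(f"{years:2d}y: per-halving {per_halving:5.1f} MB | "
          f"+50%/yr {moore_50:6.1f} MB | 2x/18mo {moore_18mo:6.1f} MB")
# After 12 years: 8 MB vs ~130 MB vs 256 MB.
```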

It is hard to feel like there is anywhere close to a "need" for more space when I have never yet had to pay a fee to transfer bitcoins.

-MarkM-
legendary
Activity: 2618
Merit: 1007
After all, the only real discovery in a new block is the nonce value.
Well, that's what one is mining for, but the timestamp is also far from given (it has to be within a certain range), and some other fields can likewise be chosen by the miner. So while the nonce is the real "secret", there's still some local information that's not too easy to know.
Miners might be able to share these other fields with each other, though (but is trying out a handful of timestamps really faster than just getting a header instead of a nonce...?).

With only the hash of the previous block header, one could already try for a few seconds to mine an empty block. Then, as the merkle root and transaction hashes come in, you can start to work out which transactions you can forget about (move into that previous block), and you can already say which transactions you know of are for sure not yet in a block (you can't tell if they are still valid though, as one of the unknown transactions might have changed something). Then you get the missing transactions as well, and you can start including the remaining valid transactions. You can do so earlier too, since you are mining on an empty block anyway - so it might be better to "risk" creating a block that might turn out invalid than to do nothing.
Currently I guess this is done more or less at the same time, as there is no real issue with getting a block quickly.
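
A self-contained sketch of that staged strategy, with the mempool modeled as a plain dict of txid -> transaction (every name here is illustrative, not a real client API):

```python
def stage1_empty(prev_header_hash):
    """With only the previous header hash known, mine an empty block."""
    return {"prev": prev_header_hash, "txs": []}

def stage2_prune(template, mempool, announced_txids):
    """Merkle root + tx hashes have arrived: drop the now-confirmed txs
    from the mempool and tentatively include everything left over (an
    unseen announced tx might still conflict with one of them)."""
    for txid in announced_txids:
        mempool.pop(txid, None)
    template["txs"] = list(mempool.values())
    return template

def stage3_finalize(template, mempool, is_still_valid):
    """The full transactions have arrived: keep only the leftover txs
    that are still valid against the fully-known previous block."""
    template["txs"] = [tx for tx in mempool.values() if is_still_valid(tx)]
    return template
```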

As long as miners can have custom clients and relay blocks directly between them (which they should, to reduce stales), having rules that make blocks propagate slower through the network is a "fix" on the wrong end.
legendary
Activity: 2184
Merit: 1056
Affordable Physical Bitcoins - Denarium.com
Technomage, I just wonder: if block space isn't scarce, causing fees to stay puny, just how are miners going to get funded once the block reward gets very small or goes to zero?

Indeed. That is why I've advocated retaining some form of scarcity for the block size. A model with no scarcity could be a disaster. Keeping it as is, and letting Bitcoin slide into a system where users' only function is to keep miners very rich and pay $20 per transaction, is quite unacceptable though. There has to be a middle ground. I feel a large majority of the userbase will support a middle ground, because they want to continue to use Bitcoin more or less as they do now. Perhaps not exactly like now, but more or less.
legendary
Activity: 1078
Merit: 1003
Technomage, I just wonder: if block space isn't scarce, causing fees to stay puny, just how are miners going to get funded once the block reward gets very small or goes to zero?
legendary
Activity: 1036
Merit: 1000
For clarification, what happens if a single high-bandwidth miner were to actually start creating huge blocks that push out half the other miners? What would be the reaction? Is there really nothing the cut-off half could do? And even if they could do nothing, why would the surviving half go along with this, knowing that it spirals inevitably higher, leaving them out?

This is another apparent contradiction: if the scenario in the OP is really a problem, why would even the upper-tier miners go along with unreasonable blocksize inflation, knowing half of them could be next to fall? It seems we must look not only at the incentives of the highest-bandwidth miners, but those of all miners - or at least the top 50% whose cooperation they apparently need.

The other miners aren't robots; they can anticipate such a problem just like retep did, and take pains to ensure it does not happen. They could ostracize pools that allow unreasonable blocksizes, etc. It feels like the dynamic human factor is being ignored.
legendary
Activity: 2184
Merit: 1056
Affordable Physical Bitcoins - Denarium.com
Based on the calculations of how much bandwidth a certain block size would require, and how many transactions that size gives us, I would say that for now a 10MB limit would be quite enough. That would give us as many transactions as PayPal has, and it's arguable that we might not even need more than that.

There is no sign of Bitcoin being widely adopted in brick & mortar commerce; if it only reaches PayPal-level adoption as a payment system, more might not actually be necessary. Bitcoin is too cumbersome a system to plan on being able to send any and all transactions cheaply; that should not be a goal at any point.

The 10MB limit would make it impossible for some people to mine, but that is life. Mining or running a full node is already out of reach for a very large portion of the world's people. The only things we really need to worry about when tweaking the block size are that it's unlikely to lead to a mining monopoly, and that some scarcity remains.

A 10MB limit would probably solve it for now. I think that if more were ever needed, it would actually have to be put to the test; by this I mean just keep it at 10MB and see what happens. Maybe we already need to do this: keep it at 1MB and see what happens.

I'm still all for some sort of floating max if a very well-thought-out model can be agreed upon, but otherwise I'd eventually just change it to 10MB and let it be.
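
A quick sanity check on the 10MB/PayPal comparison, assuming ~250-byte average transactions (the PayPal figure is order-of-magnitude only; both numbers are my assumptions):

```python
LIMIT_BYTES = 10 * 1_000_000
AVG_TX_BYTES = 250
BLOCK_INTERVAL_S = 600

tps = LIMIT_BYTES / AVG_TX_BYTES / BLOCK_INTERVAL_S
print(f"{tps:.0f} tx/s, {tps * 86_400 / 1e6:.1f}M tx/day")
# -> ~67 tx/s and ~5.8M tx/day: the same order as PayPal at the time
```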
legendary
Activity: 1036
Merit: 1000
No, because the hard limit is a hard limit on how large a "little guy" they could push out, and that limit is, basically, guys so frikkin tiny that pushing them out gains so little that the effect is lost in the noise of more not-quite-that-tiny guys coming online all the time.

Basically you can't kick out the little guy, given the current hard limit; you can only kick out the trivially tiny guy who isn't even worth the trouble of kicking out.

OK, that makes sense. I withdraw the argument.
legendary
Activity: 2940
Merit: 1090
Either way, the incentives are to create blocks so large that they only reliably propagate to a bit over 50% of the hashing power, *not* 100%

Anticipating what humans would be motivated to do in dynamic situations involving other humans is notoriously difficult. If there is already a soft limit in place below the hard one, yet we do not have miners padding their blocks to shut out the little guys, doesn't that immediately call the existence of this incentive into question?

No, because the hard limit is a hard limit on how large a "little guy" they could push out, and that limit is, basically, guys so frikkin tiny that pushing them out gains so little that the effect is lost in the noise of more not-quite-that-tiny guys coming online all the time.

Basically you can't kick out the little guy, given the current hard limit; you can only kick out the trivially tiny guy who isn't even worth the trouble of kicking out.

-MarkM-
legendary
Activity: 1036
Merit: 1000
Either way, the incentives are to create blocks so large that they only reliably propagate to a bit over 50% of the hashing power, *not* 100%

Anticipating what humans would be motivated to do in dynamic situations involving other humans is notoriously difficult. If there is already a soft limit in place below the hard one, yet we do not have miners padding their blocks to shut out the little guys, doesn't that immediately call the existence of this incentive into question?

And if the reason no one does this is because the soft limit is so effective, doesn't that then suggest that hard limits are unnecessary?

Either "the incentives are to create blocks so large that they only reliably propagate to a bit over 50% of the hashing power," or they are not. At least prima facie, it looks like you're trying to argue both halves of a contradiction.
legendary
Activity: 2940
Merit: 1090
Could we float the blocksize based on 'network centralization'? If the last x blocks were all mined by the same few large entities, then the blocksize decreases, which leads to transaction fees rising, bandwidth costs decreasing, and hopefully more small miners joining in.

Only if all the folk we are worried about clearly identify the blocks they make as theirs.

Absent voluntary self-identification, who made a block is unknown; miners are pseudonymous, just like any other bitcoin address.

And even if they do voluntarily identify the blocks they make that they want us to know were made by them, that would not stop them from making other blocks they don't tell us were theirs.

-MarkM-
sr. member
Activity: 604
Merit: 250
Could we float the blocksize based on 'network centralization'? If the last x blocks were all mined by the same few large entities, then the blocksize decreases, which leads to transaction fees rising, bandwidth costs decreasing, and hopefully more small miners joining in.
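
For concreteness, the proposal might look something like the sketch below, keyed off voluntary coinbase tags. As the reply above points out, pseudonymous miners simply don't show up in such a metric, so it is easy to game; the names and thresholds here are illustrative:

```python
from collections import Counter

def adjusted_limit(current_max: int, recent_tags: list,
                   top_n: int = 3, cap: float = 0.5) -> int:
    """recent_tags: coinbase tags of the last x blocks (None = untagged).
    Shrinks the limit by 1% when the top_n identified miners mined more
    than `cap` of those blocks."""
    if not recent_tags:
        return current_max
    # Each untagged (pseudonymous) block counts as its own distinct
    # miner -- which is exactly why this metric is easy to evade.
    tags = [t if t is not None else f"unknown-{i}"
            for i, t in enumerate(recent_tags)]
    counts = Counter(tags)
    top_share = sum(n for _, n in counts.most_common(top_n)) / len(tags)
    return int(current_max * 0.99) if top_share > cap else current_max
```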
legendary
Activity: 1078
Merit: 1003
I don't like that solution either. It's not linked to actual usage in any way and could lead to issues. Simply doubling it based on time is a totally arbitrary decision.

All decisions about rules in Bitcoin are arbitrary; I really don't understand why you keep throwing that around as if it's a reason not to consider one.
It is not even arbitrary in itself, once you already accept halving the block subsidy. It is simply an attempt to give the miners more space to sell to make up for lowering their subsidy.

I like that reasoning.
legendary
Activity: 2940
Merit: 1090
I don't like that solution either. It's not linked to actual usage in any way and could lead to issues. Simply doubling it based on time is a totally arbitrary decision.

All decisions about rules in Bitcoin are arbitrary; I really don't understand why you keep throwing that around as if it's a reason not to consider one.
It is not even arbitrary in itself, once you already accept halving the block subsidy. It is simply an attempt to give the miners more space to sell to make up for lowering their subsidy.

Now if you want to argue that changing the subsidy over time is arbitrary, making the limit of 21,000,000 coins arbitrary, well yes, you've got me there. But the actual total number of coins is irrelevant: no matter how many coins "100% of all bitcoins" happens to be, it remains "100% of all bitcoins", so how many that is in units of measure other than "fractions of the whole" is kind of irrelevant (and making it look different cosmetically, by calling it millis or micros or megas or picos or whatever, is purely a user-interface design issue).

-MarkM-
full member
Activity: 150
Merit: 100
One of the reasons high bandwidth is required is that you need it in bursts every 10 minutes (on average).

Sending blocks with only the hashes of the transactions is a start.

Any other optimisations that reduce the download size within that 6s window, either by pre-downloading known information or by deferring unimportant data (data not needed to begin mining the next block) until after mining has commenced, will help to drastically reduce bandwidth requirements.

After all, the only real discovery in a new block is the nonce value.
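
Roughly how much the hashes-only idea saves, assuming ~2000 transactions of ~500 bytes in a full 1MB block (illustrative numbers, not measurements):

```python
N_TXS = 2000          # assumed transactions per full 1 MB block
AVG_TX_BYTES = 500    # assumed average transaction size
HASH_BYTES = 32       # one hash relayed per transaction
HEADER_BYTES = 80

full = HEADER_BYTES + N_TXS * AVG_TX_BYTES
hashes_only = HEADER_BYTES + N_TXS * HASH_BYTES
print(f"full: {full / 1e6:.2f} MB, hashes only: {hashes_only / 1e3:.0f} KB "
      f"({100 * (1 - hashes_only / full):.0f}% smaller)")
# -> ~1.00 MB vs ~64 KB: about a 94% cut in the 6-second burst
```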
legendary
Activity: 1078
Merit: 1003
I don't like that solution either. It's not linked to actual usage in any way and could lead to issues. Simply doubling it based on time is a totally arbitrary decision.

All decisions about rules in Bitcoin are arbitrary, I really don't understand why you keep throwing that around as if it's a reason not to consider one.
member
Activity: 71
Merit: 10
It sounds like the core problem is a bottleneck in getting all of the transactions mined within the desired time and within the bandwidth constraints of typical miners.  Faster transactions via large blocks may result in a few mega-bandwidth miners, but limiting bandwidth via small blocks slows transactions and reduces the usefulness of Bitcoin.

I know this is radical, but perhaps the answer is parallel processing.

a) Sort all of the existing and future bitcoins into two (or more) parallel and independent sub-chains using some existing sortable attribute.
b) Clients operate on both sub-chains, but this is completely hidden from users.  A bitcoin is still a bitcoin.
c) When Bob sends coins to Alice, Bob's client attempts to select a bundle of coins from his wallet that are all from the same sub-chain.  If that is not possible, it will use coins from both chains to make the payment.
d) Now that there are two chains, two miners can work in parallel, each on a different sub-chain.  Assuming the process in (c) is usually successful, Bob's payment to Alice will be a transaction on just one sub-chain.  Overall, this makes the blocks on each sub-chain much smaller, avoiding the bottleneck without increasing bandwidth.

Notes:
- Coins never change sub-chains.  These are separate chains masked by the client.
- New coins would be equally distributed among the chains.  The 21 million overall cap remains.
- More than 2 sub-chains could be used.
- Clients would have to split and re-assemble payments via multiple sub-chains when a pure bundle cannot be made. Micro-payments could usually come from a single sub-chain.

I am sure there are good reasons why this can't work, and perhaps it's a coding nightmare, but maybe it will stimulate other lines of thought.
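
To make steps (a) and (c) concrete, here is one way the sorting attribute could work, using the hash of the outpoint that created each coin. This is purely illustrative, since the post leaves the attribute unspecified:

```python
import hashlib

N_SUBCHAINS = 2

def subchain_of(txid: str, vout: int) -> int:
    """Map a coin (identified by the outpoint that created it) to a
    sub-chain. The assignment is permanent: coins never change chains."""
    digest = hashlib.sha256(f"{txid}:{vout}".encode()).digest()
    return digest[0] % N_SUBCHAINS

def bundle(coins):
    """Step (c), simplified: group the payer's coins by sub-chain and
    prefer the chain holding the most of them. A real client would also
    check amounts and fall back to a mixed, split-and-reassembled payment."""
    if not coins:
        return []
    by_chain = {}
    for coin in coins:            # coin = (txid, vout)
        by_chain.setdefault(subchain_of(*coin), []).append(coin)
    return max(by_chain.values(), key=len)
```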