Topic: Block size limit automatic adjustment - page 3. (Read 14503 times)

donator
Activity: 826
Merit: 1041
Here's an idea that might be dismissed as stupid-simple, but sometimes stupid-simple ideas work really well.

How about: The maximum block size equals the higher of: (a) the current hard-coded maximum block size, and (b) 'difficulty' bytes.
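In code, the proposal is just a one-line rule; a minimal sketch (the names are mine, and the 1,000,000-byte figure is assumed to be the client's current hard-coded constant):

Code:
# Minimal sketch of the proposal above; names are hypothetical and the
# 1,000,000-byte value is assumed to be the current hard-coded constant.
HARD_CODED_MAX_BLOCK_SIZE = 1000000  # bytes

def max_block_size(difficulty):
    # Whichever is larger: the existing constant, or the current
    # difficulty interpreted as a byte count.
    return max(HARD_CODED_MAX_BLOCK_SIZE, int(difficulty))

Whichever constant is actually in the client, the difficulty term only takes over once the difficulty itself exceeds it.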
legendary
Activity: 1708
Merit: 1007
Okay great. My understanding from reading this thread was that it was not so 'easily' removed;

It could be relatively easy now, or even some months from now. But once Bitcoin goes mainstream, making such a change will not be that easy, mainly because of the coordination effort involved. And the more popular Bitcoin gets, the harder the change becomes, which is why I still think it should be done just once. And to do it just once, a self-adjusting rule should be created... simply raising a constant is bad for several reasons (multiple backward-incompatible changes, more room for spammer-miners to abuse, the risk of fees dropping and, as a consequence, the difficulty dropping in the future, etc.).

I have already laid out my plan for a self-adjusting limit, based on the average number of blocks a fee-paying transaction spends in the queue, with some fudge factor to allow for different miners having slightly different averages across the network.  To account for fee rates, the average could be weighted by the amount of the fee, so that a transaction paying more than the minimum affects the average more than the average transaction does.

The space available for free transactions could be a static value, or a percentage of the moving block size.  The block size could be adjusted whenever the difficulty is adjusted, with the first winning block after the change encoding the calculated maximum limit in some fashion.  If the network accepts that block, then that maximum becomes the fixed limit for another 2015 blocks.
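Purely as an illustration of the rule described above, here is a rough sketch of the retarget-time calculation; the target wait, the fudge factor, and the 10% step are my assumptions, not part of the plan:

Code:
# Rough illustration only; target wait, fudge factor and the 10% step are
# assumptions, not part of the plan described above.
def next_max_block_size(current_max, fee_tx_observations,
                        target_wait_blocks=1.0, fudge=1.25):
    """fee_tx_observations: (fee_paid, blocks_waited) pairs for the fee-paying
    transactions seen during the last difficulty period."""
    total_fee = sum(fee for fee, _ in fee_tx_observations)
    if total_fee == 0:
        return current_max
    # Fee-weighted average wait: larger fees pull harder on the average.
    avg_wait = sum(fee * wait for fee, wait in fee_tx_observations) / total_fee
    if avg_wait > target_wait_blocks * fudge:
        return int(current_max * 1.1)   # paying transactions are queueing: grow
    if avg_wait < target_wait_blocks / fudge:
        return int(current_max * 0.9)   # plenty of headroom: shrink
    return current_max

The free-transaction space could then be taken as a fixed byte count or a percentage of whatever this returns, per the post above.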
administrator
Activity: 5222
Merit: 13032
What's the current failure mode? What happens if the existing Bitcoin client encounters an over-long block?

It is considered invalid and rejected.
donator
Activity: 826
Merit: 1041
What's the current failure mode? What happens if the existing Bitcoin client encounters an over-long block?
legendary
Activity: 1526
Merit: 1129
Quote
Or like the equally professional SMTP operators who have never implemented stronger authentication for e-mail transfers? :)

Virtually all large, professionally run SMTP networks do authenticate their mail, as far as I know. We track how much mail comes into the Google network authenticated (i.e. Gmail consumer/business editions), and the proportion is pretty high. There's a long tail of home-run SMTP servers that will never be upgraded, but they also don't represent a whole lot of users.

Quote
And it's not an issue only for miners; every full client performs block validation. I don't think miners will be the only ones running full clients. Those who serve the lightweight clients, for example, will need to be full clients.

Yeah, we might want to change that :-) I'll ask Gavin about it next time I see him on IRC. I think non-miners don't need to check the block size even if they are full nodes, as an attempt to maliciously explode the size of the chain would be overridden by genuine miners pretty quickly. There's a risk of temporarily flooding the broadcast network with a gigantic block, but it's not an easy attack to pull off. Nodes can prune blocks on side chains after a while, so the increase in storage required would be temporary. If anyone ever did it, a patch to forcibly delete a block from storage would be written pretty fast.
legendary
Activity: 1106
Merit: 1004
Block size limits are only relevant to miners, as they are the ones who decide whether to "accept" a block by building on it or not. Most users will end up on lightweight clients, which don't need to check the block size. So as long as mining consolidates around professionals who communicate and keep up, there probably won't be a "doomsday" scenario.

Like the professional ISPs who have waited until the last minute to migrate from IPv4 to IPv6 - not to mention the many that still haven't migrated? Or like the equally professional SMTP operators who have never implemented stronger authentication for e-mail transfers? :)

I'm not saying it's impossible, nor that there will be a "doomsday" because of this. I'm just saying it's a problem that will need to be fixed someday, and the earlier it's done, the easier it is. If it's done too late, it may provoke avoidable problems like long chain splits, people not understanding why their bitcoin no longer works, and so on.

And it's not an issue only for miners; every full client performs block validation. I don't think miners will be the only ones running full clients. Those who serve the lightweight clients, for example, will need to be full clients.
legendary
Activity: 1526
Merit: 1129
Block size limits are only relevant to miners, as they are the ones who decide whether to "accept" a block by building on it or not. Most users will end up on lightweight clients, which don't need to check the block size. So as long as mining consolidates around professionals who communicate and keep up, there probably won't be a "doomsday" scenario.

legendary
Activity: 1106
Merit: 1004
Okay great. My understanding from reading this thread was that it was not so 'easily' removed;

It could be relatively easy now, or even some months from now. But once Bitcoin goes mainstream, making such a change will not be that easy, mainly because of the coordination effort involved. And the more popular Bitcoin gets, the harder the change becomes, which is why I still think it should be done just once. And to do it just once, a self-adjusting rule should be created... simply raising a constant is bad for several reasons (multiple backward-incompatible changes, more room for spammer-miners to abuse, the risk of fees dropping and, as a consequence, the difficulty dropping in the future, etc.).
legendary
Activity: 1708
Merit: 1007
Okay great. My understanding from reading this thread was that it was not so 'easily' removed; every client would have to be updated at basically the same time.

Good to read otherwise.


No, not at the same time.  Just before hitting that limit becomes a regular event.  We could change that rule in the next vanilla client release, so long as everyone agreed that we should and voiced consent by downloading the new client.  At the current rate we still have months, if not years, before we hit that limit regularly.  As far as I know, we have never come close to it.

If we wait until every other block is hitting that limit, however, implementing such a rule change is going to be problematic.
full member
Activity: 154
Merit: 100
Okay great. My understanding from reading this thread was that it was not so 'easily' removed; every client would have to be updated at basically the same time.

Good to read otherwise.
legendary
Activity: 1708
Merit: 1007
Is the block size limit still a concern?
It never really was a real problem except for future scalability.
Doesn't that by itself make it a real problem?


Well, yes.  But the recent talk has been about maintaining transaction fees in order to maintain the hashing power of the network, not about scalability.  This particular thread was mostly about scalability, and in that case the max block limit can be raised or removed.  It's only present as a backstop against the possibility of some presently unknown exploit that would permit limitless transaction spamming of the blockchain, not as a means to support transaction fees.  The short answer to the scalability issue is that the limit can easily be removed long before network traffic is high enough for the max block size to become an actual scalability issue.
full member
Activity: 154
Merit: 100
Is the block size limit still a concern?
It never really was a real problem except for future scalability.
Doesn't that by itself make it a real problem?
legendary
Activity: 1708
Merit: 1007
Has anyone had any further thoughts on a dynamic block size limit, in light of the recent slowdown of free transactions?

Is the block size limit still a concern?


It never really was a real problem except for future scalability.
full member
Activity: 154
Merit: 100
Has anyone had any further thoughts on a dynamic block size limit, in light of the recent slowdown of free transactions?

Is the block size limit still a concern?
legendary
Activity: 1708
Merit: 1007
November 26, 2010, 02:20:38 PM
#43

I'm not smart enough to figure this out. I wish Satoshi would weigh in on this issue. I suspect he may have already envisioned the outcome.

Austrian economic theory says that no one is smart enough, because no one can have all of the information.  The best that we can do is take a guess, and I would guess that it isn't going to be a real problem.  Certainly not in my lifetime.

The main issue raised by this thread does seem like something of concern for the long-term health of this project, as there does seem to be the very real possibility that the current limit is not going to be sufficient for "ordinary" transactions at some point in the future.

I was responding specifically to his concern about transaction fees, not the block size limit.  The hard limit is a real concern, but also one that I imagine has been well considered by others before us.  Did anyone bother to search the archives before diving into this thread?  The block limit exists to prevent spamming from packing the blocks, not to support transaction fees.  I think that the recently instituted priority rule does the job at least as well as the hard block limit, but it's not enough on its own.
full member
Activity: 224
Merit: 141
November 26, 2010, 01:21:37 PM
#42

I'm not smart enough to figure this out. I wish Satoshi would weigh in on this issue. I suspect he may have already envisioned the outcome.

Austrian economic theory says that no one is smart enough, because no one can have all of the information.  The best that we can do is take a guess, and I would guess that it isn't going to be a real problem.  Certainly not in my lifetime.

The main issue raised by this thread does seem like something of concern for the long-term health of this project, as there does seem to be the very real possibility that the current limit is not going to be sufficient for "ordinary" transactions at some point in the future.  Judging by the data-processing and transaction rates of other payment processing systems like PayPal, the current limit is going to be not just insufficient but woefully insufficient.  I realize we aren't anywhere near those demand levels, but it still is an issue to think about.

There are also plenty of examples where software architecture decisions, including the use of hard-coded constants, have had profound real-world impact simply because the design team was short-sighted and didn't anticipate the future very effectively.  Examples include the Y2K bugs, the Unix 2038 date overflow bug (it remains to be seen how that will be completely solved), and, perhaps most similar to the current situation, the IPv4 address space problem.  There are other instances where a coded constant can come up and bite end-users in unexpected ways... one of the reasons software developers call these figures "magic numbers".  When some very intelligent people are complaining that an issue of this nature will have a significant impact, it is at least something that needs attention.

The specifics of how to avoid this problem are the point of this thread, along with a strong suggestion that "rules" ought to be incorporated into the network for how this hard-coded limit might be allowed to change.

The moral of this story is that the non-generating clients operate on the network at the pleasure of the miners. The miners are effectively in control of the "health" of the network and the current block size limits reflect that. So for example block http://blockexplorer.com/b/92037 is about 200455 bytes long and mostly contains spam. Normal blocks max out at 50k. This shows that at least one generator has chosen to waive the current fees scheme. I think that letting miners effectively decide their own fees scheme will be seen to be the least bad option.

I will say, in regard to the control of the network by the miners, that this is mostly true but not 100% of the time.  Blocks sent out by miners can also be rejected by "the vast masses of clients" who simply refuse to recognize a block.  A miner may do something that it considers within the rules but that the rest of the network doesn't, and that particular block is simply going to be rejected.  With the rules as currently established, a miner who chooses to create a very large block is simply going to have that block ignored by the current network.  Essentially this is "proof" that the miners don't have absolute authority here.  Miners also work at the pleasure of the network as a whole, and have "constitutional limits" imposed upon them by the network rules.  The maximum block size is one of the few rules that is outside the control of a single miner.  Other, similar rules could be adopted by a significant portion of the clients to exclude certain miners, or even groups of miners, providing a check against a sort of "tyranny of the miners".

I'm not going to speculate about how such rules might be established or what other potential rules might be, other than suggesting that the block limit is one such rule and that it needs to be reconsidered, certainly as a fixed size.  The long-term consequence of leaving it unchanged is that transaction fees may escalate to absurd levels, as more and more people trying to get the network to incorporate a particular transaction turns into a sort of "fee arms race", particularly if miners are simply unable to get larger blocks incorporated into the network.

The opposite situation is voluntary self-limiting by miners who simply choose not to grow blocks to large sizes.  As long as somebody somewhere is allowed to produce an arbitrarily large block, that block will pick up the transactions with a low fee, or perhaps no fee at all, even if it takes a while for such blocks to be incorporated into the network.
legendary
Activity: 1708
Merit: 1007
November 26, 2010, 10:59:39 AM
#41

I'm not smart enough to figure this out. I wish Satoshi would weigh in on this issue. I suspect he may have already envisioned the outcome.

Austrian economic theory says that no one is smart enough, because no one can have all of the information.  The best that we can do is take a guess, and I would guess that it isn't going to be a real problem.  Certainly not in my lifetime.
hero member
Activity: 527
Merit: 500
November 26, 2010, 03:50:38 AM
#40
The moral of this story is that the non-generating clients operate on the network at the pleasure of the miners. The miners are effectively in control of the "health" of the network and the current block size limits reflect that. So for example block http://blockexplorer.com/b/92037 is about 200455 bytes long and mostly contains spam. Normal blocks max out at 50k. This shows that at least one generator has chosen to waive the current fees scheme. I think that letting miners effectively decide their own fees scheme will be seen to be the least bad option.

We came to a similar conclusion in this thread:
http://bitcointalk.org/index.php?topic=1847.0;all
My concern is that I don't see any inherent force that will stabilize transaction fees.

Generators have the ability to accept any transactions they see fit, as well as to reject any block that doesn't adhere to their "ethics". The question is: will this game result in an oligopoly of price-gouging generators, will it result in a dead market where no one generates, or will competing forces reach a common ground of a fair, stable fee structure?

I'm not smart enough to figure this out. I wish Satoshi would weigh in on this issue. I suspect he may have already envisioned the outcome.
sr. member
Activity: 416
Merit: 277
November 25, 2010, 10:13:48 PM
#39
To clarify, the block size limit is the size beyond which a received block will be rejected by a client just because it's too big.
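Reduced to a sketch, that check is nothing more than a size comparison during block validation (the 1,000,000-byte value is my assumption about the current hard-coded constant; names are illustrative):

Code:
MAX_BLOCK_SIZE = 1000000  # bytes; assumed value of the hard-coded limit

def block_size_ok(serialized_block):
    # A received block larger than the limit is treated as invalid and
    # dropped, regardless of what it contains.
    return len(serialized_block) <= MAX_BLOCK_SIZE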

I agree with caveden that having a fixed block size limit could cause problems in future.

Let's consider the scenario in which Bitcoin becomes popular and the non-spam transaction rate starts to rise. The current fees and priority scheme is fine until the size of the fees required becomes a disincentive for new users to start using Bitcoin. The miners must then either take a smaller fee from a given transaction, or maintain their fee schedule and effectively turn away lots of new users, perhaps to other competing cryptographic currency schemes.
I think it's reasonable to imagine that everyone will decide to drop fees to a level that encourages the widest possible adoption of Bitcoin until other limiting factors (such as network bandwidth) come into play.
So with the reduced fees, block sizes increase until blocks get rejected by old clients with lower hard block size limits. These clients can't relay the new blocks, and so new clients would have to connect only to other new clients. Miners that reject the large blocks would continue to build a block chain of "normal"-sized blocks. As soon as transactions start to refer to coins in new large blocks, the old clients would reject those transactions, and those coins could be double-spent on the "old" client network. I don't think this would be pretty.

The ostensible reason for hard block limits is to prevent spam. As ribuck mentions, current spam attacks have two effects: one you can see and one you can't. You can see block sizes rising, but this effect counteracts the less visible problem of your transaction cache filling up with spam transactions. I believe that memory exhaustion due to the transaction cache filling up will be the main problem with spam attacks, so large blocks that remove lots of transactions from the cache will mitigate it. The real solution to spam is "shunning", which I will outline in another post. I believe having any block limits is likely to exacerbate the adverse effects of spam transactions.

As FreeMoney observes, in the absence of block limits there's nothing to stop a miner from including arbitrary amounts of its own spam transactions in a block. This is true. However, it's certainly not in the non-generating client's interest to reject the block, even if it only removes a few transactions from the cache. Rather, the onus is on the other miners to notice that the new block does not remove enough transactions from the cache and to reject it. They will then build the longer chain while ignoring that block, which becomes an orphan. Hence the spamming miner is punished.
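A toy version of the miner-side policy being suggested here might look like the following; the 50% overlap threshold is arbitrary and purely illustrative:

Code:
def worth_building_on(block_txids, my_queued_txids, min_cleared_fraction=0.5):
    # Hypothetical policy: only extend a block if it clears "enough" of the
    # transactions this miner already has queued, i.e. it isn't stuffed
    # purely with the other miner's own spam. The threshold is arbitrary.
    if not my_queued_txids:
        return True
    cleared = len(set(my_queued_txids) & set(block_txids))
    return cleared / len(my_queued_txids) >= min_cleared_fraction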

The moral of this story is that the non-generating clients operate on the network at the pleasure of the miners. The miners are effectively in control of the "health" of the network and the current block size limits reflect that. So for example block http://blockexplorer.com/b/92037 is about 200455 bytes long and mostly contains spam. Normal blocks max out at 50k. This shows that at least one generator has chosen to waive the current fees scheme. I think that letting miners effectively decide their own fees scheme will be seen to be the least bad option.

ByteCoin
legendary
Activity: 1470
Merit: 1005
Bringing Legendary Har® to you since 1952
November 22, 2010, 10:13:14 AM
#38
I think I agree with caveden - having such an important constant hard-coded in bitcoin may be devastating at some point, when the network changes significantly or grows much larger than it is now.
Generally, almost every important value at the core of the bitcoin algorithms should be a non-constant, elastic variable that can adapt to changes.

Anyway, I still really would like to see Satoshi's & Gavin's opinions on this.