
Topic: The MAX_BLOCK_SIZE fork (Read 35545 times)

Fry
newbie
Activity: 45
Merit: 0
April 19, 2013, 07:50:01 PM

This rule would apply to blocks until they are 1 deep, right? Do you envision no check-time or size rule for blocks that are built on? Or a different much more generous rule?


Even if this rule only applies as long as the difference is one block:
what if both branches of the fork have the same depth?
And how could a node know for sure it is on the shorter branch if it cannot check the blocks of the other branch because they are too large to be transferred or checked?
Fry
newbie
Activity: 45
Merit: 0
April 19, 2013, 07:44:58 PM
Why don't we just let miners decide the optimal block size?

If a miner is generating a 1-GB block and it is just too big for other miners, other miners may simply drop it. That will stop anyone from generating 1-GB blocks, because those will become orphans anyway. An equilibrium will be reached, and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be: 60 seconds if you are catching up with the blockchain, 5 seconds if you are all caught up. But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
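A minimal sketch of what such a rule could look like, assuming a hypothetical VerifyBlock() helper and the defaults proposed above (this is illustration, not existing Satoshi-client code):

Code:
// Sketch only: VerifyBlock() is a hypothetical stand-in for the full
// block verification path; the two defaults mirror the proposal above.
static const int64 MAX_VERIFY_SECONDS_SYNCING = 60;  // catching up with the chain
static const int64 MAX_VERIFY_SECONDS_CURRENT = 5;   // fully caught up

bool AcceptBlockByVerifyTime(const CBlock& block, bool fInitialDownload)
{
    int64 nStart = GetTimeMillis();
    bool fValid = VerifyBlock(block);             // scripts, signatures, etc.
    int64 nElapsedSec = (GetTimeMillis() - nStart) / 1000;

    int64 nLimit = fInitialDownload ? MAX_VERIFY_SECONDS_SYNCING
                                    : MAX_VERIFY_SECONDS_CURRENT;
    // Ignore (neither relay nor build on) blocks that take too long to verify.
    return fValid && nElapsedSec <= nLimit;
}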





This would make it very easy for a miner to fork the blockchain.
He would just have to create a block that is so large that it gets rejected by half of the network.
He could then fork one branch of the fork again.
That branch could be forked again, and so on... until the mining power on one branch is so low that he could perform a 51% attack on that branch.
member
Activity: 110
Merit: 10
April 19, 2013, 03:25:07 PM
If the BTC chain can't be successfully forked, there's always the option of starting a new, entirely separate cryptocurrency with the same rules as BTC but with a higher block size... Then, in the event that BTC can't handle its transaction volume, people will naturally want to move to this new altcoin, until eventually the majority have switched over, and that new altcoin becomes the new de facto Bitcoin.
legendary
Activity: 1722
Merit: 1217
April 19, 2013, 03:06:32 PM
I believe it is good to allow for possible future changes in the protocol, since no one can predict the future from today's environment. But then some kind of consensus-based voting/poll mechanism should become standard practice.

This is de facto required for the fork to be adopted. If there is not enough consensus, then the devs' attempts to fork the chain will fail all on their own.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
April 19, 2013, 02:50:29 PM
I believe it is good to allow for possible future changes in the protocol, since no one can predict the future from today's environment. But then some kind of consensus-based voting/poll mechanism should become standard practice.
newbie
Activity: 37
Merit: 0
April 19, 2013, 08:05:42 AM
Quote
Yes, they are. This is what being a developer means in this context: that you are a servant. A slave, if you prefer that terminology. One who obeys. An inferior. A steward. Nobody, politically speaking. I'm running out of alternative ways to put this, but I would hope you get the idea.
I think you've got that wrong. Being a developer in no way implies that you are a servant.

Lead developers in open-source projects get titles such as benevolent dictator.

In the case of bitcoin, the lead developer gets paid by the foundation, and the foundation has a bunch of important stakeholders in it. Together, the foundation probably has the political power to do anything with bitcoin that it likes, whether or not you approve.
Quote
I agree with those who push for a formal specification for the protocol instead of letting the reference implementation be the protocol. It is hard work, but the way to go IMO.
Are you willing to pay for that hard work to be done?
legendary
Activity: 2940
Merit: 1090
March 17, 2013, 02:52:57 PM
Everybody understands what replacing 1MB with 10MB means; it's not rocket science.

Yeah, it's the same kind of math as replacing $10 bitcoins with $100 bitcoins.

But wait, we're only at $50! Maybe try 5MB?

-MarkM-
newbie
Activity: 58
Merit: 0
March 17, 2013, 02:32:28 PM
How about no? Making it 10MB would just necessitate another hard fork in the future. We should have as few hard forks as possible, so make it dynamic somehow so that that part of the protocol need not ever be changed again.

One of the key elements of Bitcoin is decentralization via the P2P network. Average Joe needs to be able to run a full node. In my opinion 10MB blocks (some 100MB per hour) are acceptable for average Joe these days, but 100MB blocks (some 1GB per hour) are a bit too much now (maybe OK in 1-2 years). Unfortunately there is no reliable indicator for Bitcoin to know what current network speeds and hard-disk sizes are. If tying it to past results, big miners could game such a system. I can't see any good dynamic solution. Imho if it can have one hard fork now, then it can have another after 1-2 years. Everybody understands what replacing 1MB with 10MB means; it's not rocket science.
hero member
Activity: 501
Merit: 500
March 17, 2013, 01:00:29 PM
Certainly from a software engineering point of view, medium-term scalability is a trivial problem. An extra zero in the
Code:
static const unsigned int MAX_BLOCK_SIZE = 1000000;
line would be fine for a good while.

Yes please, 10 megabyte per block is the right answer.


How about no? Making it 10MB would just necessitate another hard fork in the future. We should have as few hard forks as possible, so make it dynamic somehow so that that part of the protocol need not ever be changed again.
newbie
Activity: 58
Merit: 0
March 17, 2013, 07:47:35 AM
Certainly from a software engineering point of view, medium-term scalability is a trivial problem. An extra zero in the
Code:
static const unsigned int MAX_BLOCK_SIZE = 1000000;
line would be fine for a good while.

Yes please, 10 megabyte per block is the right answer.

Also, 100 MB may be considered in the future, but for some users it would be a lot of traffic these days.
It would also be nice to have the old tx pruning function some day, as the database starts growing faster.

So... what happens then?  What is the method for implementing a hard fork?  No precedent, right?  Do we have a meeting?  With who?  Vote?  Ultimately it’s the miners that get to decide, right?  What if the miners like the 1MB limit, because they think the imposed scarcity of blockchain space will lead to higher transaction fees, and more bitcoin for them?  How do we decide on these things when nobody is really in charge?  Is a fork really going to happen at all?

Actually, there is a hard fork in progress right now: there is a database locking glitch in <=v0.7.2, so everyone needs to upgrade to v0.8.1 by 15/May/2013.

Increasing the block size does not seem significantly different.
First, a block number is selected from which the increased block size applies; for example block >=261840, which is expected around Oct 2013.
Then, a few months before Oct 2013, this logic is implemented in the full-node clients (Satoshi client, bitcoinj, ...) and people are asked to upgrade, or have their old clients stop working after Oct 2013.
That's all.
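A minimal sketch of that height-gated rule, using the block number mentioned above; the 10MB figure and the helper name are illustrative assumptions, not a concrete proposal:

Code:
// Sketch only: the new limit takes effect from the chosen fork height.
static const unsigned int MAX_BLOCK_SIZE_OLD = 1000000;   // 1 MB, current rule
static const unsigned int MAX_BLOCK_SIZE_NEW = 10000000;  // 10 MB, example only
static const int nForkHeight = 261840;                    // ~Oct 2013

unsigned int GetMaxBlockSize(int nHeight)
{
    return (nHeight >= nForkHeight) ? MAX_BLOCK_SIZE_NEW : MAX_BLOCK_SIZE_OLD;
}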
legendary
Activity: 1596
Merit: 1091
March 17, 2013, 12:38:55 AM
It's not the miners who make the call in the max_block_size issue, right? I mean, all the miners could gang up and say "we're not going to process blocks with more than 2 transactions" if they wanted to. It's the validating nodes that make the call as far as what will be a valid block. If all the nodes ganged up, they could change the limit as well.

Correct.

Any miner that increases MAX_BLOCK_SIZE beyond 1MB will self-select out of the network, because all other validating nodes would ignore that change.

Just like if a miner decides to issue themselves 100 BTC per block.  All other validating nodes consider that invalid data, and do not relay or process it further.
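For illustration, a condensed sketch of the two rejection rules being described; both checks exist in the Satoshi client's block validation, but the function name and signature here are simplified assumptions:

Code:
// Simplified sketch: oversized blocks and over-paying coinbases are both
// treated as invalid data and never relayed or built upon.
bool IsBlockAcceptable(const CBlock& block, int nHeight, int64 nFees)
{
    // An oversized block fails the consensus size rule...
    if (::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
        return false;
    // ...and a coinbase paying more than subsidy + fees fails the value rule.
    if (block.vtx[0].GetValueOut() > GetBlockValue(nHeight, nFees))
        return false;
    return true;
}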

donator
Activity: 1463
Merit: 1047
I outlived my lifetime membership:)
March 16, 2013, 10:08:45 PM
It's not the miners who make the call in the max_block_size issue, right? I mean, all the miners could gang up and say "we're not going to process blocks with more than 2 transactions" if they wanted to. It's the validating nodes that make the call as far as what will be a valid block. If all the nodes ganged up, they could change the limit as well.

I think miners should keep their own market-determined limit on transactions. If I were a big pool, I'd make people pay for access to my speedy blocks. They're being nice processing no-fee tx's as it is.
legendary
Activity: 1372
Merit: 1002
March 16, 2013, 09:45:01 AM
I agree with those who push for a formal specification for the protocol instead of letting the reference implementation be the protocol. It is hard work, but the way to go IMO.
Miners will have an incentive to upgrade just to be closer to the specification and have less risk of being on the "wrong side" of the fork, as the newest version of the reference implementation will probably always be the one that first implements the newest version of the spec. Just like cpython is the reference implementation for python, the specification of the language. Yes, cpython can sometimes fail to comply with python, and those are bugs to be fixed.
Actually, I thought we had something like that already with the wiki.
legendary
Activity: 2940
Merit: 1090
March 15, 2013, 06:35:19 AM
Have only read the OP, but are you saying that if everyone in the world tried to do a transaction right now, it would take 30+ years to verify them all (assuming the hardware and software remained unchanged)? Wow!

Ha ha, nice way of looking at it. I won't presume to check your math, but really, even if you dropped or picked up an order of magnitude, that still sounds like it's lucky for us that not everyone in the world is on the internet yet.

-MarkM-
full member
Activity: 124
Merit: 100
March 15, 2013, 06:05:37 AM
Have only read the OP, but are you saying that if everyone in the world tried to do a transaction right now, it would take 30+ years to verify them all (assuming the hardware and software remained unchanged)? Wow!
legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
March 15, 2013, 12:55:33 AM
Quote
Maybe basing transaction fees on the number of locks a transaction uses is another angle for a solution? Loosely specifying "1MB" as the block limit in fact abstracts a data size away from the actual physical configuration of how that data is stored/accessed, which is what actually defines the total transaction cost, including CPU cycles, disk reads/writes and storage.

Quote
Excellent reasoning... If bitcoin were about locking databases, that would be exactly the kind of quantification that should go into its specification.

Unfortunately, bitcoin is a little more about attaining, storing and transmitting a proven state of consensus than about controlling how many entities can access how many bits of it, and which bits, exactly ...

Quite. My suggestion was just an example, a hint towards engaging in some lateral thinking on the exact nature of the underlying problem we are facing here. It could be mem. locks (I hope not), it could be physical RAM space, it could be mem. accesses, CPU cycles, or total HD storage space occupied network-wide.

Point being, we are searching for a prescription for the physical, atomic limitation, as a network rule, that needs to be priced by the market of miners validating transactions and the nodes storing them, so that fees are paid and resources allocated correctly, in such a way that the network scales in the way we already know it theoretically can.

If we are going to hard fork, let's make sure it is for a justifiable, quantifiable reason. Or we could merely be embarking on a holy grail pursuit of the 'best' DB upgrades to keep scaling up, endlessly.

Bitcoin implementations could be DB-agnostic if the protocol used the right metric. Jeff Garzik has some good ideas, like a "transactions accessed" metric, as I'm sure others do also. Maybe some kind of scale-independent "transactions accessed" block limit and fee scheduling rule?
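As a purely illustrative sketch of what a "transactions accessed" metric could mean, count how many distinct prior transactions a block's inputs touch; the function name is an assumption, and Satoshi-client types are assumed:

Code:
// Illustrative only: a rough, scale-independent "transactions accessed"
// figure for a block - the number of distinct prior transactions its
// inputs reference.
unsigned int CountTransactionsAccessed(const CBlock& block)
{
    std::set<uint256> setAccessed;
    BOOST_FOREACH(const CTransaction& tx, block.vtx)
    {
        if (tx.IsCoinBase())
            continue;                                // coinbase spends nothing
        BOOST_FOREACH(const CTxIn& txin, tx.vin)
            setAccessed.insert(txin.prevout.hash);   // each distinct prior tx
    }
    return setAccessed.size();
}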
donator
Activity: 668
Merit: 500
March 14, 2013, 09:38:04 PM
Excellent reasoning... If bitcoin were about locking databases, that would be exactly the kind of quantification that should go into its specification.

Unfortunately, bitcoin is a little more about attaining, storing and transmitting a proven state of consensus than about controlling how many entities can access how many bits of it, and which bits, exactly, for that matter. If the number of simultaneous accesses to the data enters into the spec, it ought to lean more toward "as many as possible, even if that means having more than one complete copy of the entire dataset in existence on the planet at any given moment".

Basically, bitcoin is intended to give lots of entities access to the same data, so locks are actually anathema to its entire purpose and goal.

Thus, while acknowledging the brilliance of your solution, I find myself forced to say sorry, but it still seems to me that the number of database locks in a single corporation's or other entity's living, executing implementation (or copy of an implementation) is one of the many things that should, if at all possible, be abstracted away by the "second star to the right and straight on 'til morning" specification.

-MarkM-

Not only that, but it's easy to imagine implementations where verification is done in a single-threaded process that is isolated and does nothing else (and hence locks of any kind are entirely unnecessary), and any persistent db is maintained separately.

It's a shame the original bitcoin implementation was a monolithic piece of poor code. The lack of clean separation of responsibilities is really hindering its progress and testing.
legendary
Activity: 2940
Merit: 1090
March 14, 2013, 06:51:03 PM
Excellent reasoning... If bitcoin were about locking databases, that would be exactly the kind of quantification that should go into its specification.

Unfortunately, bitcoin is a little more about attaining, storing and transmitting a proven state of consensus than about controlling how many entities can access how many bits of it, and which bits, exactly, for that matter. If the number of simultaneous accesses to the data enters into the spec, it ought to lean more toward "as many as possible, even if that means having more than one complete copy of the entire dataset in existence on the planet at any given moment".

Basically, bitcoin is intended to give lots of entities access to the same data, so locks are actually anathema to its entire purpose and goal.

Thus, while acknowledging the brilliance of your solution, I find myself forced to say sorry, but it still seems to me that the number of database locks in a single corporation's or other entity's living, executing implementation (or copy of an implementation) is one of the many things that should, if at all possible, be abstracted away by the "second star to the right and straight on 'til morning" specification.

-MarkM-
legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
March 14, 2013, 04:38:29 PM
How can it be a bug if it is a clearly defined behaviour in the documentation of the s/ware dependency?
Ah, excellent, can you please send me the documentation that says exactly how many locks will be taken by each bdb operation?  I haven't been able to find that.  Thanks!


Well, that's implementation-specific, as the tutorial (posted above) states; sorry I can't be more helpful, but I am looking into it, so you'll be the first to know if I find anything.

Pre-0.8 bitcoin specified the limits for its BDB implementation in db.cpp, lines 82 and 83:

Code:
dbenv.set_lk_max_locks(10000);
dbenv.set_lk_max_objects(10000);

The main point is that this is most definitely a limitation, but not a bug nor an "unknown behaviour". Unfortunately it is a proxy limitation on bitcoin block sizes, only loosely correlated with block data size, that we will have to live with for now, as we have for some time already.

Maybe basing transaction fees on the number of locks a transaction uses is another angle for a solution? Loosely specifying "1MB" as the block limit in fact abstracts a data size away from the actual physical configuration of how that data is stored/accessed, which is what actually defines the total transaction cost, including CPU cycles, disk reads/writes and storage.

legendary
Activity: 2940
Merit: 1090
March 14, 2013, 07:08:19 AM
Argh... BDB doesn't allow for an unlimited number of locks? (how many your system can handle, that is)
It really needs to specify a limit?

Well, legend has it that once upon a time some systems had more things to do than just lock pages of databases, so, in short, yes.

Much as we have a maximum block size...

So hey, y'know, maybe that darn magic number actually is a darn specification not an implementation artifact after all?

No, wait... it merely means that it is not only the maximum size of the blocks but also the maximum size of the block index at a given time (as measured in block height) that is a crucial limit to specify; and, happily enough, it turns out that the max size of a block has a relatively predictable effect upon the size of the block index at any given "height"?

-MarkM-

EDIT: Do we need to account for orphan blocks, too? Thus need to specify an orphan rate tolerance?