Topic: The MAX_BLOCK_SIZE fork - page 6.

sr. member
Activity: 389
Merit: 250
February 04, 2013, 02:05:05 PM
#93
So, shouldn't we (you developers actually) change it as fast as possible?
legendary
Activity: 1232
Merit: 1001
February 04, 2013, 01:21:24 PM
#92
This.

I am all for "let-the-market-decide" elastic algorithms.

If you let people select what is best for their interests, they will make the best choices through multiple tries in order to maximize profit & minimize risk.

Nobody wants to lose money, and everybody wants to earn the most. Therefore the market will balance out the block size and reach a perfect equilibrium automatically.

I concur,

a kind of "natural selection" in an open market ends in the best possible solution for the current environment (hardware).

This also allows us to adapt to better hardware, as there is no way to tell with 100% certainty where development will go. (At least that's my opinion.)
legendary
Activity: 1470
Merit: 1006
Bringing Legendary Har® to you since 1952
February 04, 2013, 01:13:24 PM
#91
Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That alone will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached, and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be 60 seconds if you are catching up with the blockchain, and 5 seconds if you are all caught up. But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).

This.

I am all for "let-the-market-decide" elastic algorithms.

If you let people select what is best for their interests, they will make the best choices through multiple tries in order to maximize profit & minimize risk.

Nobody wants to lose money, and everybody wants to earn the most. Therefore the market will balance out the block size and reach a perfect equilibrium automatically.
legendary
Activity: 1792
Merit: 1111
February 04, 2013, 12:34:25 PM
#90
Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That alone will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached, and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be 60 seconds if you are catching up with the blockchain, and 5 seconds if you are all caught up. But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).



And if there are more transactions than the available block space can hold, people will pay higher transaction fees, and miners will have more money to upgrade their hardware and network for bigger blocks.
legendary
Activity: 1652
Merit: 2311
Chief Scientist
February 04, 2013, 12:17:08 PM
#89
Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That alone will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached, and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be 60 seconds if you are catching up with the blockchain, and 5 seconds if you are all caught up. But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
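Here is a minimal sketch of that default behavior, with made-up names (CBlock, VerifyBlock) standing in for the real client code:
Code:
#include <chrono>

// Sketch only: reject a block whose verification runs past the deadline.
// A real implementation would abort verification mid-way rather than
// timing it after the fact.
bool AcceptBlockWithinDeadline(const CBlock& block, bool fCatchingUp)
{
    using namespace std::chrono;
    const seconds deadline(fCatchingUp ? 60 : 5); // the proposed defaults
    const auto start = steady_clock::now();
    const bool fValid = VerifyBlock(block);       // full signature/script checks
    const auto elapsed = steady_clock::now() - start;
    return fValid && elapsed <= deadline;         // too slow to verify -> ignore
}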

hero member
Activity: 756
Merit: 522
February 04, 2013, 04:43:14 AM
#88
I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, for both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just as they can ignore any valid transaction. Currently, miners who take non-standard transactions run a higher risk of orphaned blocks, because other miners may not like those blocks.

If a miner (Bob) sees a new valid block at height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast it to the network, and other miners will choose between N and N2. Here Bob takes the risk of being orphaned, because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk, and he may decide to keep mining on N+1 instead of on N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N must eventually be orphaned. (You may call it a 51% attack, but this is exactly how the system works.)

Therefore, if the majority of miners do not like 1GB blocks, building a 1GB block will become very risky and no one will do so.

What you are describing is much worse than a mere fork; the only word I can think of for it is a shatter.

Actually, that sounds like correct behavior.
legendary
Activity: 1792
Merit: 1111
February 04, 2013, 02:42:36 AM
#87
I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, for both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just as they can ignore any valid transaction. Currently, miners who take non-standard transactions run a higher risk of orphaned blocks, because other miners may not like those blocks.

If a miner (Bob) sees a new valid block at height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast it to the network, and other miners will choose between N and N2. Here Bob takes the risk of being orphaned, because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk, and he may decide to keep mining on N+1 instead of on N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N must eventually be orphaned. (You may call it a 51% attack, but this is exactly how the system works.)

Therefore, if the majority of miners do not like 1GB blocks, building a 1GB block will become very risky and no one will do so.

What you are describing is much worse than a mere fork; the only word I can think of for it is a shatter.

This is actually happening, and it forces some miners to drop transactions from Satoshi Dice to keep their blocks slimmer. Ignoring big blocks might not be intentional, but big blocks are uncompetitive for an obvious reason: they take longer to propagate.

Maybe I should rephrase it:

Therefore, if the majority of miners are unable to handle the 1GB block N in a timely manner, they will keep building on N-1 until N is verified. Block N is exposed to a higher risk of orphaning, so building 1GB blocks will become very risky and no one will do so.
kjj
legendary
Activity: 1302
Merit: 1026
February 04, 2013, 02:30:44 AM
#86
I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, for both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just as they can ignore any valid transaction. Currently, miners who take non-standard transactions run a higher risk of orphaned blocks, because other miners may not like those blocks.

If a miner (Bob) sees a new valid block at height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast it to the network, and other miners will choose between N and N2. Here Bob takes the risk of being orphaned, because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk, and he may decide to keep mining on N+1 instead of on N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N must eventually be orphaned. (You may call it a 51% attack, but this is exactly how the system works.)

Therefore, if the majority of miners do not like 1GB blocks, building a 1GB block will become very risky and no one will do so.

What you are describing is much worse than a mere fork; the only word I can think of for it is a shatter.
legendary
Activity: 1792
Merit: 1111
February 04, 2013, 02:07:09 AM
#85
Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That alone will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached, and block space will still be scarce.

Unfortunately it's not that simple for a couple reasons.

First, right now clients will reject oversized blocks from miners. Other miners aren't the only ones who need to store the blocks; all full nodes do, even if they just want to transact without mining. So what if all the miners are fine with the 1-GB block and none of the client nodes are? Total mess. Miners are minting coins only other miners recognize, and as far as clients are concerned the network hash rate has just plummeted.

Second, right now we have a very clear method for determining the "true" blockchain: it's the valid chain with the most work. "Most work" is easily verified; everyone will agree. "Valid" is also easily tested with unambiguous rules, and everyone will agree. Miners can't "simply drop" blocks they don't like. Maybe if that block is at depth -1 from the current block, sure. But what if someone publishes a 1GB block, and then someone else publishes a 1MB block on top of that? Do you ignore both? How far back do you go to start your own chain and try to orphan that whole oversize branch?

I think you can see the mess this would create.  The bitcoin network needs to operate with nearly unanimous consensus.

I know the 1MB is a hard limit which affects both miners and clients. I'm assuming a world without MAX_BLOCK_SIZE at all, for both miners and clients.

Miners can ALWAYS drop a valid block if they don't like it, just as they can ignore any valid transaction. Currently, miners who take non-standard transactions run a higher risk of orphaned blocks, because other miners may not like those blocks.

If a miner (Bob) sees a new valid block at height N but doesn't like it for whatever reason, he will simply keep mining on top of block N-1. When Bob finds another valid block (N2), he will broadcast it to the network, and other miners will choose between N and N2. Here Bob takes the risk of being orphaned, because other miners may build on block N. If block N+1 is built on N, Bob has to reconsider the risk, and he may decide to keep mining on N+1 instead of on N-1 or his N2. However, if Bob (or his team) owns 51% of the network, he will always win and block N must eventually be orphaned. (You may call it a 51% attack, but this is exactly how the system works.)

Therefore, if the majority of miners do not like 1GB blocks, building a 1GB block will become very risky and no one will do so.
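In code, Bob's decision boils down to something like this (a sketch with made-up names; pindexDisliked is the 1GB block N):
Code:
// Mine on N-1 while the disliked block N is the tip; once N+1 (or Bob's
// own N2) becomes the tip, follow the chain the network chose.
CBlockIndex* SelectTipToMineOn(CBlockIndex* pindexBest, CBlockIndex* pindexDisliked)
{
    if (pindexBest == pindexDisliked)
        return pindexBest->pprev; // ignore N, keep mining on top of N-1
    return pindexBest;            // reconsider: the network has moved on
}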
newbie
Activity: 24
Merit: 1
February 04, 2013, 01:39:40 AM
#84
Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That alone will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached, and block space will still be scarce.

Unfortunately it's not that simple for a couple reasons.

First, right now clients will reject oversized blocks from miners. Other miners aren't the only ones who need to store the blocks; all full nodes do, even if they just want to transact without mining. So what if all the miners are fine with the 1-GB block and none of the client nodes are? Total mess. Miners are minting coins only other miners recognize, and as far as clients are concerned the network hash rate has just plummeted.

Second, right now we have a very clear method for determining the "true" blockchain: it's the valid chain with the most work. "Most work" is easily verified; everyone will agree. "Valid" is also easily tested with unambiguous rules, and everyone will agree. Miners can't "simply drop" blocks they don't like. Maybe if that block is at depth -1 from the current block, sure. But what if someone publishes a 1GB block, and then someone else publishes a 1MB block on top of that? Do you ignore both? How far back do you go to start your own chain and try to orphan that whole oversize branch?

I think you can see the mess this would create.  The bitcoin network needs to operate with nearly unanimous consensus.
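For reference, the rule we have today looks roughly like this (a sketch; CChain, IsValidChain, and nTotalWork are stand-ins, not the actual client code):
Code:
#include <vector>

// Among valid candidate chains, everyone follows the one with the most
// total work; both tests are unambiguous, so everyone agrees.
const CChain* SelectBestChain(const std::vector<const CChain*>& vChains)
{
    const CChain* pbest = nullptr;
    for (const CChain* pchain : vChains)
        if (IsValidChain(*pchain) &&
            (!pbest || pchain->nTotalWork > pbest->nTotalWork))
            pbest = pchain; // most work wins
    return pbest;
}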
legendary
Activity: 1792
Merit: 1111
February 04, 2013, 12:00:22 AM
#83
If space in a block is not a limited resource, then miners won't be able to charge for it; mining revenue will drop as the subsidy drops, and attacks will become more profitable relative to honest mining.
How many businesses can you name that maximize their profitability by restricting the number of customers they serve?

If it really worked like that, then why stop at 1 MB? Limit block sizes to a single transaction and all the miners would be rich beyond measure! That would certainly make things more decentralized because miners all over the world would invest in hardware to collect the massive fee that one lucky person per block will be willing to pay.

Why stop there? I'm going to start a car dealership and decide to only sell 10 cars per year. Because I've made the number of cars I sell a limited resource I'll be able to charge more for them, right?

Then I'll open a restaurant called "House of String-Pushing" that only serves regular food but only lets in 3 customers at a time.

If car dealerships sold cars for however much you were willing to pay, down to and including free, you can bet they'd limit the number of cars they "sold".  And I doubt you'd even get 10 out of them.

The problem is that we really don't know yet how to operate with the system we have, much less a different one.  In a decade or two, when the subsidy is no longer the dominant part of the block reward, maybe then we'll have some idea how to price transactions, and we will be able to think clearly about mechanisms to adjust the block size.

Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That alone will stop anyone from generating 1-GB blocks, because such blocks will become orphans anyway. An equilibrium will be reached, and block space will still be scarce.
kjj
legendary
Activity: 1302
Merit: 1026
February 03, 2013, 11:44:59 PM
#82
If space in a block is not a limited resource, then miners won't be able to charge for it; mining revenue will drop as the subsidy drops, and attacks will become more profitable relative to honest mining.
How many businesses can you name that maximize their profitability by restricting the number of customers they serve?

If it really worked like that, then why stop at 1 MB? Limit block sizes to a single transaction and all the miners would be rich beyond measure! That would certainly make things more decentralized because miners all over the world would invest in hardware to collect the massive fee that one lucky person per block will be willing to pay.

Why stop there? I'm going to start a car dealership and decide to only sell 10 cars per year. Because I've made the number of cars I sell a limited resource I'll be able to charge more for them, right?

Then I'll open a restaurant called "House of String-Pushing" that only serves regular food but only lets in 3 customers at a time.

If car dealerships sold cars for however much you were willing to pay, down to and including free, you can bet they'd limit the number of cars they "sold".  And I doubt you'd even get 10 out of them.

The problem is that we really don't know yet how to operate with the system we have, much less a different one.  In a decade or two, when the subsidy is no longer the dominant part of the block reward, maybe then we'll have some idea how to price transactions, and we will be able to think clearly about mechanisms to adjust the block size.
hero member
Activity: 991
Merit: 1011
February 03, 2013, 10:33:52 PM
#81
An initial split ensuring that "high or reasonable" fee transactions get processed into the blockchain within an average of 10 minutes, and "low or zero" fee transactions within an average of 20 minutes, might be the way to go.

Consider the pool of unprocessed transactions:

Each transaction has a fee in BTC and an origination time. If the transaction pool is sorted by non-zero fee size, then fm = the median (middle) fee value.

[...]

The public would learn that low or zero fee transactions take twice as long to obtain confirmation. It then opens the door for further granularity, where the lower half (or more) of the pool is divided 3, 4, or 5 times, such that very low-fee transactions take half an hour and zero-fee transactions take an average of an hour. The public will accept that as normal. Miners would reap the benefits of a block-limit-enforced fee incentive system.

I doubt that transactions are that evenly distributed over a 24h or 7-day period. You might end up with all low-fee transactions being pushed several hours, to times when rush hour applies only to people living in the middle of the Atlantic or Pacific Ocean.
Which is IMHO perfectly okay for tips or micro-donations.

legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
February 03, 2013, 05:55:56 PM
#80
I have 2.5GB of other people's gambling on my hard drive, because it's cheap.

Snap! Me too.

Zero-fee transactions are an overhead for Bitcoin. One benefit of them might be to encourage take-up by new users, maintaining the momentum of growth. If they need to be discouraged, then agreed, it could be done using the max block size limit.

An initial split ensuring that "high or reasonable" fee transactions get processed into the blockchain within an average of 10 minutes, and "low or zero" fee transactions within an average of 20 minutes, might be the way to go.

Consider the pool of unprocessed transactions:

Each transaction has a fee in BTC and an origination time. If the transaction pool is sorted by non-zero fee size, then fm = the median (middle) fee value.

The block size limit is then dynamically calculated to accommodate all transactions with a fee value > fm, plus all the remaining transactions with an origination time more than 10 minutes ago. If a large source of zero-fee transactions tried to get around this by putting, say, a 17-satoshi fee on all its transactions, then fm would likely be 17 satoshis, and those would still get delayed. A block limit of 10x the average block size during the previous difficulty period is also a desirable safeguard.
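In code, the calculation might look something like this (a sketch; CTxInfo and the field names are made up):
Code:
#include <algorithm>
#include <cstdint>
#include <vector>

struct CTxInfo { int64_t nFee; int64_t nTime; size_t nBytes; }; // made-up type

// Limit = room for every tx paying more than the median non-zero fee,
// plus every tx older than 10 minutes, capped at 10x the recent average.
size_t DynamicBlockSizeLimit(const std::vector<CTxInfo>& pool,
                             int64_t nNow, size_t nAvgBlockSize)
{
    std::vector<int64_t> vFees;
    for (const CTxInfo& tx : pool)
        if (tx.nFee > 0)
            vFees.push_back(tx.nFee);
    int64_t fm = 0;
    if (!vFees.empty()) {
        std::nth_element(vFees.begin(), vFees.begin() + vFees.size()/2, vFees.end());
        fm = vFees[vFees.size()/2];                  // median non-zero fee
    }
    size_t nLimit = 0;
    for (const CTxInfo& tx : pool)
        if (tx.nFee > fm || nNow - tx.nTime > 10*60) // above-median fee, or old
            nLimit += tx.nBytes;
    return std::min(nLimit, 10 * nAvgBlockSize);     // the 10x safeguard
}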

The public would learn that low or zero fee transactions take twice as long to obtain confirmation. It then opens the door for further granularity, where the lower half (or more) of the pool is divided 3, 4, or 5 times, such that very low-fee transactions take half an hour and zero-fee transactions take an average of an hour. The public will accept that as normal. Miners would reap the benefits of a block-limit-enforced fee incentive system.
legendary
Activity: 1400
Merit: 1013
February 03, 2013, 03:47:03 PM
#79
If space in a block is not a limited resource, then miners won't be able to charge for it; mining revenue will drop as the subsidy drops, and attacks will become more profitable relative to honest mining.
How many businesses can you name that maximize their profitability by restricting the number of customers they serve?

If it really worked like that, then why stop at 1 MB? Limit block sizes to a single transaction and all the miners would be rich beyond measure! That would certainly make things more decentralized because miners all over the world would invest in hardware to collect the massive fee that one lucky person per block will be willing to pay.

Why stop there? I'm going to start a car dealership and decide to only sell 10 cars per year. Because I've made the number of cars I sell a limited resource I'll be able to charge more for them, right?

Then I'll open a restaurant called "House of String-Pushing" that only serves regular food but only lets in 3 customers at a time.
legendary
Activity: 1232
Merit: 1001
February 03, 2013, 03:33:16 PM
#78
If space in a block is not a limited resource, then miners won't be able to charge for it; mining revenue will drop as the subsidy drops, and attacks will become more profitable relative to honest mining.

That's why transaction space should IMO be scarce, but not hard-limited.

A hard cap will just make transactions impossible at a certain point, no matter how high the fees paid. If we had 1 million legitimate transactions a day, then with the 1MB limit some 400k would never be confirmed, no matter the fees.

An algorithm adjusting the max block size in a way that keeps transaction space scarce, while ensuring all transactions can be put into the blockchain, is IMO a reasonable solution.

It would ensure that fees always have to be paid for fast transactions, while also ensuring every transaction has a chance to get confirmed.
legendary
Activity: 1246
Merit: 1016
Strength in numbers
February 03, 2013, 03:22:54 PM
#77
Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down.
Can you walk me through the reasoning that you used to conclude that bitcoin will remain more secure if it's limited to a fixed number of transactions per block?

Are you suggesting more miners will compete for the fees generated by 7 transactions per second than will compete for the fees generated by 4000 transactions per second?

If the way to maximize fee revenue is to limit the transaction rate, why do Visa, Mastercard, and every business that's trying to maximize revenue process so many of them?

If limiting the allowed number of transactions doesn't maximize revenue for any other transaction processing network, why would it work for Bitcoin?

If artificially limiting the number of transactions reduces potential revenue, how does that not result in fewer miners, and therefore more centralization?

In what scenario does your proposed solution not result in the exact opposite of what you claim to be your desired outcome?

If space in a block is not a limited resource, then miners won't be able to charge for it; mining revenue will drop as the subsidy drops, and attacks will become more profitable relative to honest mining.
legendary
Activity: 1400
Merit: 1013
February 03, 2013, 11:35:10 AM
#76
Without a sharp constraint on the maximum blocksize there is currently _no_ rational reason to believe that Bitcoin would be secure at all once the subsidy goes down.
Can you walk me through the reasoning that you used to conclude that bitcoin will remain more secure if it's limited to a fixed number of transactions per block?

Are you suggesting more miners will compete for the fees generated by 7 transactions per second than will compete for the fees generated by 4000 transactions per second?

If the way to maximize fee revenue is to limit the transaction rate, why do Visa, Mastercard, and every business that's trying to maximize revenue process so many of them?

If limiting the allowed number of transactions doesn't maximize revenue for any other transaction processing network, why would it work for Bitcoin?

If artificially limiting the number of transactions reduces potential revenue, how does that not result in fewer miners, and therefore more centralization?

In what scenario does your proposed solution not result in the exact opposite of what you claim to be your desired outcome?
newbie
Activity: 24
Merit: 1
February 03, 2013, 07:17:55 AM
#75
Wait, no, I spoke too soon.  The fee/reward ratio is a bit too simplistic. 
An attacker could publish one of those botnet-type blocks with 0 real transactions, but instead fill the block with spam transactions that were never actually sent through the network and whose inputs and outputs are all controlled by the attacker. Since the attacker also mines the block, he then gets the large fees back. This would allow an attacker to publish oversized spam blocks whose size is limited only by the number of bitcoins the attacker controls, and it doesn't cost the attacker anything. In fact he gets 25BTC with each successful attack. So an attacker controlling 1000BTC could force a ~40MB spam block into the blockchain whenever he mines a block.
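To put rough numbers on it, using the Idea 3 formula from my previous post (the 1MB base actually makes it 41MB rather than 40):
Code:
int64_t nAttackerFees = 1000;                        // BTC, self-paid and recovered
int64_t nReward = 25;                                // current block subsidy, BTC
int64_t nSpamBlockMB = 1 + nAttackerFees / nReward;  // = 41MB, at zero net cost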

Not the end of the world, but ugly. 
There are probably other holes in the idea too.
Anyway, I'm just suggesting that something akin to a (total fee/block reward) calculation may be useful.  Not sure how you'd filter out spammers with lots of bitcoins.  And filtering out spammers was the whole point (at least according to Satoshi's comments) of the initial 1MB limit.

I'll keep pondering this, though I guess it's more about WHAT the fork might be, rather than HOW (or IF) to do it.
newbie
Activity: 24
Merit: 1
February 03, 2013, 06:52:57 AM
#74
Some ideas to throw into the pile:

Idea 1: Quasi-unanimous forking.

If a block size fork is attempted, it is critical to minimize disruption to the network.  Setting it up well in advance based on block number is OK, but that lacks any kind of feedback mechanism.  I'm thinking of something like:
Code:
if (block_number > 300000 && previous_n_blocks_all_version_above(100, 2)) // hypothetical helper
    max_block_size = NEW_MAX_BLOCK_SIZE; // go on and up the limit
Maybe 100 isn't enough, but if all of the blocks in a fairly long sequence have been published by miners who have upgraded, that's a good indication that a very large super-majority of the network has switched over. I remember reading about something like this in the bitcoin-qt client documentation (block version 1 -> 2?) but can't seem to find it.

Alternatively, instead of just relying on block header versions, also look at the transaction data format version (the first 4 bytes of a tx message header). Looking at the protocol, it seems that every tx published in the block will also have that version field, so we could even say "no more than 1% of all transactions in the last 1000 blocks being version 2 means it's OK to switch to version 3".

This has the disadvantage of possibly taking forever if there are even a few holdouts (da2ce7? Grin), but my thinking is that agreement and avoiding a split blockchain are of primary importance, and a block size change should only happen if it's almost unanimous. Granted, "almost" is ambiguous: 95%? 99%? Something like that, though. So anyone who hasn't upgraded for a long time, and somehow ignored all the advisories, would just see blocks stop coming in.

Idea 2:  Measuring the "Unconfirmable Transaction Ratio"
I agree with gmaxwell that an unlimited max block size, long term, could mean disaster. While we have the 25BTC reward coming in now, I think competition for block space will more securely incentivize mining once the block reward has diminished. So basically, blocks should be full. In a bitcoin network 10 years down the road, the max_block_size should be a limitation that we're hitting on basically every block, so that fees actually mean something. Let's say there are 5MB of potential transactions that want to get published, and only 1MB can be due to the size limit. You could then say there's a 20% block inclusion rate, in that 20% of the outstanding unconfirmed transactions made it into the current block.

I realize this is a big oversimplification, and you would need to more clearly define what constitutes that 5MB "potential" pool. Basically you want a nice number for how much WOULD be confirmed except can't be due to space constraints. Every miner would report a different ratio given their inclusion criteria. But this ratio seems like an important aspect of a healthy late-stage network. (By late-stage I mean most of the coins have been mined.) Some feedback toward maintaining this ratio would seem to alleviate worries about mining incentives; a sketch is below.
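As a sketch (names made up), the ratio itself is trivial to compute; the hard part is defining the eligible pool:
Code:
// Fraction of the would-confirm pool that actually fit under the limit.
double BlockInclusionRate(size_t nIncludedBytes, size_t nEligibleBytes)
{
    if (nEligibleBytes == 0)
        return 1.0;   // nothing waiting, so effectively fully included
    return double(nIncludedBytes) / double(nEligibleBytes);
    // e.g. 1MB included of a 5MB eligible pool -> 0.20
}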

Which leads to:

Idea 3:  Fee / reward ratio block sizing.

This may have been previously proposed as it is fairly simple.  (Sorry if it has; I haven't seen it but there may be threads I haven't read.)

What if you said:
Code:
MAX_BLOCK_SIZE = 1MB + (total_block_fees / block_reward) * 1MB
so that the block size scales up as a multiple of the fee-to-reward ratio. So right now, if you wanted a 2MB block, you would need 25BTC in total fees in that block. If you wanted a 10MB block, that's 225BTC in fees.

In 4 years, when the reward is 12.5BTC, 250BTC in fees will allow for a 21MB block.
It's nice and simple, and it seems to address many of the concerns raised here. It does not remove miners' freedom to decide on fees -- blocks under 1MB have the same fee rules. Other nodes will recognize a multi-megabyte block as valid if the block had tx fees in excess of the reward (indicative of a high unconfirmable-transaction ratio).

One problem with this is that it doesn't work long term, because the reward goes to zero. So maybe put a "REAL" max size at 1GB or something, as ugly as that is. Short/medium term, though, it seems like it would work. You may get an exponentially growing max block size, but it's really slow (it doubles every few years). Another problem I can think of is an attacker including huge transaction fees just to bloat the blockchain, but that would be a very expensive attack. Even if the attacker controlled his own miners, there's a high risk he wouldn't mine his own high-fee transaction. A sketch combining the formula with the cap is below.
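Putting the formula and that "REAL" max together, the whole rule fits in a few lines (a sketch; names and units are made up, and overflow on huge fee totals is ignored):
Code:
#include <algorithm>
#include <cstdint>

static const int64_t MB = 1000000;

// Idea 3 with the hard cap bolted on; fees and reward in the same unit
// (e.g. satoshis). Returns the block size limit in bytes.
int64_t MaxBlockSize(int64_t nTotalFees, int64_t nBlockReward)
{
    if (nBlockReward <= 0)
        return 1000 * MB;                   // reward -> 0: fall back to the cap
    int64_t nLimit = MB + (nTotalFees * MB) / nBlockReward;
    return std::min(nLimit, 1000 * MB);     // the ugly "REAL" max of 1GB
}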

Please let me know what you think of these ideas, not because I think we need to implement them now, but because I think thorough discussion of the issue can be quite useful for the time when / if the block size changes.