
Topic: The MAX_BLOCK_SIZE fork - page 3. (Read 35546 times)

legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
March 12, 2013, 11:34:33 PM

No, you misunderstand the problem and in the process are spreading FUD. 0.8's LevelDB was required to emulate BDB behaviour, and it didn't.

Rushing everyone onto 0.8 is asking for problems.

Deepbit has been prudent and a pillar in defending the blockchain, and you are pressuring them to do what, exactly?

Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.


For the last time IT WAS NOT A BUG!

http://www.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html

0.8 LevelDB, as implemented by Mike Hearn (who also propagated the "just bump your block limit" meme with the miners), did not faithfully emulate BDB, which it was minimally required to do.

Like I said, you do not fully understand the problem, so you are not qualified to comment any further.
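To make the linked Berkeley DB page concrete: BDB grants only a configured number of locks per environment, and a block that touches more database pages than that limit allows fails to validate on the node that hits it. Below is a minimal sketch of raising that limit through BDB's C++ API; the specific values and flags are illustrative, not the settings any Bitcoin release shipped with.

```cpp
// Minimal sketch, assuming Berkeley DB's C++ API (db_cxx.h).  The limits and
// flags below are illustrative; the widely circulated post-fork workaround for
// 0.7 nodes was a DB_CONFIG file in the database directory containing a line
// such as "set_lk_max_locks 537000".
#include <db_cxx.h>

int main() {
    DbEnv env(0);

    // Raise the lock-table limits before opening the environment, so that
    // validating a large block does not exhaust the lock table.
    env.set_lk_max_locks(537000);    // maximum simultaneous locks
    env.set_lk_max_objects(537000);  // maximum simultaneously locked objects

    env.open("./database",
             DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
             DB_INIT_MPOOL | DB_INIT_TXN | DB_RECOVER,
             0);
    env.close(0);
    return 0;
}
```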
legendary
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
March 12, 2013, 11:16:25 PM

No, you misunderstand the problem and in the process are spreading FUD. 0.8's LevelDB was required to emulate BDB behaviour, and it didn't.

Rushing everyone onto 0.8 is asking for problems.

Deepbit has been prudent and a pillar in defending the blockchain, and you are pressuring them to do what, exactly?

Criticizing 0.8 for not emulating an unknown bug (let alone that it was in 3rd-party software) is itself FUD.
It appears 60% of the network would have recognized the problem block. If more people had been prepared to upgrade in a timely manner, it might have been closer to 90% and a minor issue, arguably leaving a better situation than exists now.


Please give the development team time to put together a plan.  If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.

+1

Yes. I agree with that because of where the situation is now.

legendary
Activity: 2282
Merit: 1050
Monero Core Team
March 12, 2013, 11:08:36 PM

Please give the development team time to put together a plan.  If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.

+1
legendary
Activity: 1904
Merit: 1002
March 12, 2013, 10:58:28 PM
Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.

Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor.

General question. Is Deepbit too conservative for its own good?  
They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!



Please give the development team time to put together a plan.  If the majority of miners are on 0.8, a single bad actor can cause another fork by making a block with too many transactions for <= 0.7 to handle.
legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
March 12, 2013, 10:34:15 PM
Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.

Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor.

General question. Is Deepbit too conservative for its own good?  
They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!



No, you misunderstand the problem and in the process are spreading FUD. 0.8's LevelDB was required to emulate BDB behaviour, and it didn't.

Rushing everyone onto 0.8 is asking for problems.

Deepbit has been prudent and a pillar in defending the blockchain, and you are pressuring them to do what, exactly?
legendary
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
March 12, 2013, 05:03:51 PM
Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.

Yes. All miners should be migrating to v0.8 as soon as possible (while maintaining default limits), so that the above is no longer a factor.
Edit: 0.7 until 0.8.1 is available.

General question. Is Deepbit too conservative for its own good?  
They are refusing to upgrade from version 0.3. Deepbit, please prove me wrong!

legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
March 12, 2013, 03:18:54 PM
Watching.

Seems BDB's MAX_LOCK needs to be taken into account also, for backward compatibility.
legendary
Activity: 1400
Merit: 1009
February 28, 2013, 02:57:05 PM
There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).
Using this proposal, all nodes could select for themselves what block size they are willing to accept. The only part that is missing is a way to communicate this information to the rest of the network.

Each node could keep track of the ratio of transaction size to verification time averaged over a suitable interval. Using that number it could calculate the maximum block size likely to meet the time constraint, and include that maximum block size in the version string it reports to other nodes. Then miners could make educated decisions about what size of blocks the rest of the network will accept.
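A minimal sketch of the two ideas in this post, the quoted time-to-verify cutoff and the verification-rate tracking suggested just above; every name, constant, and the "maxblock=" version-string field is hypothetical, not code from any Bitcoin release.

```cpp
// Minimal sketch, not from any Bitcoin release: combines the quoted
// time-to-verify cutoff with tracking verification speed and advertising a
// derived maximum block size.  Names, constants and the version-string field
// are assumptions for illustration.
#include <chrono>
#include <cstdint>
#include <functional>
#include <sstream>
#include <string>

static constexpr double kMaxVerifySecondsSyncing  = 60.0; // catching up with the chain
static constexpr double kMaxVerifySecondsCaughtUp =  5.0; // already at the tip

// Tracks transaction bytes verified per second over some averaging window.
struct VerifySpeedTracker {
    double totalBytes   = 0.0;
    double totalSeconds = 0.0;

    void Record(double bytes, double seconds) {
        totalBytes   += bytes;
        totalSeconds += seconds;
    }

    // Largest block (in bytes) this node expects to verify within 'budgetSeconds'.
    uint64_t MaxBlockSizeForBudget(double budgetSeconds) const {
        if (totalSeconds <= 0.0) return 0;  // no measurements yet
        return static_cast<uint64_t>((totalBytes / totalSeconds) * budgetSeconds);
    }
};

// 'verifyBlock' stands in for full validation (scripts, signatures, UTXO updates).
bool AcceptBlockByVerifyTime(const std::function<bool()>& verifyBlock,
                             bool initialBlockDownload,
                             double blockBytes,
                             VerifySpeedTracker& tracker)
{
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    const bool valid = verifyBlock();
    const double elapsed =
        std::chrono::duration<double>(clock::now() - start).count();

    tracker.Record(blockBytes, elapsed);

    const double limit = initialBlockDownload ? kMaxVerifySecondsSyncing
                                              : kMaxVerifySecondsCaughtUp;
    // Operators could override 'limit' via configuration, per the proposal.
    return valid && elapsed <= limit;
}

// Advertise the derived limit to peers, e.g. "/Satoshi:0.8.0/maxblock=4200000/"
// (the field name and placement are made up for illustration).
std::string AppendMaxBlockToVersion(const std::string& base, uint64_t maxBytes) {
    std::ostringstream out;
    out << base << "maxblock=" << maxBytes << "/";
    return out.str();
}
```

A real implementation would presumably decay or window these counters so that one unusually slow block does not dominate the advertised figure.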
legendary
Activity: 1106
Merit: 1004
February 25, 2013, 04:24:37 AM
By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.

The block subsidy will be determined by the free market once inflation is no longer relevant, iff the block size limit is dropped. Even Bitcoin inflation itself may, in a sense, one day be determined by the free market, if we start seeing investment assets quoted in Bitcoin being traded with high liquidity: such highly-liquid BTC-quoted assets would end up being used in trades, and would become a flexible monetary aggregate. Fractional reserves are not the only way to do it.

Concerning the time between blocks, there have been proposals for ways to make such a parameter fluctuate according to supply and demand. I think it was Meni Ro-something, IIRC, who once came up with such ideas. Although potentially feasible, that's a technical risk that might not be worth taking. Perhaps some alternative chain will try it one day, and if it really proves itself worthwhile as a feature, people might consider it for Bitcoin, why not. I'm just not sure it's that important; 10 minutes seems fine enough.
legendary
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
February 25, 2013, 01:02:54 AM
It is clearly the right way to go to balance the interests of all concerned parties.  Free markets are naturally good at that.

By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.


And that type of argument takes us nowhere. There have been thousands of comments on the subject, and we need to close in on a solution rather than spiral away from one. I have seen your 10-point schedule for what happens when the 1 MB blocks are saturated. There is some probability you are right, but it is not near 100%, and if you are wrong then the bitcoin train hits the buffers.

Please consider this and the next posting:
https://bitcointalksearch.org/topic/m.1556506

I am equally happy with Gavin's solution which zebedee quotes. Either is better than letting a huge unknown risk become a real event.
legendary
Activity: 1064
Merit: 1001
February 25, 2013, 12:51:10 AM
It is clearly the right way to go to balance the interests of all concerned parties.  Free markets are naturally good at that.

By this logic, we should leave it up to the free market to determine the block subsidy. And the time in between blocks.
donator
Activity: 668
Merit: 500
February 25, 2013, 12:36:34 AM
Why don't we just let miners decide the optimal block size?

If a miner generates a 1-GB block and it is just too big for other miners, they may simply drop it. That will stop anyone from generating 1-GB blocks, because those will become orphans anyway. An equilibrium will be reached and block space will still be scarce.

I think this is exactly the right thing to do.

There is still the question of what the default behavior should be. Here is a proposal:

Ignore blocks that take your node longer than N seconds to verify.

I'd propose that N be:  60 seconds if you are catching up with the blockchain.  5 seconds if you are all caught-up.  But allow miners/merchants/users to easily change those defaults.

Rationale: we should use time-to-verify as the metric, because everything revolves around the 10-minutes-per-block constant.

Time-to-verify has the nice property of scaling as hardware gets more powerful. Miners will want to create blocks that take a reasonable amount of time to propagate through the network and verify, and will have to weigh "add more transactions to blocks" versus "if I add too many, my block will be ignored by more than half the network."

Time-to-verify also has the nice property of incentivizing miners to broadcast transactions instead of 'hoarding' them, because transactions that are broadcast before they are in a block make the block faster to verify (because of the signature cache). That is good for lots of reasons (early detection of potential double-spends and spreading out the verification work over time so there isn't a blizzard of CPU work that needs to be done every time a block is found, for example).


I'm a bit late to this discussion, but I'm glad to see that an elastic, market-based solution is being seriously considered by the core developers.

It is clearly the right way to go to balance the interests of all concerned parties.  Free markets are naturally good at that.
Ari
member
Activity: 75
Merit: 10
February 11, 2013, 01:22:12 PM
As much as I would like to see some sort of constraint on blockchain bloat, if this is significantly curtailed then I suspect S.DICE shareholders will invest in mining.  I suspect that getting support from miners was a large part of the motivation for taking S.DICE public, as there clearly wasn't a need to raise capital.
legendary
Activity: 1064
Merit: 1001
February 11, 2013, 12:47:49 PM
Maybe I've got a rig of ASICs and I can process a lot more MB in 10 minutes instead of pushing up the difficulty. So maybe a GPU miner can mine 1 MB blocks, and an ASIC miner can mine 20 MB blocks for the same difficulty

The time required to mine the block is independent of the size of the block.

sr. member
Activity: 527
Merit: 250
February 11, 2013, 10:43:55 AM
Why don't we set a 1 kB limit? That way the miners will earn a lot more from fees Smiley [/ironic]

Oh yes! Because that way people would start making trades off the chain, and miners won't get those 'big fees' they are coveting. There's an optimum where the miners have the biggest earnings and don't encourage people to use whatever other coin (physical or virtual) to trade.

I am all for "let-the-market-decide" elastic algorithms.

Right now I like this.

I think 1 MB should be the smallest limit, but maybe I want to accept 4 MB of transactions, for whatever reason, and earn a lot more from fees. Maybe I've got a rig of ASICs and I can process a lot more MB in 10 minutes instead of pushing up the difficulty. So maybe a GPU miner can mine 1 MB blocks, and an ASIC miner can mine 20 MB blocks for the same difficulty, having then the same odds of solving the problem.

I am just thinking aloud Cheesy
hero member
Activity: 991
Merit: 1008
February 11, 2013, 10:37:12 AM
Very much depends on what you consider trivial. Little more than two years ago, all copies of the blockchain around the world would have fit on a single hard drive. Right now, if every client were a full node, we would need something around a thousand hard drives.

In a few years, we might easily end up with a few million or billion hard drives, assuming everyone is a full node. So IMHO how this issue is handled directly determines how many full nodes we will have long term. I have no idea how many full nodes is acceptable, 1%? 0.1%? But I am pretty sure 0.0000001% won't cut it.
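For a rough sense of the "thousand hard drives" figure (the numbers below are assumptions for illustration, not from the post): with a chain of roughly 5 GB in early 2013 and on the order of a few hundred thousand clients each keeping a full copy, the aggregate storage is already in the petabyte range.

```latex
\[
4 \times 10^{5}\ \text{nodes} \times 5\ \text{GB per copy} \approx 2\ \text{PB}
\approx 1000 \times 2\,\text{TB drives}
\]
```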
legendary
Activity: 2940
Merit: 1090
February 10, 2013, 07:49:41 PM
Don't forget merged mining. Smaller transactions could use one of the merged-mined blockchains; there are several such blockchains already. This kind of pressure might just cause one or more of them to increase in popularity, and miners would still reap the fees.

-MarkM-
sr. member
Activity: 461
Merit: 251
February 10, 2013, 06:14:47 PM
Yeah.  If there's a major split, the <1MB blockchain will probably continue for a while.  It will just get slower and slower with transactions not confirming.  It would be better if there is a clear upgrade path so we don't end up with a lot of people in that situation.

You are assuming miners want to switch.  They have a very strong incentive to keep the limit in place (higher fees, lower storage costs).
Sometimes more customers paying less results in higher profit.  Miners will surely have an incentive to lower their artificially high prices to accommodate new customers, instead of having them all go with much cheaper off-blockchain competitors.