
Topic: Gold collapsing. Bitcoin UP. - page 40.

legendary
Activity: 1764
Merit: 1002
August 13, 2015, 01:55:17 PM
What about lying? If enough miners claim to support larger blocks but actually don't, then part of the network will waste time producing blocks that won't be built on.  IMO, if we want to put the power directly in miners' hands it would be better to raise the limit entirely.  However, to do so we would need to test the crap out of everything to be reasonably sure that there aren't bugs that are only uncovered by larger blocks, like what happened when the soft limit was raised to 1 MB.

I don't think it would be a problem.  Like Erdogan said, the miners will use the "tip-toe" method of increasing the block size.  Worst case, a large block gets orphaned and nobody tries again for a while.  But if the larger block doesn't get orphaned, then the network will assume that that size is now supported (thereby setting a new effective upper limit).
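To make the dynamic concrete, here is a toy simulation of the "tip-toe" method in Python. The node limits, step size, and majority rule are all made-up numbers for illustration, not anything from a real client:

Code:
# Toy model of the "tip-toe" dynamic: a miner publishes a block slightly
# larger than the current effective limit; if a majority of the network
# accepts it, the effective limit ratchets up, otherwise the miner backs off.

def network_accepts(size_mb, node_limits):
    # A block gets built on if a majority of nodes/hash power accepts its size.
    accepting = sum(1 for limit in node_limits if size_mb <= limit)
    return accepting > len(node_limits) / 2

node_limits = [2, 4, 8, 8, 32, 32, 32]   # hypothetical per-node limits (MB)
effective_limit = 1.0                    # today's de facto limit

for _ in range(10):
    attempt = effective_limit * 1.1      # baby step: ~10% larger, not 100x
    if network_accepts(attempt, node_limits):
        effective_limit = attempt        # accepted: new effective upper limit
    # else: the block is orphaned and nobody tries again for a while

print(f"effective limit after tip-toeing: {effective_limit:.2f} MB")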

IMO, if we want to put the power directly in miners' hands it would be better to raise the limit entirely.

This doesn't put the power directly in the miners' hands.  It keeps the power where it already is: in everybody's hands!  It just makes it much easier for people to exercise the power they already possess.  

Quote
However, to do so we would need to test the crap out of everything to be reasonably sure that there aren't bugs that are only uncovered by larger blocks, like what happened when the soft limit was raised to 1 MB.

I disagree.  For example, I would not set my node's limit to anything greater than 32 MB until I understood the 33.5 MB message size limitation better.  I expect many people would do the same thing.  Rational miners won't dare to randomly publish a 100 MB block, because they'd be worried that it would be orphaned.
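For reference, the "33.5 MB" figure comes from the MAX_SIZE constant in Bitcoin Core's serialize.h, which caps the size of any P2P message or serialized object. A quick check of the arithmetic:

Code:
# Bitcoin Core (serialize.h) rejects any message/object larger than MAX_SIZE.
MAX_SIZE = 0x02000000          # 32 MiB
print(MAX_SIZE)                # 33554432 bytes
print(MAX_SIZE / 1e6)          # 33.554432 -> the "33.5 MB" limitation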

Furthermore, since miners would likely use the "tip-toe" method, the effective block size limit will grow only in very small increments, helping to reveal any potential limitations before they become problems.



yes, i've called this "advancing together" but "tip toeing" is an even better descriptor, as it implies small baby steps upwards as opposed to random big steps.  miners will not only do what's best for themselves but what's best for the group.  they know that all hands on deck are needed as a team to replace the existing financial order.  where BitcoinXT is going, there will be plenty of profits to be had for existing cooperative players as well as new entrants.  the stakes are enormous to the upside, but individual miners cannot afford to be caught being dishonest or attacking or they will be left behind or severely deprecated a la GHash.  what a shame to miss out on being the next JPM as a result of being greedy.
legendary
Activity: 1372
Merit: 1000
August 13, 2015, 01:51:27 PM


what is the latest version of XT? is it still a test version?
legendary
Activity: 1162
Merit: 1007
August 13, 2015, 01:36:52 PM
What about lying? If enough miners claim to support larger blocks but actually don't, then part of the network will waste time producing blocks that won't be built on.  IMO, if we want to put the power directly in miners' hands it would be better to raise the limit entirely.  However, to do so we would need to test the crap out of everything to be reasonably sure that there aren't bugs that are only uncovered by larger blocks, like what happened when the soft limit was raised to 1 MB.

I don't think it would be a problem.  Like Erdogan said, the miners will use the "tip-toe" method of increasing the block size.  Worst case, a large block gets orphaned and nobody tries again for a while.  But if the larger block doesn't get orphaned, then the network will assume that that size is now supported (thereby setting a new effective upper limit).

IMO, if we want to put the power directly in miners' hands it would be better to raise the limit entirely.

This doesn't put the power directly in the miners' hands.  It keeps the power where it already is: in everybody's hands!  It just makes it much easier for people to exercise the power they already possess.  

Quote
However, to do so we would need to test the crap out of everything to be reasonably sure that there aren't bugs that are only uncovered by larger blocks, like what happened when the soft limit was raised to 1 MB.

I disagree.  For example, I would not set my node's limit to anything greater than 32 MB until I understood the 33.5 MB message size limitation better.  I expect many people would do the same thing.  Rational miners won't dare to randomly publish a 100 MB block, because they'd be worried that it would be orphaned.

Furthermore, since miners would likely use the "tip-toe" method, the effective block size limit will grow only in very small increments, helping to reveal any potential limitations before they become problems.

legendary
Activity: 1246
Merit: 1010
August 13, 2015, 01:28:23 PM

At this point I'd say just find a way to put the forks on the market and let's arbitrage it out. I will submit if a fork cannot gain the market cap advantage, and I suspect the small-blockers will likewise if Core loses it. Money talks.

I had a strange idea recently: what if we don't even bother with BIP100, BIP101, etc., or trying to come to "consensus" in some formal way.  What if, instead, we just make it very easy for node operators to adjust their block size limit.  Imagine a drop down menu where you can select "1 MB, 2 MB, 4 MB, 8 MB, … ."  What would happen?  

Personally, I'd just select some big block size limit, like 32 MB.  This way, I'd be guaranteed to follow the longest proof of work chain, regardless of what the effective block size limit becomes.  I'd expect many people to do the same thing.  Eventually, it becomes obvious that the economic majority is supporting a larger limit, and a brave miner publishes a block that is 1.1 MB in size.  We all witness that that block did indeed get included into the longest proof of work chain, and then suddenly all miners are confident producing 1.1 MB blocks.  Thus, the effective block size limit slowly creeps upwards, as this process is repeated over and over as demand for block space grows.

TL/DR: maybe we don't need a strict definition for the max block size limit.
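A minimal sketch of the rule each node would effectively be applying (hypothetical names, Python for illustration): accept blocks up to your own configured limit, then follow the most-work chain among what you accepted:

Code:
MY_LIMIT_BYTES = 32_000_000   # operator-chosen, e.g. via the drop-down menu

def best_chain(chains, my_limit=MY_LIMIT_BYTES):
    # Follow the most-work chain whose blocks all fit under my limit; a node
    # with a generous limit therefore tracks the longest chain no matter
    # where the network's effective limit settles.
    valid = [c for c in chains if all(s <= my_limit for s in c["block_sizes"])]
    return max(valid, key=lambda c: c["work"], default=None)

chains = [{"work": 101, "block_sizes": [900_000, 1_100_000]},  # has a 1.1 MB block
          {"work": 100, "block_sizes": [900_000]}]
print(best_chain(chains)["work"])   # 101: the 1.1 MB chain wins for this node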

That is exactly what I think. The miners will have to try it out or get some feel for what they can do through other channels (social media, conferences, node versions), including associating with other miners. As long as the association is voluntary, it will not form a monopoly.


yes, this has been considered and discussed before.  The danger is that a large block miner cartel might develop naturally whose blocks put small-bandwidth players at a disadvantage.  But as others have mentioned, some people are at an electricity cost disadvantage, some bandwidth, some something else... basically it would just be another metric to take into account as you site your miners.

So I would be 100% for this if miners could only work with real txns.  But a miner could fill up a huge block with a bunch of "fake" (unrelayed, fee pays to himself) txns to artificially drive up network costs.  It's too bad Bitcoin doesn't have the "pay portion of fees to miner pool, receive portion for the next N blocks" feature... that idea closes a lot of miner loopholes.

But regardless I'm not sure if this "loophole" really is one; it does require 51% of the network to be as connected as you are and willing to process your monster garbage block.  I have a hard time believing that miners would do so since over the long term they need bitcoin to succeed.  More likely (as you guys suggest) they'll just configure their nodes to ignore monster blocks unless > N deep in the chain.

legendary
Activity: 1414
Merit: 1000
August 13, 2015, 12:54:34 PM
[...]
So…is this a good idea?  If there are no obvious "gotchas" then perhaps we should write up a BIP.


I'd be willing to help! But I'd also suggest just making it about the configurable setting and leaving the rest to the user. I think signalling about blocksize has to happen out-of-band for the time being, because it is potentially a lot of code complexity, and simple IMO beats complex here.

Just make it mandatory to start bitcoind with -maxblocksizelimit (or similar) and have an edit box for bitcoin-qt that has to be filled with a value. The amount of code change should be about the same as BIP101.
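To illustrate (the flag name above is only a suggestion, so treat this as hypothetical usage, with the value in bytes):

Code:
# refuse to start without an explicit limit, e.g.:
bitcoind -maxblocksizelimit=8000000

# or pin it in bitcoin.conf:
maxblocksizelimit=8000000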

Start requesting this value at some switchover date in the future - maybe at the beginning of Gavin's increase schedule. Reason for this: time for user education on building a functional Bitcoin network.

What do you think?

I think that all makes perfect sense, and I agree that simple is better!  Perhaps the BIP could only advocate for doing what you said to start, and then there could be a follow-up BIP to do the signalling in the block headers and to add the p2p "block size limit request" messages.  The nice thing is that the signalling stuff in the follow-up BIP would have nothing to do with the consensus layer, so it would be much easier to build support for it.  

I'd be willing to contribute to this too.  Realistically, I couldn't do any serious work on this until mid-September, however.  Timing wise, it would be great to have a polished proposal published for the second Scalability Workshop in Hong Kong probably in November or December: https://scalingbitcoin.org/

I'm actually quite excited about this idea.  It has a sort of inevitable feel to it.


What about lying? If enough miners claim to support larger blocks but actually don't, then part of the network will waste time producing blocks that won't be built on.  IMO, if we want to put the power directly in miners' hands it would be better to raise the limit entirely.  However, to do so we would need to test the crap out of everything to be reasonably sure that there aren't bugs that are only uncovered by larger blocks, like what happened when the soft limit was raised to 1 MB.

+1
 - I can't wait to see how these bloat-chain supporters hit the wall. As if it were so easy: "just increase the numbers and we will have a network ten times faster ... increase to infinity" :-)

 - please start XT tomorrow. I like fun.
hero member
Activity: 625
Merit: 501
August 13, 2015, 12:04:17 PM


I'm actually quite excited about this idea.  It has a sort of inevitable feel to it.


Yes.
I've been watching for months. This feels more right than any proposals I've seen to date.
legendary
Activity: 1904
Merit: 1002
August 13, 2015, 12:02:43 PM
[...]
So…is this a good idea?  If there are no obvious "gotchas" then perhaps we should write up a BIP.


I'd be willing to help! But I'd also suggest just making it about the configurable setting and leaving the rest to the user. I think signalling about blocksize has to happen out-of-band for the time being, because it is potentially a lot of code complexity, and simple IMO beats complex here.

Just make it mandatory to start bitcoind with -maxblocksizelimit (or similar) and have an edit box for bitcoin-qt that has to be filled with a value. The amount of code change should be about the same as BIP101.

Start requesting this value at some switchover date in the future - maybe at the beginning of Gavin's increase schedule. Reason for this: time for user education on building a functional Bitcoin network.

What do you think?

I think that all makes perfect sense, and I agree that simple is better!  Perhaps the BIP could only advocate for doing what you said to start, and then there could be a follow-up BIP to do the signalling in the block headers and to add the p2p "block size limit request" messages.  The nice thing is that the signalling stuff in the follow-up BIP would have nothing to do with the consensus layer, so it would be much easier to build support for it.  

I'd be willing to contribute to this too.  Realistically, I couldn't do any serious work on this until mid-September, however.  Timing wise, it would be great to have a polished proposal published for the second Scalability Workshop in Hong Kong probably in November or December: https://scalingbitcoin.org/

I'm actually quite excited about this idea.  It has a sort of inevitable feel to it.


What about lying? If enough miners claim to support larger blocks but actually don't, then part of the network will waste time producing blocks that won't be built on.  IMO, if we want to put the power directly in miners' hands it would be better to raise the limit entirely.  However, to do so we would need to test the crap out of everything to be reasonably sure that there aren't bugs that are only uncovered by larger blocks, like what happened when the soft limit was raised to 1 MB.
legendary
Activity: 4690
Merit: 1276
August 13, 2015, 11:57:49 AM

The problem is that the explanations for those observations are not correct, because they are tautological. They show up in conversations with goldbugs all the time.

Where do you live?  Strawman City?


Q: What is a store of value?
A: Anything that has the properties of gold.
...

Actually, Bitcoin is something I consider a 'store of value' for a closely related reason.  Neither Bitcoin nor gold has counter-party risk, and that is actually a fairly unusual characteristic these days, and one I value highly because I consider one of the most acute risks associated with wealth preservation to be an economic crisis where one can kiss almost anything with counter-party (or theft) risk bye-bye.

What gold has over Bitcoin is that it does not require a free and high-capacity internet to function.  The only 'bandwidth' that gold needs is a ticker signal, and that could be accomplished at fairly low latency even without a functional internet at all.  Even then, this need is a nicety more than a necessity.

Under conditions of economic crisis I believe it almost certain that there will be a significant clamp-down on the free flow of information.  Unfortunately this is termed an 'internet kill switch', which is misleading.  I expect it to be implemented as a shift from simply monitoring internet traffic to actively blocking the traffic that is potentially damaging to those seeking to maintain control and shape society.  IOW, we will still have access to our porn and mainstream movies from an authorized intellectual property owner through large corporate providers, but it will be too 'dangerous to society' for unauthorized people to communicate directly with one another or to allow subversive ideas and data to be disseminated.

Bitcoin, used in certain sophisticated ways which force-multiply the available bandwidth, still has 'store of value' potential to me, which is why I am still a hodler.  The simple reason for this is that it can, at least in theory, function in a world which is much different than what we see today but is very likely to exist at the time when it actually matters.

That, in a nutshell, is why I am so opposed to growing Bitcoin in simplistic ways which box us into a reliance on assumptions about the global internet based on the common experiences of today and most people's expectations for tomorrow.  'Dumb growth' hacks off what many people consider to be a vestigial appendage, but to me it represents a very big part of the 'store of value' proposition for Bitcoin, and one that makes it somewhat competitive with gold.

legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
August 13, 2015, 11:55:53 AM

i really see no technical reasons why we can't have bigger blocks.  now.


Then you're not looking hard enough.  Here, I'll help:

Quote

The vast majority of research demonstrates that blocksize does matter, blocksize caps are required to secure the network, and large blocks are a centralizing pressure.

Here’s a short list of what has been published so far:

1) No blocksize cap and no minimum fee leads to catastrophic breakage as miners chase marginal 0 fees:

    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2400519

It’s important to note that mandatory minimum fees could simply be rebated out-of-band, which would lead to the same problems.

2) a) Large mining pools make strategies other than honest mining more profitable:

    http://www.cs.cornell.edu/~ie53/publications/btcProcArXiv.pdf

2) b) In the presence of latency, some alternative selfish strategy exists that is more profitable at any mining pool size. The larger the latency, the greater the selfish mining benefit:

    http://arxiv.org/pdf/1507.06183v1.pdf

3) Mining simulations run by Pieter Wuille show that well-connected peers making up a majority of the hashing power have an advantage over less-connected ones, earning more profits per hash. Larger blocks favor these well-connected peers even further. This gets even worse as we shift from block subsidy to fee-based rewards:

    http://www.mail-archive.com/[email protected]/msg08161.html

4) Other point(s):

If there is no blocksize cap, a miner should simply snipe the fees from the latest block and try to make it stale by mining their own replacement. You get all of its fees plus any more from new transactions. Full blocks give less reward for doing so, since you have to choose which transactions to include. https://www.reddit.com/r/Bitcoin/comments/3fpuld/a_transaction_fee_market_exists_without_a_block/ctqxkq6
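A back-of-the-envelope comparison of the incentive, with illustrative numbers only (the win probability and fee levels are assumptions):

Code:
def expected_reward(subsidy, tip_fees, new_fees, p_win=0.45):
    # Extend the tip: your block collects the subsidy plus newly arrived fees.
    extend = subsidy + new_fees
    # Snipe the tip: replace it and take its fees too, but your replacement
    # only survives the race with probability p_win.
    snipe = p_win * (subsidy + tip_fees + new_fees)
    return extend, snipe

print(expected_reward(subsidy=25.0, tip_fees=1.0,  new_fees=0.5))  # (25.5, 11.925): extending wins
print(expected_reward(subsidy=1.0,  tip_fees=20.0, new_fees=0.5))  # (1.5, 9.675): sniping wins
# As rewards shift from subsidy to fees, sniping overtakes extending; full
# blocks blunt it, since a replacement can't carry all of the tip's
# transactions plus the new ones.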

"I think this is a good idea" doesn't count.

Stamping your feet and demanding things be done "now" isn't going to help XT.

"Not tonight dear"   Cheesy
legendary
Activity: 1512
Merit: 1005
August 13, 2015, 11:43:39 AM
Note that this toe-dipping is the reality even if we only go to 2 MB. There could be some bug related to the network or whatever that could slip through testing in all environments except the production blockchain. Heck, the risk is there now, unless we have had a block of exactly 1,000,000 bytes (decimal). I guess some miners are not 100 percent certain that there is not an off-by-one bug there, so they just remove one transaction to be sure.
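The off-by-one worry is easy to picture: the consensus rule rejects blocks strictly greater than the limit, so a block of exactly 1,000,000 bytes should be valid, but a plausible-looking mistake flips that. A sketch:

Code:
MAX_BLOCK_SIZE = 1_000_000   # consensus limit, decimal bytes (not 2^20)

def block_ok(serialized_size):
    # the actual rule: reject only sizes strictly greater than the limit
    return serialized_size <= MAX_BLOCK_SIZE

def block_ok_off_by_one(serialized_size):
    # the feared bug: this variant rejects an exactly-1,000,000-byte block
    return serialized_size < MAX_BLOCK_SIZE

print(block_ok(1_000_000), block_ok_off_by_one(1_000_000))   # True False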
legendary
Activity: 1512
Merit: 1005
August 13, 2015, 11:22:57 AM

At this point I'd say just find a way to put the forks on the market and let's arbitrage it out. I will submit if a fork cannot gain the market cap advantage, and I suspect the small-blockers will likewise if Core loses it. Money talks.

I had a strange idea recently: what if we don't even bother with BIP100, BIP101, etc., or trying to come to "consensus" in some formal way.  What if, instead, we just make it very easy for node operators to adjust their block size limit.  Imagine a drop down menu where you can select "1 MB, 2 MB, 4 MB, 8 MB, … ."  What would happen?  

Personally, I'd just select some big block size limit, like 32 MB.  This way, I'd be guaranteed to follow the longest proof of work chain, regardless of what the effective block size limit becomes.  I'd expect many people to do the same thing.  Eventually, it becomes obvious that the economic majority is supporting a larger limit, and a brave miner publishes a block that is 1.1 MB in size.  We all witness that that block did indeed get included into the longest proof of work chain, and then suddenly all miners are confident producing 1.1 MB blocks.  Thus, the effective block size limit slowly creeps upwards, as this process is repeated over and over as demand for block space grows.

TL/DR: maybe we don't need a strict definition for the max block size limit.

That is exactly what I think. The miners will have to try it out or get some feel for what they can do through other channels (social media, conferences, node versions), including associating with other miners. As long as the association is voluntary, it will not form a monopoly.
legendary
Activity: 1162
Merit: 1007
August 13, 2015, 11:20:31 AM
[...]
So…is this a good idea?  If there are no obvious "gotchas" then perhaps we should write up a BIP.


I'd be willing to help! But I'd also suggest just making it about the configurable setting and leaving the rest to the user. I think signalling about blocksize has to happen out-of-band for the time being, because it is potentially a lot of code complexity, and simple IMO beats complex here.

Just make it mandatory to start bitcoind with -maxblocksizelimit (or similar) and have an edit box for bitcoin-qt that has to be filled with a value. The amount of code change should be about the same as BIP101.

Start requesting this value at some switchover date in the future - maybe at the beginning of Gavin's increase schedule. Reason for this: time for user education on building a functional Bitcoin network.

What do you think?

I think that all makes perfect sense, and I agree that simple is better!  Perhaps the BIP could only advocate for doing what you said to start, and then there could be a follow-up BIP to do the signalling in the block headers and to add the p2p "block size limit request" messages.  The nice thing is that the signalling stuff in the follow-up BIP would have nothing to do with the consensus layer, so it would be much easier to build support for it.  

I'd be willing to contribute to this too.  Realistically, I couldn't do any serious work on this until mid-September, however.  Timing wise, it would be great to have a polished proposal published for the second Scalability Workshop in Hong Kong probably in November or December: https://scalingbitcoin.org/

I'm actually quite excited about this idea.  It has a sort of inevitable feel to it.
legendary
Activity: 1400
Merit: 1013
August 13, 2015, 11:17:35 AM
I'd say the term "store of value" has meaning in the context of our current world of fiat money, where you need a hedge against inflation. In the case of Bitcoin, while it is still not yet mainstream, I think a special definition is useful: an asset that retains or grows its purchasing power over the years (particularly in contrast with fiat money), with growth of course being considered even better as a store of value than simply staying level. Also, the difficulty in confiscating it should be part of its store-of-value merits.
There is certainly a difference between forms of money that work well, and forms of money that don't, that many people have observed throughout history.

The problem is that the explanations for those observations are not correct, because they are tautological. They show up in conversations with goldbugs all the time.

Q: What is a store of value?
A: Anything that has the properties of gold.

Q: Why is gold a store of value?
A: Because it has intrinsic value.

Q: What is value?
A: It's what anything with the properties of gold has.

Q: Can you give reason why I should buy your gold other than that you want to sell it?
A: ...no.
hero member
Activity: 707
Merit: 500
August 13, 2015, 11:08:00 AM

At this point I'd say just find a way to put the forks on the market and let's arbitrage it out. I will submit if a fork cannot gain the market cap advantage, and I suspect the small-blockers will likewise if Core loses it. Money talks.

I had a strange idea recently: what if we don't even bother with BIP100, BIP101, etc., or trying to come to "consensus" in some formal way.  What if, instead, we just make it very easy for node operators to adjust their block size limit.  Imagine a drop down menu where you can select "1 MB, 2 MB, 4 MB, 8 MB, … ."  What would happen?  

Personally, I'd just select some big block size limit, like 32 MB.  This way, I'd be guaranteed to follow the longest proof of work chain, regardless of what the effective block size limit becomes.  I'd expect many people to do the same thing.  Eventually, it becomes obvious that the economic majority is supporting a larger limit, and a brave miner publishes a block that is 1.1 MB is size.  We all witness that indeed that block got included into the longest proof of work chain, and then suddenly all miners are confident producing 1.1 MB blocks.  Thus, the effective block size limit slowly creeps upwards, as this process is repeated over and over as demand for block space grows.

TL/DR: maybe we don't need a strict definition for the max block size limit.

that's just a re-write of what i've been advocating; lift the limit entirely.

but yeah, your idea is great b/c it would give full node operators a sense of being in charge via a pull down menu.  i like it.

don't forget that mining pools are just huge hashing overlays of full nodes which they operate and could use to do the same type of voting.

Yes, you have been essentially advocating the same thing.  

We could take this idea further: in addition to the drop-down menu where node operators and miners select the max block size they'll accept, we could add two new features to improve communication of their decisions:

  1.  The max block size selected by a node would be written into the header of any blocks the node mines.

  2.  The P2P protocol would be extended so that nodes could poll other nodes to find out their block size limit.
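A hypothetical sketch of the two features (neither the header field nor the poll message exists in any client; the names and encoding are invented for illustration):

Code:
import struct

def encode_size_limit(limit_bytes):
    # e.g. a little-endian uint32 tucked into the coinbase or an extra
    # header field (hypothetical placement)
    return struct.pack("<I", limit_bytes)

def decode_size_limit(blob):
    return struct.unpack("<I", blob)[0]

# Polling a peer might then look like (hypothetical message names):
#   -> "getsizelimit"
#   <- "sizelimit" + encode_size_limit(32_000_000)
assert decode_size_limit(encode_size_limit(32_000_000)) == 32_000_000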

This would be a highly decentralized way of coming to consensus in a very flexible and dynamic manner.  

It would be a recognition that the block size limit is not part of the consensus layer, but rather part of the transport layer, as sickpig suggested:

you know what, I can't stop thinking that the max block size is a transport layer constraint that has crept into the consensus layer.

The network would dynamically determine the max block size as the network evolves by expressing the size of the blocks they will accept with the drop-down menu on their client.

So…is this a good idea?  If there are no obvious "gotchas" then perhaps we should write up a BIP.


It's a wonderful idea! It scales dynamically by reaching consensus in a decentralised way. The network decides and evolves almost organically. I love it.
legendary
Activity: 1036
Merit: 1000
August 13, 2015, 11:04:25 AM
It would be a recognition that the block size limit is not part of the consensus layer, but rather part of the transport layer, as sickpig suggested:

you know what, I can't stop thinking that the max block size is a transport layer constraint that has crept into the consensus layer.

The network would dynamically determine the max block size as the network evolves by expressing the size of the blocks they will accept with the drop-down menu on their client.

This seems too easy, like why wouldn't this have been thought of before. Is the idea that maybe this is one of those cases where muddled thinking (the consensus/transport layer confusion) has prevented people from seeing the obvious? I ask because I'm not sure I understand the full implications of sickpig's comment.

EDIT: I think I may get it now:

https://www.reddit.com/r/Bitcoin/comments/3eaxyk/idea_on_bitcoin_mailing_list_blocksize_freely/ctddl6h

along with why it hasn't been tried:

https://www.reddit.com/r/Bitcoin/comments/3eaxyk/idea_on_bitcoin_mailing_list_blocksize_freely/ctd812o
newbie
Activity: 28
Merit: 0
August 13, 2015, 11:03:51 AM
[...]
So…is this a good idea?  If there are no obvious "gotchas" then perhaps we should write up a BIP.


I'd be willing to help! But I'd also suggest just making it about the configurable setting and leaving the rest to the user. I think signalling about blocksize has to happen out-of-band for the time being, because it is potentially a lot of code complexity, and simple IMO beats complex here.

Just make it mandatory to start bitcoind with -maxblocksizelimit (or similar) and have an edit box for bitcoin-qt that has to be filled with a value. The amount of code change should be about the same as BIP101.

Start requesting this value at some switchover date in the future - maybe at the beginning of Gavin's increase schedule. Reason for this: time for user education on building a functional Bitcoin network.

What do you think?
legendary
Activity: 1162
Merit: 1007
August 13, 2015, 11:00:27 AM
YES! See also here: https://www.reddit.com/r/Bitcoin/comments/3eaxyk/idea_on_bitcoin_mailing_list_blocksize_freely/

Instead of a pull down menu, I would favor a free form text field without any default. (For policy neutrality)

Pushes the responsibility and the power to set this limit back to the user - where it belongs.

Thanks for the link!  Sounds like this is already a thing!  We should bring more attention to this idea and iron out the details.  
legendary
Activity: 1036
Merit: 1000
August 13, 2015, 10:58:26 AM
This is a fundamental disagreement on the value of Bitcoin then, which IMO is first SoV, then payment network.
There is no such thing as a store of value.

You've described the religious approach to understanding money.

I'd say the term "store of value" has meaning in the context of our current world of fiat money, where you need a hedge against inflation. In the case of Bitcoin, while it is still not yet mainstream, I think a special definition is useful: an asset that retains or grows its purchasing power over the years (particularly in contrast with fiat money), with growth of course being considered even better as a store of value than simply staying level. Also, the difficulty in confiscating it should be part of its store-of-value merits.

In a world where we were already using gold universally, for example, the concept of "store of value" would probably be unnecessary.

However, reading between the lines, I assume your larger point here is that these "store of value" properties rely on Bitcoin being a payment network, too, so there is no clean line where we can say, "For now Bitcoin is only an SoV, so the number of transactions people could use it for doesn't matter." Since the SoV (especially the growth aspect) in the present day owes largely to the investment premise that Bitcoin *will become* a major payment network for the world in the future, the transaction capacity going forward is a key determiner of current price upside and therefore a pivotal element of its SoV properties.
legendary
Activity: 1162
Merit: 1007
August 13, 2015, 10:56:30 AM

At this point I'd say just find a way to put the forks on the market and let's arbitrage it out. I will submit if a fork cannot gain the market cap advantage, and I suspect the small-blockers will likewise if Core loses it. Money talks.

I had a strange idea recently: what if we don't even bother with BIP100, BIP101, etc., or trying to come to "consensus" in some formal way.  What if, instead, we just make it very easy for node operators to adjust their block size limit.  Imagine a drop down menu where you can select "1 MB, 2 MB, 4 MB, 8 MB, … ."  What would happen?  

Personally, I'd just select some big block size limit, like 32 MB.  This way, I'd be guaranteed to follow the longest proof of work chain, regardless of what the effective block size limit becomes.  I'd expect many people to do the same thing.  Eventually, it becomes obvious that the economic majority is supporting a larger limit, and a brave miner publishes a block that is 1.1 MB in size.  We all witness that that block did indeed get included into the longest proof of work chain, and then suddenly all miners are confident producing 1.1 MB blocks.  Thus, the effective block size limit slowly creeps upwards, as this process is repeated over and over as demand for block space grows.

TL/DR: maybe we don't need a strict definition for the max block size limit.

that's just a re-write of what i've been advocating; lift the limit entirely.

but yeah, your idea is great b/c it would give full node operators a sense of being in charge via a pull down menu.  i like it.

don't forget that mining pools are just huge hashing overlays of full nodes which they operate and could use to do the same type of voting.

Yes, you have been essentially advocating the same thing.  

We could take this idea further: in addition to the drop-down menu where node operators and miners select the max block size they'll accept, we could add two new features to improve communication of their decisions:

  1.  The max block size selected by a node would be written into the header of any blocks the node mines.

  2.  The P2P protocol would be extended so that nodes could poll other nodes to find out their block size limit.

This would be a highly decentralized way of coming to consensus in a very flexible and dynamic manner.  

It would be a recognition that the block size limit is not part of the consensus layer, but rather part of the transport layer, as sickpig suggested:

you know what, I can't stop thinking that the max block size is a transport layer constraint that has crept into the consensus layer.

The network would dynamically determine the max block size as the network evolves by expressing the size of the blocks they will accept with the drop-down menu on their client.

So…is this a good idea?  If there are no obvious "gotchas" then perhaps we should write up a BIP.
newbie
Activity: 28
Merit: 0
August 13, 2015, 10:53:42 AM

At this point I'd say just find a way to put the forks on the market and let's arbitrage it out. I will submit if a fork cannot gain the market cap advantage, and I suspect the small-blockers will likewise if Core loses it. Money talks.

I had a strange idea recently: what if we don't even bother with BIP100, BIP101, etc., or trying to come to "consensus" in some formal way.  What if, instead, we just make it very easy for node operators to adjust their block size limit.  Imagine a drop down menu where you can select "1 MB, 2 MB, 4 MB, 8 MB, … ."  What would happen?  

Personally, I'd just select some big block size limit, like 32 MB.  This way, I'd be guaranteed to follow the longest proof of work chain, regardless of what the effective block size limit becomes.  I'd expect many people to do the same thing.  Eventually, it becomes obvious that the economic majority is supporting a larger limit, and a brave miner publishes a block that is 1.1 MB in size.  We all witness that that block did indeed get included into the longest proof of work chain, and then suddenly all miners are confident producing 1.1 MB blocks.  Thus, the effective block size limit slowly creeps upwards, as this process is repeated over and over as demand for block space grows.

TL/DR: maybe we don't need a strict definition for the max block size limit.

YES! See also here: https://www.reddit.com/r/Bitcoin/comments/3eaxyk/idea_on_bitcoin_mailing_list_blocksize_freely/

Instead of a pull down menu, I would favor a free form text field without any default. (For policy neutrality)

Pushes the responsibility and the power to set this limit back to the user - where it belongs.