Topic: Gold collapsing. Bitcoin UP. - page 120. (Read 2032248 times)

legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
July 14, 2015, 07:22:50 PM
I suspect we agree that should 1MB blocks become an undeniably urgent concern (e.g., if we see actual congestion where appropriate fees no longer prioritize transactions), the controversy will rapidly dissipate and be replaced by emergent rough consensus.

I do not understand why we need to increase the block size if we want to "employ minfee".
Let's try to "employ minfee", and if fees rise too high then we will increase the block size to reduce them.

It is clear that the people here are not so far apart after all. In reality it is a difference of opinion over how to reach similar goals.

What is highlighted above would be fine if we were talking about a traditional centralized system, where a few people have control of all the software instances. Of course it would be possible to wait until there is actual tx congestion or fees are tracking too high (unwise maybe, but very doable), because it would be relatively quick to upgrade and continue as before.

With a decentralized system this is a luxury which does not exist, because the many instances within it are controlled by different people with different priorities, situations, and political constraints, who speak different languages. A change that affects all of them needs to be given as much time as possible to be implemented.

Ideally, Satoshi would have listened to Jeff Garzik and caveden and implemented a flexible cap with his 1MB change in 2010. Done 5 years ago, the hard fork would have been a big fat non-event, as close to 100% of the full nodes in Bitcoin's network would have been upgraded before the first >1MB block. If the change had been made early in 2013, when the matter was heavily discussed, it would have given a 2-year delay before activation, so perhaps 95% of the full nodes would have been upgraded. This is what BIP 100 assumes is still possible, but the 1MB limit has become politicized, so 95% is unlikely to be achievable, and a rogue miner with 6% of the hash-rate could cripple Bitcoin long-term. So Gavin's BIP 101 assumes a 75% threshold, plus a grace period to help boost numbers. This is much more realistic, but a rough hard fork is now inevitable. The worst option is to wait until the change is obviously needed and try to do what Greg thinks is easy: a 2-week hard fork. This might be easy for Core Dev gurus, but for thousands of full nodes it will come as a major shock, maybe leaving 50% of the nodes on each fork for a while. A "battle of the forks" might be an interesting and amusing real-world scenario test for cryptogeeks, but for 99% of Bitcoiners it would be a nightmare: they would be scared about the fate of their BTC, and it would cause a serious loss of faith in this new paradigm of money.
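For concreteness, here is a minimal sketch of the BIP 101-style activation rule described above: lock-in at 75% of the last 1,000 blocks, then a grace period. The signalling test and the grace length are placeholder assumptions of mine, not Bitcoin Core's actual code.

Code:
# Sketch of supermajority activation; the version bit and two-week grace
# period are assumed placeholders, not the deployed rule.

GRACE_SECONDS = 14 * 24 * 3600          # assumed two-week grace period

def signals_bigger_blocks(version):
    # Placeholder test; real deployments check specific version bits.
    return bool(version & (1 << 30))

def activation_time(last_blocks):
    """last_blocks: (version, timestamp) pairs for the last 1,000 blocks."""
    votes = sum(1 for v, _ in last_blocks if signals_bigger_blocks(v))
    if votes < 750:                     # 75% supermajority threshold
        return None                     # not locked in yet
    return last_blocks[-1][1] + GRACE_SECONDS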

Being preemptive about the 1MB limit reminds me of the Y2K situation. I spent a large chunk of 1998 on this, as our company had 600 programs to change (each requiring individual review) in just one sub-system, which also had 20 million abbreviated dates in the database and datafiles (requiring conversion synchronized with the programs), many of which dated from the 1980s when storage of all types was at a premium. The testing required was laborious. Without this work the sub-system would have failed on Jan 1st 2000, costing the company tens of millions and making the name of the IT division "useless scum" in the minds of all the users. This was a centralized instance of software under the control of a handful of people. Even then the change was completed over a year in advance of the Y2K date, and worked beautifully on the day.

TL;DR:
If a software change needs 6 months or a year to happen, then get it in progress 6 months or a year before the user-base needs it.

legendary
Activity: 1652
Merit: 1000
July 14, 2015, 07:21:39 PM

I suspect we agree that should 1MB blocks become an undeniably urgent concern (e.g., if we see actual congestion where appropriate fees no longer prioritize transactions), the controversy will rapidly dissipate and be replaced by emergent rough consensus.


And I guess we will have to ask you whether the fees are "appropriate" or not.
legendary
Activity: 1652
Merit: 1000
July 14, 2015, 07:12:42 PM
TX fees are still orders of magnitude below their cost in electricity, etc., demonstrating fee pressure insufficient to develop mature markets.


I had forgotten this gem.  Roll Eyes  Fortunately your personal opinions don't matter; this is not a centrally planned economy.
legendary
Activity: 2044
Merit: 1005
July 14, 2015, 07:08:07 PM
Lightning transactions can support 7 billion people making 2 on-chain tx per year, within 2TB of storage space and 133MB blocks.
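Back-of-the-envelope arithmetic is consistent with those figures, under assumptions of my own (not from the post): ~500-byte channel open/close transactions, two per person per year, and ~150 bytes per unspent channel output.

Code:
# Rough check of the Lightning capacity claim; all constants are assumed.

PEOPLE = 7_000_000_000
ONCHAIN_TX_PER_YEAR = 2                 # channel opens/closes per person
TX_BYTES = 500                          # assumed average transaction size
BLOCKS_PER_YEAR = 6 * 24 * 365          # one block per ~10 minutes

tx_per_block = PEOPLE * ONCHAIN_TX_PER_YEAR / BLOCKS_PER_YEAR
print(tx_per_block * TX_BYTES / 1e6)    # ~133 MB blocks

utxo_bytes = PEOPLE * 2 * 150           # ~2 open channel outputs per person
print(utxo_bytes / 1e12)                # ~2.1 TB of unspent outputs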
legendary
Activity: 1414
Merit: 1000
July 14, 2015, 05:53:49 PM
OK.  Provide me with your estimates for the following (and explain how you arrived at your numbers) and I'll update my table using your numbers:
1.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are spent.
2.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are unspent.
I may be missing the context, as this thread is high volume and I've not read any of the backlog...

But for a fully verifying node, the ongoing cost of 1GB of additional transactions with all outputs spent is 0; all the cost of that 1GB comes from the bandwidth to get it to you, the verification cost, and short-term storage until it's buried, after which it need not be stored.
The cost for unspent outputs is some non-zero number which depends on your estimate of storage costs.
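To put rough numbers on the two cases, a sketch using an assumed commodity-disk price (the real figure varies widely by hardware and region):

Code:
# Spent outputs are prunable once buried, so their long-term storage cost
# rounds to zero; unspent outputs stay in the UTXO set the whole period.
# $0.04 per GB-year is my assumption, not a measured figure.

PRICE_PER_GB_YEAR = 0.04
YEARS = 5

spent_cost = 0.0                               # prunable after burial
unspent_cost = 1 * PRICE_PER_GB_YEAR * YEARS   # 1 GB kept all 5 years
print(spent_cost, unspent_cost)                # 0.0 vs ~$0.20 per node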

This thread can be hard to follow if you're not following it all the time!  

The question was in reference to a debate I was having with Odalv about the "order of magnitude" estimates shown in this table.  I was suggesting that, under the conditions considered in the table, it is cheaper for miners, and more costly for the spammer, to write the spam to the blockchain than to continually reject it:



Does CreateNewBlock currently take longer to execute if there are more TXs in a miner's mempool to pick from?  If so, this would add credence to Cypherdoc's hunch that miners are producing more empty blocks when the mempool swells.  
Yep, I already pointed that out to you specifically! It's superlinear in the mempool size (well, ignoring caching).  But that's unrelated to f2pool/antpool and the other SPV miners, as they're never calling CreateNewBlock in that case: they're mining without even validating.  One can mine on a validated chain with no transactions while waiting for CreateNewBlock (which is what Eligius does, for example).  

Sorry, yes I know you explained that.  The point I'm trying to make is that if CreateNewBlock is superlinear in mempool size, then it would not be surprising to see more empty blocks (what Cypher was calling "defensive blocks") when the mempool swells: the miners are mining on an empty block for longer while waiting for CreateNewBlock to finish.  This was Cypher's point from the very beginning, one that many people, including myself, suggested was probably not the case!  

Furthermore, how can f2pool/antpool mine a non-empty block without calling CreateNewBlock?
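To illustrate the dynamic under discussion, here is a toy sketch (not any pool's actual software) of why a swollen mempool yields more empty blocks: work on an empty block goes out immediately on a new tip, and the full template only replaces it once the superlinear template build finishes.

Code:
import time

def build_template(mempool):
    # Stand-in for CreateNewBlock; the superlinear cost is assumed.
    time.sleep(0.001 * len(mempool) ** 1.5)
    return sorted(mempool, key=lambda tx: -tx["feerate"])

def on_new_tip(tip_hash, mempool, hand_out_work):
    hand_out_work({"prev": tip_hash, "txs": []})    # empty block, instantly
    full = build_template(mempool)                  # slow when mempool swells
    hand_out_work({"prev": tip_hash, "txs": full})  # then the full block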

So pretty much it is more costly to the spammer if miners just write the spam (or accept the tx) into the block chain.

Interesting.

Sorry, but this cannot be true. It is like a perpetuum mobile. The bigger the block, the cheaper it is => let's try a 1 TB block => it must be free.

spammers don't control the size of blocks in a no-limit scenario.  miners do.  so we won't have 1TB blocks b/c miners have the incentive not to destabilize or destroy the network: they will construct blocks large enough, yet efficiently optimized, so as to not get orphaned and not cause significant centralization of full nodes.  they will also raise their minfee to keep their mempool from destabilizing their full nodes and to keep user access open and readily available.  spammers will actually have to pay instead of just recycling their unwritten spam fees.

lol, I do not understand why you do not want to create a 1TB block. The bigger the block, the more spammers have to pay and the more miners will earn. I suggest using infinite blocks, and miners will earn an infinite amount of $ and BTC.
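The missing piece in the infinite-block reductio is orphan risk: every extra byte slows propagation, and a slower block is more likely to lose the race. A toy model (all constants are my assumptions) shows why a 1TB block would earn a miner roughly nothing:

Code:
import math

BLOCK_REWARD = 25.0        # BTC subsidy in 2015
PROP_SEC_PER_MB = 10.0     # assumed relay delay per MB
BLOCK_INTERVAL = 600.0     # seconds between blocks on average

def orphan_prob(size_mb):
    # Chance a competing block appears during our propagation window.
    return 1 - math.exp(-size_mb * PROP_SEC_PER_MB / BLOCK_INTERVAL)

def expected_revenue(size_mb, fees):
    return (BLOCK_REWARD + fees) * (1 - orphan_prob(size_mb))

print(expected_revenue(1, 0.2))          # ~24.8 BTC for a 1 MB block
print(expected_revenue(1_000_000, 50))   # ~0 BTC for a 1 TB block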
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
July 14, 2015, 05:34:46 PM
You've conveniently forgotten this:

Care to make a wager, iCEBREAKER?  1 BTC that the longest proof-of-work chain contains a block larger than 1 MB by this time next year (10-Jul-2016).
If you lose, I want real Bitcoin.  If you win, expect to be paid in doublespent Gavincoins.   Cool

Did you just agree to the bet?  I can't tell.  

We'd both deposit 1 BTC to a multisig address (the 3rd key held by a neutral party).  If the chain forks, the winner would automatically have coins spendable on both sides.  

Did you agree?
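For reference, the escrow described above is a standard 2-of-3 multisig: any two of the three keys (the two bettors plus the neutral party) can move the coins. A minimal sketch of the redeem script, assuming 33-byte compressed public keys:

Code:
# Redeem script: OP_2 <pubA> <pubB> <pubC> OP_3 OP_CHECKMULTISIG

OP_2, OP_3, OP_CHECKMULTISIG = 0x52, 0x53, 0xAE

def redeem_script(pubkeys):
    assert len(pubkeys) == 3 and all(len(pk) == 33 for pk in pubkeys)
    out = bytes([OP_2])
    for pk in pubkeys:
        out += bytes([len(pk)]) + pk    # direct push of each pubkey
    return out + bytes([OP_3, OP_CHECKMULTISIG])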

I've conveniently forgotten nothing.  Your poor reading comprehension is pitiful, so I will charitably aid you by bolding the parts you most need to sound out until comprehension sets in.

1MB blocks may become a harmful constraint by this time next year, given some black swan for fiat or rapid deployment of sidechains/Lightning.  But we are not even on course to begin getting there yet, given the unwelcome distraction of the Gavinista insurgency inciting the get-rich-quick XT mob.  TX fees are still orders of magnitude below their cost in electricity, etc., demonstrating fee pressure insufficient to develop mature markets.


I suspect we agree that should 1MB blocks become an undeniably urgent concern (e.g., if we see actual congestion where appropriate fees no longer prioritize transactions), the controversy will rapidly dissipate and be replaced by emergent rough consensus.

There's also the possibility we do get a technically and politically feasible velvet divorce, in which case we can forget all this nasty controversy about the "best" block size and start a new nasty controversy about which is the "real" Bitcoin.

legendary
Activity: 1764
Merit: 1002
July 14, 2015, 05:33:21 PM
OK.  Provide me with your estimates for the following (and explain how you arrived at your numbers) and I'll update my table using your numbers:
1.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are spent.
2.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are unspent.
I may be missing the context, as this thread is high volume and I've not read any of the backlog...

But for a fully verifying node, the ongoing cost of 1GB of additional transactions with all outputs spent is 0; all the cost of that 1GB comes from the bandwidth to get it to you, the verification cost, and short-term storage until it's buried, after which it need not be stored.
The cost for unspent outputs is some non-zero number which depends on your estimate of storage costs.

This thread can be hard to follow if you're not following it all the time!  

The question was in reference to a debate I was having with Odalv about the "order of magnitude" estimates shown in this table.  I was suggesting that, under the conditions considered in the table, it is cheaper for miners, and more costly for the spammer, to write the spam to the blockchain than to continually reject it:



Does CreateNewBlock currently take longer to execute if there are more TXs in a miner's mempool to pick from?  If so, this would add credence to Cypherdoc's hunch that miners are producing more empty blocks when the mempool swells.  
Yep, I already pointed that out to you specifically! It's superlinear in the mempool size (well, ignoring caching).  But that's unrelated to f2pool/antpool and the other SPV miners, as they're never calling CreateNewBlock in that case: they're mining without even validating.  One can mine on a validated chain with no transactions while waiting for CreateNewBlock (which is what Eligius does, for example).  

Sorry, yes I know you explained that.  The point I'm trying to make is that if CreateNewBlock is superlinear in mempool size, then it would not be surprising to see more empty blocks (what Cypher was calling "defensive blocks") when the mempool swells: the miners are mining on an empty block for longer while waiting for CreateNewBlock to finish.  This was Cypher's point from the very beginning, one that many people, including myself, suggested was probably not the case!  

Furthermore, how can f2pool/antpool mine a non-empty block without calling CreateNewBlock?

So pretty much it is more costly to the spammer if miners just write the spam (or accept the tx) into the block chain.

Interesting.

Sorry, but this cannot be true. It is like a perpetuum mobile. The bigger the block, the cheaper it is => let's try a 1 TB block => it must be free.

spammers don't control the size of blocks in a no-limit scenario.  miners do.  so we won't have 1TB blocks b/c miners have the incentive not to destabilize or destroy the network: they will construct blocks large enough, yet efficiently optimized, so as to not get orphaned and not cause significant centralization of full nodes.  they will also raise their minfee to keep their mempool from destabilizing their full nodes and to keep user access open and readily available.  spammers will actually have to pay instead of just recycling their unwritten spam fees.
legendary
Activity: 1652
Merit: 1000
July 14, 2015, 05:26:40 PM
Make all the grandiose claims and populist appeals you wish.  Like you, they have no power here.

The Gavinistas will continue to discover what it means to attack a system that is defensible, diffuse, diverse, and resilient.

I hope these teachable moments will educate them on the principles of Bitcoin Sovereignty!   Smiley

If you're so sure about that, why did you chicken out when PeterR proposed the bet?

I am very sure about what I said in the above quote.

As for Peter's tangentially related bet, that's already been covered:

Care to make a wager, iCEBREAKER?  1 BTC that the longest proof-of-work chain contains a block larger than 1 MB by this time next year (10-Jul-2016).

If you lose, I want real Bitcoin.  If you win, expect to be paid in doublespent Gavincoins.   Cool

1MB blocks may become a harmful constraint by this time next year, given some black swan for fiat or rapid deployment of sidechains/Lightning.  But we are not even on course to begin getting there yet, given the unwelcome distraction of the Gavinista insurgency inciting the get-rich-quick XT mob.  TX fees are still orders of magnitude below their cost in electricity, etc., demonstrating fee pressure insufficient to develop mature markets.

I suspect we agree that should 1MB blocks become an undeniably urgent concern (e.g., if we see actual congestion where appropriate fees no longer prioritize transactions), the controversy will rapidly dissipate and be replaced by emergent rough consensus.

There's also the possibility we do get a technically and politically feasible velvet divorce, in which case we can forget all this nasty controversy about the "best" block size and start a new nasty controversy about which is the "real" Bitcoin.

Please keep up with the discussion and try not to slow down the rest of the class.  

You've conveniently forgotten this:

Care to make a wager, iCEBREAKER?  1 BTC that the longest proof-of-work chain contains a block larger than 1 MB by this time next year (10-Jul-2016).
If you lose, I want real Bitcoin.  If you win, expect to be paid in doublespent Gavincoins.   Cool

Did you just agree to the bet?  I can't tell.  

We'd both deposit 1 BTC to a multisig address (the 3rd key held by a neutral party).  If the chain forks, the winner would automatically have coins spendable on both sides.  

Did you agree?
legendary
Activity: 1414
Merit: 1000
July 14, 2015, 05:21:27 PM
OK.  Provide me with your estimates for the following (and explain how you arrived at your numbers) and I'll update my table using your numbers:
1.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are spent.
2.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are unspent.
I may be missing the context, as this thread is high volume and I've not read any of the backlog...

But for a fully verifying node, the ongoing cost of 1GB of additional transactions with all outputs spent is 0; all the cost of that 1GB comes from the bandwidth to get it to you, the verification cost, and short-term storage until it's buried, after which it need not be stored.
The cost for unspent outputs is some non-zero number which depends on your estimate of storage costs.

This thread can be hard to follow if you're not following it all the time!  

The question was in reference to a debate I was having with Odalv about the "order of magnitude" estimates shown in this table.  I was suggesting that, under the conditions considered in the table, it is cheaper for miners, and more costly for the spammer, to write the spam to the blockchain than to continually reject it:



Does CreateNewBlock currently take longer to execute if there are more TXs in a miner's mempool to pick from?  If so, this would add credence to Cypherdoc's hunch that miners are producing more empty blocks when the mempool swells.  
Yep, I already pointed that out to you specifically! It's superlinear in the mempool size (well, ignoring caching).  But that's unrelated to f2pool/antpool and the other SPV miners, as they're never calling CreateNewBlock in that case: they're mining without even validating.  One can mine on a validated chain with no transactions while waiting for CreateNewBlock (which is what Eligius does, for example).  

Sorry, yes I know you explained that.  The point I'm trying to make is that if CreateNewBlock is superlinear in mempool size, then it would not be surprising to see more empty blocks (what Cypher was calling "defensive blocks") when the mempool swells: the miners are mining on an empty block for longer while waiting for CreateNewBlock to finish.  This was Cypher's point from the very beginning, one that many people, including myself, suggested was probably not the case!  

Furthermore, how can f2pool/antpool mine a non-empty block without calling CreateNewBlock?

So pretty much it is more costly to the spammer if miners just write the spam (or accept the tx) into the block chain.

Interesting.

Sorry, but this cannot be true. It is like a perpetuum mobile. The bigger the block, the cheaper it is => let's try a 1 TB block => it must be free.
legendary
Activity: 1764
Merit: 1002
July 14, 2015, 05:04:01 PM

The block size limit is a short-term hack.  Someday we might get beyond such a limit, but it could be quite a while.  BIP100 looks promising though.

"a short term hack" is how I see it, what concerns me is developers appear to be leveraging it to push through other hacks. (hacking the hack - postponing indefinably until such time as other hard fork changes could be bundled in with this one.)

BIP100 is good in that it removes the hard limit; my reservation, though, is that it does nothing to erode the centralized control system that has evolved. I prefer BIP 101 as it implies some central gatekeepers need to eat humble pie; however, neither is my first choice.  

At this stage I'd like to start seeing more decentralized development. The notion that Bitcoin is resilient (that if the protocol is modified, the ideals will never be eroded, because it is open source and can be forked to keep the original intent) appears to be valid only so long as we share the same motives as the centralized development team.

The very idea of forking, originally proposed to protect Bitcoin's values, was vehemently opposed by the centralized developers, who expressed disdain that they were not consulted and that their process for seeking permission to propose change was not adhered to, even going so far as to call the idea of forking to remove the hard limit a threat to the very success of Bitcoin.

I think there is a distortion of perception and a lack of empathy all round. Ultimately it is the people who put economic energy into the idea that make it viable, and while developers are important, they are not the gods who conduct this experiment; it's the people who put in their economic energy.

Your reasoning is interesting to me, mostly because your evaluation appears to contradict your conclusion. And so I suspect you have some well-thought-out ideas and nuances that you've not yet communicated.

Both 100 and 101 provide a mechanism for more block size.  Choosing between the two may depend on your perspectives and assessment of different risk levels within the operating groups.

Do you see more centralized control among developers or miners?
- If development is more centralized: BIP100 (developers giving control to miners).
- If mining is more centralized: BIP101 (developers retaining control over block size increases and schedules).

Both remain fairly centralized, though both are less so than they previously have been.  From your discourse, it would seem your evaluation is that the devs are more centralized, and so you would favor BIP100 (irrespective of who authored it).

I'm not sure I see the contradiction. My understanding is based on the situation we have now, and it's a typical political one.

This trajectory started the moment we saw mining pools and solo miners contributing hashing power with little regard to hard forks. I can't remember which BIP it was, back in 2011/12, where miners had to choose which fork to support; back then I didn't care, as it was the fundamentals that were important to me, and with my limited understanding at the time this wasn't considered one. (I just supported my "political" mining pool by giving them my vote to use as they saw fit.)

Anyway, I think all developers need a reason to develop, and I'm happy with the idea that some will be commercial; however, developers are just developing the code that runs the protocol. The people who invest in Bitcoin invest because they understand the incentive structure that makes the protocol possible.

Bitcoin is more about the network of current users than it is about the code. Changing the code and protocol to appeal to old-world industries is not how we should be working; we want them to change to adopt Bitcoin.

I may be underestimating the concerns with centralized mining, but I don't see it as an issue: miners will always mine the bitcoin that has the most users, which is typically misunderstood as the most nodes. So long as miners do not have a say in changing the incentives in the protocol, I see no problem moving forward with larger blocks. (Blockstream have crossed this line.)

I am concerned that development is very centralized: just a handful of people determine the code that runs on almost 99% of nodes. I favor many implementations of the code, not just Core, so in my view BIP100 and BIP101 are a political compromise to keep centralized development in the hands of a few.

BIP100 essentially takes the block size out of the hands of developers and gives it to the miners.  They will decide if it grows or not.
BIP101 keeps block size as a centrally managed resource, pre-determined by developers; any modification up or down would have to be done by developers.

The weighting of your discussion suggests that developer centralization is the more serious concern for you, which would make BIP100 a strong favorite.

Personally I like BIP100 more also, just because it does not have the hubris to attempt to predict what future changes to block size may best suit the protocol, and leaves those decisions to the future folks who will know better than we possibly can now.
I also like that it decentralizes the management of the decision to the miners, which to me is a fine place for it.

That you favored BIP101 was the surprising part for me, I don't see why that would be the case considering your concerns.

that's a good point.

so what do you think about No Limit, which i would contend is a blend of the two BIPs, i.e., it takes the block size determination out of the hands of core dev and puts it into the hands of a decentralized, negotiated optimum between miners and users on a realtime basis going forward?
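For comparison with the no-limit idea, a sketch of how BIP 100's miner vote might work; the coinbase string format and the low-percentile rule are taken from the draft as I understand it, not from any shipped implementation.

Code:
import re

def parse_vote(coinbase_text, current_limit):
    # Miners publish a size vote in their coinbase, e.g. "/BV8000000/".
    m = re.search(r"/BV(\d+)/", coinbase_text)
    return int(m.group(1)) if m else current_limit

def next_limit(coinbase_texts, current_limit):
    # Taking a low percentile means a small minority can't force growth.
    votes = sorted(parse_vote(t, current_limit) for t in coinbase_texts)
    return votes[len(votes) // 5]       # ~20th percentile of the window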
legendary
Activity: 1400
Merit: 1013
July 14, 2015, 04:57:01 PM
It is about a majority of brain cells and intellectual authority.
The value of money comes from future economic output.

The majority of entities that will produce the most economic value in the future is the majority that matters.

In many cases, people who have accumulated a large amount of money in the present have done so because they have a high capacity to produce economic value, and so they contribute significantly to the economic majority.

This is not true in all cases, however. Someone who obtained large amounts of money through luck, or via processes which are not repeatable in the future, doesn't contribute as much to the economic majority as their current holdings would suggest.

This is the primary flaw behind those who try to frame debates in terms of "rich" vs "poor": they aren't being sufficiently precise.
legendary
Activity: 2492
Merit: 1473
LEALANA Bitcoin Grim Reaper
July 14, 2015, 04:48:03 PM
OK.  Provide me with your estimates for the following (and explain how you arrived at your numbers) and I'll update my table using your numbers:
1.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are spent.
2.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are unspent.
I may be missing the context, as this thread is high volume and I've not read any of the backlog...

But for a fully verifying node, the ongoing cost of 1GB of additional transactions with all outputs spent is 0; all the cost of that 1GB comes from the bandwidth to get it to you, the verification cost, and short-term storage until it's buried, after which it need not be stored.
The cost for unspent outputs is some non-zero number which depends on your estimate of storage costs.

This thread can be hard to follow if you're not following it all the time!  

The question was in reference to a debate I was having with Odalv about the "order of magnitude" estimates shown in this table.  I was suggesting that, under the conditions considered in the table, it is cheaper for miners, and more costly for the spammer, to write the spam to the blockchain than to continually reject it:



Does CreateNewBlock currently take longer to execute if there are more TXs in a miner's mempool to pick from?  If so, this would add credence to Cypherdoc's hunch that miners are producing more empty blocks when the mempool swells.  
Yep, I already pointed that out to you specifically! It's superlinear in the mempool size (well, ignoring caching).  But that's unrelated to f2pool/antpool and the other SPV miners, as they're never calling CreateNewBlock in that case: they're mining without even validating.  One can mine on a validated chain with no transactions while waiting for CreateNewBlock (which is what Eligius does, for example).  

Sorry, yes I know you explained that.  The point I'm trying to make is that if CreateNewBlock is superlinear in mempool size, then it would not be surprising to see more empty blocks (what Cypher was calling "defensive blocks") when the mempool swells: the miners are mining on an empty block for longer while waiting for CreateNewBlock to finish.  This was Cypher's point from the very beginning, one that many people, including myself, suggested was probably not the case!  

Furthermore, how can f2pool/antpool mine a non-empty block without calling CreateNewBlock?

So pretty much it is more costly to the spammer if miners just write the spam (or accept the tx) into the block chain.

Interesting.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 14, 2015, 04:26:28 PM

The block size limit is a short-term hack.  Someday we might get beyond such a limit, but it could be quite a while.  BIP100 looks promising though.

"a short term hack" is how I see it, what concerns me is developers appear to be leveraging it to push through other hacks. (hacking the hack - postponing indefinably until such time as other hard fork changes could be bundled in with this one.)

BIP100 is good in that it removes the hard limit; my reservation, though, is that it does nothing to erode the centralized control system that has evolved. I prefer BIP 101 as it implies some central gatekeepers need to eat humble pie; however, neither is my first choice.  

At this stage I'd like to start seeing more decentralized development. The notion that Bitcoin is resilient (that if the protocol is modified, the ideals will never be eroded, because it is open source and can be forked to keep the original intent) appears to be valid only so long as we share the same motives as the centralized development team.

The very idea of forking, originally proposed to protect Bitcoin's values, was vehemently opposed by the centralized developers, who expressed disdain that they were not consulted and that their process for seeking permission to propose change was not adhered to, even going so far as to call the idea of forking to remove the hard limit a threat to the very success of Bitcoin.

I think there is a distortion of perception and a lack of empathy all round. Ultimately it is the people who put economic energy into the idea that make it viable, and while developers are important, they are not the gods who conduct this experiment; it's the people who put in their economic energy.

Your reasoning is interesting to me, mostly because your evaluation appears to contradict your conclusion. And so I suspect you have some well-thought-out ideas and nuances that you've not yet communicated.

Both 100 and 101 provide a mechanism for more block size.  Choosing between the two may depend on your perspectives and assessment of different risk levels within the operating groups.

Do you see more centralized control among developers or miners?
- If development is more centralized: BIP100 (developers giving control to miners).
- If mining is more centralized: BIP101 (developers retaining control over block size increases and schedules).

Both remain fairly centralized, though both are less so than they previously have been.  From your discourse, it would seem your evaluation is that the devs are more centralized, and so you would favor BIP100 (irrespective of who authored it).

I'm not sure I see the contradiction. My understanding is based on the situation we have now, and it's a typical political one.

This trajectory started the moment we saw mining pools and solo miners contributing hashing power with little regard to hard forks. I can't remember which BIP it was, back in 2011/12, where miners had to choose which fork to support; back then I didn't care, as it was the fundamentals that were important to me, and with my limited understanding at the time this wasn't considered one. (I just supported my "political" mining pool by giving them my vote to use as they saw fit.)

Anyway, I think all developers need a reason to develop, and I'm happy with the idea that some will be commercial; however, developers are just developing the code that runs the protocol. The people who invest in Bitcoin invest because they understand the incentive structure that makes the protocol possible.

Bitcoin is more about the network of current users than it is about the code. Changing the code and protocol to appeal to old-world industries is not how we should be working; we want them to change to adopt Bitcoin.

I may be underestimating the concerns with centralized mining, but I don't see it as an issue: miners will always mine the bitcoin that has the most users, which is typically misunderstood as the most nodes. So long as miners do not have a say in changing the incentives in the protocol, I see no problem moving forward with larger blocks. (Blockstream have crossed this line.)

I am concerned that development is very centralized: just a handful of people determine the code that runs on almost 99% of nodes. I favor many implementations of the code, not just Core, so in my view BIP100 and BIP101 are a political compromise to keep centralized development in the hands of a few.

BIP100 essentially takes the block size out of the hands of developers and gives it to the miners.  They will decide if it grows or not.
BIP101 keeps block size as a centrally managed resource, pre-determined by developers; any modification up or down would have to be done by developers.

The weighting of your discussion suggests that developer centralization is the more serious concern for you, which would make BIP100 a strong favorite.

Personally I like BIP100 more also, just because it does not have the hubris to attempt to predict what future changes to block size may best suit the protocol, and leaves those decisions to the future folks who will know better than we possibly can now.
I also like that it decentralizes the management of the decision to the miners, which to me is a fine place for it.

That you favored BIP101 was the surprising part for me, I don't see why that would be the case considering your concerns.
legendary
Activity: 1162
Merit: 1004
July 14, 2015, 03:17:50 PM

[extensive wailing]


[gnashing of teeth]


The 1MB "short term hack" got Bitcoin to where it is today.

Go ahead and make all the noise you want.  Stamp your feet and hold your breath.

Enlist Reddit as your personal army.  Issue threats.  Spin up XT nodes.

It won't make a difference; the block size is staying at 1MB for the foreseeable future.

Nobody except your fellow Gavinistas cares how many times you fatuously repeat 'ZOMG TEMPORARY 1MB LIMIT IS TEMPORARY RAISE IT NOW.'


It won't stay, because the Gavinistas are the majority and you represent a minority.

Since when is Bitcoin about majority?!

It is about a majority of brain cells and intellectual authority.
legendary
Activity: 1372
Merit: 1000
July 14, 2015, 02:24:12 PM
I support Satoshi's ideas.

I have never heard that Satoshi proposed exponential block growth (doubling the size every 2 years).

that's a compromise Gavin made for you folks; Satoshi didn't support a limit at all.

The block size limit is a short-term hack.  Someday we might get beyond such a limit, but it could be quite a while.  BIP100 looks promising though.

"a short term hack" is how I see it, what concerns me is developers appear to be leveraging it to push through other hacks. (hacking the hack - postponing indefinably until such time as other hard fork changes could be bundled in with this one.)

BIP100 is good in that it removes the hard limit; my reservation, though, is that it does nothing to erode the centralized control system that has evolved. I prefer BIP 101 as it implies some central gatekeepers need to eat humble pie; however, neither is my first choice.  

At this stage I'd like to start seeing more decentralized development. The notion that Bitcoin is resilient (that if the protocol is modified, the ideals will never be eroded, because it is open source and can be forked to keep the original intent) appears to be valid only so long as we share the same motives as the centralized development team.

The very idea of forking, originally proposed to protect Bitcoin's values, was vehemently opposed by the centralized developers, who expressed disdain that they were not consulted and that their process for seeking permission to propose change was not adhered to, even going so far as to call the idea of forking to remove the hard limit a threat to the very success of Bitcoin.

I think there is a distortion of perception and a lack of empathy all round. Ultimately it is the people who put economic energy into the idea that make it viable, and while developers are important, they are not the gods who conduct this experiment; it's the people who put in their economic energy.

Your reasoning is interesting to me, mostly because your evaluation appears to contradict your conclusion. And so I suspect you have some well-thought-out ideas and nuances that you've not yet communicated.

Both 100 and 101 provide a mechanism for more block size.  Choosing between the two may depend on your perspectives and assessment of different risk levels within the operating groups.

Do you see more centralized control among developers or miners?
- If development is more centralized: BIP100 (developers giving control to miners).
- If mining is more centralized: BIP101 (developers retaining control over block size increases and schedules).

Both remain fairly centralized, though both are less so than they previously have been.  From your discourse, it would seem your evaluation is that the devs are more centralized, and so you would favor BIP100 (irrespective of who authored it).

I'm not sure I see the contradiction. My understanding is based on the situation we have now, and it's a typical political one.

This trajectory started the moment we saw mining pools and solo miners contributing hashing power with little regard to hard forks. I can't remember which BIP it was, back in 2011/12, where miners had to choose which fork to support; back then I didn't care, as it was the fundamentals that were important to me, and with my limited understanding at the time this wasn't considered one. (I just supported my "political" mining pool by giving them my vote to use as they saw fit.)

Anyway, I think all developers need a reason to develop, and I'm happy with the idea that some will be commercial; however, developers are just developing the code that runs the protocol. The people who invest in Bitcoin invest because they understand the incentive structure that makes the protocol possible.

Bitcoin is more about the network of current users than it is about the code. Changing the code and protocol to appeal to old-world industries is not how we should be working; we want them to change to adopt Bitcoin.

I may be underestimating the concerns with centralized mining, but I don't see it as an issue: miners will always mine the bitcoin that has the most users, which is typically misunderstood as the most nodes. So long as miners do not have a say in changing the incentives in the protocol, I see no problem moving forward with larger blocks. (Blockstream have crossed this line.)

I am concerned that development is very centralized: just a handful of people determine the code that runs on almost 99% of nodes. I favor many implementations of the code, not just Core, so in my view BIP100 and BIP101 are a political compromise to keep centralized development in the hands of a few.



 
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 14, 2015, 01:50:52 PM
I support Satoshi's ideas.

I have never heard that Satoshi proposed exponential block growth (doubling the size every 2 years).

that's a compromise Gavin made for you folks; Satoshi didn't support a limit at all.

The block size limit is a short-term hack.  Someday we might get beyond such a limit, but it could be quite a while.  BIP100 looks promising though.

"a short term hack" is how I see it, what concerns me is developers appear to be leveraging it to push through other hacks. (hacking the hack - postponing indefinably until such time as other hard fork changes could be bundled in with this one.)

BIP100 is good in that it removes the hard limit; my reservation, though, is that it does nothing to erode the centralized control system that has evolved. I prefer BIP 101 as it implies some central gatekeepers need to eat humble pie; however, neither is my first choice.  

At this stage I'd like to start seeing more decentralized development. The notion that Bitcoin is resilient (that if the protocol is modified, the ideals will never be eroded, because it is open source and can be forked to keep the original intent) appears to be valid only so long as we share the same motives as the centralized development team.

The very idea of forking, originally proposed to protect Bitcoin's values, was vehemently opposed by the centralized developers, who expressed disdain that they were not consulted and that their process for seeking permission to propose change was not adhered to, even going so far as to call the idea of forking to remove the hard limit a threat to the very success of Bitcoin.

I think there is a distortion of perception and a lack of empathy all round. Ultimately it is the people who put economic energy into the idea that make it viable, and while developers are important, they are not the gods who conduct this experiment; it's the people who put in their economic energy.

Your reasoning is interesting to me, mostly because your evaluation appears to contradict your conclusion. And so I suspect you have some well-thought-out ideas and nuances that you've not yet communicated.

Both 100 and 101 provide a mechanism for more block size.  Choosing between the two may depend on your perspectives and assessment of different risk levels within the operating groups.

Do you see more centralized control among developers or miners?
- If development is more centralized: BIP100 (developers giving control to miners).
- If mining is more centralized: BIP101 (developers retaining control over block size increases and schedules).

Both remain fairly centralized, though both are less so than they previously have been.  From your discourse, it would seem your evaluation is that the devs are more centralized, and so you would favor BIP100 (irrespective of who authored it).

my take on this is that BIP 101 would be more favorable b/c it doesn't involve "voting" twice, imo.  formal vote once, and then again with the block versioning.  an example might be that they formally vote yes and then change their minds and vote no through versioning for whatever reason.  giving miners that much say doesn't seem proportionate to everyone else's (nodes, merchants, users) participation.

i like 101 b/c it is automated and imo attempts to remove as much core dev decision-making from the process as possible.  knowing Gavin, he'd like to remove himself from the process as much as possible, except for routine maintenance on core.  he has the big picture, and all his actions to date have demonstrated a willingness to let Bitcoin run with a hands-off approach, as well as a charitable streak: giving away 10,000 BTC, Satoshi's trust in him, his refunding of BTCGuild for lost rewards from 0.8.1, and his overall demeanor and presentation.

personally, i like No Limit b/c of the free-market dynamic it creates by placing all the control onto a user/miner negotiation.  miners have every incentive to prevent bloat and the adverse network effects that come with it. 

Fundamentally, anyone with a vote could also sell their vote.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 14, 2015, 01:49:14 PM
(Matonis is clueless to claim there is a fee-market - though my estimation of him is now down to zero as he talks about monkeying with the 21M limit).
+KB blocks now.

yep. 

his whole thesis is, "ZOMG, if we yield on the block size limit, it's a foregone conclusion we'll get a supply increase!"

furthermore, listening to him dissemble over the technicalities of the block size limit is painful.  as little understanding as iCEBlow.

Well, that was sort of just him trolling (but he won't call it that); call it link-bait or whatever, but Matonis never really said any such thing.
Lawyerly parsing of his statements just says that many of the arguments are of a similar form, not that he advocates for either.

I called him on it directly also.
legendary
Activity: 2044
Merit: 1005
July 14, 2015, 01:43:16 PM
Any dev here want to work on adding a Turing-complete scripting platform (already existing, not Ethereum) to Bitcoin Core through a currently existing blockchain implementation? It will act as a testbed for Bitcoin rolling forward. You will be compensated fairly in existing blockchain tokens. Looking for someone with good core Bitcoin knowledge; it is a bit of work, and it will truly decentralize things, including all Bitcoin clones (no need for sidechains). So if you want to take part in something that really will change things for the better, let me know.
legendary
Activity: 1764
Merit: 1002
July 14, 2015, 01:13:13 PM
(Matonis is clueless to claim there is a fee-market - though my estimation of him is now down to zero as he talks about monkeying with the 21M limit).
+KB blocks now.

his whole thesis is, "ZOMG, if we yield on the block size limit, it's a foregone conclusion we'll get a supply increase!"

furthermore, listening to him dissemble over the technicalities of the block size limit is painful.  as little understanding as iCEBlow.

...says the hobbyist who frequently argues unsuccessfully with core devs about how Bitcoin works under the hood.

you mean the parts where nullc says "UTXO is not in RAM", yet we saw this from him just 2 months ago?:

https://www.reddit.com/r/Bitcoin/comments/35asg6/gavin_andresen_utxo_uhoh/cr2za45

or this part?:

OK.  Provide me with your estimates for the following (and explain how you arrived at your numbers) and I'll update my table using your numbers:
1.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are spent.
2.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are unspent.
I may be missing the context, as this thread is high volume and I've not read any of the backlog...

But for a fully verifying node, the ongoing cost of 1GB of additional transactions with all outputs spent is 0; all the cost of that 1GB comes from the bandwidth to get it to you, the verification cost, and short-term storage until it's buried, after which it need not be stored.
The cost for unspent outputs is some non-zero number which depends on your estimate of storage costs.

This thread can be hard to follow if you're not following it all the time!  

The question was in reference to a debate I was having with Odalv about the "order of magnitude" estimates shown in this table.  I was suggesting that, under the conditions considered in the table, it is cheaper for miners, and more costly for the spammer, to write the spam to the blockchain than to continually reject it:



Does CreateNewBlock currently take longer to execute if there are more TXs in a miner's mempool to pick from?  If so, this would add credence to Cypherdoc's hunch that miners are producing more empty blocks when the mempool swells.  
Yep, I already pointed that out to you specifically! It's superlinear in the mempool size (well, ignoring caching).  But that's unrelated to f2pool/antpool and the other SPV miners, as they're never calling CreateNewBlock in that case: they're mining without even validating.  One can mine on a validated chain with no transactions while waiting for CreateNewBlock (which is what Eligius does, for example).  

Sorry, yes I know you explained that.  The point I'm trying to make is that if CreateNewBlock is superlinear in mempool size, then it would not be surprising to see more empty blocks (what Cypher was calling "defensive blocks") when the mempool swells: the miners are mining on an empty block for longer while waiting for CreateNewBlock to finish.  This was Cypher's point from the very beginning, one that many people, including myself, suggested was probably not the case!  

Furthermore, how can f2pool/antpool mine a non-empty block without calling CreateNewBlock?
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
July 14, 2015, 12:48:44 PM
Make all the grandiose claims and populist appeals you wish.  Like you, they have no power here.

The Gavinistas will continue to discover what it means to attack a system that is defensible, diffuse, diverse, and resilient.

I hope these teachable moments will educate them on the principles of Bitcoin Sovereignty!   Smiley

If you're so sure about that, why did you chicken out when PeterR proposed the bet?

I am very sure about what I said in the above quote.

As for Peter's tangentially related bet, that's already been covered:

Care to make a wager, iCEBREAKER?  1 BTC that the longest proof-of-work chain contains a block larger than 1 MB by this time next year (10-Jul-2016).

If you lose, I want real Bitcoin.  If you win, expect to be paid in doublespent Gavincoins.   Cool

1MB blocks may become a harmful constraint by this time next year, given some black swan for fiat or rapid deployment of sidechains/Lightning.  But we are not even on course to begin getting there yet, given the unwelcome distraction of the Gavinista insurgency inciting the get-rich-quick XT mob.  TX fees are still orders of magnitude below their cost in electricity, etc., demonstrating fee pressure insufficient to develop mature markets.

I suspect we agree that should 1MB blocks become an undeniably urgent concern (e.g., if we see actual congestion where appropriate fees no longer prioritize transactions), the controversy will rapidly dissipate and be replaced by emergent rough consensus.

There's also the possibility we do get a technically and politically feasible velvet divorce, in which case we can forget all this nasty controversy about the "best" block size and start a new nasty controversy about which is the "real" Bitcoin.

Please keep up with the discussion and try not to slow down the rest of the class. 