
Topic: Bitcoin 20MB Fork - page 37. (Read 154787 times)

hero member
Activity: 1276
Merit: 622
February 24, 2015, 07:05:08 AM
...

Yes, but in the case of Uber there is an option for scalability. If prices surge for a longer period (meaning demand is increasing), more people will sign up to become drivers because it's profitable.

Bitcoin at a 1MB hard cap has no such option. I agree that priority based on fees is a good thing for the occasional peak in the number of transactions. But not if the system is at 100% capacity all the time. All we can have is about 3-4x as many transactions as we have today. And that's it.

How would you solve the problem? Or do you think this is OK? Bitcoin stays at 7 TPS forever, and only people willing to pay the fees can use it.

In this case you have answered the question of why we need altcoins ;)

sidechains?

Yes, third-party payment providers and sidechains are the solution for mass adoption (by mass I mean your 'can hardly use a cell phone' grandma). But you have to trust the security of the payment provider and/or the sidechain. Let's face it: private keys are safer in the hands of a trustworthy third party than in your grandma's. But many are unwilling to trust anyone else...

I still feel the main blockchain will need to increase the hard cap at some point. Not by a lot, to prevent bloating, but still... We could put rules in place that would safely raise the limit higher if we really need additional transactions on the main blockchain.
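For what it's worth, the ~7 TPS figure quoted in this thread falls straight out of the protocol parameters. A quick back-of-envelope sketch (the 250-byte average transaction size is an assumption; real averages vary):

```python
# Rough throughput ceiling implied by a hard block-size cap.
BLOCK_INTERVAL_S = 600   # target block interval: 10 minutes
AVG_TX_BYTES = 250       # assumed average transaction size

def max_tps(block_size_mb):
    """Upper bound on transactions per second for a given cap."""
    return block_size_mb * 1_000_000 / AVG_TX_BYTES / BLOCK_INTERVAL_S

print(round(max_tps(1), 1))    # 1 MB cap: roughly the famous ~7 TPS
print(round(max_tps(20), 1))   # 20 MB cap
```

So a 20MB cap is the "3-4x" headroom argument scaled up: about 20x today's ceiling, nothing more.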
legendary
Activity: 1260
Merit: 1002
February 24, 2015, 05:35:54 AM
oh please.. is that the best ya got?
since when do you qualify for 'complicated stuff'?

edit: +1@icebreaker +1@newliberty for the weighted arguments (+1@danielpbarron for teh lulz :-X)

Well, I thought it was obvious that that was an ironic comment about the "complicated stuff".

As for the weighted arguments... well, "Under a full load, blockchain space/priority will simply become priced at market rates instead of being subsidized like in the past." is far from weighted. It's simply biased. The blockchain shouldn't be priced at market rates. It should be open just like any other protocol. Icetard and you seem to fail to understand that Bitcoin is a protocol and it should follow the rules of a regular protocol like TCP/IP.

right, how about the adoption rate of the TCP/IP protocol?
40+ years since its inception, yet still not all of humanity™ is using the internet.. (hardly 3Bn people)

Edit: plus you actually pay for using the TCP/IP protocol.. Bandwidth, domain registration, hosting, VPS, not to mention expensive HW interfaces such as computers, etc.. Nothing is free.
legendary
Activity: 1904
Merit: 1007
February 24, 2015, 05:29:01 AM
oh please.. is that the best ya got?
since when do you qualify for 'complicated stuff'?

edit: +1@icebreaker +1@newliberty for the weighted arguments (+1@danielpbarron for teh lulz :-X)

Well, I thought it was obvious that that was an ironic comment about the "complicated stuff".

As for the weighted arguments... well, "Under a full load, blockchain space/priority will simply become priced at market rates instead of being subsidized like in the past." is far from weighted. It's simply biased. The blockchain shouldn't be priced at market rates. It should be open just like any other protocol. Icetard and you seem to fail to understand that Bitcoin is a protocol and it should follow the rules of a regular protocol like TCP/IP.
legendary
Activity: 1260
Merit: 1002
February 24, 2015, 05:17:13 AM
...when Bitcoin can't cope and grinds to a halt. 

There is no grinding to a halt.  The worst-case scenario of not resolving this in a strategic way, or not forking for a larger max block size, is that Bitcoin will continue to do what it already does better than anything else.  It does not "grind to a halt".  If your transaction is urgent, you pay a fee.

The current proposal from Gavin also guarantees that there will be more hard forks.  The proposal is a guess at what might be needed; it does not measure the block chain in any way to determine how to set the right block size.

Sell your fear elsewhere.

Exactly.  Gavin's GigabloatCoin is being sold to us based on the fear that plain old Bitcoin will explode if people actually start using it.

No such thing will happen so long as we keep the system antifragile, which includes maintaining a defensible/diverse/diffuse/resilient network.

Under a full load, blockchain space/priority will simply become priced at market rates instead of being subsidized like in the past.

It's just like Uber.  If too many people want to use the service, Surge Pricing kicks in and cheapskates can wait around for a taxi or bus instead.

Yes, but in the case of Uber there is an option for scalability. If prices surge for a longer period (meaning demand is increasing), more people will sign up to become drivers because it's profitable.

Bitcoin at a 1MB hard cap has no such option. I agree that priority based on fees is a good thing for the occasional peak in the number of transactions. But not if the system is at 100% capacity all the time. All we can have is about 3-4x as many transactions as we have today. And that's it.

How would you solve the problem? Or do you think this is OK? Bitcoin stays at 7 TPS forever, and only people willing to pay the fees can use it.

In this case you have answered the question of why we need altcoins ;)

sidechains?
hero member
Activity: 1276
Merit: 622
February 24, 2015, 05:13:52 AM
...when Bitcoin can't cope and grinds to a halt.  

There is no grinding to a halt.  The worst-case scenario of not resolving this in a strategic way, or not forking for a larger max block size, is that Bitcoin will continue to do what it already does better than anything else.  It does not "grind to a halt".  If your transaction is urgent, you pay a fee.

The current proposal from Gavin also guarantees that there will be more hard forks.  The proposal is a guess at what might be needed; it does not measure the block chain in any way to determine how to set the right block size.

Sell your fear elsewhere.

Exactly.  Gavin's GigabloatCoin is being sold to us based on the fear that plain old Bitcoin will explode if people actually start using it.

No such thing will happen so long as we keep the system antifragile, which includes maintaining a defensible/diverse/diffuse/resilient network.

Under a full load, blockchain space/priority will simply become priced at market rates instead of being subsidized like in the past.

It's just like Uber.  If too many people want to use the service, Surge Pricing kicks in and cheapskates can wait around for a taxi or bus instead.

Yes, but in the case of Uber there is an option for scalability. If prices surge for a longer period (meaning demand is increasing), more people will sign up to become drivers because it's profitable.

Bitcoin at a 1MB hard cap has no such option. I agree that priority based on fees is a good thing for the occasional peak in the number of transactions. But not if the system is at 100% capacity all the time. All we can have is about 3-4x as many transactions as we have today. And that's it.

How would you solve the problem? Or do you think this is OK? Bitcoin stays at 7 TPS forever, and only people willing to pay the fees can use it.

In this case you have answered the question of why we need altcoins ;)
hero member
Activity: 840
Merit: 1002
Simcoin Developer
February 24, 2015, 05:10:43 AM
No, it limits the severity of the damage that such an attack can cause.  It provides an upper limit, not a prevention.

That's why a sliding window with a "current size x 10" limit looks a lot better than trying to guess an arbitrary curve, as Gavin proposes.

If no better method is implemented, we should at least use this one instead of a simple exponential curve.
legendary
Activity: 1260
Merit: 1002
February 24, 2015, 04:26:28 AM
Exactly.  Gavin's GigabloatCoin is being sold to us based on the fear that plain old Bitcoin will explode if people actually start using it.
No such thing will happen so long as we keep the system antifragile, which includes maintaining a defensible/diverse/diffuse/resilient network.
Under a full load, blockchain space/priority will simply become priced at market rates instead of being subsidized like in the past.
It's just like Uber.  If too many people want to use the service, Surge Pricing kicks in and cheapskates can wait around for a taxi or bus instead.

Go and support start-ups that fail and lose tens of millions of dollars of their customers' money! It's what you do best!

Stop getting involved in complicated stuff like the increase of the block size.

oh please.. is that the best ya got?
since when do you qualify for 'complicated stuff'?

edit: +1@icebreaker +1@newliberty for the weighted arguments (+1@danielpbarron for teh lulz :-X)
legendary
Activity: 1904
Merit: 1007
February 24, 2015, 04:23:24 AM
Exactly.  Gavin's GigabloatCoin is being sold to us based on the fear that plain old Bitcoin will explode if people actually start using it.
No such thing will happen so long as we keep the system antifragile, which includes maintaining a defensible/diverse/diffuse/resilient network.
Under a full load, blockchain space/priority will simply become priced at market rates instead of being subsidized like in the past.
It's just like Uber.  If too many people want to use the service, Surge Pricing kicks in and cheapskates can wait around for a taxi or bus instead.

Go and support start-ups that fail and lose tens of millions of dollars of their customers' money! It's what you do best!

Stop getting involved in complicated stuff like the increase of the block size.
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 24, 2015, 04:18:47 AM
...when Bitcoin can't cope and grinds to a halt.  

There is no grinding to a halt.  The worst-case scenario of not resolving this in a strategic way, or not forking for a larger max block size, is that Bitcoin will continue to do what it already does better than anything else.  It does not "grind to a halt".  If your transaction is urgent, you pay a fee.

The current proposal from Gavin also guarantees that there will be more hard forks.  The proposal is a guess at what might be needed; it does not measure the block chain in any way to determine how to set the right block size.

Sell your fear elsewhere.

Exactly.  Gavin's GigabloatCoin is being sold to us based on the fear that plain old Bitcoin will explode if people actually start using it.

No such thing will happen so long as we keep the system antifragile, which includes maintaining a defensible/diverse/diffuse/resilient network.

Under a full load, blockchain space/priority will simply become priced at market rates instead of being subsidized like in the past.

It's just like Uber.  If too many people want to use the service, Surge Pricing kicks in and cheapskates can wait around for a taxi or bus instead.
legendary
Activity: 924
Merit: 1132
February 24, 2015, 02:59:47 AM
I just don't think we can afford to wait for sidechains. 

Sidechains require some substantial development and a whole lot of thought given to attacks and testing security.  Someone might be able to come up with something that "works" - i.e., you can cryptographically prove transfer from one chain to another - in a month or two, but it could be a couple of years before we've really considered it hard enough to be confident that we have thought of and effectively countered the security risks.

The thing with side chains is they are COMPLICATED.  There are a dozen new protocol features we'd have to get right, and the sequences available among those dozen, plus the already-existing, give potential attackers hundreds, or even thousands, of things they could try.  And we have to sit down, and go through those things, and say, okay, what could an attacker do with this sequence?  And what would he have to gain?  And what would it cost him?  And what would it cost the node operators?  The miners?  The users?  And how would the network react to the attack?  And if that's unacceptable, is there an easy way to limit the risk?  And if there is, can we limit the risk without impairing the functionality of some other part of the protocol?  And do the miners' motivations balance out here to make sure the side chains get decent mining coverage for security?  If they're merge mined with the main chain, how does that affect them? 

Supporting side chains is a hell of a lot more work -- design and security work, I mean -- than just getting them running.  I think, in the best case, sidechains that get deployed without a security failure or a major attack, and which don't fizzle out due to lack of mining, are a couple years away.  The dev work is relatively easy; but the design work and the security work will take that long.  And so, for now, we are discussing something a lot more simple that exposes very little new attack surface. 

A limit increase is relatively simple.  It gives an attacker exactly ONE new move, the attacker has to have major infrastructure committed to do it often enough to matter, and the fact that a limit still exists restricts the amount of damage an attacker can do with that move. 

legendary
Activity: 1330
Merit: 1000
February 24, 2015, 02:19:51 AM
The limit could be too low.  Ridiculously high may not be high enough.
Bitcoin could become wildly successful much sooner than expected.

I agree.  And the framework I posted earlier accounts for this.

My bias is to "get big fast"

Mine is too.  But it is entirely possible that exponential growth over the next twenty years will simply not be fast enough.  If you look five years out, for instance, we would be at 67 MB per block, which is roughly 20 million transactions per day.  Assuming some small fraction (1%) of people in developed countries are using Bitcoin regularly (every other day) by that time, this limit would create significant pressure for adoption of sidechains or off-chain services.  Once you assume sidechains, what is the marginal benefit of maintaining a node for a huge main blockchain that consumes half of your internet connection?
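For reference, the arithmetic behind those figures, assuming ~500 bytes per transaction and 144 blocks per day (the per-transaction size is an assumption; the block size comes from the post above):

```python
# Back-of-envelope check of the 67 MB / "roughly 20 million tx/day" figures.
BLOCKS_PER_DAY = 144     # one block per 10 minutes
AVG_TX_BYTES = 500       # assumed average transaction size

def tx_per_day(block_mb):
    """Transactions per day if every block is full at this size."""
    return block_mb * 1_000_000 * BLOCKS_PER_DAY / AVG_TX_BYTES

def chain_growth_gb_per_day(block_mb):
    """Daily blockchain growth in GB at this block size."""
    return block_mb * BLOCKS_PER_DAY / 1000

print(f"{tx_per_day(67):,.0f} tx/day")         # ~19 million, i.e. "roughly 20 million"
print(f"{chain_growth_gb_per_day(67):.2f} GB/day of chain growth")
```

Nearly 10 GB of chain growth per day is the load a full node would have to store and relay, which is the crux of the decentralization worry.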

My point is, probably most of the scaling will be done on a sidechain maintained in datacenters.  And probably you won't care about that, if it's just used to store your lunch money.  But you do care about the main chain, even if you only use it once a month.  So that's what you want to be as decentralized as possible, running on as many nodes as possible, even if it still has to grow quite a bit in order to be accessible.  There's no ideal way to have a single chain that grows and remains decentralized.  But two chains could do it.
sr. member
Activity: 532
Merit: 251
February 24, 2015, 01:34:35 AM
bah, I still think I didn't explain it in a way that will make you realize the clear point I am trying to make...

you will keep telling me I am off topic, because you think I am talking about bananas.  But really you fail to realize this topic and Ideal Money are about the same dilemma.

Ideal Money is the solution to your conflict in this thread.

There is an elephant behind you, and I want you to look, but you are too busy looking for an elephant, you tell me, so you won't look.

But the elephant is there, and you are looking for it, so you should turn around.

Does anyone understand what I mean to point out?  Can someone rearrange the language?
sr. member
Activity: 532
Merit: 251
February 24, 2015, 01:30:12 AM
In the mentioned attack, the preponderance of miners can be so conscripted.  The smaller pools will disappear.
The internet is a fuzzy machine that messes up my typing.  I keep telling you the lecture series Ideal Money is about bitcoin and specifically about this thread.  And you keep telling me it is not, and that it is about something else.

I don't think we speak the same language but I will try a few different ways.

Ideal Money is not about bananas.  It is about this decision of block size.  I know you don't believe me but you are not familiar with the lecture series.

You linked mises.org but although it might actually be related to this thread, it is not ACTUALLY about this thread.

The lecture series Ideal Money is ACTUALLY about this thread. 

I don't know how to be clearer about what I am saying.  Is there no one that reads these words correctly?

I am not off topic; Ideal Money and this thread are not merely "related", they are in fact the same topic.

I don't understand how you can't read these words in the context in which I write them.  The topic in this thread is "Ideal Money", and your conjectures on it are clearly misinformed.

People are telling me to get a life but they are not well read.

Meanwhile the Greek finance minister is getting ready to peg the Greek currency to bitcoin through smart contracts, likely because he attended the Ideal Money lecture that was given in Greece, and I'm not sure you have a clue what is going on.

Nobody here knows what bitcoin is until they read the supporting literature for it: Ideal Money.

Ideal Money happens when all major currencies stabilize in relation to bitcoin (summary)
Asymptotically Ideal Money is bitcoin, as perfectly explained in the lectures...

Now to be clear again, I didn't say Asymptotically Ideal Money is LIKE bitcoin, I said it IS bitcoin... I'm not sure you understand what point I am making, and I don't see why, other than that you are addicted to fighting unsolvable conflicts.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
February 24, 2015, 12:26:10 AM
I'm with you there.  It also makes the attack orders of magnitude less costly though (because the cost is primarily the orphan cost).

Not really.  IBLT only produces O(1) propagation times if all nodes already know about the transactions in question.  If an attacker colludes with a miner to fill giant blocks with spam, then that miner loses the O(1) propagation benefit.  So the attacking miner's orphan rate skyrockets at a time when honest miners see their orphan rates fall.  Since the true cost of orphans is the relative difference, this would make those types of attacks far more expensive.

With IBLT, the upper safe block size becomes limited by the internode bandwidth.  A miner can't necessarily know the txn set in the memory pool of all nodes, but he can guess.  If txn volume exceeds full-node resources, nodes will need to drop some transactions.  They will find it most efficient to drop those transactions least likely to be included in a block.  That means sorting txns by fees and priority, just like miners do.  So miners will want to pick a subset of those sorted transactions which meets the bandwidth requirements of most nodes.

It doesn't need to be perfect, but there is an inflection point below which the orphan rate is essentially flat relative to size, and above which it increases linearly.  The miner may not know the exact inflection point, but being conservative (a smaller block) effectively costs nothing, at least while the subsidy is high, whereas being too aggressive can be very expensive.  Since the fees paid are unlikely to compensate for a linear growth in orphan probability, there is no economic incentive to mine above that point.  Uncertainty combined with a non-linear risk/reward profile means they will probably underestimate rather than overestimate; otherwise the miner is simply working harder and taking more risk for less net income.

In the mentioned attack, the preponderance of miners can be so conscripted.  The smaller pools will disappear.
legendary
Activity: 1904
Merit: 1007
February 24, 2015, 12:22:42 AM
Bitcoin is money.
The blockchain is big enough as it is right now.
The huge blockchain is driving people away from running full nodes.

Bitcoin is a protocol; it's not money just because you want it to be so. Move along.

MP is certainly no fan of altcoins.  Neither is Theymos or tvbcof or NewLiberty or sardokan or davout or Pete Dushenski.

I missed the reason why davout opposes the hard fork. He just likes to be against it because it's hip.

IT'S NOT THE FUCKING "HARDWARE REQUIREMENTS" IT'S THE FUCKING BANDWIDTH.

It's not only the fucking USA that counts in the Bitcoin world. Other countries have cheap bandwidth costs.

GET IT THROUGH YOUR THICK FUCKING SKULL! Tard, go and support some failed start ups because that's what you do best!
donator
Activity: 1218
Merit: 1079
Gerald Davis
February 23, 2015, 11:25:05 PM
I'm with you there.  It also makes the attack orders of magnitude less costly though (because the cost is primarily the orphan cost).

Not really.  IBLT only produces O(1) propagation times if all nodes already know about the transactions in question.  If an attacker colludes with a miner to fill giant blocks with spam, then that miner loses the O(1) propagation benefit.  So the attacking miner's orphan rate skyrockets at a time when honest miners see their orphan rates fall.  Since the true cost of orphans is the relative difference, this would make those types of attacks far more expensive.

With IBLT, the upper safe block size becomes limited by the internode bandwidth.  A miner can't necessarily know the txn set in the memory pool of all nodes, but he can guess.  If txn volume exceeds full-node resources, nodes will need to drop some transactions.  They will find it most efficient to drop those transactions least likely to be included in a block.  That means sorting txns by fees and priority, just like miners do.  So miners will want to pick a subset of those sorted transactions which meets the bandwidth requirements of most nodes.

It doesn't need to be perfect, but there is an inflection point below which the orphan rate is essentially flat relative to size, and above which it increases linearly.  The miner may not know the exact inflection point, but being conservative (a smaller block) effectively costs nothing, at least while the subsidy is high, whereas being too aggressive can be very expensive.  Since the fees paid are unlikely to compensate for a linear growth in orphan probability, there is no economic incentive to mine above that point.  Uncertainty combined with a non-linear risk/reward profile means they will probably underestimate rather than overestimate; otherwise the miner is simply working harder and taking more risk for less net income.
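That incentive argument can be made concrete with a toy model. All the numbers below are illustrative assumptions, not measured values; the point is only the shape of the curve described above: flat orphan risk up to an inflection size, rising linearly beyond it.

```python
# Toy model: a miner's expected block revenue vs. block size, under the
# orphan-risk regime described above. All parameters are made up for
# illustration; only the qualitative shape matters.
SUBSIDY = 25.0        # block subsidy in BTC (2015-era)
FEE_PER_MB = 0.05     # assumed fee revenue per MB of included txns
INFLECTION_MB = 8.0   # size below which orphan risk is ~flat
BASE_ORPHAN = 0.01    # baseline orphan probability
ORPHAN_SLOPE = 0.02   # extra orphan probability per MB past the inflection

def expected_revenue(size_mb):
    """Expected revenue = (subsidy + fees) * P(block is not orphaned)."""
    orphan = BASE_ORPHAN + ORPHAN_SLOPE * max(0.0, size_mb - INFLECTION_MB)
    return (SUBSIDY + FEE_PER_MB * size_mb) * (1.0 - min(orphan, 1.0))

# Below the inflection, bigger blocks mean strictly more expected revenue;
# past it, the extra fees no longer cover the extra orphan risk.
for size in (2, 8, 12, 20):
    print(size, round(expected_revenue(size), 3))
```

With any parameters of this shape, expected revenue peaks at (or just below) the inflection point, which is the "no economic incentive to mine above that point" conclusion.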

legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
February 23, 2015, 11:13:36 PM
Don't forget that IBLT (set reconciliation) will reduce the size of new block announcements by about two orders of magnitude, so 20MB blocks could take 200KB to announce (although in practice the minimum IBLT might be a bit larger).  Only node bootstrapping and resync will need full-size blocks sent over the network.  A lot of dev work is going on, so maybe this will happen soon after Gavin's v4 blocks.  The relay service also reduces transmitted block size (by about 80-90%, I think I remember reading), and this is already live.

The efficiency of this will improve over time, and it will also hugely reduce one class of spamming, where a miner keeps a lot of spam transactions secret and eventually blasts out a large solved block full of them.

I'm with you there.  It also makes the attack orders of magnitude less costly though (because the cost is primarily the orphan cost).
The biggest unknown in all this guessing is probably the rate of growth of Bitcoin.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
February 23, 2015, 09:17:41 PM
Don't forget that IBLT (set reconciliation) will reduce the size of new block announcements by about two orders of magnitude, so 20MB blocks could take 200KB to announce (although in practice the minimum IBLT might be a bit larger).  Only node bootstrapping and resync will need full-size blocks sent over the network.  A lot of dev work is going on, so maybe this will happen soon after Gavin's v4 blocks.  The relay service also reduces transmitted block size (by about 80-90%, I think I remember reading), and this is already live.

The efficiency of this will improve over time, and it will also hugely reduce one class of spamming, where a miner keeps a lot of spam transactions secret and eventually blasts out a large solved block full of them.
legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
February 23, 2015, 07:49:44 PM
With all due respect, contrast this limit (or any limit) with unlimited.

Nobody (well, nobody with influence to make it happen) has proposed unlimited blocks.  20MB is just as finite as 1MB, and so is 1GB.

Quote
It does indeed prevent spam attacks.
No, it limits the severity of the damage that such an attack can cause.  It provides an upper limit, not a prevention.  Many corporate wire accounts have an upper limit on the value of wires that can be sent in one day.  That doesn't prevent a fraudulent wire transfer, but it does cap the most the company could lose due to such fraud.  The bean counters at a company will impose a limit that balances the needs of the company against the loss the company could face.  If the upper bound of the loss is less than what would cripple the company, then no single compromise could bring down the company.  The block limit is the same thing.  It is saying "worst case scenario, how much damage are we talking about?".  "How much?" is a good question to ask, but you have to be asking the right question.  The question of what limit prevents spam is the wrong question, because the limit doesn't prevent spam.
We disagree and here's where.
It is the difference between preventing spam and preventing a spam attack.

Take your example of wire transfers.  If initiating a wire transfer cost the initiator less than it costs in processing for the institution effecting it, and all of the counter-party institutions had to put forth some effort for no benefit and only cost, it would be similar.  A competing bank could then flood another bank's wire transfer process with a very high load of transfers, driving down all of its competition at only a tiny cost to itself.  This is what miners can do.  The fees they pay to themselves cost them nothing, but EVERYONE has to store those transactions FOREVER.

The limit does nothing to prevent spam.  Having a limit prevents spam from being used for attacks on Bitcoin.  You may think of it as an economic game-theory problem.  If this were not a problem, then we could have unlimited blocks.  In my nomenclature, that is a phase 3 solution.

Quote
and the proposal is for 16x the current risk and x16000 over 20 years.
In nominal terms, yes, but the cost per unit of bandwidth (and CPU time, memory, and storage as well) falls over time.  I mean, even 1MB blocks would have been massive 20 years ago, when the most common form of connectivity was a 56K modem.

So the problem could be expressed as both a short-term and a longer-term problem.  What is the maximum block size that could be created today without exceeding the bandwidth resources of a well-connected node?  If it is x, and bandwidth availability per unit of cost increases by y per year, then in n years a block size of x*(1+y)^n presents no more of a burden than a block size of x does today.
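That schedule is easy to tabulate. A sketch using the quantities just defined (the 11 MB starting point and 20%/year growth rate are the very parameters under debate in this thread, not settled values):

```python
def max_block_mb(x_mb, y, n):
    """Limit after n years: start at x MB, grow by a factor (1+y) per year."""
    return x_mb * (1 + y) ** n

# e.g. an 11 MB starting point with 20%/year bandwidth growth:
for n in (0, 10, 20, 40):
    print(f"year {n:2d}: {max_block_mb(11, 0.20, n):,.0f} MB")
```

The point of the closed form is that the limit's burden stays constant relative to a node's resources, provided the assumed growth rate y does not outpace real bandwidth growth.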

For the record, I think Gavin's proposal is too aggressive.  It uses Moore's law, but bandwidth has trailed Moore's law.  A 20% YOY increase more closely resembles bandwidth availability over the last 20 years.  Also, 20MB as "x" is pretty aggressive.  So something like 11MB * 1.2^n gets us to the same place (~16GB) in 50 years instead of 20, and with a higher confidence that bandwidth requirements will grow slower than bandwidth availability.  Still, I got a little off track: no matter what limit is adopted, it doesn't prevent spam.  Economics and relaying rules prevent spam.

Yeah, I remember when 1200 baud was a good thing.  Gavin started with Moore's law but has now hewed closer to Nielsen's law (which exhibits the slower growth you describe).

From my reading of him, he expects to rely on miners continuing to use lower limits.  I do not trust this reliance so much.  There are very real threat models once you start to consider the self-interest of both Bitcoin economic interests and non-Bitcoin economic interests.  If you think "it's not possible because of logistics/economics/practicalities...", read just a few more paragraphs.

I don't generally like discussing threat vectors in public forums.  Bad people sometimes get ideas and do stupid things.  CitizenFour let us know that there aren't really any private forums anyhow so...
One of the other effects that doesn't get discussed is that this could very viably open up new service models for miners.  They may do some sort of bulk package where some transaction initiator pays for unlimited transactions and fills every block a miner can win.  The marginal cost to the miner is the orphan cost...  but if this were also done with all the other miners, that marginal cost is quite low.  In this way all the block space could be bought for a de minimis amount of fiat, and it would be in each miner's interest to do so.

So if you imagine that there may be some fiat-based entity that feels an existential threat looming from Bitcoin and would just love to strangle it in its cradle...  Bad people could do bad things with a too-large limit.  This is just a small sample of what an evil mind may contemplate.  This is something that should have careful handling.  Queueing up some transactions and filling some blocks with some transaction bidding is also quite bad, but not the end of the world.  

FWIW: I don't think the developers are slacking, but I do think there are vast competing initiatives, and furthermore laziness is a sort of programming virtue.  Getting computers to do things with the minimum of code is a very good thing.  This is a realm where Gavin is especially good.  His code is thin and tight.
legendary
Activity: 924
Merit: 1132
February 23, 2015, 07:36:47 PM

There is a course of action to prepare for the future that is not based on some form of extrapolation.  Many proposals have taken this form.  All of them have the same failing in that they are not implementable without adding some code for metrics.

What will be there for us in 2, 5, 10, 20 years that will know how big bitcoin blocks are and need to be?  The block chain will.  


Heck, I'd go with this.  Fixing it so the block size can be no bigger than 10x the median of the previous thousand blocks (plus some minimum to prevent the "sticky zero" of that simplistic formula, which might be relevant if the network ever has to restart) would do it for me.  It would make me happier than the current proposal, in fact.
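A minimal sketch of that rule, as I read it (the function and constant names are made up; the 1000-block window, the 10x multiplier, and a fixed floor against the "sticky zero" come from the posts above):

```python
from statistics import median

WINDOW = 1000             # look-back window, in blocks
MULTIPLIER = 10           # cap = 10x the median recent block size
FLOOR_BYTES = 1_000_000   # minimum cap, preventing the "sticky zero"

def next_max_block_size(recent_sizes):
    """Adaptive block-size cap from recent block sizes (in bytes)."""
    window = recent_sizes[-WINDOW:]
    return max(FLOOR_BYTES, MULTIPLIER * int(median(window)))

# A mostly-empty chain can't drag the cap below the floor...
print(next_max_block_size([500] * 1000))
# ...while sustained organic growth raises it automatically:
print(next_max_block_size([600_000] * 1000))
```

Using the median rather than the mean matters here: a minority of miners stuffing giant blocks cannot move the median, so the cap only rises when most blocks actually grow.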

But I guess it comes down to three things, for me.  First, I don't believe that the developers are clueless monkeys or lazy-asses who won't get around to doing anything, and I think characterizing them that way is kind of insulting.  Seriously, have you been reading the v0.10 source code as opposed to the 0.8.x?  A *lot* of work has been done.  This is not a bunch of lazy-asses that will keep kicking the can down the road.

Although, if they have to deal with the likes of this argument every time a fork is needed, failure to get anything done won't require lazy-asses or clueless monkeys.  It'll just require people who lose patience and quit when getting stuff done turns into an acrimonious shouting match.

Second, even though I don't believe that the 20MB + exponential growth for 20 years proposal is absolutely the best possibility, I *FEAR* this discussion getting so bogged down with "the perfect being the enemy of the good" that no block size limit change at all happens.  The way I see it, we've got to do something about the transaction rate, and this is, by far, the most developed proposal with the most momentum behind it and the least technological risk, even if there are other, probably better, things that could be done.

Third, I simply don't believe in your threat model.  Suppose a miner today tried to fill up 20MB blocks with bogus transactions.  He couldn't do it as  a mining pool operator, for a bunch of reasons to do with protocol and logistics.  First of all it would increase the bandwidth costs of the miners by about 40-fold, even when not getting blocks.  Second, the propagation delays that apply to all miners submitting bigger blocks apply to pool operators at least three times, not just once, meaning a *LOT* of missed blocks, not just a few.  So it will make them noticeably less likely to get blocks.  And third, there are other mining pools that are easy to switch to and don't have those disadvantages.  So if somebody running a mining pool tried your attack, I'd expect miners to stay away in droves.  

That leaves the hash farms as the only realistic candidates for supersizing blocks, and the hash farms have very limited time to make ROI on a major investment in rapidly-depreciating mining equipment.   If a hash farm tries your attack, they'll have to economically justify the money lost, and I don't think the numbers work.  So, even if someone is attacking, and even if for some economically insane reason they are attacking with enough power to get a whole block every day (which NO HASH FARM can do at this time), they're only adding 20MB/day to a block chain that is currently growing on the order of 85MB/day.  It's annoying, but it's costing them to do it, they can't sustain it for very long, and as an attack, it is not a disaster that the nodes can't handle.
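The scale of that attack is easy to put in numbers (the 85MB/day baseline and 20MB block size are the figures from the paragraph above):

```python
# Cost/impact sketch for a hash farm mining one max-size block per day.
BLOCKS_PER_DAY = 144        # one block per 10 minutes
BASELINE_MB_PER_DAY = 85    # current chain growth cited above
ATTACK_BLOCK_MB = 20        # proposed cap

# Hashpower needed to win one block per day, on average:
attacker_share = 1 / BLOCKS_PER_DAY
# Extra chain growth relative to the baseline (ignoring the ordinary
# block the attacker's block displaces):
extra_growth = ATTACK_BLOCK_MB / BASELINE_MB_PER_DAY

print(f"hashpower share: {attacker_share:.1%}")
print(f"extra daily chain growth: {extra_growth:.0%}")
```

So even a sustained one-block-per-day attack requires a real hashpower commitment and adds only a fraction of a day's ordinary growth, which is the sense in which the cap bounds the damage rather than prevents spam.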

The proposal on the table is not to scale the block chain at the rate of anticipated growth in Bitcoin usage as such -- which you and for that matter I would prefer.    What it is doing is scaling according to the ability of exactly the same attack you're worrying about to create a more-than-just-annoying problem.  It's much easier (though still unreliable) to make projections about the availability of bandwidth and what amount of bandwidth will constitute a serious problem, than it is to predict the timing of Bitcoin's adoption by the mainstream or the coming and going of Bitcoin-adoption fads.   Nobody's really trying to predict the amount of Bitcoin use or the moment of its adoption by the mainstream here; they're just putting a limit on how much damage your attacker (the one I don't really believe in) can do, and the limit scales at the same rate that the resources necessary to deal with it scale.
