
Topic: Permanently keeping the 1MB (anti-spam) restriction is a great idea ... - page 18. (Read 105082 times)

legendary
Activity: 1400
Merit: 1013
What you describe is less than the rational limit in 2).

First you start out by grossly misusing economic terms.

Then, when this is pointed out, you reply by making assumptions about theoretical price levels which you can't possibly calculate, because they are not calculable.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
You must be able to broadcast such a huge block to most of the nodes within 10 minutes. I haven't seen the latest research in this area, but there is a paper from 2013:


http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf

Based on this research, it took 0.25 seconds per KB for a transaction to reach 90% of the network. In other words, a 1MB block will take around 256 seconds to broadcast to the majority of nodes, and that is over 4 minutes.

When the block size reaches 10MB, you will have a broadcast time of 40 minutes, meaning that before your block reaches the far end of the network, those nodes will have already dug out 3 extra blocks, so your block is always orphaned by them. The whole network will then disagree about which segment has the longest chain and thus fork into different chains.

Gavin's proposal is to let mining pools and farms connect to high-speed nodes on the Internet backbone. That is reasonable, since propagation time is only meaningful for miners: your transaction will be picked up by the mining nodes closest to you, and if those mining nodes have enough bandwidth, they can keep up with the speed. Still, how much bandwidth is really needed to broadcast a 10MB message in a couple of minutes between hundreds of high-speed nodes needs to be tested. And this is the risk behind worries about the centralization of mining nodes: only those who have ultra-high-speed Internet connections can act as nodes (I'm afraid Chinese farms will be dropped out, since their connection to the outside world is extremely slow; they would just fork to their own chain inside mainland China).

I don't know how you come to those assumptions based on that research.

Quote
the block message may be very large — up to 500kB at the time of writing.

Quote
The median time until a node receives a block is 6.5 seconds whereas the mean is at 12.6 seconds.

Quote
For blocks, whose size is larger than 20kB, each kilobyte in size costs an additional 80ms delay until a majority knows about the block.

They do not mention the average size of the blocks they measured. Let's assume all their blocks were 0KB: 12.6 seconds for that. Add 80 ms per additional KB... 80 ms * 1024 * 20 is about 27.3 minutes. Add the original 12.6 seconds: roughly 28 minutes for 20MB.

Of course, 28 minutes is still long. That is based on 2013 data. I assume the nodes now will have improved their verification speed and have more bandwidth. New measurements could / should be made to verify that propagation speed will not become an issue.
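As a sanity check, the back-of-envelope estimate above can be written out. The 12.6 s base delay and 80 ms/KB slope are the figures quoted from the 2013 paper; extrapolating the linear model to multi-MB blocks is an assumption that may well break down.

```python
# Back-of-envelope propagation estimate using the 2013 Decker/Wattenhofer
# figures quoted above: 12.6 s mean base delay plus ~80 ms per KB.
# The linear extrapolation to multi-MB blocks is an assumption.

BASE_DELAY_S = 12.6     # mean time for a near-empty block to reach a node
PER_KB_DELAY_S = 0.080  # extra delay per KB of block size

def mean_propagation_s(block_size_mb: float) -> float:
    """Estimated mean time for a block to reach the majority of nodes."""
    return BASE_DELAY_S + PER_KB_DELAY_S * block_size_mb * 1024

for size_mb in (1, 10, 20):
    print(f"{size_mb:>2} MB block: ~{mean_propagation_s(size_mb) / 60:.1f} min")
```

For 20 MB this reproduces the "roughly 28 minutes" worked out above (27.5 min).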

I just took the numbers from that chart; their paper says 0.08 s/KB but the chart shows 0.25 s/KB, no big difference to the conclusion.

Ideally, you would like to keep the broadcast time below 1 minute, to make sure the network does not fork into different chains and to reduce orphaned blocks. Currently, some connections to China run at about 20KB/second, meaning 1MB of data takes close to a minute just to reach their network. Of course, China has far fewer nodes than the rest of the world, but they do have a large amount of hashing power.

For a 10MB block, the bandwidth requirement will be around 2 Mbit/s, which is quite high if you consider intercontinental connections. Hopefully, before we reach that stage, network bandwidth will have been upgraded.
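The bandwidth arithmetic can be sketched too; the 1-minute target and the 20 KB/s China link speed are the figures from the post, and everything else is illustrative.

```python
# Rough bandwidth needed to move a block of a given size within a target
# time, plus the transfer time over the slow link mentioned above.
# Illustrative arithmetic only; real links add protocol overhead.

def required_mbit_per_s(block_mb: float, target_s: float) -> float:
    """Megabits per second needed to transfer block_mb in target_s seconds."""
    return block_mb * 8 / target_s

def transfer_time_s(block_kb: float, link_kb_per_s: float) -> float:
    """Seconds to push block_kb over a link of link_kb_per_s."""
    return block_kb / link_kb_per_s

print(f"10 MB in 60 s: ~{required_mbit_per_s(10, 60):.1f} Mbit/s")
print(f"1 MB over 20 KB/s: ~{transfer_time_s(1024, 20):.0f} s")
```

The raw figure comes out near 1.3 Mbit/s, so the ~2 Mbit/s figure above leaves some headroom for overhead.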

hero member
Activity: 836
Merit: 1030
bits of proof
1) the space is scarce
Misinformation.

Space in a block is always scarce, regardless of whether or not there's a protocol limit.

The only way to make space in a block non-scarce is to invent a way of transmitting data that requires zero energy, zero time, and exceeds the Shannon limit.

Whether or not space in a block is scarce depends on physics, not on software design.

Not.

What you describe is less than the rational limit in 2).
legendary
Activity: 1400
Merit: 1013
1) the space is scarce
Misinformation.

Space in a block is always scarce, regardless of whether or not there's a protocol limit.

The only way to make space in a block non-scarce is to invent a way of transmitting data that requires zero energy, zero time, and exceeds the Shannon limit.

Whether or not space in a block is scarce depends on physics, not on software design.
hero member
Activity: 836
Merit: 1030
bits of proof
Because the block limit is not creating a fees market. The block reward is too high and masks the theoretical "bidding" process for block space

You are right there is no fees market yet, but not for the reason you state.
The block limit is not currently creating a market because it is not yet binding, and with bigger blocks it will not be binding for even longer.

Miners' only freedom is the freedom to exclude transactions. Can it be used to pressure users into paying higher fees?
Only if either

1) the space is scarce
2) supported by a rational lower limit to the fee that no sane miner crosses
3) miners act as a cartel

You want to eliminate 1) hoping that 2) holds so we do not fall through to 3)

Gavin quantified a rational lower limit to the transaction fee as 0.0032 BTC/KB. A miner that includes a transaction paying less is no longer compensated for his increased orphan cost. Miners currently accept even less, which tells us miners are either dumb or the rational limit is lower. I suspect people deploying hundreds of millions in equipment would not act dumb for a prolonged period of time.
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
(2) How the "don't-increase-the-blocksize-limit-because-centralization" argument is misguided.  You clearly showed that not increasing the blocksize limit could lead to greater centralization by pricing out individuals from trustless access to the blockchain.  

He didn't, as he used a straw-man argument that people would use centralised authorities when they could just as easily not (again, I am not against raising the 1MB limit, but using the wrong arguments for that is simply not persuasive).
legendary
Activity: 1162
Merit: 1007
Fantastic work, DeathAndTaxes!

Two things really stuck out for me after reading your post:

(1) How ridiculously low the 1 MB limit is if we envision any sort of "success case" for bitcoin. 

(2) How the "don't-increase-the-blocksize-limit-because-centralization" argument is misguided.  You clearly showed that not increasing the blocksize limit could lead to greater centralization by pricing out individuals from trustless access to the blockchain.   
legendary
Activity: 1400
Merit: 1013
Conclusion
The blockchain permanently restricted to 1MB is great if you are a major bank looking to co-opt the network for a next generation limited trust settlement network between major banks, financial service providers, and payment processors.   It is a horrible idea if you even want to keep open the possibility that individuals will be able to participate in that network without using a trusted third party as an intermediary.
I agree 100%.

There will probably be a role for settlement networks in the future, but even if they do exist those settlement networks should not be artificially subsidized by a block size limit.
legendary
Activity: 1232
Merit: 1001
mining is so 2012-2013
Sometimes I debate D&T on some issues, but not this time.  Great post. 
legendary
Activity: 1162
Merit: 1007
In response to those claiming that a hard fork to increase the blocksize limit will hurt the miners' ability to collect fee revenue:

The empirical data we have so far does not support the notion that the miners will be starved of fees or that blocks will be full of low fee transactions if the blocksize limit is increased.  If we inspect the fees paid to miners per day in US dollars over the lifetime of the network (avg blocksize << max blocksize), we see that total fee revenue, on average, has grown with increases in the daily transaction volume.



The total daily fees, F, have actually grown as the number of transactions, N, raised to the power of 2.7.  Although I don't expect this F ~ N^2.7 relationship to hold forever, those suggesting that the total fees would actually decrease with increasing N have little data to support this claim (although, during our present bear market we've seen a reduction in the daily fees paid to miners despite an increase in N.)

Past behaviour is no guarantee of future behaviour, but historically blocks haven't been filled with low-fee transactions, and historically the total fees paid to miners have increased with increased transaction volume.
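To make the claimed scaling concrete, here is a hypothetical projection under the F ~ N^2.7 fit; working in ratios normalizes the proportionality constant away, and the exponent continuing to hold at larger N is an assumption the post itself hedges.

```python
# Hypothetical projection of total daily fees under the empirical
# F ~ N^2.7 fit described above, expressed as multiples of today's
# values so no proportionality constant is needed. The persistence of
# the exponent is an assumption, not a guarantee.

EXPONENT = 2.7  # fitted exponent quoted in the post

def fee_growth(volume_ratio: float) -> float:
    """Total-fee multiple when transaction volume grows by volume_ratio."""
    return volume_ratio ** EXPONENT

for ratio in (2, 5, 10):
    print(f"{ratio:>2}x transactions -> ~{fee_growth(ratio):,.0f}x total fees")
```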
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
Turing-completeness is a non-starter.

The Bitcoin scripts were deliberately designed to not be Turing complete.

A statement with nothing to back it up (the reason Bitcoin is designed that way is because it was designed that way).
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
That's taken right from the FAQ of the project itself. This is just 1 example of potential services that won't be able to get developed completely (if at all).

They just haven't designed it right (so the Bitcoin limits are not really an issue).

Although let's not go off-topic my AT project will be launching "crowdfund" very soon and that could actually work on Bitcoin if Bitcoin were to adopt AT (with no limits for the number of pledges).

Turing-completeness is a non-starter.

The Bitcoin scripts were deliberately designed to not be Turing complete.

I have to admit, the 28-minute (thorough) block propagation time (for a 20MB block) mentioned earlier does seem realistic to me. That works out to about 14 hops, with each node spending 2 minutes on block verification. Edit: Forgot to add: Turing completeness, even with cycle limits, will obviously increase block verification time.

This lends credence to LaudaM's contention that the blocks will not fill up immediately. Though, I have seen it argued that the large miners will have an incentive to push smaller miners out with large blocks (filled with garbage if need be).

Let me go with his assumptions implying a rational minimum fee of 0.0008 BTC for a 250 Byte transaction.

That would imply a total fee of 3.2 BTC for full 1 MB blocks. We have not even seen that magnitude yet (the exceptions were only fucked-up transactions).
That means we are not even close to the block size limit squeezing out meaningful fees, so why increase it?

Because a change like this has to be planned months in advance. During the next bubble it will be too late to meaningfully accommodate the increased transaction volume. I suppose that may be the point: to temper speculation with high fees. The difficulty is that the institutions pushing the next bubble have the ability to print their own money.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
https://gist.github.com/gavinandresen/5044482

I am not sure the numbers are sound (I think the underlying study that Gavin relied on has some pretty worst case assumptions) but even assuming they are 10x actual cost the idea that there is no cost is simply not supported.   A 1 satoshi fee (or even 100 satoshi fee) for all intents and purposes is a no fee transaction.

Let me go with his assumptions implying a rational minimum fee of 0.0008 BTC for a 250 Byte transaction.

That would imply a total fee of 3.2 BTC for full 1 MB blocks. We have not even seen that magnitude yet (the exceptions were only fucked-up transactions).
That means we are not even close to the block size limit squeezing out meaningful fees, so why increase it?

Because the block limit is not creating a fees market. The block reward is too high and masks the theoretical "bidding" process for block space, leaving aside the fact that wallet software does not offer users a way to update the fee on a transaction until it has been dropped from all the mempools.

However, 20MB is large enough for a proper fees market to develop. Currently, blocks >900KB are carrying ~0.25 BTC in fees. That scales to ~5 BTC in fees for a full 20MB block, and with a block reward down to 6.25, the fees and reward are similar, so the fees market should be healthy.
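Scaling that figure linearly, under the (strong) assumption that fee density per KB stays roughly constant:

```python
# Linear extrapolation of the fee figures above: ~0.25 BTC of fees in a
# ~900 KB block, scaled to a full 20 MB block and compared with the
# 6.25 BTC subsidy. Constant fee density per KB is the assumption here.

FEES_OBSERVED_BTC = 0.25   # fees seen in near-full blocks, per the post
OBSERVED_BLOCK_KB = 900
FULL_BLOCK_KB = 20 * 1024
SUBSIDY_BTC = 6.25         # block reward after two more halvings

fees_full_block = FEES_OBSERVED_BTC * FULL_BLOCK_KB / OBSERVED_BLOCK_KB
print(f"full 20 MB block: ~{fees_full_block:.1f} BTC fees "
      f"vs {SUBSIDY_BTC} BTC subsidy")
```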
hero member
Activity: 836
Merit: 1030
bits of proof
https://gist.github.com/gavinandresen/5044482

I am not sure the numbers are sound (I think the underlying study that Gavin relied on has some pretty worst case assumptions) but even assuming they are 10x actual cost the idea that there is no cost is simply not supported.   A 1 satoshi fee (or even 100 satoshi fee) for all intents and purposes is a no fee transaction.

Let me go with his assumptions implying a rational minimum fee of 0.0008 BTC for a 250 Byte transaction.

That would imply a total fee of 3.2 BTC for full 1 MB blocks. We have not even seen that magnitude yet (the exceptions were only fucked-up transactions).
That means we are not even close to the block size limit squeezing out meaningful fees, so why increase it?
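The arithmetic behind the 3.2 BTC figure, using the 0.0032 BTC/KB rational-minimum estimate cited earlier in the thread:

```python
# Worked arithmetic for the figures above: a rational minimum fee of
# 0.0032 BTC/KB implies 0.0008 BTC per 250-byte transaction, and 3.2 BTC
# of total fees for a completely full 1 MB block.

MIN_FEE_BTC_PER_KB = 0.0032
TX_SIZE_BYTES = 250
BLOCK_SIZE_BYTES = 1_000_000

fee_per_tx = MIN_FEE_BTC_PER_KB * TX_SIZE_BYTES / 1000
txs_per_block = BLOCK_SIZE_BYTES // TX_SIZE_BYTES
full_block_fees = fee_per_tx * txs_per_block

print(f"fee per 250 B tx: {fee_per_tx:.4f} BTC")
print(f"full 1 MB block:  {full_block_fees:.1f} BTC in fees")
```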
hero member
Activity: 742
Merit: 500
Quote
Allowing for that rational line being above zero, the question is whether that rational limit pays for the security we need to sustain. See my previous calculation of the transaction fees needed.

https://gist.github.com/gavinandresen/5044482

I am not sure the numbers are completely sound, so I am not saying to rely on them like gospel but more as a thought exercise.  I think the underlying study that Gavin relied on has some pretty worst-case assumptions baked in, and the average miner is probably going to be better connected than the average non-miner.  Still, even assuming their estimate is 10x the actual cost, the idea that there is no cost is simply not supported.   A 1 satoshi fee (or even a 100 satoshi fee) is, for all intents and purposes, a no-fee transaction.

As for fees making up the difference of the subsidy cut ... they won't.  However, at the current time the network is probably overprotected relative to the actual economic value of the transactions occurring on it.  Subsidies tend to do that in any market.   So over the next five years the difference caused by the two halvings will be compensated by a combination of a) some reduction in overall security, b) a rise in the exchange rate, as miners' costs are mostly in fiat terms, and c) a rise in overall block fees.

The problem is: on Gavincoin, low fees will very quickly lead to reaching the block limit again.
If the space is there and it's cheap, it will be filled with one kind of crap or another.
So reaching the 20MB limit will likely occur way sooner than most think if fees are too low.

Gavincoin is basically a proposal for socialist central planning (first on fees and later on money supply).
donator
Activity: 1218
Merit: 1079
Gerald Davis
Quote
Allowing for that rational line being above zero, the question is whether that rational limit pays for the security we need to sustain. See my previous calculation of the transaction fees needed.

https://gist.github.com/gavinandresen/5044482

I am not sure the numbers are completely sound, so I am not saying to rely on them like gospel but more as a thought exercise.  I think the underlying study that Gavin relied on has some pretty worst-case assumptions baked in, and the average miner is probably going to be better connected than the average non-miner.  Still, even assuming their estimate is 10x the actual cost, the idea that there is no cost is simply not supported.   A 1 satoshi fee (or even a 100 satoshi fee) is, for all intents and purposes, a no-fee transaction.

As for fees making up the difference of the subsidy cut ... they won't.  However, at the current time the network is probably overprotected relative to the actual economic value of the transactions occurring on it.  Subsidies tend to do that in any market.   So over the next five years the difference caused by the two halvings will be compensated by a combination of a) some reduction in overall security, b) a rise in the exchange rate, as miners' costs are mostly in fiat terms, and c) a rise in overall block fees.
member
Activity: 63
Merit: 10

Why not implement the maximum block size alongside the mining difficulty adjustment, using the same mechanism?

Rather than an arbitrary 20MB limit or rolling quadruple/exponential maximum size increases, why not incorporate a self-adjusting maximum block size based on the number of the last n blocks solved that hit the existing hard limit?
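A hypothetical sketch of what such a retargeting rule could look like; every constant here (window, thresholds, adjustment factor, floor) is an illustrative assumption, not part of the proposal.

```python
# Hypothetical sketch of a self-adjusting maximum block size retargeted
# over a fixed window, analogous to the difficulty adjustment. All
# constants below are illustrative assumptions.

RETARGET_WINDOW = 2016    # blocks, same window as difficulty retargets
GROW_THRESHOLD = 0.50     # >50% of blocks hit the cap -> raise it
SHRINK_THRESHOLD = 0.10   # <10% hit the cap -> lower it
ADJUST_FACTOR = 1.25
FLOOR_BYTES = 1_000_000   # never drop below the original 1 MB

def retarget_max_size(current_max: int, recent_sizes: list[int]) -> int:
    """Return the next window's max block size from the last window's sizes."""
    at_cap = sum(size >= current_max for size in recent_sizes)
    fraction = at_cap / len(recent_sizes)
    if fraction > GROW_THRESHOLD:
        return int(current_max * ADJUST_FACTOR)
    if fraction < SHRINK_THRESHOLD:
        return max(FLOOR_BYTES, int(current_max / ADJUST_FACTOR))
    return current_max
```

For example, a window where 1500 of 2016 blocks hit a 1 MB cap would raise the cap to 1.25 MB, while a window of mostly empty blocks leaves a 1 MB cap at the floor.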
hero member
Activity: 836
Merit: 1030
bits of proof
The cost of orphaned blocks is very real.

I stated the same. We might disagree on the magnitude.

Since there is no marginal cost in including a transaction to the current block,

let me be more precise:
There is a marginal cost implied by block propagation time being proportional to block size, and the orphan rate being proportional to propagation time. There is also a computational cost of updating the Merkle tree and sending it to the miners. These marginal costs are, however, orders of magnitude below the lowest non-zero fees paid today.

A rational miner will draw the minimum fee policy just above that line.

Allowing for that rational line being above zero, the question is whether that rational limit pays for the security we need to sustain. See my previous calculation of the transaction fees needed.
donator
Activity: 1218
Merit: 1079
Gerald Davis
I don't think he's saying that miners won't mine at all, just that they won't mine no-fee transactions.

A rational miner mines all non-zero-fee transactions he sees until the block is full, since including one has virtually no cost. If he did not include them, he would leave them on the table for another miner.

Imposing a lower limit on fees could only be effective if miners formed a cartel, and that is not the kind of regulation I favor.

You keep saying this, but the cost is not zero.  The cost of orphaned blocks is very real.   If you increase propagation time by six seconds, then the probability that your block will be orphaned increases by about 1%.  Those transactions have to be paying more than the estimated loss due to the increased propagation delay, or the miner takes a net loss.  As margins squeeze and the subsidy declines, miners who are bad at math will quickly become bankrupt miners, replaced by miners less bad at math.

Another way to look at it: if you double the size of a block, you double the chance of your block being orphaned, but since miners include the highest-fee transactions first, doubling the size of the block does not double your gross revenue.  At some point there is a marginal transaction which, despite having a fee, results in a net loss if included.  A rational miner will draw the minimum-fee policy just above that line.
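The "six seconds, about 1%" figure is consistent with modelling block discovery as a Poisson process with a 600-second mean interval; that model choice is an assumption on my part, not D&T's stated method.

```python
import math

# Poisson-process model consistent with the "six seconds -> ~1%" figure
# above: if blocks arrive on average every 600 s, the probability that a
# competitor finds a block during an extra delay of t seconds is
# 1 - exp(-t / 600). The model choice itself is an assumption.

def orphan_risk(extra_delay_s: float, block_interval_s: float = 600.0) -> float:
    """Probability a competing block appears during the extra delay."""
    return 1.0 - math.exp(-extra_delay_s / block_interval_s)

for delay in (1, 6, 60):
    print(f"{delay:>2} s extra delay: {orphan_risk(delay):.2%} orphan risk")
```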
legendary
Activity: 1274
Merit: 1004
I don't think he's saying that miners won't mine at all, just that they won't mine no-fee transactions.

A rational miner mines all non-zero-fee transactions he sees until the block is full, since including one has virtually no cost. If he did not include them, he would leave them on the table for another miner.

Imposing a lower limit on fees could only be effective if miners formed a cartel, and that is not the kind of regulation I favor.
Virtually no cost to include one, but not zero cost. Building a 20MB block full of 1-satoshi-fee transactions would not make economic sense, as the increased risk of an orphan would outweigh the benefit of less than 1 mBTC in fees.
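The sub-1-mBTC figure checks out under the usual 250-byte transaction size assumption:

```python
# Checking the figure above: a 20 MB block packed with 250-byte
# transactions paying 1 satoshi each collects under 1 mBTC in total.
# The 250-byte average transaction size is an assumption.

TX_SIZE_BYTES = 250
BLOCK_SIZE_BYTES = 20_000_000
SATOSHI_BTC = 1e-8

n_txs = BLOCK_SIZE_BYTES // TX_SIZE_BYTES
total_fees_mbtc = n_txs * SATOSHI_BTC * 1000

print(f"{n_txs} transactions -> {total_fees_mbtc:.1f} mBTC in fees")
```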