Topic: Funding of network security with infinite block sizes - page 8.

legendary
Activity: 1232
Merit: 1094
Idea: why not increase the hash difficulty for larger blocks? So if a miner wants to pass the 1MB threshold, require an extra zero bit on the hash. There's a big disincentive to include lots of tx dust.

Hmm, so a block with twice the difficulty can have twice the size? In effect, this allows the block rate to exceed once every 10 minutes, by combining multiple headers into a single header.

If you have 1MB of transactions worth 10BTC and another 1MB worth 5BTC (since they have lower tx fees), then it isn't worth combining them. A double-difficulty block would give you half the odds of winning, but you would only win 15BTC when you do.
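Rough numbers to make that concrete (the block-finding probability is just an illustrative placeholder):

Code:
# Back-of-envelope: a 1MB block paying 10 BTC in fees vs. a combined 2MB
# block at double difficulty paying 15 BTC.
p = 0.01                      # illustrative chance of solving a normal-difficulty block

ev_small    = p * 10          # mine only the high-fee 1MB block
ev_combined = (p / 2) * 15    # double difficulty halves the win probability

print(ev_small, ev_combined)  # 0.1 vs 0.075 -> combining is not worth it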
legendary
Activity: 1526
Merit: 1134
Yes, but what's your reasoning for that? What specific thing about using assurance contracts to fund mining with large (or floating capped) block sizes seems unworkable to you?
legendary
Activity: 1596
Merit: 1100
Infinite block sizes.
legendary
Activity: 1526
Merit: 1134
What sounds unworkable? The last post? Or the whole thread?
legendary
Activity: 1596
Merit: 1100
In general it sounds like an unworkable scheme.  There is definitely no consensus on the block size issue at all.
legendary
Activity: 2618
Merit: 1007
That sounds a bit like a dominant assurance contract with a twist. My question is why it's better/worth the extra complexity.
It would encourage betting/bidding as opposed to not betting/bidding. Either you get some small profit or you really paid for the hash rate you were looking for.

However, I'm not too sure it would work out - who would even bet on any realistic difficulty? All you need to do is bet higher than 4 times the current difficulty (impossible to reach) to always get a profit. If everybody does that, though, nobody gets a profit at all.
It would probably be smart to give a larger reward (diminishing exponentially? quadratically? cubically?) to people who bet close to the difficulty that was actually reached, or simply to do parimutuel betting...

All in all, it seems to me as if you want to do some kind of "penny auction" of mining fees or network costs. Since this is about money, the reward is attractive for anyone contributing, and donations out of pure charity are unlikely/unsustainable, so there need to be rewards or potential rewards for everyone taking part, not only miners, and as few freeloaders as possible. The system also needs to be as automatable as possible: betting might be fun and nice a few times, but if I had to study hash rate diagrams and estimates every week to make sure I either get a small profit or lose the whole wager to secure the network, I guess I'd quickly choose to cash out and leave this behind.

On the other hand, this could be done by an external service for a start: do parimutuel betting on the difficulty after the next difficulty switch, with for example a half-cut-off bell-shaped curve for the rewards; pay the high bettors 10%(?) of the low bets and the remaining 90% as pure fee transactions towards each of the 2016 blocks created. Should some blocks be built too fast to include these transactions, just divide the remaining reward by the new count, so the remaining blocks get paid a little more.
It might not be as lucrative as SD, but (if done charitably --> only covering hosting + maintenance costs) it could also be used by various services as a kind of CSR measure - they either support the miners or make a small profit, which they can then automatically reinvest in the next round, or just directly donate to a pool or service they like, or still donate to miners...
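To make that a bit more concrete, here is a toy settlement for such a round (ignoring the bell-shaped reward curve; the 10%/90% split and the 2016-block payout are the placeholder numbers from above, and none of this is tested):

Code:
# Toy parimutuel round on the next difficulty: bets that turn out higher
# than the realized difficulty are refunded plus 10% of the "low" bets;
# the remaining 90% of the low bets is spread as fee transactions over
# the 2016 blocks of the period.

def settle_round(bets, realized_diff, high_cut=0.10, blocks=2016):
    """bets: dict mapping bettor -> (predicted_difficulty, stake)."""
    high = {b: s for b, (d, s) in bets.items() if d > realized_diff}
    low  = {b: s for b, (d, s) in bets.items() if d <= realized_diff}

    low_pool   = sum(low.values())
    high_bonus = low_pool * high_cut        # 10% shared by the high bettors
    miner_pool = low_pool - high_bonus      # 90% becomes block fees

    high_stake = sum(high.values()) or 1
    payouts = {b: s + high_bonus * s / high_stake for b, s in high.items()}
    fee_per_block = miner_pool / blocks
    return payouts, fee_per_block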
legendary
Activity: 1470
Merit: 1006
Bringing Legendary Har® to you since 1952
I very much like the idea of adjusting the difficulty with the block size.

This gives an incentive to keep the blocks small, unless there are enough fees to counteract the additional difficulty.
I.e.: for 2x the difficulty, I can make 5x as much profit in fees.
It can also deal more easily with a fast surge, such as the week before Christmas.

This keeps pressure on the block size to stay small, unlike simply adjusting the limit after X blocks.

From there, a minimum fee required for a transaction to be relayed by the network could be a fraction of the smallest fee in the newest block.
So if it was 0.00001 BTC and you try to send with a fee of 0.00000001, your transaction will most likely not be included in the next block and will not be relayed.

This idea seems nice; however, I am afraid there will be some hidden consequences.

Can we have a comment on that from a developer?
sr. member
Activity: 434
Merit: 250
I very much like the idea of adjusting the difficulty with the block size.

This gives an incentive to keep the blocks small, unless there are enough fees to counteract the additional difficulty.
I.e.: for 2x the difficulty, I can make 5x as much profit in fees.
It can also deal more easily with a fast surge, such as the week before Christmas.

This keeps pressure on the block size to stay small, unlike simply adjusting the limit after X blocks.

From there, a minimum fee required for a transaction to be relayed by the network could be a fraction of the smallest fee in the newest block.
So if it was 0.00001 BTC and you try to send with a fee of 0.00000001, your transaction will most likely not be included in the next block and will not be relayed.
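Roughly, the two rules could look something like this (purely illustrative; the scaling and the relay-fee fraction are made-up placeholders):

Code:
# Illustrative only: scale the required difficulty with block size and
# derive the minimum relay fee from the smallest fee in the newest block.

MAX_STANDARD_SIZE = 1_000_000       # 1 MB baseline
RELAY_FEE_FRACTION = 0.5            # placeholder fraction

def required_difficulty(base_difficulty, block_size):
    # e.g. a 2 MB block must meet twice the base difficulty
    scale = max(1.0, block_size / MAX_STANDARD_SIZE)
    return base_difficulty * scale

def min_relay_fee(newest_block_fees):
    # relay only transactions paying at least a fraction of the
    # smallest fee that made it into the newest block
    return RELAY_FEE_FRACTION * min(newest_block_fees)

def should_relay(tx_fee, newest_block_fees):
    return tx_fee >= min_relay_fee(newest_block_fees)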
hero member
Activity: 616
Merit: 500
Idea: why not increase the hash difficulty for larger blocks? So if a miner wants to pass the 1MB threshold, require an extra zero bit on the hash. There's a big disincentive to include lots of tx dust.
legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
Quote
Increasing the maximum block size beyond the current 1MB per block (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) is a likely future change to accommodate more transactions per block. A new maximum block size rule might be rolled out by:

Did not know Gavin (or anyone) was considering a floating MAX_BLOCK_SIZE; when I suggested it a while back on IRC it went down like a lead balloon.

Anyway, imo it needs to float and be based on some sensible calculation of previous block sizes over a 'long enough' period. I also think there needs to be a way to float the min tx fee; this is the other piece that is hard-coded and adjusted by a human 'seems about right' to prevent spam tx. Obviously, as the value of btc goes higher, what is and isn't considered a spam tx changes.

The two variables max_block_size and min_tx_fee are coupled, though. Maybe a simple LQR controller for a 2-variable system could be sufficient for closing the loop for stability here?
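For what it's worth, here is a minimal discrete-time LQR sketch of the kind of loop I mean. The dynamics matrices are pure placeholders - in practice they would have to come from identifying how block utilization and fee levels actually respond to the two knobs.

Code:
# Purely illustrative discrete-time LQR for the coupled pair
# (max_block_size, min_tx_fee). A and B are placeholders, not a model.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.2],      # placeholder dynamics per adjustment period
              [0.1, 0.9]])
B = np.eye(2)                  # controls: delta max_block_size, delta min_tx_fee
Q = np.diag([1.0, 1.0])        # penalty on deviation from the operating point
R = np.diag([10.0, 10.0])      # penalty on jerking the limits around

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain

x = np.array([0.3, -0.1])      # current deviation from the desired operating point
u = -K @ x                     # suggested adjustments to the two parameters
print(u)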
legendary
Activity: 1526
Merit: 1134
That sounds a bit like a dominant assurance contract with a twist. My question is why it's better/worth the extra complexity.
legendary
Activity: 938
Merit: 1001
bitcoin - the aerogel of money
There would be tons of freeloaders.

What are freeloaders doing? They are betting that the hashrate will be above their desired value, even if they don't pledge. So why not bring them into the system and let them bet for profit?

If you pledge for a certain hashrate, and the hashrate doesn't materialize, you get back your pledge + x percent profit. The profit comes from the people who pledged for the hashrate that did materialize.  A fraction of their pledge goes to the miners and another fraction is used for betting.
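A rough sketch of how one settlement round of that could work (the miner/bet split and the profit rate are placeholders, not a worked-out proposal):

Code:
# Pledges for hashrates that did materialize are split between miners and
# a bet pool; pledges for hashrates that did not materialize are refunded
# with a profit paid out of that bet pool.

def settle_pledges(pledges, realized_hashrate,
                   miner_fraction=0.8, profit_rate=0.05):
    """pledges: dict mapping pledger -> (target_hashrate, amount)."""
    to_miners = 0.0
    bet_pool = 0.0
    refunds = {}
    profit_owed = 0.0

    for who, (target, amount) in pledges.items():
        if target <= realized_hashrate:       # hashrate materialized: pay in
            to_miners += amount * miner_fraction
            bet_pool  += amount * (1.0 - miner_fraction)
        else:                                 # did not materialize: refund + x%
            refunds[who] = amount * (1.0 + profit_rate)
            profit_owed += amount * profit_rate

    # In a real contract the profit could only be paid while profit_owed <= bet_pool.
    return to_miners, bet_pool, refunds, profit_owed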
legendary
Activity: 1050
Merit: 1002
Good points acoindr, but I am not clear on the last paragraphs. (I think you mean 100KB too).

I didn't think through the math, so the 100MB increment and one thousand block sampling size numbers are only placeholders.

I don't know how often it would be best to check for a possible block size increase. I don't think it should be a constant thing. Instead it could be done once per year, or maybe once every 3 months. I think knowing the fixed limit for a significant time period is helpful for planning. So that would mean maybe checking the last one thousand blocks (about 1 week's worth) every 13,400 blocks, which is about every 3 months.

For size of increase, I don't know... I'm thinking 10-100MB (just a guesstimate) which may accommodate even explosive global adoption rates. Remember, this carries into a future of technological capacity we don't yet know.

Demand is indeed predictable based upon the last few thousand blocks. Because Bitcoin is a global currency, transaction volumes, over a time period of a week or two, should ebb and flow steadily like the sea level.

This actually doesn't care about demand. It only cares about the network capacity at which even miners with fewer resources can comfortably continue participating. It guards against mining operations evolving into monopolies and oligopolies under an unlimited block size (he who has the highest bandwidth/resources wins), without the automatic crippling of widespread usage a hard limit would ensure.

There may be times when the available network capacity doesn't keep pace with total demand, but that simply puts market pressure on increasing network capacity and/or on viable alternate channels. At least the entire project isn't wrecked because neither a cap nor the absence of one can gain consensus.
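Just to sanity-check the cadence numbers above (approximate, at one block per 10 minutes):

Code:
# Sanity check on the sampling/check cadence mentioned above.
BLOCKS_PER_DAY = 24 * 60 // 10          # ~144 blocks per day
print(BLOCKS_PER_DAY * 7)               # ~1008 blocks in a week's sample
print(BLOCKS_PER_DAY * 93)              # ~13392 blocks in roughly 3 months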

legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
Good points acoindr, but I am not clear on the last paragraphs. (I think you mean 100KB too).

Demand is indeed predictable based upon the last few thousand blocks. Because Bitcoin is a global currency, transaction volumes, over a time period of a week or two, should ebb and flow steadily like the sea level.
legendary
Activity: 1050
Merit: 1002
Actually, forget my earlier "heartbeat" block size. I have a better idea.

...
All that needs to happen is allow the 1MB to be replaced by a capping algorithm which just keeps pace ahead of demand. ...

I think this is right. It's effectively not a cap at all, just like the U.S. debt ceiling. The problem with the debt ceiling is that people, at least previously, were not paying attention, but there is a check in place - raising the ceiling requires a vote.

Increasing the block size could happen the same way, but instead of congressmen who are ignorant of economics and/or apathetic about votes, miners have a financial incentive to vote responsibly.

I think a brilliant idea of Gavin's is this:

A hard fork won't happen unless the vast super-majority of miners support it.

E.g. from my "how to handle upgrades" gist https://gist.github.com/gavinandresen/2355445

Quote
Example: increasing MAX_BLOCK_SIZE (a 'hard' blockchain split change)

Increasing the maximum block size beyond the current 1MB per block (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) is a likely future change to accommodate more transactions per block. A new maximum block size rule might be rolled out by:

New software creates blocks with a new block.version
Allow greater-than-MAX_BLOCK_SIZE blocks if their version is the new block.version or greater and 100% of the last 1000 blocks are new blocks. (51% of the last 100 blocks if on testnet)
100% of the last 1000 blocks is a straw-man; the actual criteria would probably be different (maybe something like block.timestamp is after 1-Jan-2015 and 99% of the last 2000 blocks are new-version), since this change means the first valid greater-than-MAX_BLOCK_SIZE-block immediately kicks anybody running old software off the main block chain.


Checking for version numbers IMO is how almost all network changes should be handled - if a certain percentage isn't compliant, no change happens. Doing this would have prevented the recent accidental hard fork. It's what I call an anti-fork ideology: either we all move forward the same way or we don't change at all. That's important given the economic aspects of Bitcoin.
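In rough pseudo-code, the straw-man rule from the gist might look like this (the version number and thresholds are just the quoted placeholder values):

Code:
# Sketch of the "supermajority before hard fork" check described above.
# Thresholds are the straw-man values from the quoted gist.

NEW_VERSION = 3                 # hypothetical new block.version
MAX_BLOCK_SIZE = 1_000_000

def large_blocks_allowed(last_1000_versions, testnet=False):
    if testnet:
        window, needed = last_1000_versions[-100:], 51    # 51% of last 100
    else:
        window, needed = last_1000_versions, 1000         # 100% of last 1000
    new_count = sum(1 for v in window if v >= NEW_VERSION)
    return new_count >= needed

def block_size_valid(block_size, block_version, last_1000_versions):
    if block_size <= MAX_BLOCK_SIZE:
        return True
    return (block_version >= NEW_VERSION
            and large_blocks_allowed(last_1000_versions))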

So we use this model also to meter block size. One of the points in the debate is that future technological advances can be an accommodating factor for decentralization, but that's unfortunately unknown. No problem: let the block size increase by polling to see what miners can handle.

Think of a train many, many boxcars long. Maybe the biggest, most impressive boxcars are up front near the engine powering along, but way back are small-capacity cars barely staying connected. To ensure no cars are lost, even the smallest car has powerful brakes that can limit the speed of the entire train.

Gavin's earlier thoughts are close:

Quote
.. (perhaps changing it to a floating limit based on a multiple of the median size of the last few hundred blocks) ...

The problem here is that, within a network of increasingly centralized mining capacity, the median size of almost any number of blocks will always be too high to account for small-scale miners, allowing larger limits by default.

Instead we make it more like that train. The network checks for the lowest block limit (maybe in 100MB increments) announced by at least, say, 10% of all blocks every thousand blocks (or whatever). It can't be the absolute lowest value found at any given time, since some people will simply never change out of neglect. However, I think 10% or so sends a clear signal that people are not ready to go higher. At the same time, all miners have a financial incentive to allow higher capacity as soon as possible because of the fees they can collect.

This method would keep the block size ideal for decentralization as long as there was good decentralization of miners. So it's like the 51% attack rationale - centralized miners could only become monopolies by controlling nearly 100% of all blocks found.
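Here is a rough sketch of that rule (the 100MB increment, the thousand-block window and the 10% threshold are the placeholder values from this post):

Code:
# Sketch of the "brake car" rule: every thousand blocks, look at the block
# size limits miners announce in their blocks and set the new cap at the
# level that at least 10% of them are unwilling to exceed.

INCREMENT = 100_000_000        # 100 MB granularity (placeholder)
WINDOW    = 1000               # re-evaluate every thousand blocks
QUORUM    = 0.10               # 10% of blocks can hold the cap down

def next_block_size_cap(announced_limits, current_cap=1_000_000):
    """announced_limits: per-block max sizes from the last WINDOW blocks."""
    limits = sorted(announced_limits)
    # the 10th-percentile announcement: at least 10% of miners announced
    # this limit or something lower, so the cap stops here
    idx = max(0, int(len(limits) * QUORUM) - 1)
    tenth_percentile = limits[idx]
    # round down to the increment, but never drop below the current cap
    return max(current_cap, (tenth_percentile // INCREMENT) * INCREMENT)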
sr. member
Activity: 364
Merit: 250
You meant why not enact a percentage fee, right?

The argument is that unless there is a hard block size limit, miners are incentivised to include any transaction no matter how small its fee, because the cost of doing so is practically zero (less than a microdollar, according to Gavin's calculations). Therefore if a bunch of transactions stack up in the memory pool that pay a smaller percentage than "normal", some miner will include them anyway because it costs nothing to do so and maximizes short-term profit. Hence, you get a race to the bottom and you need some kind of hard network rule saying you can't do that. We already have one in the form of block byte size, so the debate becomes "let's keep the size limit" vs "let's remove it".

Not exactly true: very large blocks are slow to transmit and slow for others to process before relaying. This increases the chance that a miner scores a block but it gets orphaned in favour of a miner with a smaller block. Where this limit plays out is not known yet. Is it 1MB blocks, 2MB, 10MB, 100MB?
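A back-of-envelope way to think about that cost (the delay-per-KB and block value are made-up inputs; it only assumes a competing block arrives roughly as a Poisson process with a 600-second mean):

Code:
# Rough orphan-risk cost of a bigger block: extra propagation/verification
# delay grows with size, and the chance a rival block appears during that
# delay follows the 600 s exponential block interval.
import math

def orphan_prob(delay_seconds, block_interval=600.0):
    # chance that someone else finds a block during the extra delay
    return 1.0 - math.exp(-delay_seconds / block_interval)

def marginal_cost_btc(extra_kb, seconds_per_kb, block_value_btc):
    extra_delay = extra_kb * seconds_per_kb
    return block_value_btc * orphan_prob(extra_delay)

# e.g. 250 extra KB at 4 ms/KB with a 25 BTC block (all placeholder numbers):
print(marginal_cost_btc(250, 0.004, 25.0))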
legendary
Activity: 1050
Merit: 1002
That doesn't make sense to me. The formula wouldn't need to be arbitrary; it could be based on actual data.

But the formula remains arbitrary. You can't come up with an algorithm capable of measuring actual demand and actual supply, since these units are impossible to measure. ...

No, but you can have an algorithm that measures actual data, which is how the difficulty target works. You can measure what happened in the past.

If your sentiment were true the difficulty target wouldn't work.

The difficulty target aims to produce one block every 10 min. But why 10 min? This is an arbitrary value. It may be too much sometimes, too little at other times. It's certainly not optimal. That said, it's not such a big deal, and trying to improve it would not be worth the risks.

No, the difficulty target being set at 10 minutes is not arbitrary. It may or may not be optimal, but it's not arbitrary. If that value could be set arbitrarily, it could be two weeks or two years, which of course is not workable for the application.

Speaking of all this it occurs to me we could have a dynamic cap provide both a limit and non-limit for block size. That may be a workable way to satisfy both camps.

I once had a friend download a movie using BitTorrent and noticed the download speed varied from an absolute trickle to a full flood of throughput. Like a race car on a freeway, speeds alternated between open and constricted. I'm pretty sure that's done so "leechers" don't drain "seeder" resources too much, as would naturally happen if transfer channels were left unchecked.

Bitcoin could work the same way. Form a mental picture of the block size beating slowly like a heart. At times the block size could be constricted, allowing small players an equal chance to participate meaningfully. However, that constriction could also be released to allow unlimited throughput.

In the real world that would translate to inconvenience only if and when you needed a transaction with Bitcoin's desirable features (anonymity, irreversibility, etc.) at a time of block size constriction and weren't willing to bid a high enough fee for priority inclusion. You might instead opt for an alternate cryptocurrency or suitable off-chain transaction option. This seems a small price to pay if it makes Bitcoin workable at a global scale.
legendary
Activity: 2940
Merit: 1090
If there is no cap, there is no amount of resources you can buy and set up and run that will be enough unless you are the top spender, or possibly one of the cartel of top spenders.

A cap means we can know how many millions of dollars each node is going to cost, so that we can start figuring out how many such nodes the world, or a given nation, or a given demographic, or a given multinational corporation, or a given corner store, or a given mid-sized business, or a given chain of grocery stores, etc etc can afford to set up and run.

No cap means those things basically cannot be known, so trying to build a node becomes a massive hole in the budget that you keep throwing money at, yet maybe never manage to throw as much money at as Sprint and Bell and Google and Yahoo and Virgin do, so you end up having thrown away all your money for nothing.

So we have to know: are nodes something only the Fortune 25 should be able to afford? Or something even the entire Fortune 500 could afford? Could any of the Fortune 1000 companies that are not in the Fortune 500 afford it if they happen to be very highly aligned to and optimised for that particular kind of business? Or should they be able to afford it even if it is not really strongly aligned with their existing infrastructure and business?

Those are the kinds of things we need to know, and a lack of a cap on block size makes them unknowable.

-MarkM-
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
SD is not flooding anything. They're not attacking the network; Bitcoin users want to use their services.
Of all businesses, they're likely the one that has contributed the most to miners via transaction fees.

This is the Circe-like character of SD. It looks attractive but carries great dangers. I am still concerned that this type of transaction source can scale far faster than the Bitcoin network.

Miners have no interest in keeping a "monster block". And they can easily choose not to build on top of such a block unless it is N blocks deep already, which would likely get the monster block rejected by the network.

Consider variance. One hallmark of any successful, complex system is low variance of important intrinsic parameters. The Earth's ecosystem depends upon low variance in climate: e.g. the difference in air pressure between a cyclone and an anticyclone is not a large percentage of 1 atmosphere.
In the case of Bitcoin, a very small block followed by a very large one is an unhealthy sign. A cap will help keep the variance (standard deviation) of block size lower. This must be helpful to all miners, as they know what to expect and can plan accordingly, making incremental changes, which are always safer. A cap helps ensure all miners are on the same page about what is considered an expected block and what an oversized one.

legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
Interesting discussion.