
Topic: Blocks are [not] full. What's the plan? - page 3. (Read 14283 times)

legendary
Activity: 2576
Merit: 1186
December 03, 2013, 03:43:12 PM
What's your current blockmaxsize for Eligius? If the protocol limit were raised to 2,000,000 bytes, would you raise your blockmaxsize?
Eligius's blockmaxsize will likely always be the largest possible on the network (minus breathing room to avoid potential bugs).
That's not to say the blocks will always get that big - that depends on priorities, transaction fees, spam filters, etc...
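For reference, blockmaxsize is the real bitcoind option that caps the blocks a miner assembles. A minimal bitcoin.conf sketch of the policy described above; the value is purely illustrative, not Eligius's actual setting:

```
# bitcoin.conf (miner side) -- value illustrative only
# Cap self-built blocks just under the 1 MB protocol limit,
# leaving ~10 KB of breathing room against edge-case bugs.
blockmaxsize=990000
```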
legendary
Activity: 2576
Merit: 1186
December 03, 2013, 02:57:54 PM
Bitcoin transactions "go through" instantly, just like credit cards.
It's just confirmation that takes an hour or more (credit card payments aren't truly final for 6+ months, while chargebacks remain possible!).
legendary
Activity: 924
Merit: 1129
December 03, 2013, 02:05:23 PM
Crap.

This issue might be addressed by making sure all blocks propagate equally slowly.  I read in another thread that someone is proposing to force all blocks to be exactly 1M in size so they propagate at the same speed, but that's a waste of bandwidth.

No matter how you slice it though, failure of transactions to go through in a timely way looks like a service failure to actual users of bitcoin.

And it matters.  If bitcoin doesn't scale smoothly up past 100 tx/second with minimal tx expenses, then it will die within weeks after that limit starts to inconvenience people.  Investors will drop it like a rock and crash the market the minute they see that this isn't a system that can replace VISA, MasterCard, and Western Union while enabling micropayments and lowering costs by at least a factor of five to compensate for basic distrust of a new system.  That's what they thought they were buying into, after all.

So, an order of magnitude faster block propagation gets us what?  Closer to the current theoretical max of less than 8 transactions per second, and still an active deterrent for miners to add txs to blocks unless they pay at least $0.30 in fees.  All blocks propagating at the same speed gets us what?  More orphan blocks (which in the long run doesn't hurt miners, because just as many valid blocks per hour will be found after difficulty adjustments), and no marginal cost to miners for including more transactions in blocks, but still a limit of under 8 transactions per second.
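For reference, the "under 8 transactions per second" ceiling follows directly from the 1 MB cap. A back-of-the-envelope check; the average transaction size is an assumption (simple one-input, two-output transactions of the era ran roughly 250 bytes):

```python
# Throughput ceiling implied by the 1 MB block limit.
MAX_BLOCK_SIZE = 1_000_000   # bytes, protocol hard limit
AVG_TX_SIZE    = 250         # bytes per tx (assumed)
BLOCK_INTERVAL = 600         # seconds between blocks, on average

txs_per_block  = MAX_BLOCK_SIZE / AVG_TX_SIZE     # 4000 txs
txs_per_second = txs_per_block / BLOCK_INTERVAL   # ~6.7 tx/s
print(f"{txs_per_second:.1f} tx/s")               # well under 8
```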

That's just not enough. 

The protocol needs a fundamental design change for bitcoin to continue to exist at this valuation.  The current valuation represents investor expectations that the current protocol cannot fulfill.

zvs
legendary
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
December 03, 2013, 11:07:58 AM
What's the limit that Eligius has set? How did Eligius arrive at that limit? Apparently you know the answers to these questions, since you can make the bold statement that "if the limit was 2MB right now we still wouldn't have 1MB blocks."

how exactly was that a bold statement?

eligius has a limit of 1MB.  they arrived at that limit because that's the limit set in the bitcoin protocol, probably.  it sure wasn't because they were colluding with other pools

there has never been more than 900KB worth of transactions waiting that have paid the standard fee (0.0001 BTC per KB), hence the lack of a 2MB block even if one wouldn't be rejected by the rest of the network
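A minimal sketch of the standard-fee rule cited above: 0.0001 BTC for each started kilobyte of transaction size. The rounding behaviour is an assumption; the reference client's actual GetMinFee logic had additional cases:

```python
from math import ceil

MIN_FEE_PER_KB = 10_000   # satoshis, i.e. 0.0001 BTC

def standard_fee(tx_size_bytes: int) -> int:
    """Fee in satoshis: 0.0001 BTC per started KB (rounding assumed)."""
    return ceil(tx_size_bytes / 1000) * MIN_FEE_PER_KB

print(standard_fee(250))    # 10000 sat = 0.0001 BTC, a typical small tx
print(standard_fee(2500))   # 30000 sat = 0.0003 BTC
```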
member
Activity: 118
Merit: 10
December 02, 2013, 05:01:49 PM
I would like to see some statement from the developers that actually affirms their commitment to the goal of keeping transaction costs low and transaction volume high.  At the rate we are going, we're going to be limited to only a few transactions per second, and as competition for block space increases, bitcoin will turn into a system only for higher-value transactions.

If the devs are happy to let it go in that direction, I'd like to know now so I can sell my bitcoins.  There's no future in bitcoin if we throw away all the key features.

Prediction:  By 2016, either you will be able to pay for your coffee via the blockchain with affordable fees, or bitcoins will be worth less than $1000 and investment in bitcoin ventures will dry up.
rme
hero member
Activity: 756
Merit: 504
December 02, 2013, 03:11:06 PM
Here comes my idea about this issue:

Miners include few transactions in their blocks because more transactions mean a higher probability of an orphaned block, so let's make all blocks equal:

Miners should craft a block normally; let's imagine they generate a 250KB block. Before they send it to other nodes, they concatenate junk bytes (random, perhaps) to the block data so that all blocks are 1MB.

When a node sees this block, it broadcasts it, and when it finishes, it deletes these junk bytes and keeps only the block.


Pros:
- All blocks "are" 1MB in terms of relaying them.
- We avoid other, more technical mechanisms.
Cons:
- Bitcoin-Qt needs somewhat more bandwidth because now all relayed blocks are 1MB.


If one day we need to raise the 1MB block limit, this process will stay the same, but all blocks will be required to be 10MB (for example). We only need to concatenate junk to them.


How to perform this hard fork?
Bitcoin core developers can release an update that includes this fix but only enforces it once the blockchain reaches block 277000 (30 days from now), so we give people and miners some time to update their software.
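A minimal sketch of the padding scheme described above. One detail the post leaves open is how the receiver knows where the real block ends; this sketch assumes a 4-byte length prefix, which is purely an illustrative choice (a real implementation could instead parse the block's own structure):

```python
import os, struct

WIRE_SIZE = 1_000_000   # every relayed block is padded to exactly 1 MB

def pad_block(block: bytes) -> bytes:
    """Prefix the real length, then pad with random junk to WIRE_SIZE."""
    junk = os.urandom(WIRE_SIZE - 4 - len(block))
    return struct.pack("<I", len(block)) + block + junk

def strip_block(wire: bytes) -> bytes:
    """Recover the real block; the junk is discarded after relay."""
    (real_len,) = struct.unpack("<I", wire[:4])
    return wire[4:4 + real_len]

blk = b"\x01" * 250_000                  # stand-in for a 250KB block
assert strip_block(pad_block(blk)) == blk
assert len(pad_block(blk)) == WIRE_SIZE
```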



donator
Activity: 1218
Merit: 1079
Gerald Davis
December 02, 2013, 03:06:08 PM
Do we even know that a block of exactly 1 megabyte would be accepted by a majority of the miners?

Yes.  That is what testnet is for.  You are running testnet to conduct your own research before "fixing" Bitcoin, right?

Like I said, the limit will be raised.  It is just a matter of how, when, and to what level.  Still, if the limit were 2MB right now, we still wouldn't have 1MB blocks.  Miners are choosing to self-enforce a lower limit.

Actually even that is simplistic, because most miners are enforcing tx selection rules, not a preset block size.  The intersection of current tx volume and the rules set by miners is producing blocks <250KB on average.  Going from 1MB to 2MB or 20MB isn't going to change that tx selection behavior.  Most waiting txs (if we look at those first seen more than one block ago) are free txs.  Miners are willing to make larger blocks, but not larger blocks full of free txs.
legendary
Activity: 1400
Merit: 1009
December 02, 2013, 02:43:41 PM
or there will be an anti-trust lawsuit
Good luck with that.
donator
Activity: 1218
Merit: 1079
Gerald Davis
December 02, 2013, 02:20:27 PM
If miners can make a positive return by producing larger blocks ... they will.

How? By forking the blockchain?

Eventually that might happen if the limit isn't raised. But hopefully that's not going to be required.

They want money, more money is always better.

And they'll get more money by not colluding to artificially limit supply. Otherwise there will either be a fork, or there will be an altcoin, or there will be an anti-trust lawsuit, or something will give.

(That is, assuming you're correct in the first place. It's not actually true that all miners care only about money.)

However tonight you could flip the network to 1 GB blocks and ACTUAL block sizes aren't going to change much.

They'll change though. Eligius will soon start creating larger blocks.

Nobody is making 1MB blocks.  Nobody.  Not a single block in the history of Bitcoin is 1MB.  So the "limit" isn't a limit.  Miners are choosing their own parameters which result in blocks less than 1 MB.

It would be like this: the fastest car in the world does 180 mph and there is too much traffic, so the government raises the speed limit from 500 mph to 2,000 mph and thinks that is going to make any material change.  The limit will be raised eventually, I have no doubt, but right now the constraint on tx volume isn't the 1 MB limit.  That is pretty obvious when there are no 1 MB blocks.  The constraint is the economics of mining.  Eligius, for example, makes some of the largest blocks on the network, routinely over 500 KB, but it also includes no free transactions.  Eligius couldn't make a 1MB block right this second even if it WANTED TO, because there isn't 1MB worth of paying txs waiting for a block.  So how would raising the limit to 2MB, 5MB, 10MB change anything in that equation?

Case in point here is a recent Eligius block.
https://blockchain.info/block-index/444092/00000000000000029fd11f8e23b450749807f78ab4aa789b764cd10ea7062e59
780KB
1280 txs. 

It couldn't be any larger because it includes 100% of the txs which met Eligius's inclusion requirements.



donator
Activity: 1218
Merit: 1079
Gerald Davis
December 02, 2013, 02:16:38 PM
In the short term, probably not.  The reality is that the lowest-cost, highest-short-term-revenue strategy would simply be to solo mine empty blocks and leave out all txs, even paying ones.  However, miners (or pool operators) have shown some longer-term thinking and HAVE built larger blocks.  Still, I think it does illustrate that if the min fee of 0.1 mBTC doesn't cover the orphan cost, it is downright silly to expect miners to build massive blocks full of free txs and simply kill their own revenue so other people can avoid paying ~$0.10.

I would think the fact that miners are already willing to do so would be evidence that it's not downright silly.

If miners don't want to include free transactions, that's fine. But when miners collude to limit the ability of others to include free transactions, that's not fine.

Miners aren't colluding, but they do heavily limit the amount of space they devote to free txs.  Some pools, like Eligius, make the LARGEST blocks but include zero (yes, zero) free txs.  Excluding brand-new txs, ones with unconfirmed outputs, and double spends/problems, at any given time 90% of the memory pool is free txs.

The fact that miners haven't gone cold turkey and universally killed all free txs doesn't mean free txs are a reliable method of making a payment.  The free tx volume is growing and the number of free txs in a block is declining; the result of those two is the backlog everyone complains about.  It is silly to think miners are going to change.  It makes no sense for them to do so.  Many will include some free txs because it provides a release valve on the network, but while blocks get bigger I wouldn't expect the amount of free txs in blocks to get bigger.  If you want timely processing, include a fee; it is really that simple.  If you don't, then it could be days, weeks, potentially months before your tx is processed.  You are asking for charity, and charity isn't always reliable.
donator
Activity: 1218
Merit: 1079
Gerald Davis
December 02, 2013, 02:05:39 PM
Difficulty adjustment already provides a mechanism to adjust a variable value with consensus. Why not just treat block size the same?
For example, if the average size of the last 2016 blocks were 80% full then the block size would double.

In the last 2016 blocks, or in the 2016 blocks which make up the previous difficulty calculation? (I think the latter would probably be a better choice.)

What, if anything, is the mechanism to shrink the blocks back down again? (Halve if the average size of the last 2016 blocks is 20% full, with a hard minimum of 1 meg?)

I suspect this might be vulnerable to blockchain-forking attacks which near-simultaneously release very differently sized blocks, but it's hard to say without a full specification.

Depending on your answer to the second question, it also might increase the incentives for miners to release blocks with as few transactions as possible.

It also generally makes the design of mining software more complicated and thus more vulnerable to attack. Being able to statically allocate the size of a block is a definite advantage, though I don't know off hand how the reference implementation handles this. I'd say some hard maximum is necessary, even if it's ridiculously huge. But then what's the advantage of not just setting the maximum at whatever that hard maximum is?

In the end this might be viable, but I'd want a lot more details.

I would say the 2016 blocks which make up the previous difficulty calculation.

I don't think it should shrink; there may be periods where blocks are not fully utilised, but if that became an ongoing trend it would only mean people had stopped using bitcoin.

That definitely solves a lot of problems.

Except that ISN'T the problem.   If miners can make a positive return by producing larger blocks ... they will.  They want money, more money is always better.   However tonight you could flip the network to 1 GB blocks and ACTUAL block sizes aren't going to change at all.

Miners actually are including most paying txs.  Take a look at the memory pool, remove txs seen after the last block, and 90%+ of the remaining txs are:
* no fee txs.
* txs with unconfirmed outputs.
* double spends or other tx problems.

Implementing child-pays-for-parent will improve the second category; the third category probably should just be excluded.  That leaves essentially free txs.  Miners aren't going to produce 20 GB blocks of free txs just because users want something for nothing.  That is never going to happen, so as the corrected title says, "Blocks are NOT full.  What is the plan?".


The good news is there are some things that can be done to make blocks larger and reduce confirmation delays:
1) The default bitcoind has a rule where required fees double as blocks get larger.  It looks like most major pools have stripped that out (otherwise we wouldn't see blocks >500 KB), but that rule should probably go; it no longer really serves any purpose.  The nice thing is that it is a client-side change, so it requires no protocol change or fork.

2) Education.  Until recently a company as big and old as MtGox was creating free txs.  For something as timely as exchange cashouts that is just stupid.  Sorry, it is.  They should know better.  If you want to send your friend Bob some coins and want to be cheap, that is one thing, but major commercial enterprises should really be favoring reliability over trying to save pennies.

3) Child pays parent.  Currently, the way bitcoind prioritizes txs, it does not include paying txs that spend unconfirmed outputs in the next block.  So someone sends you coins with no fee, or possibly a bunch of dust spam, you try to respend them and include a fee, and it looks like miners are screwing you over.  The tx selection algorithm needs to be improved so that if tx B has as an input an output of tx A, and tx A has no fee and is unconfirmed but tx B has a fee, the miners will include BOTH A & B in the block.  This would also allow users to fix stuck txs and allow merchants to get confirmations on payments faster by respending them.  (See the first sketch after this list.)

4) Improve the block message format.  Currently all block messages are the same: old blocks back to the genesis block and the newest-found block alike are transmitted as header + list of full txs.  For new blocks, nodes that are up to date likely already have most (and probably all) of the txs, so a simple change to create a block header + tx hash message would reduce the bandwidth required by ~90%.  A "header + hash only" message for a 1 MB block would be smaller than a 100 KB full-tx block message is now.  (See the second sketch after this list.)
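A minimal sketch of the child-pays-for-parent selection in point 3, using a simplified transaction record; real bitcoind selection is considerably more involved. The core idea is to score a fee-paying child together with its unconfirmed ancestors:

```python
from dataclasses import dataclass, field

@dataclass
class Tx:
    txid: str
    size: int        # bytes
    fee: float       # BTC
    parents: list = field(default_factory=list)  # unconfirmed parent txids

def package_feerate(tx: Tx, mempool: dict) -> float:
    """Fee rate (BTC/KB) of tx together with all its unconfirmed ancestors."""
    seen, stack = set(), [tx]
    size, fee = 0, 0.0
    while stack:
        t = stack.pop()
        if t.txid in seen:
            continue
        seen.add(t.txid)
        size += t.size
        fee += t.fee
        stack.extend(mempool[p] for p in t.parents if p in mempool)
    return fee / (size / 1000)

# Zero-fee parent A never clears a 0.0001 BTC/KB floor on its own, but
# judged as a package with its fee-paying child B, both get included.
a = Tx("A", 250, 0.0)
b = Tx("B", 250, 0.0002, parents=["A"])
mempool = {"A": a, "B": b}
print(package_feerate(b, mempool))   # 0.0004 BTC/KB -> include A and B
```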
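And a back-of-the-envelope for the header + tx hashes format in point 4; the 250-byte average transaction size is an assumption, and with it the savings land near the ~90% claimed:

```python
HEADER = 80      # bytes, block header
TX_AVG = 250     # bytes, assumed average tx size
HASH   = 32      # bytes, one tx hash
N_TXS  = 4000    # roughly a full 1 MB block

full    = HEADER + N_TXS * TX_AVG   # ~1,000,080 bytes, today's format
compact = HEADER + N_TXS * HASH     # ~128,080 bytes, header + hashes
print(f"{1 - compact / full:.0%} smaller")   # ~87% smaller
```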
donator
Activity: 1218
Merit: 1079
Gerald Davis
December 02, 2013, 02:02:05 PM
Oh.  From Gavin's post above, it leaves tx fees at 3.3 millibitcoins per kilobyte, for as long as the mining reward is at 25 BTC anyway.  Would people find that acceptable?

In the short term, probably not.  The reality is that the lowest-cost, highest-short-term-revenue strategy would simply be to solo mine empty blocks and leave out all txs, even paying ones.  However, miners (or pool operators) have shown some longer-term thinking and HAVE built larger blocks.  Still, I think it does illustrate that if the min fee of 0.1 mBTC doesn't cover the orphan cost, it is downright silly to expect miners to build massive blocks full of free txs and simply kill their own revenue so other people can avoid paying ~$0.10.

The longer-term factors are in Bitcoin's favor.  Over time bandwidth gets cheaper; it roughly follows Moore's law.  At the last mile it lags behind, but for pool servers in a datacenter it is much cheaper.  You also have the block subsidy halving in 3 years, which reduces the distortion from the "first 25 BTC are free" effect.  Combined, that means even if NOTHING else changes, orphan costs should fall by a factor of 8x in the next four years.

I already pointed out one potential solution for reducing the block broadcast size/time/latency, and thus the orphan cost, by 90%: creating a message type which contains block header + list of tx hashes instead of block header + list of full txs.  Combined with the block subsidy decline and bandwidth improvements over time, getting orphan costs down to 0.05 mBTC or less should be possible.
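Rough arithmetic behind these numbers; the ~80 ms of extra propagation per added KB is an assumption, chosen because it reproduces the 3.3 mBTC/KB figure quoted above:

```python
import math

SUBSIDY  = 25.0    # BTC block reward today
INTERVAL = 600.0   # s, mean time until a competing block appears
DELAY_KB = 0.080   # s extra propagation per added KB (assumed)

# With exponential inter-block times, the chance a competitor appears
# during the extra delay caused by one more KB:
p_orphan = 1 - math.exp(-DELAY_KB / INTERVAL)    # ~1.33e-4
cost_kb  = SUBSIDY * p_orphan
print(f"{cost_kb * 1000:.2f} mBTC/KB")           # ~3.33 mBTC/KB

# Projected fall: subsidy halving (2x) times ~4x cheaper bandwidth is
# the 8x above; a header+hashes relay format cuts roughly 10x more.
print(f"{cost_kb * 1000 / 8 / 10:.3f} mBTC/KB")  # ~0.042, near 0.05 mBTC
```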
legendary
Activity: 924
Merit: 1129
December 01, 2013, 02:19:10 PM
Oh.  From Gavin's post above, it leaves tx fees at 3.3 millibitcoins per kilobyte, for as long as the mining reward is at 25 BTC anyway.  Would people find that acceptable?
legendary
Activity: 924
Merit: 1129
December 01, 2013, 02:13:43 PM
In the end I don't see any obvious way to handle this other than increasing tx fees enough to compensate for "orphan costs"; where does that leave tx fees?
legendary
Activity: 924
Merit: 1129
December 01, 2013, 12:45:59 PM
Is there any way to compensate miners for creating full blocks? 

I mean, if the issue is that smaller blocks are more profitable due to less broadcast latency, shouldn't larger blocks get a premium to compensate for the loss of profit? 

Right now it looks like the 25BTC per block is built in - but if a block bigger than 750Kbytes paid 26BTC and a block less than 500Kbytes paid 24BTC (or whatever ratios turned out to be needed) .... 

Awww, crap, if we did that we'd get attacks where miners spam the network with a bunch of "padding" transactions to make every block bigger than 500/750 Kbytes.  You'd have to keep the premium perfectly balanced against the size cost (including halving it when the block reward went down) to keep it from being worth anybody's time to game it.  Is there a balancing mechanism like there is for difficulty?  Based on the previous 2016 blocks, is there a statistical measure we can use to determine the size cost?  And if there is, can it be done without providing a motive and a method to game that?

What's the expected statistical distribution of block size given the assumption that all possible tx are included?  Can we base a premium on deviation from that statistical distribution? 


full member
Activity: 147
Merit: 100
December 01, 2013, 11:58:55 AM
Difficulty adjustment already provides a mechanism to adjust a variable value with consensus. Why not just treat block size the same?
For example, if the average size of the last 2016 blocks were 80% full then the block size would double.

In the last 2016 blocks, or in the 2016 blocks which make up the previous difficulty calculation? (I think the latter would probably be a better choice.)

What, if anything, is the mechanism to shrink the blocks back down again? (Halve if the average size of the last 2016 blocks is 20% full, with a hard minimum of 1 meg?)

I suspect this might be vulnerable to blockchain-forking attacks which near-simultaneously release very differently sized blocks, but it's hard to say without a full specification.

Depending on your answer to the second question, it also might increase the incentives for miners to release blocks with as few transactions as possible.

It also generally makes the design of mining software more complicated and thus more vulnerable to attack. Being able to statically allocate the size of a block is a definite advantage, though I don't know off hand how the reference implementation handles this. I'd say some hard maximum is necessary, even if it's ridiculously huge. But then what's the advantage of not just setting the maximum at whatever that hard maximum is?

In the end this might be viable, but I'd want a lot more details.

I would say the 2016 blocks which make up the previous difficulty calculation.

I don't think it should shrink; there may be periods where blocks are not fully utilised, but if that became an ongoing trend it would only mean people had stopped using bitcoin.

I would say there are fewer risks in slowly growing the block size over time than in just not having a limit at all (even if there were a large hypothetical limit). We also need to consider network propagation time. If out of the blue we had a 1 gigabyte block, would all the clients globally have this data in ~10 minutes (about 6 minutes while the network hash rate is growing)?
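A minimal sketch of the difficulty-style adjustment being discussed, using the proposal's own constants (2016-block window, double at 80% full, grow-only per the reply above); everything else is an assumption:

```python
RETARGET  = 2016         # same window as the difficulty adjustment
GROW_AT   = 0.80         # double when blocks average >80% full
MIN_LIMIT = 1_000_000    # bytes; grow-only, never below 1 MB

def next_limit(current_limit: int, window_sizes: list) -> int:
    """Block size limit for the next 2016-block window."""
    assert len(window_sizes) == RETARGET
    avg = sum(window_sizes) / RETARGET
    if avg > GROW_AT * current_limit:
        return current_limit * 2
    return max(current_limit, MIN_LIMIT)   # no shrink rule, per above

# A window averaging 850 KB under a 1 MB limit doubles it to 2 MB:
print(next_limit(1_000_000, [850_000] * RETARGET))   # 2000000
```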


full member
Activity: 147
Merit: 100
December 01, 2013, 11:03:58 AM
Difficulty adjustment already provides a mechanism to adjust a variable value with consensus. Why not just treat block size the same?
For example, if the average size of the last 2016 blocks were 80% full then the block size would double.
zvs
legendary
Activity: 1680
Merit: 1000
https://web.archive.org/web/*/nogleg.com
December 01, 2013, 04:47:40 AM
But the cost to transfer even 10000 megs in a few seconds is low compared to the cost to compute the block.

For a major pool server, maybe, but the cost is pretty much static, so super-massive pools and operations find it far easier to amortize that cost.

You can transfer 10 gigs out of Amazon for $1.20. Incoming transfers are free. You pay according to how much you transfer out, not as a static payment.

I must have missed something here?

First off, 10 gigs is nothing.  Most major pools are probably on unmetered bandwidth or 100TB limits.

I was sending out about 1.5TB upstream every week, just from running a bitcoind node... that'd be (whoops) $180 from Amazon, I guess (in a week)
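Checking that arithmetic against the Amazon rate quoted above ($1.20 per 10 GB out, i.e. $0.12/GB):

```python
rate_per_gb = 1.20 / 10      # USD per GB out, from the quote above
weekly_gb   = 1.5 * 1000     # 1.5 TB upstream per week
print(f"${weekly_gb * rate_per_gb:.0f}/week")   # $180/week
```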

sr. member
Activity: 406
Merit: 251
http://altoidnerd.com
December 01, 2013, 01:27:02 AM
Quote
The value rises because it is a superior store of value and has international transaction ability. Chinese investors highly value its international transaction ability since they can't freely exchange USD with CNY.
[...] (it is not superior, if I looked out for a safe store of value I would definitely not choose bitcoin) [...]

What would you choose instead?

There is no answer right now.
newbie
Activity: 28
Merit: 0
November 30, 2013, 04:54:39 PM
Quote
The value rises because it is a superior store of value and has international transaction ability. Chinese investors highly value its international transaction ability since they can't freely exchange USD with CNY.
It rises because it is a store of value (not a superior one; if I were looking for a safe store of value I would definitely not choose bitcoin) and because of its transaction abilities, right.  And these transaction abilities are, for example, cheap and fast transactions in a simple system without intermediaries.
Everything else could also be done with, for example, computer-game money, which is not a working currency.

That's exactly my point: as long as the current system gets adapted to more and cheaper transactions, everybody can use bitcoin for whatever they want to use it for. You can use it as a good store of value, you can use it to buy your breakfast, and you can use it to send money to your parents in Australia, etc. etc.
But if you start to castrate bitcoin more and more, it will lose all these abilities, and in the long term also the store-of-value function.