
Topic: Blocks are [not] full. What's the plan?

sr. member
Activity: 406
Merit: 251
http://altoidnerd.com
March 16, 2014, 06:54:23 PM
Sure. About the hardest part would be coming up with the name for this new altcoin.

How about Bitcoin II?
full member
Activity: 196
Merit: 100
February 18, 2014, 08:36:20 AM
Would it be possible to replace the MaxBlockSize with a MinBlockSize in the protocol, one that would adapt depending on queue size?

Sure. About the hardest part would be coming up with the name for this new altcoin.
full member
Activity: 196
Merit: 100
February 16, 2014, 01:39:34 PM
I think that because the problem is mainly about orphan risk, it needs to be addressed specifically via orphan risk.

Consider https://en.bitcoin.it/wiki/Proof_of_blockchain_fair_sharing as a basis for a solution.

The premise is that the longer a transaction has been in existence, the more important it becomes to the acceptance of the next block.

Hmm... What happens when someone tries to split the network by flooding different sets of transactions to different parts of the network?

For that matter, what happens with nonstandard transactions?

This creates an orphan risk for *failing* to include transactions (specifically the oldest outstanding transactions) that balances the orphan risk for *including* transactions (and using up block space to do it).

Hmm... Along those lines, would we possibly see ourselves in a situation where miners pay for high value transactions?

That part would be nice. But then, it'd only increase the means by which someone can buy up...the private keys to old coins, for example.
legendary
Activity: 924
Merit: 1132
February 16, 2014, 11:48:33 AM
I think that because the problem is mainly about orphan risk, it needs to be addressed specifically via orphan risk.

Consider https://en.bitcoin.it/wiki/Proof_of_blockchain_fair_sharing as a basis for a solution.

The premise is that the longer a transaction has been in existence, the more important it becomes to the acceptance of the next block.

Although the nodes may not all be looking at the exact same transaction list, if the numbers seen are similar, they should be computing a very similar 'credibility score' for the next block.  And if the 'credibility score' is too low, they ignore that block, mining on the previous block instead. 

This creates an orphan risk for *failing* to include transactions (specifically the oldest outstanding transactions) that balances the orphan risk for *including* transactions (and using up block space to do it). 

BDD = Number of bitcoin days destroyed
BD1 = Number of bitcoin days in fee-paid, nonconflicting tx 20 minutes or more old and not yet included in a block
BD2 = Number of bitcoin days in fee-paid, nonconflicting  tx 40 minutes old and not yet included in a block
BD3 = Number of bitcoin days in fee-paid, nonconflicting tx 60 minutes old and not yet included in a block
BD4 = Number of bitcoin days in fee-paid, nonconflicting tx 80 minutes old and not yet included in a block
etc.

So let
Credibility = BDD / (BDD + BD1 + 2^(BD2 + 2^(BD3 + 2^(BD4 + ...))))

And if we find a chain is too 'incredible' (credibility less than 0.000001 or so) we just ignore it and mine on a more credible chain (even if that more credible chain is just the current 'incredible' chain minus the last block).  Because miners are looking at different tx pools, or may have learned about the same tx at different times, they may calculate different 'credibility' scores, such that some accept a new block and some don't.  But this shouldn't matter much, because if 40% of the miners reject a block then that block gets a 40% orphan risk, which has a tangible cost to the miner producing it and gives miners a strong motivation to avoid creating the kind of blocks for which that might be an issue.

This appropriately prioritizes fee-paid nonconflicting transactions that have been waiting for the longest time, and has the benefit of allowing the blockchain to resist a more-than-51% attack at least in terms of making sure that no one can keep a valid but unfavored tx *out* of the blockchain forever.
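
For concreteness, here is a rough Python sketch of the score itself, assuming the node has already bucketed the outstanding fee-paid, nonconflicting transactions by age; the exponent clamping and the treatment of empty buckets are my own guesses, not part of the proposal:

Code:
def credibility(bdd, stale_buckets):
    # bdd           -- bitcoin-days destroyed by the candidate block
    # stale_buckets -- [BD1, BD2, BD3, ...]: bitcoin-days sitting in fee-paid,
    #                  nonconflicting txs >=20, >=40, >=60, ... minutes old
    #                  and still unconfirmed
    # Build the nested penalty 2^(BD2 + 2^(BD3 + 2^(BD4 + ...)))
    # from the innermost bucket outward; clamp the exponent so this
    # sketch doesn't overflow a float.
    penalty = 0.0
    for bd in reversed(stale_buckets[1:]):
        penalty = 2.0 ** min(bd + penalty, 1000.0)
    bd1 = stale_buckets[0] if stale_buckets else 0.0
    denominator = bdd + bd1 + penalty
    return bdd / denominator if denominator > 0 else 1.0

# A chain whose blocks score below ~0.000001 would simply be ignored.
print(credibility(bdd=5000.0, stale_buckets=[200.0, 10.0, 0.0]))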

Does it cause a problem that initially rejected blocks can get brought into the chain via a reorg, when they become part of a chain that is, thanks to a later block that includes the txs they missed, more 'credible'?
full member
Activity: 196
Merit: 100
February 10, 2014, 08:24:51 AM
The block subsidy is not going to halve as quickly as the bitcoin price will double (that is, if bitcoin is successful).  Fees will shrink in BTC terms as the bitcoin price rises (because the supply/demand for blockspace will be priced in accordance with its real value).  Unfortunately this means that the orphan cost is likely to get higher and higher compared to transaction fees.  IMO it could take many decades before a reasonable fee will outweigh the orphan cost.

We could use some numbers on exactly what the orphan cost is. Make sure to take into account that only 4 miners account for more than 75% of the hashing power. Also make sure to consider the benefits that netsplits have on mining income (lower difficulties).

As long as you have a fast connection to GHash.IO, your orphan cost should be little or nothing. Within the next few decades connection speeds should get even faster, and most likely mining operations will move closer to each other network-wise. But most importantly, within the next few decades the need to transfer the entire block after finding a solution should be eliminated - certainly for the larger miners, anyway.

Instead of mining pools, within the next few decades it's likely we'll see propagation pools. The mining will be kept separate, but the generation of the parts of the block other than the coinbase transaction will be pooled. Thus there won't be anything to propagate other than the coinbase transaction and the block header. The rest of the block gets propagated as the transactions come in. And only the previous block hash has to get propagated all the way to the actual physical mining equipment. The equipment which verifies the block can be kept at a location with a really fast Internet connection, while the ASICs can sit in some remote cabin in the tundra or whatever (they'll need a connection with reasonably low latency, but don't need much bandwidth).

All of this without any need for a hard fork, too.

Hard forks should be an absolute last resort.
member
Activity: 118
Merit: 10
February 10, 2014, 03:34:55 AM
The block subsidy is not going to halve as quickly as the bitcoin price will double (that is, if bitcoin is successful).  Fees will shrink in BTC terms as the bitcoin price rises (because the supply/demand for blockspace will be priced in accordance with its real value).  Unfortunately this means that the orphan cost is likely to get higher and higher compared to transaction fees.  IMO it could take many decades before a reasonable fee will outweigh the orphan cost.
legendary
Activity: 1792
Merit: 1111
February 09, 2014, 09:43:46 PM
What's wrong with the minimum block size idea?  If dummy data doesn't work then make it require real transactions.

Miners will only fill blocks with dummy transactions if they don't have enough available transactions in their mempool.  Otherwise they'll fill it with transactions that earn them fees.  If we can expect a certain transaction rate on the bitcoin network then the min block size could be set accordingly and so we shouldn't see much use of dummy transactions.

No, it won't work. Miners will simply create an extra output in the reward transaction, sending 0 bitcoin to OP_TRUE. They will then use it as an input for another OP_TRUE output, and repeat this process. Since the process is totally deterministic, miners won't need to broadcast these dummy transactions. Other miners will recreate the full block locally. So we go back to the current system: with fewer (real) transactions, the orphan rate is lower.

EDIT: The workaround suggested above does not work due to the 100-block maturity rule. A trivial fix is to put the dummy transaction as the second transaction in a block, with the rest derived in the same deterministic way.
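
To make the "everyone can rebuild it locally" point concrete, here is an abstract sketch of the derivation; it uses plain hashes as stand-ins for real transactions (no Bitcoin serialization, and the seed name is invented), so it only illustrates the determinism argument:

Code:
import hashlib

def padding_chain(seed_txid, count):
    # Deterministically derive a chain of dummy "transactions", each one
    # spending the single zero-value OP_TRUE output of the previous one.
    # Because the derivation depends only on the seed, any miner who has
    # the seed can rebuild the identical filler without it being broadcast.
    txids = []
    prev = seed_txid
    for _ in range(count):
        txid = hashlib.sha256(b"spend-op-true:" + prev).digest()
        txids.append(txid)
        prev = txid
    return txids

# Two nodes that agree only on the seed derive exactly the same padding.
assert padding_chain(b"seed", 5) == padding_chain(b"seed", 5)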
legendary
Activity: 1792
Merit: 1111
February 09, 2014, 09:40:21 PM
What if we required a minimum number of transactions

Don't you realize that my suggested workaround will also maximize the number of transactions?
donator
Activity: 1218
Merit: 1079
Gerald Davis
February 09, 2014, 07:55:47 PM
So what happens when there aren't enough tx to fill the min requirement?  The network just halts until there is enough?  Of course all this is academic.  You do understand that Bitcoin works on a consensus system and there will never be a consensus to change a core element of Bitcoin (barring possibly a change in crypto due to a cryptographic break).

Also, as the block subsidy declines and networks get faster and cheaper, there is less of an artificial subsidy in the network.  Miners which exclude paying tx will simply go bankrupt.  If the orphan cost is less than the tx fee, there is no economic reason not to include a given tx; the miner simply gets less revenue for the same amount of work.  Low barriers to entry will push miners' margins so low that leaving profit on the table will mean operating at a negative margin (mining to lose money).
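
As a back-of-the-envelope illustration of that last point (every number below is an assumption picked for the example, not a measurement):

Code:
# Illustrative figures only.
block_reward_btc   = 25.0      # subsidy at the time of this thread
orphan_risk_per_kb = 0.00002   # assumed extra orphan probability per extra KB
tx_size_kb         = 0.25      # a typical ~250-byte tx
tx_fee_btc         = 0.0005    # 0.5 mBTC fee

# Expected revenue lost by making the block this much bigger:
marginal_orphan_cost = block_reward_btc * orphan_risk_per_kb * tx_size_kb

# A rational miner includes the tx whenever the fee exceeds that cost.
print(f"orphan cost {marginal_orphan_cost:.6f} BTC vs fee {tx_fee_btc:.6f} BTC")
print("include" if tx_fee_btc > marginal_orphan_cost else "exclude")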

Up thread I already proposed a non-core change that would cut orphan costs by 90% or more.  That combined with the block subsidy decline will make this a non-issue.
newbie
Activity: 2
Merit: 0
February 09, 2014, 06:45:02 PM
What's wrong with the minimum block size idea?  If dummy data doesn't work then make it require real transactions.

Miners will only fill blocks with dummy transactions if they don't have enough available transactions in their mempool.  Otherwise they'll fill it with transactions that earn them fees.  If we can expect a certain transaction rate on the bitcoin network then the min block size could be set accordingly and so we shouldn't see much use of dummy transactions.

No, it won't work. Miners will simply create an extra output in the reward transaction, sending 0 bitcoin to OP_TRUE. They will then use it as an input for another OP_TRUE output, and repeat this process. Since the process is totally deterministic, miners won't need to broadcast these dummy transactions. Other miners will recreate the full block locally. So we go back to the current system: with fewer (real) transactions, the orphan rate is lower.

Excellent insight.  What if we required a minimum number of transactions, rather than a minimum block size?    There wouldn't be a way to work around that.   As an added benefit, fast blocks would tend to clean up low priority transactions.

It seems like we need to be looking at EVERYTHING possible to increase the scalability of the system, and this would fix one dimension.  

The new minimum transaction requirement could be eased in over time...starting at say 1/5 the median number of transactions.  Once the code was there, it would be easier to upgrade to 1/3 or even 1/2 in the future.  
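
Something along these lines, perhaps (the 1/5-of-median figure is from the post above; the window of recent blocks and the function names are just placeholders):

Code:
from statistics import median

def min_tx_requirement(recent_tx_counts, fraction=0.2):
    # Floor on the number of transactions the next block must carry,
    # derived from the median tx count of recent blocks.
    return max(1, int(fraction * median(recent_tx_counts)))

def meets_floor(block_tx_count, recent_tx_counts):
    return block_tx_count >= min_tx_requirement(recent_tx_counts)

recent = [350, 400, 420, 500, 390]      # tx counts of recent blocks
print(min_tx_requirement(recent))        # -> 80
print(meets_floor(75, recent))           # -> False: block would be rejected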

There is no value to the network in allowing near-empty blocks, and huge potential drawbacks.
full member
Activity: 196
Merit: 100
February 09, 2014, 10:21:56 AM
What's wrong with the minimum block size idea?  If dummy data doesn't work then make it require real transactions.

Miners will only fill blocks with dummy transactions if they don't have enough available transactions in their mempool.  Otherwise they'll fill it with transactions that earn them fees.  If we can expect a certain transaction rate on the bitcoin network then the min block size could be set accordingly and so we shouldn't see much use of dummy transactions.

No, it won't work. Miners will simply create an extra output in the reward transaction, sending 0 bitcoin to OP_TRUE. They will then use it as an input for another OP_TRUE output, and repeat this process. Since the process is totally deterministic, miners won't need to broadcast these dummy transactions. Other miners will recreate the full block locally. So we go back to the current system: with fewer (real) transactions, the orphan rate is lower.

Seems like quite a lot of software engineering work with relatively little benefit. If you really want to get together with other miners to speed up transaction propagation there are much better solutions. If you know the identity of the miner you can always start working on the new block as soon as you see the header, and check the validity of it a few seconds later when you receive the whole block. If the block winds up being invalid then you ignore those few seconds of work, and punish the miner who sent you the bad block. Alternatively, a trusted third party can verify blocks and sign the headers. Miners could send blocks to this third party ahead of time, so at the time the block is actually found very little needs to be transferred.

That said, introducing a minimum block size seems to me to be a drastic change, and I haven't seen enough evidence of a drastic problem to implement it. I think the focus at this point should be on raising the maximum block size and speeding up propagation.
legendary
Activity: 1792
Merit: 1111
February 09, 2014, 03:29:17 AM
What's wrong with the minimum block size idea?  If dummy data doesn't work then make it require real transactions.

Miners will only fill blocks with dummy transactions if they don't have enough available transactions in their mempool.  Otherwise they'll fill it with transactions that earn them fees.  If we can expect a certain transaction rate on the bitcoin network then the min block size could be set accordingly and so we shouldn't see much use of dummy transactions.

No, it won't work. Miners will simply create an extra output in the reward transaction, sending 0 bitcoin to OP_TRUE. They will then use it as an input for another OP_TRUE output, and repeat this process. Since the process is totally deterministic, miners won't need to broadcast these dummy transactions. Other miners will recreate the full block locally. So we go back to the current system: with fewer (real) transactions, the orphan rate is lower.
member
Activity: 118
Merit: 10
February 09, 2014, 02:14:43 AM
What's wrong with the minimum block size idea?  If dummy data doesn't work then make it require real transactions.

Miners will only fill blocks with dummy transactions if they don't have enough available transactions in their mempool.  Otherwise they'll fill it with transactions that earn them fees.  If we can expect a certain transaction rate on the bitcoin network then the min block size could be set accordingly and so we shouldn't see much use of dummy transactions.
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
February 09, 2014, 12:14:22 AM
I think a cap is still useful.  [...] I am much more in favor of raising the cap, possibly to 10 MB (or even 5 MB), to buy the community the time to fully analyze all the possible implications of future block sizes (no cap, floating cap, high fixed cap, etc).

I still like this...

If unlimited size is too worrying then the simplest safety net would be max block size = 10x the median block size over the previous 2016 blocks, i.e. the previous difficulty period.
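
For example, a sketch of that safety net (only the "10x the median over the previous difficulty period" rule comes from the post; the rest is filler):

Code:
from statistics import median

def max_block_size(prev_period_sizes_bytes, multiple=10):
    # Floating cap: 10x the median block size over the previous
    # 2016-block difficulty period.
    return multiple * median(prev_period_sizes_bytes)

# If the median block last period was 250,000 bytes,
# the cap for this period would be 2,500,000 bytes.
print(max_block_size([250000] * 2016))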
donator
Activity: 1218
Merit: 1079
Gerald Davis
February 08, 2014, 11:35:40 PM
Agreed.  Hopefully pools and major solo operators can see that their long term profitability is based MORE on the growth of Bitcoin than on the short term orphan cost.  Hopefully large pools are also aware they don't need universal support: if 70% of hashpower agrees to produce larger blocks (say 500KB avg) for the good of the network, it cuts the orphan cost by 70%.  Still, I think the orphan cost does highlight the reality that miners are going to be increasingly reluctant to devote more space to free tx.  This is something users will have to come to grips with.  Including a fee and waiting hours or days is unacceptable (although delays are often for non-fee reasons); a no-fee tx, however, should be considered charity by users, and if someone does you an act of charity in an hour, or a day, or a week, well, you got what you paid for.  The default behavior of most clients should probably be changed to include the "min fee" on ALL txs, not just low priority ones.  Users who want to could change this, but they should be warned "Including no tx fee may result in delayed confirmation times".  Enforcement for relaying at the node level should still only apply to low priority tx.

It is somewhat ironic that this is more of an issue due to the higher exchange rate.  The min fee on low priority tx was lowered due to rising exchange rate.  Today 3.3 mBTC is ~$1.50.  Ouch.  However if Bitcoins were worth less it would be less of a cost.  Since Bitcoin is often used as a proxy for USD a 5 mBTC fee (which more than covers orphan costs) is more viable at a lower exchange rate.
I love this thread.  Only 1 year ago the likes of Mister Bigg were going nuts saying miners would create humungous blocks full of 1-satoshi paying transactions, because there was no reason not to.  Therefore block size shouldn't be lifted.

Now people are moaning block size isn't increasing fast enough.

There's no better evidence that a block size limit is no longer necessary - apart from the odd Eligius block, no-one is even approaching 1MB.  Miners are forgoing the fees.  Just scrap the cap.  No-one is forcing miners to build on some 4MB block full of spam transactions; they can always ignore it and teach the spammer a lesson.  Free markets will sort it out.

And with 1 BTC worth $800 or more, screwing around is no longer cheap.

I think a cap is still useful.  The ability to massively bloat the blockchain remains a viable threat, and at a tiny fraction of what it would cost to 51% the network.  Bitcoin has a somewhat unique cost vs benefit scenario: there is actually little direct cost to putting a tx in a block, but the true cost is the total storage, bandwidth, and memory requirements of all the full nodes combined, and most nodes are not miners.

While miners who are economically motivated are unlikely to do something stupid (like create a 1 GB block full of millions of txs with a 1 satoshi fee), an attacker may not be economically motivated, and at this point dumping hundreds of thousands of GB of additional blockchain data could slow adoption.

That being said, I am much more in favor of raising the cap, possibly to 10 MB (or even 5 MB), to buy the community the time to fully analyze all the possible implications of future block sizes (no cap, floating cap, high fixed cap, etc).
donator
Activity: 668
Merit: 500
February 08, 2014, 11:11:46 PM
Agreed.  Hopefully pools and major solo operators can see that their long term profitability is based MORE on the growth of Bitcoin than on the short term orphan cost.  Hopefully large pools are also aware they don't need universal support: if 70% of hashpower agrees to produce larger blocks (say 500KB avg) for the good of the network, it cuts the orphan cost by 70%.  Still, I think the orphan cost does highlight the reality that miners are going to be increasingly reluctant to devote more space to free tx.  This is something users will have to come to grips with.  Including a fee and waiting hours or days is unacceptable (although delays are often for non-fee reasons); a no-fee tx, however, should be considered charity by users, and if someone does you an act of charity in an hour, or a day, or a week, well, you got what you paid for.  The default behavior of most clients should probably be changed to include the "min fee" on ALL txs, not just low priority ones.  Users who want to could change this, but they should be warned "Including no tx fee may result in delayed confirmation times".  Enforcement for relaying at the node level should still only apply to low priority tx.

It is somewhat ironic that this is more of an issue due to the higher exchange rate.  The min fee on low priority tx was lowered due to rising exchange rate.  Today 3.3 mBTC is ~$1.50.  Ouch.  However if Bitcoins were worth less it would be less of a cost.  Since Bitcoin is often used as a proxy for USD a 5 mBTC fee (which more than covers orphan costs) is more viable at a lower exchange rate.
I love this thread.  Only 1 year ago the likes of Mister Bigg were going nuts saying miners would create humungous blocks full of 1-satoshi paying transactions, because there was no reason not to.  Therefore block size shouldn't be lifted.

Now people are moaning block size isn't increasing fast enough.

There's no better evidence that a block size limit is no longer necessary - apart from the odd Eligius block, no-one is even approaching 1MB.  Miners are forgoing the fees.  Just scrap the cap.  No-one is forcing miners to build on some 4MB block full of spam transactions; they can always ignore it and teach the spammer a lesson.  Free markets will sort it out.

And with 1 BTC worth $800 or more, screwing around is no longer cheap.
full member
Activity: 196
Merit: 100
February 08, 2014, 08:42:53 AM
I still believe counting the total bitcoin-days-destroyed is the most practical way to address this issue. In this case empty blocks are disadvantaged. I guess Satoshi thought the same? How could this be exploited?

Disadvantaged how? If later blocks take precedence over earlier blocks, because they have more bitcoin-days-destroyed, it makes it easier for someone who controls lots of bitcoin-days (like the FBI) to mount an attack.

The longest chain must always win. In case there are 2 or more branches with the same length, the one with more bitcoin-days-destroyed will win.

So if you have a block with a lot of bitcoin-days-destroyed, you wait to release it while trying to build on it. That's 'selfish mining' where you win nearly 100% of the races, without the need to mount a Sybil attack.

The only tweak to "first received longest chain wins" that I can see being appropriate is to delay less-than-full blocks for the amount of time it would have taken to send a full block.

That said, there are other, bigger changes which might work better. What if block headers are propagated immediately, and the winner is determined by the first header seen, so long as the entire block corresponding to that header is seen within X seconds (maybe 30 seconds)?
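
Roughly, the acceptance rule might look like this sketch (the 30-second window is the figure floated above; the class and method names are invented, and real propagation would need much more care):

Code:
import time

HEADER_TIMEOUT = 30.0   # seconds the full block may trail its header

class HeaderFirstTracker:
    # Treat the first header seen as the provisional winner, but only
    # keep that status if the matching full block arrives in time.
    def __init__(self):
        self.pending = {}   # header_hash -> time the header was first seen

    def on_header(self, header_hash):
        self.pending.setdefault(header_hash, time.monotonic())

    def on_block(self, header_hash):
        seen = self.pending.pop(header_hash, None)
        if seen is None:
            return False    # header unknown or already expired
        return time.monotonic() - seen <= HEADER_TIMEOUT

    def expire_stale(self):
        now = time.monotonic()
        for h, seen in list(self.pending.items()):
            if now - seen > HEADER_TIMEOUT:
                del self.pending[h]   # block never arrived; drop the claim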

An even bigger change which is worth considering is to propagate blocks in erasure-coded pieces over UDP, allowing peers to download pieces from multiple peers at once a la bittorrent. (This could also be done using TCP, making some of the flow-control issues easier, but at the expense of significant inefficiency and rendering multicasting out of the question. Ideally I think I'd move everything or almost everything to UDP, with TCP being a fallback for those behind restrictive firewalls.)
legendary
Activity: 1792
Merit: 1111
February 07, 2014, 11:16:25 PM
I still believe counting the total bitcoin-days-destroyed is the most practical way to address this issue. In this case empty blocks are disadvantaged. I guess Satoshi thought the same? How could this be exploited?

Disadvantaged how? If later blocks take precedence over earlier blocks, because they have more bitcoin-days-destroyed, it makes it easier for someone who controls lots of bitcoin-days (like the FBI) to mount an attack.

The longest chain must always win. In case there are 2 or more branches with the same length, the one with more bitcoin-days-destroyed will win.
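
In other words, the fork choice is just a two-key comparison; a minimal sketch (the chain records and field names here are invented for illustration):

Code:
def best_chain(tips):
    # Longest chain wins; bitcoin-days-destroyed only breaks ties
    # between chains of equal length.
    return max(tips, key=lambda c: (c["length"], c["bdd"]))

tips = [
    {"name": "A", "length": 300000, "bdd": 1000000},
    {"name": "B", "length": 300000, "bdd": 2500000},
    {"name": "C", "length": 299999, "bdd": 9999999},
]
print(best_chain(tips)["name"])   # -> "B": ties A on length, has more BDD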
full member
Activity: 196
Merit: 100
February 07, 2014, 10:49:25 PM
I still believe counting the total bitcoin-days-destroyed is the most practical way to address this issue. In this case empty blocks are disadvantaged. I guess Satoshi thought the same? How could this be exploited?

Disadvantaged how? If later blocks take precedence over earlier blocks, because they have more bitcoin-days-destroyed, it makes it easier for someone who controls lots of bitcoin-days (like the FBI) to mount an attack.
full member
Activity: 196
Merit: 100
February 07, 2014, 10:45:21 PM
This is TOTALLY useless. If the content of the junk is dynamic but deterministic (e.g. repeatedly hashing the last block), miners don't need to transfer the junk because everyone knows the content. If the content is unspecified, all miners will fill it with 0s. So, again, they don't need to transfer the junk because everyone knows the content.

The point is that any block with size less than the minimum size would be disallowed by the protocol. So it wouldn't matter if all the other nodes knew what the junk values would be.

Instead of actually sending junk data, why not just sleep for the time it would have taken to receive the junk data?