
Topic: Block size targeting algo based on average propagation times for blocks? (Read 922 times)

legendary
Activity: 1232
Merit: 1094
The detection mechanism for orphan blocks makes sense for miners, but it does not take the entire network into consideration. An overwhelming majority of blocks created today are mined by pools, which are hosted in datacenters with excellent network connectivity, so propagation between miners is usually fast. However, allowing the block size to grow reduces the usability of the bitcoin network: by increasing the block size, clients (especially full nodes) are forced to store extra transactions. This cost cannot be reliably measured by orphan rates alone, because those nodes do not mine.

Verification of the chain could be performed in a distributed way by providing a way to propagate proof that the chain is fake.

You need a way to say that data is missing, though.  If two chains were "in combat", you could have a system where one chain can claim the other chain has lost data.

legendary
Activity: 1722
Merit: 1217
The detection mechanism for orphan blocks makes sense for miners, but it does not take the entire network into consideration. An overwhelming majority of blocks created today are mined by pools, which are hosted in datacenters with excellent network connectivity, so propagation between miners is usually fast. However, allowing the block size to grow reduces the usability of the bitcoin network: by increasing the block size, clients (especially full nodes) are forced to store extra transactions. This cost cannot be reliably measured by orphan rates alone, because those nodes do not mine.

damn it. good point.
legendary
Activity: 2058
Merit: 1452
The detection mechanism for orphan blocks makes sense for miners, but it does not take the entire network into consideration. An overwhelming majority of blocks created today are mined by pools, which are hosted in datacenters with excellent network connectivity, so propagation between miners is usually fast. However, allowing the block size to grow reduces the usability of the bitcoin network: by increasing the block size, clients (especially full nodes) are forced to store extra transactions. This cost cannot be reliably measured by orphan rates alone, because those nodes do not mine.
legendary
Activity: 1722
Merit: 1217
if you leave it to the miners to decide the orphan rate, they can always game the system to their advantage.

The network would simply reject dishonest blocks, just as it does with miners who attempt to game the difficulty retargets.
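For comparison, here is a rough Python sketch (the block and chain field names are assumptions, not real Bitcoin data structures) of how every node recomputes a retarget rule purely from chain data and rejects blocks that claim anything else; an orphan-link-based block size rule could be checked the same way.

Code:
TARGET_TIMESPAN = 14 * 24 * 60 * 60   # two weeks, in seconds
RETARGET_INTERVAL = 2016              # blocks per difficulty period

def expected_target(chain, height):
    """Recompute the difficulty target every node must agree on at `height`."""
    prev = chain[height - 1]
    if height % RETARGET_INTERVAL != 0:
        return prev.target                      # no retarget at this height
    first = chain[height - RETARGET_INTERVAL]
    actual = prev.timestamp - first.timestamp
    # clamp the adjustment to a factor of 4, as Bitcoin does
    actual = max(TARGET_TIMESPAN // 4, min(actual, TARGET_TIMESPAN * 4))
    return prev.target * actual // TARGET_TIMESPAN

def is_dishonest(block, chain):
    """A block claiming any other target is simply rejected by the network."""
    return block.target != expected_target(chain, block.height)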
legendary
Activity: 2058
Merit: 1452
if you leave it to the miners to decide the orphan rate, they can always game the system to their advantage.
legendary
Activity: 1722
Merit: 1217
Ideally it should be possible to compute difficulty updates based purely on the blockchain.

One way to measure block propagation time is the orphan rate.  At the moment, a block only has a link to the previous block.  However, you could add a second, optional link to an orphan.

I suggest the following protocol update.

In the coinbase transaction, add an optional "ORP" field, similar to the pay-to-script-hash flag.

/ORP<32-byte hash of orphan>/ means that there was an orphan with the given hash within the current difficulty period.

A bare /ORP/ would just be an optional show of support.

Miners should reject blocks that provide a 32-byte hash if the hash doesn't target a first-order orphan within the last difficulty period (each orphan can only be targeted once).

Orphan headers must be stored by nodes, but the block data can be discarded.  This information is required to verify that the orphan was real.

If 95% of the network adds the ORP field, then blocks larger than 1 MB would be allowed.  This could be a "point of no return" event.

However, since changing the maximum block size is a hard fork anyway, maybe a separate vote isn't required.

If the orphan rate were less than 7.5%, MAX_BLOCK_SIZE would increase by 10%; if it were more than 15%, it would drop by 10%.  The exact numbers are open to discussion.  Updates would happen at the end of a difficulty period.

Most blocks would have at most a bare /ORP/ field, so it doesn't use up much of the coinbase.

It would be great if unbalanced merkle trees were allowed, so that the coinbase doesn't need to be buried so deep in the merkle tree.

If miners wanted to increase the block size, a mining cartel could try to enforce a rule about ORP blocks: if more than 50% refused to include links to orphans, then all other miners might decide to drop them too.

However, users of the network would see that orphans were happening and not being included.  Pools which agree to include them could try to draw off hashing power.

The key point is that it puts a clear definition of network problems into the blockchain.  High orphan rates are inherently linked to bandwidth being the limiting factor.

[Edit]

It might also be worth adding a rule that the median block size within the difficulty period must be greater than 50% of MAX_BLOCK_SIZE, or no increase happens.  This prevents the limit from growing during periods when it wasn't being used anyway.

It seems airtight. I can't think of any way that an attacker could artificially affect the orphan rate without great cost, a cost that would be more than prohibitive. Do you know if the devs have anything like this on their radar?
legendary
Activity: 1232
Merit: 1094
Ideally it should be possible to compute difficulty updates based purely on the blockchain.

One way to measure block propagation time is the orphan rate.  At the moment, a block only has a link to the previous block.  However, you could add a second, optional link to an orphan.

I suggest the following protocol update.

In the coinbase transaction, add an optional "ORP" field, similar to the pay-to-script-hash flag.

/ORP<32-byte hash of orphan>/ means that there was an orphan with the given hash within the current difficulty period.

A bare /ORP/ would just be an optional show of support.
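A minimal sketch of how such a field might be pulled out of the coinbase scriptSig. The /ORP<hash>/ and bare /ORP/ forms are the ones proposed here; the helper itself, and treating the field as raw bytes in the scriptSig, are assumptions for illustration only.

Code:
def parse_orp_field(coinbase_script: bytes):
    """Return (supports_orp, orphan_hash) for the proposed /ORP.../ coinbase field."""
    i = coinbase_script.find(b"/ORP")
    if i < 0:
        return False, None                       # no ORP field at all
    rest = coinbase_script[i + 4:]
    if rest[:1] == b"/":
        return True, None                        # bare /ORP/: show of support only
    if len(rest) >= 33 and rest[32:33] == b"/":
        return True, rest[:32]                   # /ORP<32-byte orphan hash>/
    raise ValueError("malformed ORP field")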

Miners should reject blocks that provide a 32-byte hash if the hash doesn't target a first-order orphan within the last difficulty period (each orphan can only be targeted once).

Orphan headers must be stored by nodes, but the block data can be discarded.  This information is required to verify that the orphan was real.

If 95% of the network adds the ORP field, then blocks larger than 1 MB would be allowed.  This could be a "point of no return" event.

However, since changing the maximum block size is a hard fork anyway, maybe a separate vote isn't required.

If the orphan rate were less than 7.5%, MAX_BLOCK_SIZE would increase by 10%; if it were more than 15%, it would drop by 10%.  The exact numbers are open to discussion.  Updates would happen at the end of a difficulty period.

Most blocks would have at most a bare /ORP/ field, so it doesn't use up much of the coinbase.

It would be great if unbalanced merkle trees were allowed, so that the coinbase doesn't need to be buried so deep in the merkle tree.

If miners wanted to increase the block size, a mining cartel could try to enforce a rule about ORP blocks: if more than 50% refused to include links to orphans, then all other miners might decide to drop them too.

However, users of the network would see that orphans were happening and not being included.  Pools which agree to include them could try to draw off hashing power.

The key point is that it puts a clear definition of network problems into the blockchain.  High orphan rates are inherently linked to bandwidth being the limiting factor.

[Edit]

It might also be worth adding a rule that the median block size within the difficulty period must be greater than 50% of MAX_BLOCK_SIZE, or no increase happens.  This prevents the limit from growing during periods when it wasn't being used anyway.
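Putting the thresholds and the median-size condition together, the end-of-period update might look something like this sketch (block fields such as orphan_link and size are assumed; the 7.5%/15% thresholds and the 10% step are the ones suggested above).

Code:
from statistics import median

def update_max_block_size(max_block_size, period_blocks):
    """period_blocks: the 2016 blocks of one difficulty period."""
    linked = sum(1 for b in period_blocks if b.orphan_link is not None)
    orphan_rate = linked / len(period_blocks)

    if orphan_rate > 0.15:
        return int(max_block_size * 0.90)        # too many orphans: shrink the limit
    if orphan_rate < 0.075:
        # only grow if the limit was actually being pressed against
        if median(b.size for b in period_blocks) > 0.5 * max_block_size:
            return int(max_block_size * 1.10)
    return max_block_size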
legendary
Activity: 1722
Merit: 1217
There's also a problem of how each node can perform a census of the ENTIRE network. Each node has a limited number of connections to the network, so it can only "see" the propagation time of the nodes it's connected to. This raises the problem of clients being forked, because one set of clients may see the max block size as 1 MB, while another set may see it as 1.1 MB. As with any census of online networks, you obviously can't rely on hearsay from other nodes, because they can feed you false information. Once a block's size falls between the differently calculated maximums, the network may become permanently split.

Well, that could be solved by having the person who minted a block record the time he released it, as well as the exact time he received the block before it. Sure, sometimes a block would be received by a miner directly connected to the peer who minted it, and sometimes it would have to travel through several nodes first, so this data would never tell us how long a specific block took to propagate, but that noise would average out when taking many data points into consideration.
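Something like the following sketch, where each block carries the time its miner released it and the time that miner first received the previous block (both fields are hypothetical, and honest reporting is assumed). Any single pair is noisy, but the mean over many blocks estimates the typical propagation delay.

Code:
from statistics import mean

def average_propagation_delay(blocks):
    """Estimate how long blocks take to propagate, using self-reported times."""
    # delay for block N = (when the next miner first saw block N)
    #                   - (when block N's miner says it was released)
    delays = [nxt.prev_received_at - blk.released_at
              for blk, nxt in zip(blocks, blocks[1:])]
    return mean(delays)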

Still, those other two problems you mentioned seem pretty damning.
legendary
Activity: 2058
Merit: 1452
There's also a problem of how each node can perform a census of the ENTIRE network. Each node has a limited number of connections to the network, so it can only "see" the propagation time of the nodes it's connected to. This raises the problem of clients being forked, because one set of clients may see the max block size as 1 MB, while another set may see it as 1.1 MB. As with any census of online networks, you obviously can't rely on hearsay from other nodes, because they can feed you false information. Once a block's size falls between the differently calculated maximums, the network may become permanently split.
legendary
Activity: 1722
Merit: 1217
The propagation time is VERY fast, so I can't see how you can reliably calculate the propagation time.

That is the question: can it be reliably calculated? I hope someone smarter than me knows the answer to this.

Oh, also: ideally we would have larger blocks at some point in the future; would 10 MB blocks still propagate really fast?
The problem is that the variance of each computer's internal clock is greater than or equal to the time it takes to transmit one block. Another problem is that malicious attackers could set up fake nodes (on compromised systems or cloud hosting services) to skew the measured network propagation time in their favor.

Yes, both problems had crossed my mind.
legendary
Activity: 2058
Merit: 1452
The propagation time is VERY fast, so I can't see how you can reliably calculate the propagation time.

That is the question: can it be reliably calculated? I hope someone smarter than me knows the answer to this.

Oh, also: ideally we would have larger blocks at some point in the future; would 10 MB blocks still propagate really fast?
The problem is that the variance of each computer's internal clock is greater than or equal to the time it takes to transmit one block. Another problem is that malicious attackers could set up fake nodes (on compromised systems or cloud hosting services) to skew the measured network propagation time in their favor.
legendary
Activity: 1722
Merit: 1217
The propagation time is VERY fast, so I can't see how you can reliably calculate the propagation time.

That is the question: can it be reliably calculated? I hope someone smarter than me knows the answer to this.

Oh, also: ideally we would have larger blocks at some point in the future; would 10 MB blocks still propagate really fast?
legendary
Activity: 2058
Merit: 1452
The propagation time is VERY fast, so I can't see how you can reliably calculate the propagation time.
legendary
Activity: 1722
Merit: 1217
Would it be possible, instead of raising the bitcoin block size with repeated hard forks as technology advances, to create an algorithm where the block size was adjusted based on the amount of time it took for a block of a given size to propagate across some percentage of the network?

So, for example, let's say we decide that blocks taking 4 minutes to propagate across 90% of the network is healthy and optimal, since it gives miners plenty of time (6 minutes) to mine on a given block. Could we create a block size re-targeting algorithm similar to the difficulty re-targeting algorithm? So if blocks are taking 3 minutes to propagate, we would increase the max block size, and if they are taking 5 minutes to propagate, we would decrease it.
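A rough sketch of that retargeting idea, assuming there were some agreed-upon measurement of how long blocks took to reach 90% of the network in the last period (which is exactly the hard part); the target, the clamp, and all names are illustrative only.

Code:
TARGET_PROPAGATION = 4 * 60          # want blocks to reach 90% of nodes in 4 minutes

def retarget_block_size(max_block_size, measured_propagation):
    """Adjust the limit toward the propagation target, like a difficulty retarget."""
    ratio = TARGET_PROPAGATION / measured_propagation
    ratio = max(0.75, min(ratio, 1.25))    # damp swings, analogous to difficulty's 4x clamp
    return int(max_block_size * ratio)

# e.g. blocks taking 3 minutes -> limit grows; blocks taking 5 minutes -> limit shrinks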
