
Topic: How will XT be good with regards to the packet frame?

newbie
Activity: 28
Merit: 0
Multiple solutions have been proposed to alleviate the block size debate, such as 8 MB blocks, 20 MB blocks, and dynamic block size limits, among others. However, Bitcoin XT is gaining traction as a viable solution: it would create a hard fork in the protocol, permanently splitting the blockchain into two different ledgers.
sr. member
Activity: 392
Merit: 268
Tips welcomed: 1CF4GhXX1RhCaGzWztgE1YZZUcSpoqTbsJ
A block never fit into a single packet.
The relay network protocol fits a significant fraction of all blocks into a single packet (about a third, ... about 60% fit in two packets), in fact.



Is this because blocks are extremely small, block headers are sent, or some kind of large packet/jumbogram is used?
staff
Activity: 4284
Merit: 8808
A block never fit into a single packet.
The relay network protocol fits a significant fraction of all blocks into a single packet (about a third, ... about 60% fit in two packets), in fact.

sr. member
Activity: 392
Merit: 268
Tips welcomed: 1CF4GhXX1RhCaGzWztgE1YZZUcSpoqTbsJ
Since, at the time, a block was sent as a single network message, there was indirectly a 32 MB limit on the size of the block, but there wasn't any size limit specifically intended for blocks.
Got it - so blocks are not sent as a single network message now (I haven't kept up with the latest changes)?

I haven't followed the recent changes very closely, so I'm not sure what has been implemented yet, but there was an idea that was being discussed about sending the block headers first, separate from the rest of the block. Since most nodes would already have heard about most of the transactions that are in the block, back when they relayed them in the first place, nodes would only need to request the few transactions that they were missing.  This would reduce the overall bandwidth needed to operate a full node, and would increase the speed at which a new block was propagated through the network. Obviously for this to happen, the block would no longer be a single network message.

I'm not sure what the plans were for the initial download on a new node, but I thought "headers first" had already been implemented there?

I'm led to believe that headers are sent first; bitcoin-cli getblockchaininfo returns a number of headers roughly equal to the number of blocks in total, even during a sync. I'm not certain, but my assumption would be that this is in fact header-first transfer, and not just bad wording. (I'm also assuming it's unrelated to the inv messages sent over P2P to describe available blocks).

Where does the 32 MB network message limit come from? Is it 32 MiB because of the number of bits in a length field? (Also not in regard to a block size limitation.)
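The headers-first behavior described above can be sketched as a toy model (this is a hypothetical illustration, not Bitcoin Core code; the class and method names are made up). A syncing node first fetches the full header chain, which is cheap to validate, and only then backfills full blocks, which is why getblockchaininfo can report a headers count well ahead of the blocks count mid-sync:

```python
# Minimal sketch of headers-first sync (hypothetical model, not Bitcoin Core code).
# A syncing node first fetches the header chain (80 bytes per serialized header),
# then backfills full blocks; the "headers" count therefore runs ahead of the
# "blocks" count, matching what getblockchaininfo reports during a sync.

HEADER_SIZE = 80  # serialized block header size in bytes

class SyncingNode:
    def __init__(self):
        self.headers = []   # validated header chain
        self.blocks = []    # fully downloaded blocks

    def receive_headers(self, headers):
        # Phase 1: extend the best header chain (cheap to download and check).
        self.headers.extend(headers)

    def download_blocks(self, peer_blocks, batch=16):
        # Phase 2: fetch full blocks, only up to the known header chain tip.
        want = self.headers[len(self.blocks):len(self.blocks) + batch]
        for h in want:
            self.blocks.append(peer_blocks[h])

chain = {f"hash{i}": f"block{i}" for i in range(100)}
node = SyncingNode()
node.receive_headers(list(chain))           # all 100 headers arrive quickly
node.download_blocks(chain)                 # only 16 full blocks fetched so far
print(len(node.headers), len(node.blocks))  # prints "100 16"
```

The point of the two phases is exactly the bandwidth argument quoted above: headers are tiny, so a node can cheaply learn the best chain first and fetch only the heavy block data it is actually missing.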
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
I'm not sure what the plans were for the initial download on a new node, but I thought "headers first" had already been implemented there?

Yes, I am pretty sure "headers first" has already been implemented (in 0.10.0); I just wasn't sure about the limits with regard to the block size (and wasn't meaning to get into the whole block size debate).
legendary
Activity: 3472
Merit: 4801
Since, at the time, a block was sent as a single network message, there was indirectly a 32 MB limit on the size of the block, but there wasn't any size limit specifically intended for blocks.
Got it - so blocks are not sent as a single network message now (I haven't kept up with the latest changes)?

I haven't followed the recent changes very closely, so I'm not sure what has been implemented yet, but there was an idea that was being discussed about sending the block headers first, separate from the rest of the block. Since most nodes would already have heard about most of the transactions that are in the block, back when they relayed them in the first place, nodes would only need to request the few transactions that they were missing.  This would reduce the overall bandwidth needed to operate a full node, and would increase the speed at which a new block was propagated through the network. Obviously for this to happen, the block would no longer be a single network message.

I'm not sure what the plans were for the initial download on a new node, but I thought "headers first" had already been implemented there?
legendary
Activity: 3472
Merit: 4801
Note that the debate about when and how to go about increasing the block size isn't something new.  This is a discussion that has pretty much been continuous since the limit was first put in place. As early as October 2010 (just 3 months after the limit was first added) Satoshi was responding to questions about it.

It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.
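Satoshi's phased-in approach amounts to a height-gated consensus rule, which can be sketched as follows (only the 115000 trigger height comes from the quote; the limit values and function names are illustrative):

```python
# Sketch of Satoshi's proposed phase-in: the consensus rule switches to a
# larger limit once the chain passes a flag-day block height. All numbers
# except the 115000 trigger height from the quote are illustrative.

OLD_LIMIT = 1_000_000       # 1 MB, the limit added in July 2010
LARGER_LIMIT = 8_000_000    # hypothetical raised limit
FORK_HEIGHT = 115_000       # trigger height from Satoshi's example

def max_block_size(block_height):
    # Nodes upgraded well in advance all switch rules at the same block,
    # so by the time the height is reached, old versions are obsolete.
    if block_height > FORK_HEIGHT:
        return LARGER_LIMIT
    return OLD_LIMIT

def block_valid(block_height, serialized_size):
    return serialized_size <= max_block_size(block_height)

print(block_valid(115_000, 2_000_000))  # False: old limit still applies
print(block_valid(115_001, 2_000_000))  # True: larger limit now in effect
```

Because every upgraded node applies the same rule at the same height, the change activates atomically across the network rather than depending on each node's upgrade date.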
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
Since, at the time, a block was sent as a single network message, there was indirectly a 32 MB limit on the size of the block, but there wasn't any size limit specifically intended for blocks.

Got it - so blocks are not sent as a single network message now (I haven't kept up with the latest changes)?
legendary
Activity: 3472
Merit: 4801
Originally, there was no limit.
I thought it was 32MB (and was later reduced to 1MB) - are you sure no limit?

There was a 32 MB limit on the maximum network message size. This applied to anything sent in a single network message. Since, at the time, a block was sent as a single network message, there was indirectly a 32 MB limit on the size of the block, but there wasn't any size limit specifically intended for blocks.

Then the MAX_BLOCK_SIZE limitation was added to the code in July 2010 with a 1 MB limit.
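The two limits described above can be sketched as a simple pair of checks (assuming the commonly cited constants: a 1,000,000-byte MAX_BLOCK_SIZE and a 0x02000000-byte network message cap; the function name is illustrative):

```python
# Sketch of the two limits discussed above (constants as commonly cited;
# the check_block_size function name is illustrative, not Bitcoin source).

MAX_BLOCK_SIZE = 1_000_000      # explicit 1 MB block limit, added July 2010
MAX_MESSAGE_SIZE = 0x02000000   # 32 MiB cap on any single network message

def check_block_size(serialized_block: bytes) -> bool:
    # Before July 2010, only the 32 MiB message cap applied (indirectly);
    # afterwards, an explicit per-block limit was enforced as well.
    return len(serialized_block) <= MAX_BLOCK_SIZE

print(MAX_MESSAGE_SIZE)                       # 33554432 bytes = 32 MiB
print(check_block_size(b"\x00" * 1_000_000))  # True: exactly at the limit
print(check_block_size(b"\x00" * 1_000_001))  # False: one byte over
```

Note that 0x02000000 = 32 * 1024 * 1024, which is why the indirect pre-2010 limit is quoted as 32 MB (strictly, 32 MiB).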
legendary
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
Originally, there was no limit.

I thought it was 32MB (and was later reduced to 1MB) - are you sure no limit?
legendary
Activity: 3472
Merit: 4801
Whoops, I made a calculation error there.

Was there a specific reason for the 1MB limit?

Originally, there was no limit.

Then, after bitcoin had been around for a while, there was a discussion about possible attacks if a miner were to create excessively large blocks.

1 MB was arbitrarily chosen as something that was big enough to last a while, yet small enough to avoid significant issues if many blocks were created at that size. This was done with the idea that it could always be increased later when necessary. Clearly, not enough thought was put into how difficult it might be to gain agreement on that increase.
full member
Activity: 129
Merit: 119
Whoops, I made a calculation error there.

Was there a specific reason for the 1MB limit?
sr. member
Activity: 392
Merit: 268
Tips welcomed: 1CF4GhXX1RhCaGzWztgE1YZZUcSpoqTbsJ
A block never fit into a single packet. The maximum size of an Ethernet frame is about 1,526 bytes (a 1,500-byte MTU payload), not megabytes. It takes around 700 packets/frames to send a 1 MB block, assuming every frame is filled to the MTU.

Also, some odd radio links designed for extremely low signal conditions take minutes to send data.
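The "~700 frames" figure above can be checked with quick arithmetic (a sketch, assuming a standard 1,500-byte Ethernet MTU and roughly 40 bytes of IP+TCP header overhead per packet):

```python
# Back-of-the-envelope check of the "~700 frames per block" figure.
# Assumes a standard 1,500-byte Ethernet MTU and ~40 bytes of IP+TCP
# headers per packet, leaving ~1,460 bytes of usable payload each.

import math

MTU = 1500                       # standard Ethernet MTU in bytes
IP_TCP_OVERHEAD = 40             # typical IPv4 (20) + TCP (20) headers
PAYLOAD = MTU - IP_TCP_OVERHEAD  # 1460 bytes of usable payload per packet

BLOCK_SIZE = 1_000_000           # a full 1 MB block

frames = math.ceil(BLOCK_SIZE / PAYLOAD)
print(frames)  # prints 685, i.e. roughly 700 full-size frames per block
```

So even a 1 MB block is three orders of magnitude too large for a single frame, regardless of whether XT raises the limit.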
full member
Activity: 129
Merit: 119
I read about this XT addition, and wonder: Is it really a good idea?

A normal Ethernet frame has a maximum size of 1,526 bytes, and after headers and other overhead the usable frame is roughly 1,500 bytes. Subtract the IP header, which is normally 20 bytes; but let's be harsh and overdo it a little, and subtract 30 bytes. That leaves 1,470 bytes.
Now let's add 1 MB of transactions: 446 bytes.
Also add the block header, block hashes, and signatures, and you will scrape a bit off of that too.

Then imagine sandwiching this in a VPN or anything similar, and you understand why XT is a bad idea: a block will no longer fit into a single packet, and the packet needs to be fragmented.


Wouldn't it be better to hard-fork the chain to a faster block distribution rate, say a block every minute instead of every 10 minutes? Then you get the same increased transaction capacity, but with each block still fitting into a single packet. I don't think there is a risk of a soft fork due to a net split, because I have never heard of a link that takes more than 1 minute to deliver a packet.