Topic: small blocks actually making it harder to run a node? (Read 1154 times)

legendary
Activity: 3472
Merit: 4801
Smaller blocks with shorter generation times would solve this.

No, it wouldn't.

This would be a better solution than creating monolithic blocks that may not be filled.

If they are not filled, then they are not monolithic.

Blocks are only as big as the number of bytes worth of transactions they contain.  When people talk about increasing the blocksize, they are talking about increasing the maximum size allowed per block.  This allows a block to contain more transactions, because it takes more transactions before the block hits the limit.

Every block also has an 80 byte header.

Let's look at an example. Let's say:

  • You have 9999920 bytes worth of unconfirmed transactions
  • You allow a maximum block size of 10000000 bytes
  • The block is solved after 10 minutes

Ten minutes later you confirm all the transactions in a single block. After you add the one 80 byte header, you will have added 10000000 bytes to the blockchain.

If, instead:
  • You have the same 9999920 bytes worth of unconfirmed transactions
  • You allow a maximum block size of 500000 bytes
  • A block is solved every 30 seconds

Then after 20 blocks you'll again have added 10000000 bytes to the blockchain, BUT . . .

Since each block requires its own 80 byte header, you'll only have confirmed:

(500000 - 80) * 20 = 499920 * 20 = 9998400 bytes worth of transactions.

Since you started with 9999920 bytes worth of unconfirmed transactions and you only confirmed 9998400 bytes, you still have:
9999920 - 9998400 = 1520 bytes worth of unconfirmed transactions.

You could solve this problem of confirming fewer transactions by allowing the blocksize to be a bit bigger than 500000 bytes, but then the total number of bytes added to the blockchain every 10 minutes would be larger.

Having smaller blocks that are generated faster will either result in a blockchain that grows faster, or fewer transactions confirmed in a given amount of time.
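The arithmetic above can be checked with a short sketch. The byte counts and block counts are the ones from this example, not protocol constants or real node code:

```python
HEADER_BYTES = 80
PENDING = 9_999_920  # bytes of unconfirmed transactions in the example

def confirmed_and_growth(max_block_size, blocks):
    """Return (transaction bytes confirmed, bytes added to the chain)."""
    per_block_tx = max_block_size - HEADER_BYTES
    confirmed = min(PENDING, per_block_tx * blocks)
    # each solved block adds its 80 byte header plus the transactions it holds
    growth = blocks * HEADER_BYTES + confirmed
    return confirmed, growth

big = confirmed_and_growth(10_000_000, 1)   # one block in 10 minutes
small = confirmed_and_growth(500_000, 20)   # twenty blocks in 10 minutes
print(big)    # (9999920, 10000000)
print(small)  # (9998400, 10000000)
```

Both scenarios add the same 10000000 bytes to the chain, but the twenty-block scenario confirms 1520 fewer bytes of transactions because of the extra headers.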
legendary
Activity: 2814
Merit: 2472
https://JetCash.com
Smaller blocks with shorter generation times would solve this. This would be a better solution than creating monolithic blocks that may not be filled.
staff
Activity: 4284
Merit: 8808
Smaller mempool just means that unconfirmed transactions need to be dropped from the mempool sooner
And that the minimum feerate to relay and get into the mempool in the first place will go up.
legendary
Activity: 3472
Merit: 4801
The mempool size is regulated by how many transactions are sent

This is not true.

The mempool size is regulated by the node software that is storing the unconfirmed transactions.  A smaller mempool just means that unconfirmed transactions need to be dropped from the mempool sooner. A larger mempool means you can keep unconfirmed transactions in your mempool longer. If the max block size is 1 megabyte, and you want your node to store at least 1 day's worth of transactions, then you have a 144 megabyte mempool.  If the max block size is 4 megabytes, and you want your node to store at least 1 day's worth of transactions, then you have a 576 megabyte mempool.  Bigger blocks mean a bigger mempool if you want to consistently store a day's worth (or whatever timeframe you choose) of confirm-able transactions.
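The sizing rule in that paragraph is just blocks-per-day times max block size. A quick sketch, using the numbers from the post rather than any node's actual defaults:

```python
BLOCKS_PER_DAY = 144  # one block every 10 minutes

def mempool_target_mb(max_block_mb, days=1):
    """Mempool size (MB) needed to hold `days` worth of confirm-able transactions."""
    return max_block_mb * BLOCKS_PER_DAY * days

print(mempool_target_mb(1))  # 144 MB for 1 MB blocks
print(mempool_target_mb(4))  # 576 MB for 4 MB blocks
```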

If demand for transactions exceeds the number that can be processed by the network, the demand will naturally decrease again as it becomes too difficult or expensive for people to use their transactions.

Correct.  This is what people are talking about when they refer to a "fee market".  However, it has nothing to do with the size of the mempool.

If you want to say that smaller blocks result in higher fees per byte, then I think everyone is in agreement on that.
legendary
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
The mempool size is regulated by how many transactions are sent - if demand for transactions exceeds the number that can be processed by the network, the demand will naturally decrease again as it becomes too difficult for people to use their transactions.

Good point.  I was hoping that there would have been yet another reason for the community to put more urgency on the scaling issue... but this ^ really should be reason enough, for anyone who thinks more users is a good idea.
hero member
Activity: 546
Merit: 500
The mempool size is regulated by how many transactions are sent - if demand for transactions exceeds the number that can be processed by the network, the demand will naturally decrease again as it becomes too difficult for people to use their transactions.

If even a mempool as ridiculously large as the current one is only 182MB, it's negligible for the overall running of nodes.

Even at the current blocksize it would take less than two days for the blockchain to grow that much and the mempool can't naturally grow more than a few times larger than it is now.
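The "less than two days" figure checks out as back-of-the-envelope arithmetic, assuming full 1 MB blocks at the usual 144 blocks per day:

```python
MB_PER_BLOCK = 1
BLOCKS_PER_DAY = 144
MEMPOOL_MB = 182

# days for the chain to grow by the current mempool's size
days = MEMPOOL_MB / (MB_PER_BLOCK * BLOCKS_PER_DAY)
print(round(days, 2))  # 1.26
```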
legendary
Activity: 2786
Merit: 1031
Quote
the 1MB block size is forcing space usage away from cheap and abundant hard disk storage and into the forever backlogged mempool of pending transactions, already at 182MB on my local node as I write this

The 'argument' is so childish it made me laugh.
staff
Activity: 4284
Merit: 8808

Smaller blocks reduce memory usage, considerably.

The mempool does not increase memory usage because it has a fixed size and is shared with the dbcache. If blocks were larger, the mempool would also need to be larger by an equivalent factor, as would many other parts of the system.

Moreover, no matter what the blocksize was someone could inexpensively produce a bunch of transactions between blocks and make your mempool as big as you want.