
Topic: Bitcoin block size limit - page 2. (Read 422 times)

member
Activity: 180
Merit: 18
February 11, 2020, 07:18:17 AM
#5
My idea has been something like this: a single patch that solves the block size issue permanently. X number of full blocks in a row = 0.1 MB block size increase.

Then maybe it's 100 blocks in a row, maybe it's more; I am sure others smarter than me have reasons for what the perfect number would be.

This way we don't see a troll able to spam up the blocks without paying a very large amount, and they would have to keep it up for 100 (or more) blocks in a row, all for a 0.1 MB increase at a time.

There could also be a way to shrink the size in the same fix, if uncontrolled block size inflation is a concern. I'd suggest that if blocks stay under 1/5th of the block size cap for 200 blocks in a row, the block size shrinks by 0.1 MB.

This way the network scales on its own and there is no need for a fight or a ritual for an increase every time we habitually hit the limit.

Exactly, the block size could be adjusted just like the difficulty adjustment works for miners.

Spam is not a big problem, because a spammer would have to pay fees for every microtransaction, making such a spam/DDoS attack very expensive.
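A rough sketch of that adjustment rule in main.h-style C++, using the thresholds floated above as placeholders (100 full blocks, 200 light blocks, 0.1 MB steps); none of these numbers are tested values.

Code:

static const unsigned int STEP_BYTES  = 100000;  // 0.1 MB per adjustment
static const int STREAK_UP   = 100;  // consecutive full blocks needed to grow
static const int STREAK_DOWN = 200;  // consecutive light blocks needed to shrink

// Grow the cap after STREAK_UP consecutive full blocks; shrink it after
// STREAK_DOWN consecutive blocks under one fifth of the current cap.
unsigned int AdjustMaxBlockSize(unsigned int nCurrentLimit,
                                int nFullStreak, int nLightStreak)
{
    if (nFullStreak >= STREAK_UP)
        return nCurrentLimit + STEP_BYTES;
    if (nLightStreak >= STREAK_DOWN && nCurrentLimit > STEP_BYTES)
        return nCurrentLimit - STEP_BYTES;
    return nCurrentLimit;
}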
legendary
Activity: 4424
Merit: 4794
February 11, 2020, 03:17:33 AM
#4
Quote
The latter is better. Increasing the block size is easy, but the result is not as good. For example, with the current Schnorr signature proposals we can compress transactions so that more of them fit in the same space.
Schnorr does not compress transactions.
Schnorr makes it so that a certain category of multisig script only uses one signature-length script.
If I were spending 4 UTXOs from 4 addresses, I would still have 4 signatures.
If I had UTXOs that were not multisig, they won't be Schnorr.
If I had UTXOs that were old-gen multisig, they won't be Schnorr.

Schnorr is not a 'fix all' compression. It is a new transaction format: people first need to pay into a multisig output in order to have a slim script later when spending.
It does not apply to all transaction types.

There are also flaws to Schnorr which can make it ripe for abuse and scamming when multiple parties are involved, so it should not just be used indiscriminately and trusted to be unbreakable.
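To illustrate the point with rough numbers (the 64-byte Schnorr signature size is the only figure taken from the spec; the input counts are made-up examples): aggregation only collapses signatures within a single multisig input that was set up for it, not across separate inputs.

Code:

#include <cstdio>

int main()
{
    const int nSigBytes = 64;  // size of one Schnorr signature

    // Spending 4 separate single-sig UTXOs: still 4 signatures.
    printf("4 single-sig inputs:        %d signature bytes\n", 4 * nSigBytes);

    // One 3-of-3 input whose keys were aggregated when the output was
    // created (the parties had to pay into that multisig first): 1 signature.
    printf("1 aggregated 3-of-3 input:  %d signature bytes\n", 1 * nSigBytes);

    // One old-gen CHECKMULTISIG 3-of-3 input: 3 separate signatures.
    printf("1 legacy 3-of-3 input:      %d signature bytes\n", 3 * nSigBytes);
    return 0;
}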

Quote
You aren't streaming a video! You are downloading data and then verifying all of it. That is thousands of signature verifications and even more hashes to compute. The bigger the block, the more time it takes to verify. And it is not just that: a node also verifies a lot of transactions in its mempool, so a lot of traffic is already being spent that way.

Nowadays, when a block solve is transmitted, the whole block data is not sent; in fact it is just the header that is sent. The transactions will have already been relayed and sitting in nodes' mempools long before the block even began being hashed. So all the fears of transaction-verification bottlenecks due to blocks are false.

It's like you're trying to paint a picture of people checking the wear on their car tyres while the car is in motion. Sorry, but no: they check it before they even get into the car.
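A minimal sketch of that relay flow, with made-up types (this is not Bitcoin Core's actual compact-block code from BIP 152, which exchanges short transaction IDs rather than a bare header): the node fills in the block from transactions it has already verified in its mempool and only fetches what is missing.

Code:

#include <map>
#include <string>
#include <vector>

struct CTx { std::string txid; };  // hypothetical, simplified transaction type

// Reconstruct a newly announced block from the local mempool given the list
// of txids it commits to; return the txids that still have to be downloaded.
std::vector<std::string> ReconstructBlock(const std::vector<std::string>& vTxids,
                                          const std::map<std::string, CTx>& mempool,
                                          std::vector<CTx>& vBlockTxs)
{
    std::vector<std::string> vMissing;
    for (const std::string& txid : vTxids) {
        auto it = mempool.find(txid);
        if (it != mempool.end())
            vBlockTxs.push_back(it->second);   // already relayed and verified
        else
            vMissing.push_back(txid);          // only these need to be fetched
    }
    return vMissing;
}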
legendary
Activity: 3472
Merit: 10611
February 11, 2020, 01:10:34 AM
#3
Quote
Will they finally agree to increase the block size, or will they try to find new ways to make transactions smaller in size, so they can fit in a small block?
The latter is better. Increasing the block size is easy, but the result is not as good. For example, with the current Schnorr signature proposals we can compress transactions so that more of them fit in the same space.

Quote
Why is it so important to keep the block size small, at 1-4 MB? Propagating a 1 MB block across the network within 10 minutes is not a hard task with current internet connection speeds, since most nodes can download 1 MB per second. That's up to 600 MB in 10 minutes. I am just referring to the capability, not saying a block that large is needed, but why not make the block size more flexible, so it can handle more transactions when necessary?
You aren't streaming a video! You are downloading data and then verifying all of it. That is thousands of signature verifications and even more hashes to compute. The bigger the block, the more time it takes to verify. And it is not just that: a node also verifies a lot of transactions in its mempool, so a lot of traffic is already being spent that way.
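A back-of-the-envelope sketch of that cost, with assumed averages (500 bytes per transaction, 2 signature checks per transaction; neither figure comes from the post): the verification work grows roughly linearly with block size.

Code:

#include <cstdio>

int main()
{
    const double dAvgTxBytes = 500.0;  // assumed average transaction size
    const double dSigsPerTx  = 2.0;    // assumed signature checks per transaction
    const double dSizesMB[]  = {1.0, 4.0, 32.0};

    for (double dMB : dSizesMB) {
        double dTxs  = dMB * 1e6 / dAvgTxBytes;
        double dSigs = dTxs * dSigsPerTx;
        printf("%5.1f MB block: ~%.0f transactions, ~%.0f signature verifications\n",
               dMB, dTxs, dSigs);
    }
    return 0;
}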
legendary
Activity: 2198
Merit: 1989
฿uy ฿itcoin
February 10, 2020, 11:33:22 PM
#2
Quote
Why is it so important to keep the block size small, at 1-4 MB? Propagating a 1 MB block across the network within 10 minutes is not a hard task with current internet connection speeds, since most nodes can download 1 MB per second. That's up to 600 MB in 10 minutes. I am just referring to the capability, not saying a block that large is needed, but why not make the block size more flexible, so it can handle more transactions when necessary?


At the rate you describe, the blockchain would expand by more than 86 GB a day. Not every country has unlimited data plans, and not everybody can afford them. You'll also need to store everything (unless you run a pruned node); it would take you less than 12 days to fill up a 1 TB HDD. Increasing the block size could be an option if LN can't handle the demand anymore, but that'll be years from now (if that scenario ever happens).
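The arithmetic behind those figures, for reference (600 MB per block is the hypothetical ceiling from the quoted post, not a proposal):

Code:

#include <cstdio>

int main()
{
    const double dMBPerBlock   = 600.0;        // hypothetical ceiling from the quote
    const double dBlocksPerDay = 6.0 * 24.0;   // one block every 10 minutes
    const double dGBPerDay     = dMBPerBlock * dBlocksPerDay / 1000.0;

    printf("daily chain growth: %.1f GB\n", dGBPerDay);          // ~86.4 GB
    printf("days to fill 1 TB:  %.1f\n", 1000.0 / dGBPerDay);    // ~11.6 days
    return 0;
}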
member
Activity: 180
Merit: 18
February 10, 2020, 07:14:37 PM
#1
If Bitcoin network transactions increase in number so much that the current block size limit is not enough to process all transactions in time, what will the Bitcoin community do? Will they finally agree to increase the block size, or will they try to find new ways to make transactions smaller in size, so they can fit in a small block? Why is it so important to keep the block size small, at 1-4 MB? Propagating a 1 MB block across the network within 10 minutes is not a hard task with current internet connection speeds, since most nodes can download 1 MB per second. That's up to 600 MB in 10 minutes. I am just referring to the capability, not saying a block that large is needed, but why not make the block size more flexible, so it can handle more transactions when necessary?

After all, Satoshi explained that it's not required to have a node with the full data.

Whitepaper chapter 7. Reclaiming Disk Space

A block header with no transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2MB per year. With computer systems typically selling with 2GB of RAM as of 2008, and Moore's Law predicting current growth of 1.2GB per year, storage should not be a problem even if the block headers must be kept in memory.

Whitepaper chapter 8. Simplified Payment Verification

It is possible to verify payments without running a full network node. A user only needs to keep a copy of the block headers of the longest proof-of-work chain, which he can get by querying network nodes until he's convinced he has the longest chain, and obtain the Merkle branch linking the transaction to the block it's timestamped in. He can't check the transaction for himself, but by linking it to a place in the chain, he can see that a network node has accepted it, and blocks added after it further confirm the network has accepted it.
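A quick check of both figures (the 4,096-transaction block is just an assumed example): a header-only chain grows by about 4.2 MB per year, and an SPV Merkle branch only grows with the logarithm of the number of transactions in a block.

Code:

#include <cmath>
#include <cstdio>

int main()
{
    // Chapter 7: header-only storage growth.
    const double dHeaderBytes   = 80.0;
    const double dBlocksPerYear = 6.0 * 24.0 * 365.0;  // one block every 10 minutes
    printf("header chain growth: %.1f MB per year\n",
           dHeaderBytes * dBlocksPerYear / 1e6);        // ~4.2 MB

    // Chapter 8: an SPV proof is a Merkle branch of log2(n) 32-byte hashes.
    const int nTxsInBlock   = 4096;                     // assumed example block
    const int nBranchHashes = (int)std::ceil(std::log2((double)nTxsInBlock));
    printf("Merkle branch for %d txs: %d hashes (%d bytes)\n",
           nTxsInBlock, nBranchHashes, nBranchHashes * 32);
    return 0;
}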


It's been a long time since Satoshi discussed the block size limit.

jgarzik proposed a patch:

We should be able to at least match Paypal's average transaction rate...

Code:

diff --git a/main.h b/main.h
index c5a0127..c92592a 100644
--- a/main.h
+++ b/main.h
@@ -14,7 +14,10 @@ class CBlockIndex;
 class CWalletTx;
 class CKeyItem;
 
-static const unsigned int MAX_BLOCK_SIZE = 1000000;
+static const unsigned int TX_PER_MINUTE = 1400;
+static const unsigned int TX_AVG_SIZE_GUESS = 256;
+static const unsigned int MAX_BLOCK_SIZE =
+   TX_PER_MINUTE * TX_AVG_SIZE_GUESS * 10 * 2;
 static const unsigned int MAX_BLOCK_SIZE_GEN = MAX_BLOCK_SIZE/2;
 static const int MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE/50;
 static const int64 COIN = 100000000;


URL: http://yyz.us/bitcoin/patch.bitcoin-block-sz-limit
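For reference, the constant that patch works out to, multiplying the values in the diff above:

Code:

#include <cstdio>

int main()
{
    const unsigned int TX_PER_MINUTE     = 1400;
    const unsigned int TX_AVG_SIZE_GUESS = 256;
    const unsigned int MAX_BLOCK_SIZE    = TX_PER_MINUTE * TX_AVG_SIZE_GUESS * 10 * 2;

    // 1400 * 256 * 10 * 2 = 7,168,000 bytes, roughly 7.2 MB per block.
    printf("MAX_BLOCK_SIZE = %u bytes (~%.1f MB)\n", MAX_BLOCK_SIZE, MAX_BLOCK_SIZE / 1e6);
    return 0;
}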


At first, Satoshi warned that increasing the block size limit was a bad idea at that time:


+1 theymos.  Don't use this patch, it'll make you incompatible with the network, to your own detriment.

We can phase in a change later if we get closer to needing it.

Satoshi Nakamoto



Then Satoshi gave an example of how a larger block size could be phased into the protocol:

It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.

Satoshi Nakamoto
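A minimal sketch of how that kind of phase-in might look in main.h-style code; the block number comes from Satoshi's example above, while the larger limit is just a placeholder:

Code:

static const unsigned int MAX_BLOCK_SIZE        = 1000000;   // current 1 MB limit
static const unsigned int LARGER_MAX_BLOCK_SIZE = 2000000;   // placeholder larger limit
static const int          FORK_BLOCK_NUMBER     = 115000;    // height from Satoshi's example

// Nodes that ship with this rule long before FORK_BLOCK_NUMBER stay in
// consensus; versions without it become obsolete once the height passes.
unsigned int GetMaxBlockSize(int nBlockNumber)
{
    if (nBlockNumber > FORK_BLOCK_NUMBER)
        return LARGER_MAX_BLOCK_SIZE;
    return MAX_BLOCK_SIZE;
}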


Original thread: https://bitcointalksearch.org/topic/patch-increase-block-size-limit-1347