Topic: Permanently keeping the 1MB (anti-spam) restriction is a great idea ... - page 7.

legendary
Activity: 1162
Merit: 1007
legendary
Activity: 1512
Merit: 1012
There is no realistic scenario where a network capped permanently at 1MB can have meaningful adoption whilst still maintaining direct access to the blockchain by individuals.

There is no realistic scenario where a plastic card with a 1960s chip on it can have meaningful adoption.

Wait ... mmmmmh ?  Roll Eyes
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
That's because the miner chose to not include any transactions in the block, except the coinbase one of course. It doesn't mean there wasn't any waiting in the memory pool.

Ah. Snagged by an MP sockpuppet!
legendary
Activity: 1652
Merit: 1016
However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions; there are empty blocks produced all the time.

Minimum fees can be set regardless of block size.
I don't think that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of these recent 0-transaction blocks?


https://blockchain.info/block/00000000000000001672c2c047085db722e577902c48d018cd88f1d46359fa28


Just an example. There are heaps of empty blocks. That's why anyone saying it would be urgent is a douchebag and likely has some other agenda.

That's because the miner chose to not include any transactions in the block, except the coinbase one of course. It doesn't mean there wasn't any waiting in the memory pool.
member
Activity: 212
Merit: 22
Amazix
However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions; there are empty blocks produced all the time.

Minimum fees can be set regardless of block size.
I don't think that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of these recent 0-transaction blocks?


https://blockchain.info/block/00000000000000001672c2c047085db722e577902c48d018cd88f1d46359fa28


Just an example. There are heaps of empty blocks. That's why anyone saying it would be urgent is a douchebag and likely has some other agenda.
full member
Activity: 190
Merit: 100
Very long read, OP, but well worth it. As I see it, some older coins like Litecoin effectively have a 4MB limit per 10 minutes (a 1MB cap on blocks that come four times as often), so I don't understand why so many people defend the 1MB-per-10-minutes limit (looking at the 20MB Gavin fork pool).
newbie
Activity: 2
Merit: 0
At this time the software handles blocks up to 4GB and has been extensively tested with blocks up to 20MB.  So there's not really a problem in terms of API, nor a need to chop discovered batches of transactions up into bits.

The maximum message size of the p2p protocol is 32MB, which caps the block size implicitly in addition to the explicit 1MB limit.

Beyond all other insanity, 4GB blocks would be completely worthless due to the sigops limit.
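Roughly, the limits being referred to here, as I understand them in the current Bitcoin Core source (sketched below in Python rather than the actual C++, so treat the exact figures as approximate):

Code:
# Rough sketch of the relevant limits -- illustration only, not the real source.
MAX_BLOCK_SIZE   = 1000000                # consensus cap on serialized block size, in bytes
MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE // 50   # = 20,000 signature operations per block
MAX_P2P_MESSAGE  = 0x02000000             # = 33,554,432 bytes (~32 MB) per p2p message

# Anything larger than MAX_P2P_MESSAGE cannot be relayed as a single message,
# which is the implicit 32MB cap mentioned above.
print(MAX_BLOCK_SIGOPS, MAX_P2P_MESSAGE)  # 20000 33554432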
thy
hero member
Activity: 685
Merit: 500
OP, very interesting topic, DeathAndTaxes. Where are we at the moment? What's the avg number of transactions per second for, let's say, the last month, and what was the avg 6 months ago and a year ago as a comparison?
full member
Activity: 209
Merit: 100
I just wanted to bump this great article for its lucid explanation of why the blockchain needs to grow.
donator
Activity: 668
Merit: 500
As I read the opposition they fall into three groups, the Trolls, the Suckers, and the Scammers.

The Trolls know the arguments are nonsense and are yelling anyway because it makes them feel important. 

The Suckers don't know nonsense when they hear it and are yelling because they're part of a social group with people who are yelling.

The Scammers have thought of a way to make a profit ripping people off during or after a hard fork, but it won't work unless there are Suckers who think that the coins on the losing side of the fork aren't worthless, so they keep yelling things to try to keep the Suckers confused as to the facts.
lol, so true!
legendary
Activity: 2128
Merit: 1073
Right now I wouldn't expect the peer-to-peer network to be stable with blocks much bigger than about 200MB, given current bandwidth limitations on most nodes. 
Are you talking about the legacy peer-to-peer protocol in Bitcoin Core or about the new, sensible implementation from Matt Corallo?

https://bitcointalksearch.org/topic/how-and-why-pools-and-all-miners-should-use-the-relay-network-766190
legendary
Activity: 924
Merit: 1132
At this time the software handles blocks up to 4GB ...
Ah, that's plenty big for now; has it been tested?  If blocks that large can reliably be put through consistently then that would easily handle 10k transactions/second.

As I said, 20MB has been extensively tested.  4GB is okay with the software but subject to practical problems like propagation delays taking too long for blocks crossing the network.  4GB blocks would be fine if everybody had enough bandwidth to transmit and receive them before the next block came around, but not everybody does. Right now I wouldn't expect the peer-to-peer network to be stable with blocks much bigger than about 200MB, given current bandwidth limitations on most nodes. 
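Some back-of-the-envelope arithmetic behind that (my own rough numbers, assuming ~500-byte transactions on average):

Code:
# Rough bandwidth and throughput arithmetic for hypothetical 4GB blocks.
block_size_bytes = 4 * 10**9   # hypothetical 4GB block
block_interval_s = 600         # ~10 minutes per block on average
avg_tx_bytes     = 500         # assumed average transaction size

sustained_mbit = block_size_bytes * 8 / block_interval_s / 1e6
tps            = block_size_bytes / avg_tx_bytes / block_interval_s

print(round(sustained_mbit))   # ~53 Mbit/s just to keep up with the chain on average
print(round(tps))              # ~13,333 tx/s, consistent with the "10k tx/s" figure above

# Propagation has to finish in a small fraction of the block interval to keep
# orphan rates sane, so the practical bandwidth requirement is a multiple of this.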
hero member
Activity: 709
Merit: 503
At this time the software handles blocks up to 4GB ...
Ah, that's plenty big for now; has it been tested?  If blocks that large can reliably be put through consistently then that would easily handle 10k transactions/second.
legendary
Activity: 924
Merit: 1132
At this time the software handles blocks up to 4GB and has been extensively tested with blocks up to 20MB.  So there's not really a problem in terms of API, nor a need to chop discovered batches of transactions up into bits.

The problem is sort-of political; there are a bunch of people who keep yelling over and over that an increase in block size will lead to more government control of the bitcoin ecosystem (as though there was ever going to be an economically significant bitcoin ecosystem that didn't have exactly that degree of government control) and that they don't want just any riffraff in a faraway country to be able to use their sacred blockchain for buying a cuppa coffee (as though allowing anybody anywhere to buy anything at any time somehow isn't the point of a digital cash system).

Neither point makes any damn sense; as I read the opposition they fall into three groups, the Trolls, the Suckers, and the Scammers.

The Trolls know the arguments are nonsense and are yelling anyway because it makes them feel important. 

The Suckers don't know nonsense when they hear it and are yelling because they're part of a social group with people who are yelling.

The Scammers have thought of a way to make a profit ripping people off during or after a hard fork, but it won't work unless there are Suckers who think that the coins on the losing side of the fork aren't worthless, so they keep yelling things to try to keep the Suckers confused as to the facts.

hero member
Activity: 709
Merit: 503
I, for one, would recommend avoiding all fancy logic for an adaptive block size; just set the maximum block size to the API maximum and be done with it.  I also recommend hurrying up to figure out the code to fragment/reconstruct blocks bigger than that.
What is the point of this? The contiguous "block" is just an artifact of the current implementation and will soon be obsoleted.
On the storage side the database schema will change to support pruning. On the network side the protocol will be updated to support more efficient transmission via IBLT and other mechanisms that reduce duplication on the wire. In both cases the "blocks" will never appear anywhere as contiguous blobs of bytes.
If one wants/needs to transmit a block of transactions that a miner has discovered that meets the required difficulty then one calls an API to do it.  That API has a maximum size.  If the block is larger than that size then the block is chopped into pieces and then reassembled at the receiving end.  Whether the block is held in a single contiguous buffer is irrelevant although almost certainly common.

The point is, until the code to do the chopping and reconstruction is ready, blocks are limited in size to the API maximum.  Given a large enough sustained workload, i.e. incoming transactions, the backlog will grow without bound until there's a failure.  Having Bitcoin fail would not be good.
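The chopping and reassembly itself is mechanically simple; a rough sketch (the MAX_FRAGMENT constant is just a stand-in for whatever the API maximum turns out to be, not anything in the real protocol):

Code:
# Hypothetical fragment/reassemble step for blocks larger than the message cap.
MAX_FRAGMENT = 32 * 1024 * 1024                  # assumed per-message cap (e.g. 32 MiB)

def fragment(block_bytes, max_size=MAX_FRAGMENT):
    # Chop a serialized block into (index, total, chunk) pieces.
    chunks = [block_bytes[i:i + max_size] for i in range(0, len(block_bytes), max_size)]
    return [(i, len(chunks), c) for i, c in enumerate(chunks)]

def reassemble(pieces):
    # Rebuild the block from pieces, which may arrive out of order.
    pieces = sorted(pieces)                      # sort by fragment index
    assert len(pieces) == pieces[0][1], "missing fragments"
    return b"".join(chunk for _, _, chunk in pieces)

block = b"\x00" * (100 * 1024 * 1024)            # pretend 100 MB block
assert reassemble(fragment(block)) == block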
legendary
Activity: 2128
Merit: 1073
I, for one, would recommend avoiding all fancy logic for an adaptive block size; just set the maximum block size to the API maximum and be done with it.  I also recommend hurrying up to figure out the code to fragment/reconstruct blocks bigger than that.
What is the point of this? The contiguous "block" is just an artifact of the current implementation and will soon be obsoleted.
On the storage side the database schema will change to support pruning. On the network side the protocol will be updated to support more efficient transmission via IBLT and other mechanisms that reduce duplication on the wire. In both cases the "blocks" will never appear anywhere as contiguous blobs of bytes.
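For anyone who hasn't run into IBLTs: the point is that peers already hold most of a block's transactions in their mempools, so only the small set difference needs to cross the wire. A toy sketch of the structure (sizes and hash choices are arbitrary, purely to illustrate the idea, not taken from any actual proposal):

Code:
# Toy Invertible Bloom Lookup Table for set reconciliation -- illustration only.
import hashlib

K = 3  # number of cells each key is mapped to

def cells(key, m):
    return [int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:4], 'big') % m
            for i in range(K)]

def checksum(key):
    return int.from_bytes(hashlib.sha256(b'chk' + key).digest()[:4], 'big')

class IBLT:
    def __init__(self, m, key_len=32):
        self.m, self.key_len = m, key_len
        self.count   = [0] * m
        self.key_xor = [0] * m
        self.chk_xor = [0] * m

    def _update(self, key, delta):
        k, c = int.from_bytes(key, 'big'), checksum(key)
        for i in cells(key, self.m):
            self.count[i]   += delta
            self.key_xor[i] ^= k
            self.chk_xor[i] ^= c

    def insert(self, key): self._update(key, +1)   # sender adds the block's txids
    def delete(self, key): self._update(key, -1)   # receiver removes txids it already has

    def peel(self):
        # Recover the symmetric difference by repeatedly draining "pure" cells.
        out, progress = [], True
        while progress:
            progress = False
            for i in range(self.m):
                if self.count[i] in (1, -1):
                    key = self.key_xor[i].to_bytes(self.key_len, 'big')
                    if checksum(key) == self.chk_xor[i]:
                        out.append((key, self.count[i]))
                        self._update(key, -self.count[i])
                        progress = True
        return out

Whatever peels out with count +1 is a txid the receiver is missing; count -1 is something the receiver has that the block does not include. Only that handful of transactions then needs to be fetched instead of the whole block.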
hero member
Activity: 709
Merit: 503
The ability to handle a large block is a functional topic; e.g. apparently there is an API limit which will force blocks to be transmitted in fragments instead of one large block.  If we want blocks larger than this limit then we have no choice but to do the code to handle fragmenting and reconstructing such large blocks.  Having an artificial block size maximum at this API limit is necessary until we do that code.  Alternatively I suppose we could look at replacing the API with another one (if there even is one) that can handle larger blocks.

The desire/need for large blocks is driven by the workload.  If the workload is too much for the block size then the backlog https://blockchain.info/unconfirmed-transactions will grow and grow until the workload subsides; this is a trivial/obvious result from queuing theory.  Well, I suppose having some age limit or other arbitrary logic dropping work would avoid the ever-growing queue but folks would just resubmit their transactions.  Granted some would argue the overload condition is ok since it will force some behavior.
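A toy illustration of that, with made-up numbers:

Code:
# Toy queue: when transactions arrive faster than blocks can clear them,
# the backlog grows without bound.  Rates below are made up for illustration.
arrival_tps  = 4.0      # assumed incoming transactions per second
capacity_tps = 3.0      # assumed transactions the chain can confirm per second

backlog = 0.0
for hour in range(1, 25):
    backlog += (arrival_tps - capacity_tps) * 3600   # net growth per hour
    if hour % 6 == 0:
        print("after %2dh backlog = %7.0f unconfirmed txns" % (hour, backlog))
# after  6h backlog =   21600 ... after 24h backlog =   86400, and it keeps
# growing for as long as arrival_tps > capacity_tps.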

I, for one, would recommend avoiding all fancy logic for an adaptive block size; just set the maximum block size to the API maximum and be done with it.  I also recommend hurrying up to figure out the code to fragment/reconstruct blocks bigger than that.

All of this is largely independent of figuring out how to suppress spam transactions.
legendary
Activity: 924
Merit: 1132
VISA only has an average txn capacity of 2,000 tps but their network can handle a peak traffic of 24,000 tps.  Nobody designs a system with a specific limit and then assumes throughput will be equal to that upper limit.

That is a very valuable observation.  An 'adaptive' block size limit would set a limit of some multiple of the observed transaction rate, but most of its advocates (including me) haven't bothered to look up what the factor ought to be. What you looked up above presents real, live information from a functioning payment system. 

The lesson being that an acceptable peak txn rate for a working payment network is about 12x its average txn rate. 

Which, in our case with average blocks being around 300 KB, means we ought to have maximum block sizes in the 3600KB range. 

And that those of us advocating a self-adjusting block size limit ought to be thinking in terms of 12x the observed utilization, not 3x the observed utilization.
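Spelled out, the rule would be something like this (the 12x factor and the ~300 KB average are just the figures from the posts above, not a concrete proposal):

Code:
# Hypothetical adaptive cap: a headroom factor times the observed average block size.
PEAK_TO_AVG = 24000 / 2000          # = 12, the VISA peak/average ratio quoted above
observed_avg_block_kb = 300         # rough recent average cited above

adaptive_cap_kb = PEAK_TO_AVG * observed_avg_block_kb
print(adaptive_cap_kb)              # 3600.0 KB, i.e. the ~3.6 MB figure above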

donator
Activity: 1218
Merit: 1079
Gerald Davis
I don't think that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of these recent 0-transaction blocks?

Yeah it does happen occasionally. Though of course they have the 1 transaction, which is the coinbase one.

That is correct.  There are no zero-txn blocks because a block is invalid without a coinbase, so people saying 'empty' blocks are referring to a block with just the coinbase txn.  This will occur periodically due to the way that blocks are constructed.  To understand why, one needs to dig a little bit into what happens when a new block is found.  

A pool server may have thousands of workers working on the current block when a new block is found, making that work stale. The server needs to quickly update all its workers to 'new work', and the longer that takes, the more revenue is lost.  If a new block is found on average in 600 seconds and it takes the pool just 6 seconds to update all its workers, then its actual revenue will be 0.5% lower than its theoretical revenue.   So pools want to update all their workers as quickly as possible to be as efficient as possible.
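One way to see where the 0.5% comes from, assuming workers are switched over gradually so the average worker sits on stale work for about half the update window:

Code:
# Rough stale-work arithmetic (assumes workers are updated evenly across the window).
block_interval = 600.0              # average seconds between blocks
update_window  = 6.0                # assumed time to repoint every worker
avg_stale_time = update_window / 2  # the average worker switches halfway through

loss = avg_stale_time / block_interval
print("%.1f%%" % (loss * 100))      # 0.5% of hashing (and hence revenue) wasted on stale work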

To update workers to work on a new block the server must remove all the txns confirmed in the prior block from the memory pool, then organize the remaining txns into a merkle tree and compute a new merkle root.  Part of that merkle tree is the coinbase txn, which is unique for each worker (that is how your pool knows your shares are yours).  A different coinbase means a different merkle tree and root hash for each worker, which can be time consuming to compute.  So to save time (and reduce stale losses) the pool server will compute the simplest possible merkle tree for each worker, which contains a single txn (the coinbase), and push that out to all workers.  It then computes the merkle tree for the full txn set and provides that to workers once they request new work.  However, if a worker solves a block with that first work assignment, it will produce an 'empty' (technically one-txn) block.
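Roughly what that merkle-root computation looks like (a Python sketch with txids assumed to already be in internal byte order; not the actual pool-server code):

Code:
# Bitcoin-style merkle root: double-SHA256, pairing hashes and duplicating the
# last one when a level has an odd count.
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    layer = list(txids)                          # 32-byte txids, coinbase first
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])              # duplicate the last hash if odd
        layer = [dsha256(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# With only the coinbase the "tree" is trivial: the root IS the coinbase txid.
# That is why coinbase-only work can be handed out almost instantly, with the
# full transaction set following a moment later.
coinbase_txid = dsha256(b"dummy coinbase")
assert merkle_root([coinbase_txid]) == coinbase_txid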

This is just one example of how a 1MB limit will never achieve 1MB throughput.  All the numbers in the OP assume an upper limit which is the theoretical situation of 100% of miners producing 1MB blocks with no orphans, inefficiencies, or stale work.  In reality miners targeting less than 1MB, orphans, and inefficiencies in the network will mean real throughput is going to be lower than the limit.  VISA only has an average txn capacity of 2,000 tps but their network can handle a peak traffic of 24,000 tps.  Nobody designs a system with a specific limit and then assumes throughput will be equal to that upper limit.

legendary
Activity: 1652
Merit: 1016
However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions; there are empty blocks produced all the time.

Minimum fees can be set regardless of block size.
I don't think that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of these recent 0-transaction blocks?

Yeah it does happen occasionally. Though of course they have the 1 transaction, which is the coinbase one.