
Topic: Block size limit questions (Read 1846 times)

sr. member
Activity: 438
Merit: 291
May 20, 2012, 01:54:12 PM
#12
Quote from: DeathAndTaxes
some of the spam (satoshi dice, miners taking 2 bitcent payouts, etc)

This is a feature! And I think there will be a lot more of it soon...
sr. member
Activity: 438
Merit: 291
May 20, 2012, 01:51:11 PM
#11
Why do you think this issue is a long way off?

Just looking at today, there is a 450k block:
http://blockchain.info/block-index/229209/00000000000006bc9956fe5ffc47c310a4270f560866ee86d8b5b30f75ff1ee8
and many in the 200k+ range.

All it will take is a couple more "satoshidice.com"-type services taking advantage of the very cheap transaction fees, and we could start to hit it.

I do not think having the limit is a bad thing though, as 52 GB a year of blockchain would soon cause all sorts of other issues!
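
A rough check of that 52 GB figure, assuming every block hits the 1 MB protocol cap (back-of-envelope only, not measured):

Code:
# Annual chain growth if every block were at the 1 MB protocol limit.
MAX_BLOCK_BYTES = 1_000_000
BLOCKS_PER_DAY = 24 * 6          # one block every ~10 minutes on average

bytes_per_year = MAX_BLOCK_BYTES * BLOCKS_PER_DAY * 365
print(bytes_per_year / 1e9)      # ~52.6 GB of raw block data per year, ignoring indexes/overhead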

legendary
Activity: 1526
Merit: 1134
May 18, 2012, 12:24:37 PM
#10
Quote from: d'aniel
This means the inflation schedule can be much better enforced by staying decentralized, correct?

Yes, but I am not expecting key players to leave behind Satoshi's code any time soon: miners, trading platforms and merchants should all be sticking with it. So even if mobile and desktop users followed some kind of inflationary fork without realizing it, you'd still have to convince a majority of the miners, AND the merchants, AND the exchange operators.

That said, I expect using libraries like bitcoinj in combination with a regular Satoshi node to be quite common in future just because the programming model is simpler.

I don't think we need to worry about the block size limit any time soon. There are quite a few ways of using bitcoin that let you push non-time-sensitive transactions off to the night time when blocks should be less full - even very simple tricks like that could buy plenty of time to introduce a hard forking change.
sr. member
Activity: 461
Merit: 251
May 15, 2012, 06:58:21 PM
#9
Thanks for the excellent responses!

Quote from: DeathAndTaxes
Still it is a totally non-issue. Block size is 500 KB. Average tx is ~500 bytes. So the current block size is good for ~180K daily tx. We are a small fraction of that. If (due to economic pressure) some of the spam (satoshi dice, miners taking 2 bitcent payouts, etc) were reduced, we likely wouldn't even be at 2K tx.
Ah, I didn't realize so much of it was spam that wouldn't occur if transactions weren't basically free. Still, though, 180K transactions/day is only ~2 tps, or 0.1% of Visa, so hopefully this issue won't arise too far into the future :)
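
Just to show the conversion (the Visa figure is an assumed ballpark average, not a sourced number):

Code:
# Convert the quoted ~180K tx/day capacity into transactions per second and compare to Visa.
DAILY_TX_CAPACITY = 180_000
VISA_AVG_TPS = 2_000              # assumed ballpark average rate for Visa

tps = DAILY_TX_CAPACITY / (24 * 60 * 60)
print(round(tps, 2), round(100 * tps / VISA_AVG_TPS, 2))   # ~2.08 tps, ~0.1% of Visa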

Another reason I can think of to keep the limit is I believe the client software that talks with the tx servers would be engaging in real-time audits (for OT, anyway), and would thus require running a bitcoin client (something the average PC would be able to do because of the block size limit).  While smartphones would use SPV (or the merkle tree of open transactions gmaxwell just mentioned) to audit, there would still be a lot more fully verifying clients out there.  This is important, I think, because

Quote
Lightweight clients can't efficiently calculate the size of the coinbase for a block without downloading the whole block and then downloading the dependencies of every transaction in that block, along with the Merkle branches linking them to the relevant block headers (which may also need to be fetched, because I think in the future lightweight clients will throw away very old headers).
This means the inflation schedule can be much better enforced by staying decentralized, correct?
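
For concreteness, a rough sketch (not taken from any actual client code) of the coinbase check a fully validating node can do but an SPV client can't do cheaply, since it needs every spent input's value to know the fees:

Code:
COIN = 100_000_000                 # satoshis per BTC
HALVING_INTERVAL = 210_000         # blocks between subsidy halvings

def block_subsidy(height):
    # 50 BTC, halving every 210,000 blocks; this is what bounds the total supply near 21M BTC.
    halvings = height // HALVING_INTERVAL
    return (50 * COIN) >> halvings if halvings < 64 else 0

def coinbase_is_valid(coinbase_output_value, height, total_fees):
    # total_fees requires knowing every spent input's value, which is why a full node can
    # enforce this cheaply while a lightweight client cannot.
    return coinbase_output_value <= block_subsidy(height) + total_fees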
legendary
Activity: 2506
Merit: 1010
May 15, 2012, 06:54:27 PM
#8
Quote from: Theymos
No. IIRC Mike Hearn supports moving most nodes to SPV.

Vocabulary / acronym of the day:

SPV - Simplified Payment Verification
 - http://en.bitcoin.it/wiki/Scalability#Simplified_payment_verification
staff
Activity: 4284
Merit: 8808
May 15, 2012, 06:13:51 PM
#7
First— do you mean the 500k soft target or the million byte protocol rule?

The soft target is trivially lifted a node at a time. I expect the default soft limit will change once the network is consistently producing blocks up against that limit.
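
To make the soft/hard distinction concrete, a rough sketch (the names and numbers mirror the current defaults, but this isn't the client's actual code):

Code:
MAX_BLOCK_SIZE = 1_000_000        # protocol rule: every node rejects bigger blocks outright
DEFAULT_SOFT_TARGET = 500_000     # local mining policy: each miner can change this on their own

def block_is_consensus_valid(block_bytes):
    # Relaxing this check is a hard fork: old nodes reject the chain at the first oversized block.
    return block_bytes <= MAX_BLOCK_SIZE

def miner_builds_block_up_to(soft_target=DEFAULT_SOFT_TARGET):
    # Only limits what this miner produces; everyone else still accepts up to MAX_BLOCK_SIZE.
    return min(soft_target, MAX_BLOCK_SIZE)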

So I'll assume you mean the protocol rule—

Quote from: d'aniel
OTOH, if this infrastructure isn't available when the block size limit is bumped up against and transactions start getting delayed and expensive, I doubt developers will be able to resist demands to increase the limit.

I haven't done the benchmarking to fully figure out exactly where a standard PC peters out, but I'm pretty sure they can process somewhat more than the current limit, at least if they're SSD-equipped.  So even if you're a full card-carrying member of my Church of Forever Decentralization, whose doctrine requires that the maximum block sizes stay quite small, you could still support a bit of a bump.

Quote
pushing fees up and txs to occur off the blockchain on, e.g. Open Transactions servers

It's worth mentioning that beyond escaping the limits, external things can have other advantages too.  For example, even getting a _single_ confirmation in Bitcoin (the minimum required to resist reversal attacks without using a trusted certification service) can take a long time— 10 minutes is an _average_, but 30 minutes or longer happens about 7 times per day, an hour or longer every 2.8 days, etc.  And even though Bitcoin with the block size limits removed could be coerced to insane scaling levels, it would be a fairly storage- and computation-inefficient way to process all the world's transactions.  D'aniel also points out the considerable privacy/anonymity advantages other systems can have over Bitcoin (and can add to Bitcoin when used along with it).
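
Those waiting-time figures follow from treating block intervals as exponential with a 10 minute mean; a quick check of the arithmetic:

Code:
import math

MEAN_MINUTES = 10.0
BLOCKS_PER_DAY = 24 * 6

def p_interval_exceeds(minutes):
    # Tail probability of an exponential inter-block time with a 10 minute mean.
    return math.exp(-minutes / MEAN_MINUTES)

print(BLOCKS_PER_DAY * p_interval_exceeds(30))        # ~7.2 blocks per day take 30+ minutes
print(1 / (BLOCKS_PER_DAY * p_interval_exceeds(60)))  # one 60+ minute block every ~2.8 days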

Quote
Or will it be raised somewhat after some scalability optimizations are implemented?

The limit can't be raised at all without a hardforking change (old nodes will not accept the new chain at all once the first oversized block is mined).  

It's not sufficient to change miners, as DeathAndTaxes suggests— lifting the 1M protocol rule is a change unlike the BIP16/P2SH change, which was fully compatible with old nodes.  It's technically the same kind of change needed to adjust Bitcoin from 21m total BTC to 42m total BTC (though obviously not politically equal).  Every single piece of Bitcoin software produced would have to be updated to allow the oversized blocks.

If the Bitcoin system were to take a hardforking change, switching to Ed25519 would remove ECC signature validation as a performance bottleneck, as a fast quad-core desktop from today can do about 50k Ed25519 validations per second, compared to perhaps a thousand for the curve we use... though the random IO is still an issue.

More recently a number of people have independently invented the idea of committing to a merkle tree of open transactions.  If we do adopt some form of this it would allow the creation of nodes which are someplace in between SPV and a pruned full node in terms of security and decentralization benefit— so lower operating costs for nodes that validate. (In particular these nodes would have greatly reduced storage requirements)
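
A toy illustration of that idea (the serialization and tree rules here are made up for illustration; the actual proposals differ in the details):

Code:
import hashlib

def dhash(data):
    # Double-SHA256, as used elsewhere in Bitcoin.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def open_tx_merkle_root(utxos):
    # utxos: canonical serializations of each open (unspent) output, e.g. txid||index||value||script.
    layer = [dhash(u) for u in sorted(utxos)]
    if not layer:
        return dhash(b"")
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])          # duplicate the last node on odd-sized layers
        layer = [dhash(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]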

Quote from: Theymos
No. IIRC Mike Hearn supports moving most nodes to SPV. My impression was that Satoshi also expected most nodes to use SPV. Not sure about the opinions of other developers besides gmaxwell.

Indeed, and Mike's position has gotten us (rightfully) flamed as not decentralized by e.g. Dan Kaminsky.

Gavin and Jeff have taken less strong positions than I have on the importance (and viability) of maintaining decentralization in Bitcoin.  Although I expect to convince them eventually, I think _everyone_ is in a wait-and-see mode.  Who knows what will happen?  At the moment I would aggressively argue against raising the limit— without it I don't see any alternative to Bitcoin becoming a particularly inefficient distributed system of establishment central banks— but I fully admit my position may change as things develop.

I expect most Bitcoin users by count to be not even SPV— I expect most by count to be semi-SPV thin clients (which may connect to a couple of independent services). But expecting most users to not run nodes does not preclude there being hundreds of thousands of nodes which perform complete validation, whereas gigabyte blocks surely would.


donator
Activity: 1218
Merit: 1079
Gerald Davis
May 15, 2012, 05:47:22 PM
#6
A change in the block size limit must be supported by a supermajority of miners to avoid a split in the network (yeah, technically 50% + 1 hash is sufficient, but it would be a disaster).

Fees are essentially 0.  The few satoshis paid in fees per block are a rounding error.  I doubt many miners will be supporting raising the block size any time soon especially w/ the subsidy being cut in half.

Still it is a totally non-issue. Block size is 500 KB. Average tx is ~500 bytes. So the current block size is good for ~180K daily tx. We are a small fraction of that. If (due to economic pressure) some of the spam (satoshi dice, miners taking 2 bitcent payouts, etc) were reduced, we likely wouldn't even be at 2K tx.
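
The arithmetic behind that estimate, under the stated assumptions (with exactly 500-byte transactions and 144 blocks a day it comes out nearer 144K/day; ~180K implies a somewhat smaller average transaction):

Code:
SOFT_BLOCK_LIMIT_BYTES = 500_000
AVG_TX_BYTES = 500
BLOCKS_PER_DAY = 24 * 6

tx_per_block = SOFT_BLOCK_LIMIT_BYTES // AVG_TX_BYTES   # ~1,000 transactions per block
print(tx_per_block * BLOCKS_PER_DAY)                     # ~144,000 transactions per day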
sr. member
Activity: 461
Merit: 251
May 15, 2012, 05:31:37 PM
#5
Just thinking aloud here...

I'm inclined to agree with gmaxwell that an off-blockchain transaction infrastructure is the answer.  Seems like it would be much cheaper, and more convenient and private/anonymous, anyway.  And with multisig/P2SH, it seems like it could be very secure against operators running off with the bitcoins people have bailed onto the tx servers.

OTOH, if this infrastructure isn't available when the block size limit is bumped up against and transactions start getting delayed and expensive, I doubt developers will be able to resist demands to increase the limit.

If it's not ready in time, could we ever revert when it is, or would there be kind of a ratchet effect to this?
administrator
Activity: 5222
Merit: 13032
May 15, 2012, 10:29:34 AM
#4
Quote from: d'aniel
Do the developers all agree for the sake of decentralization to keep this a priority, enforced with a block size limit?

No. IIRC Mike Hearn supports moving most nodes to SPV. My impression was that Satoshi also expected most nodes to use SPV. Not sure about the opinions of other developers besides gmaxwell.
full member
Activity: 166
Merit: 100
May 15, 2012, 10:23:53 AM
#3
Rewrite the old parts of the blockchain so that all transactions that happened more than 24 months ago are condensed: they total to the same amount but are merged somewhat. Then slap a warning on them saying they may be inaccurate.

ex.
guy 1 gives guy 2 5 btc, followed by guy 2 giving guy 3 5 btc
this becomes guy 1 giving guy 3 5 btc, removing guy 2 from that section entirely.
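
A toy version of that idea (names and data model are made up; real transactions have multiple inputs and outputs, which is the main reason it isn't this simple in practice):

Code:
def collapse(payments):
    # payments: list of (payer, payee, amount); chains like A->B->C of the same amount become A->C.
    out = list(payments)
    changed = True
    while changed:
        changed = False
        for i, (a, b, amt) in enumerate(out):
            for j, (c, d, amt2) in enumerate(out):
                if i != j and b == c and amt == amt2:
                    out[i] = (a, d, amt)   # reroute the first payment to the final recipient
                    del out[j]             # drop the middle hop entirely
                    changed = True
                    break
            if changed:
                break
    return out

print(collapse([("guy1", "guy2", 5.0), ("guy2", "guy3", 5.0)]))   # [('guy1', 'guy3', 5.0)]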
sr. member
Activity: 336
Merit: 250
May 15, 2012, 08:36:34 AM
#2
following
sr. member
Activity: 461
Merit: 251
May 15, 2012, 07:13:54 AM
#1
I'm wondering:

  • Is there a plan yet for when we start bumping up against the block size limit?
  • Is it going to be held where it is, pushing fees up and txs to occur off the blockchain on, e.g. Open Transactions servers, as gmaxwell suggested here: https://bitcointalk.org/index.php?PHPSESSID=c5c394d3434101e2874d5cadff6221a6&topic=80435.msg898723#msg89872?
  • Or will it be raised somewhat after some scalability optimizations are implemented?
  • If so, how high can it be raised while still allowing the average PC to run a full node?
  • Do the developers all agree for the sake of decentralization to keep this a priority, enforced with a block size limit?
  • If so, why are people spending so much time developing lightweight bitcoin clients instead of working on, e.g. OT, if average people are going to be priced out of blockchain txs anyway?

Thanks for any clarification!