
Topic: Why does there need to be a limit on amount of transactions?

sr. member
Activity: 352
Merit: 250
https://www.realitykeys.com
Personally I think the solution is more than one coin.

Bitcoins for storing value long term and bigger payments.

Litecoins for day-to-day transactions and purchases under $100.

If nearly all the actual commerce was happening in Litecoins, and Litecoins had solved the scaling problems that Bitcoin hadn't, it's not really clear that people would still want to hold Bitcoins.

The normal thing to do when you run into serious scaling problems is to shard. Sharding is always a big PITA, so you don't do it until you absolutely have to. (Half the payment has confirmed, but the other half hasn't yet, because you're spending coins from two different shards...)
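
To make the half-confirmed headache concrete, here's a toy sketch; the classes and names are hypothetical, purely for illustration:

Code:
# Toy illustration of the cross-shard problem described above. Each shard
# confirms its own inputs independently, so one logical payment can sit
# in a half-confirmed state.

class Shard:
    def __init__(self, name):
        self.name = name
        self.confirmed = set()      # tx parts this shard has confirmed

    def confirm(self, tx_part):
        self.confirmed.add(tx_part)

def payment_state(parts_by_shard):
    states = [part in shard.confirmed for shard, part in parts_by_shard]
    if all(states):
        return "confirmed"
    if any(states):
        return "half-confirmed"     # the awkward in-between state
    return "unconfirmed"

a, b = Shard("A"), Shard("B")
payment = [(a, "pay-1/inputs-from-A"), (b, "pay-1/inputs-from-B")]
a.confirm("pay-1/inputs-from-A")    # shard A confirms its half first
print(payment_state(payment))       # -> half-confirmed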

But right now Bitcoin is scaling fine. It's not radically decentralized, and hasn't been for a long time, but that ship sailed when people started mining on expensive, dedicated hardware rather than using the spare CPU cycles on boxes they were running anyway. Capital-intensive commodity businesses have huge economies of scale. They are never cottage industries. You can't run a profitable steelworks in your garage. We may not like these economic facts, but tinkering with irrelevant things like the block size isn't going to change them.
hero member
Activity: 1036
Merit: 500
Personally I think the solution is more than one coin.

Bitcoins for storing value long term and bigger payments.

Litecoins for day-to-day transactions and purchases under $100.
legendary
Activity: 1106
Merit: 1004
Except that the block size issue is not akin to free market dynamics.

In a free market, when someone makes a purchase, the only parties directly affected are the purchaser and the seller.

In the blockchain world, when someone publishes a 1GB block, the price is paid by every full node while the reward is only collected by the publisher. This dynamic is closer to private profit, public loss.

Dude, this is a recurring scenario in economic theory: the "fear" that free markets cannot internalize costs when there's a possibility of a tragedy of the commons. It's similar to security in meatspace.
And guess what? It's perfectly possible to eliminate the tragedy-of-the-commons risk through spontaneous order, as long as property rights are established and respected.

And btw, private profit with public loss only happens when the state comes onto the scene.

There is much incentive for a well-connected miner to publish a large block, for three reasons:
i) He and only he gets more txn fees. The larger the block, the more revenue he gets.

Nagato, if there are real transactions paying large fees to get included, that represents real demand. Miners had better serve it or Bitcoin jams!

And as I explained twice in this thread already, the risk of hitting soft limits would make miners prudent about this. They would only increase their block sizes when there's enough consensus, or when demand is so strong that it's worth the risks. In either case we are fine.

P2Pool has an insignificant share of hashing power even though miners get to keep 100% of all earnings, vs mining pools which take a cut of txn fees.

Why?
Because the cost of running a full node outweighs the revenue loss from mining with a pool.

Please, I and many others run a full node without getting anything in return.

AFAIK P2Pool is not very popular because it allegedly has high stale rates. I don't know whether this claim is factual.

Personally I think keeping the Bitcoin protocol decentralised is much more important than keeping its direct transactional capabilities decentralised.

Both things will always be the case if you remove the hardcoded constant limit.

Ideally, the community takes the middle ground and increases the block size slowly to keep pace with bandwidth increases.

But that's precisely what I'm saying! Block size should be controlled by everybody, through their choices and planning, not by a centrally imposed formula.
full member
Activity: 150
Merit: 100
Except that the block size issue is not akin to free market dynamics.

In a free market, when someone makes a purchase, the only parties directly affected are the purchaser and the seller.

In the blockchain world, when someone publishes a 1GB block, the price is paid by every full node while the reward is only collected by the publisher. This dynamic is closer to private profit, public loss.

There is much incentive for a well-connected miner to publish a large block, for three reasons:
i) He and only he gets more txn fees. The larger the block, the more revenue he gets.
ii) The price for a large block is paid by everyone (read: my competition, today and tomorrow).
iii) In the event the huge block is not orphaned, he gets a head start on mining the next block while the rest of the network sits idle trying to download his block. This gives him a higher probability of finding the next block than his actual hashing power would imply (go to i).

Now let's look at the empirical evidence.

P2Pool has an insignificant share of hashing power even though miners get to keep 100% of all earnings, vs mining pools which take a cut of txn fees.

Why?
Because the cost of running a full node outweighs the revenue loss from mining with a pool.
And this is with the average block being <250KB.

There is no question that mining with a pool brings down mining cost because you don't need a fat pipe and a decent PC with a large HD/RAM to run bitcoind.
Decentralised systems have redundancies which lead to much higher costs when compared with centralised systems, and the free market may eventually force centralisation in Bitcoin as well. The question is whether we want Bitcoin to centralise towards off-chain solutions for smaller transactions (low block size with many miners) or to centralise the currency itself to allow on-chain txns for everyone (big block size with few large miners).

Personally I think keeping the Bitcoin protocol decentralised is much more important than keeping its direct transactional capabilities decentralised. However, I understand there is a risk that people will just start using off-chain solutions as money, just as paper notes were used instead of gold.

Ideally, the community takes the middle ground and increases the block size slowly to keep pace with bandwidth increases. There is probably an inflection point for bandwidth where blocks will be large enough for the global population, but we are not there yet.
legendary
Activity: 1106
Merit: 1004
but to counterbalance people who appear to seriously want to dump the limit altogether in one go, and hope things will just work out, with no empirical evidence beyond thought experiments and forum debates.

Sigh...
It's not a "hope", it's an a priori certainty. You don't need central planning to avoid "nefarious market cartelization"; just study economics if you don't believe me.

Talking Bitcoin specifics: it's easy to spot a spamming attempt by another miner, since its blocks will contain a large percentage of unknown transactions. So just create soft limits that censor blocks with many unknown transactions. Say, if a block contains more than 10% unknown transactions, don't build on top of it unless it's already 1 block deep. If it contains >20%, wait for it to be 2 blocks deep, etc. Obviously the percentages and depths should be configurable.
You can also apply such limits to the block size itself: larger than X, wait for at least depth Y, with multiple configurable (X, Y) pairs (a sketch of such a policy follows below).
Oh, and as a bonus, such soft limits would create an incentive for miners to relay and broadcast the transactions they receive - today they have no incentive other than the counter-incentive of having to patch the software. If they keep transactions received from SPV nodes to themselves, they might get their block orphaned.
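
Here's a minimal Python sketch of such a policy; all names and threshold values are hypothetical and would be miner-configurable, not part of any actual client:

Code:
# Minimal sketch of the soft limits described above. Thresholds are
# hypothetical and would be chosen by each miner independently.

# (max fraction of unknown txs, depth required before building on the block)
UNKNOWN_TX_LIMITS = [(0.10, 1), (0.20, 2)]
# (block size in bytes, depth required) pairs for size-based soft limits
SIZE_LIMITS = [(1_000_000, 1), (10_000_000, 3)]

def required_depth(block_tx_hashes, mempool, block_size):
    """How deep a block must be before this miner will build on top of it."""
    unknown = sum(1 for h in block_tx_hashes if h not in mempool)
    frac = unknown / len(block_tx_hashes) if block_tx_hashes else 0.0
    depth = 0
    for max_frac, d in UNKNOWN_TX_LIMITS:
        if frac > max_frac:
            depth = max(depth, d)
    for max_size, d in SIZE_LIMITS:
        if block_size > max_size:
            depth = max(depth, d)
    return depth

def should_build_on(block_tx_hashes, mempool, block_size, current_depth):
    return current_depth >= required_depth(block_tx_hashes, mempool, block_size)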

It's quite clear that miners would only slightly raise their limits when they believe the gains from adding more paying transactions would outweigh the potential losses from orphaning. That's spontaneous regulation: transaction-space supply adapting to its demand.
And it's absurd to claim that a remote, unclear chance of kicking out very-low-bandwidth miners would be so attractive as to make high-bandwidth miners take the risk of losing their entire block to orphaning.

Please, people, being p2p is the greatest feature of Bitcoin. P2P is all about spontaneous order - an actual verifiable fact, not a mere "hope". How can you claim to support the first and largest p2p currency and yet be against spontaneous order?

legendary
Activity: 1400
Merit: 1009
The people advocating keeping the limit low are generally optimistic on how great it might be to hit it (let the off-chain solutions bloom!), and also the opinion that we're going to hit a mountain sooner or later anyway so it may as well be sooner. They may even turn out to be right. But they're bold, not cautious. The cautious move is not to hit the mountain.
They also tend to drastically underestimate exponential adoption rates. We don't have a few years before we hit the mountain. The right time to start addressing this was last year.
sr. member
Activity: 352
Merit: 250
https://www.realitykeys.com
Given that expanding Bitcoin is somewhat like conducting maintenance on an aircraft in flight, it might be worthwhile to move cautiously.

This. Anyone who has worked in the aerospace/airline industry on mission-critical systems knows that you don't make changes on gut feelings and optimism. When people's lives are at stake, you are extremely conservative, assume the worst, triple-check, and try to get 100% unanimity on any risk assessment.

The "hard 1MB" position perhaps has some adherents not necessarily out of irrational conservatism or ideological extremism, but to counterbalance people who appear to seriously want to dump the limit altogether in one go, and hope things will just work out, with no empirical evidence beyond thought experiments and forum debates.

In spite of the usual disclaimers, Bitcoin is no longer toy money.  Its stored value is now larger than the GDP of many countries.  Lots of simulation, hard data, and stress testing will be needed before prudent people will go along with changes that risk unintended consequences on a $1B+ economy.



I think most people would agree with that, but it's worth saying here that keeping the ceiling where it is as we get ever closer to it isn't necessarily the cautious move.

To stick with the aircraft analogy, we're flying at 1000 meters towards a 1500-meter-high mountain. If we do nothing, we hit the mountain. There may be some legitimate concerns about whether the plane will be OK at higher altitude, but there are also serious concerns about what happens when the plane hits the mountain. The fact that we've been flying along happily at this altitude over open sea for quite a while before we actually got to the mountain doesn't mean that everything will be OK at the same altitude when we do hit it.

Specifically, we've always had reasonably low transaction fees. We have no idea what would happen to the economy with high transaction fees, and there are some very plausible worst-cases, like Bitcoin losing in the marketplace to an alt-coin or another technology, and most of that $1B+ going up in smoke. The people advocating keeping the limit low are generally optimistic on how great it might be to hit it (we'll land softly in a blossoming forest of off-chain solutions!), and also of the opinion that we're going to hit a mountain sooner or later anyway so it may as well be sooner. They may even turn out to be right. But they're bold, not cautious. The cautious move is not to hit the mountain.
hero member
Activity: 588
Merit: 500
Given that expanding Bitcoin is somewhat like conducting maintenance on an aircraft in flight, it might be worthwhile to move cautiously.

This. Anyone who has worked in the aerospace/airline industry on mission-critical systems knows that you don't make changes on gut feelings and optimism. When people's lives are at stake, you are extremely conservative, assume the worst, triple-check, and try to get 100% unanimity on any risk assessment.

The "hard 1MB" position perhaps has some adherents not necessarily out of irrational conservatism or ideological extremism, but to counterbalance people who appear to seriously want to dump the limit altogether in one go, and hope things will just work out, with no empirical evidence beyond thought experiments and forum debates.

In spite of the usual disclaimers, Bitcoin is no longer toy money.  Its stored value is now larger than the GDP of many countries.  Lots of simulation, hard data, and stress testing will be needed before prudent people will go along with changes that risk unintended consequences on a $1B+ economy.

sr. member
Activity: 461
Merit: 251
Yes, there are ways to optimize traffic; however, they aren't implemented yet, so TODAY those limits hold. Also, transmitting a hash (64 bytes) vs transmitting a tx (~400 bytes on average), while smaller, isn't an order of magnitude smaller. So take whatever realistic limit exists based on available resources; even hyper-optimized, the same limit still exists at 5x that transaction limit.
Transaction hashes are 32 bytes, and even just the first few bytes are enough to identify the tx in the mempool. So it's actually about two orders of magnitude less data than sending full txs.
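
A quick back-of-the-envelope check of that claim, using the ~400-byte average tx size quoted elsewhere in this thread:

Code:
# Rough bandwidth savings from announcing hashes instead of full txs.
AVG_TX_BYTES = 400      # average tx size assumed in this thread
FULL_HASH_BYTES = 32    # a full transaction hash
PREFIX_BYTES = 4        # "the first few bytes" of the hash

print(AVG_TX_BYTES / FULL_HASH_BYTES)   # 12.5x less data with full hashes
print(AVG_TX_BYTES / PREFIX_BYTES)      # 100x, i.e. ~two orders of magnitude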
donator
Activity: 1218
Merit: 1079
Gerald Davis
Yes, there are ways to optimize traffic; however, they aren't implemented yet, so TODAY those limits hold. Also, transmitting a hash (64 bytes) vs transmitting a tx (~400 bytes on average), while smaller, isn't an order of magnitude smaller. So take whatever realistic limit exists based on available resources; even hyper-optimized, the same limit still exists at 5x that transaction limit.

Don't get hung up on the details. The point is that as tx volume increases, bandwidth is the tightest bottleneck. As nodes can't or won't handle it, the number of full nodes will decline. When transaction volume breaches what is "reasonable" to the average node (and yes, there is some gray area on what is reasonable), the result will be centralization driven by the cost of running a full node. Optimizations help, but you can't get blood from a stone. Nobody is going to be able to run a full node on the AVERAGE residential connection at a transaction rate of, say, 5,000 tps.


Quote
Just take a look at the number of competitors that show up in places where banking regulations are less burdensome, like Panama, and compare it with other places (relative to the country's population and GDP)
There are far more Bitcoin nodes today than banks in Panama. Regulation or not, when the burden on a node rises there will be fewer nodes. That is a form of centralization. An open, unlimited network "may" reach a good compromise, or it may not, and this is a billion-dollar project. There is no "oops, guess it didn't work, let's hit the reset button". Given that expanding Bitcoin is somewhat like conducting maintenance on an aircraft in flight, it might be worthwhile to move cautiously.
legendary
Activity: 1106
Merit: 1004
Agreed, higher-bandwidth connections will be more common in the future; however, if 1% of potential users have a 1 Gbps connection and that becomes the minimum, then you have reduced the potential full nodes to <1% of the planet. The numbers also aren't as rosy as they seem at first glance. A node by definition needs connections to multiple peers, so a node connected to 8 peers will rebroadcast a tx it receives to 7 peers. Now, 8 is the minimum; for network security we really want a huge number of nodes with high levels of connectivity (20, 30, 500+ connections). So let's look at 20.
...

Come on, D&T... I know that you know that a node should only need to broadcast a tx to all its peers if it's the very first to receive and validate it. Nodes can first send an "I have this new tx" message, which is small (tx-hash size), and then upload the tx to the peers that request it. Not all of your peers will request it from you - they're connected to other nodes too.

I used the number 10 in a conservative way... I don't think a node would upload the same transaction 10 times on average; it seems a high number to me.

But it'd be interesting to see statistics on how many times a node has to upload a tx, relative to its number of connections. I've never seen any.

The last issue is what Nagato mentions above (although his numbers are low due to the need to broadcast to multiple peers).  

I've already answered Nagato above. (and I know that you knew that too...)

BTW, despite the post, I am bullish on Bitcoin; solutions can be found. However, those advocating dropping all limits because of "centralization" need to realize that at the far extreme it just leads to another form of centralization. When only a tiny number of players can afford the cost of running a mining pool (and 1, 10, 50 Gbps low-latency connectivity) or running a full node, you have turned the p2p network into a group of a few hundred highly connected peers.

I'm confident that spontaneous order can easily tackle block size control. Miners can implement soft limits, not only on the block size per se, but also on the percentage of unknown transactions in a block, as I said above (normally you should already have most of a new block's transactions in your pool; if you don't, it might represent a spamming attempt).
Just look at miners today: they're already extra-conservative, only to ensure the fastest possible propagation.

Guess what: modern banking IS a peer-to-peer network of a few hundred highly connected peers. The fact that you can't be a peer on the interbank networks doesn't mean the network doesn't exist. The barriers (legal, regulatory, and cost) just prevent you from becoming a peer.

Banking is an industry in symbiosis with the state. The problem with it is the regulations: that's the barrier to entry that makes it so hard to hop in. The cost of the business per se shouldn't be that high; taking care of people's money (which is mostly digital today) has no reason to be a more costly business than, for instance, a complex factory.
Just take a look at the number of competitors that show up in places where banking regulations are less burdensome, like Panama, and compare it with other places (relative to the country's population and GDP)
legendary
Activity: 1106
Merit: 1004
What many people don't realise is that the bandwidth numbers quoted on the wiki and by you only cover keeping up with the block generation rate. An independent miner will need 100x-1000x more bandwidth to mine at all.

1 MB block size produced ONCE every 10 minutes, NOT over 10 minutes.
If I'm a miner, I want to download that new block as fast as possible to reduce my idle time.
Let's use 1% idle time as your target (meaning your entire mining farm sits idle for ~6 s while you download the block).
...

That's not the case. If you were online before that block started to be built, you've already received all its transactions. They're all in your transaction pool. There's no actual need to download them again (that's a performance improvement suggested by the scalability page, by the way).
To start mining on the next block, all you need is the header of the previous block and a sorted list of transaction hashes to build the Merkle tree. That's much less data than the entire block.
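
To make that concrete, here's a small Python sketch of Bitcoin's Merkle construction (double SHA-256, with the last hash duplicated when a level has an odd count; internal byte-order details are glossed over):

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    # Bitcoin hashes everything with double SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes):
    """Merkle root from the ordered list of raw 32-byte tx hashes."""
    if not tx_hashes:
        raise ValueError("a block contains at least the coinbase tx")
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:               # odd count: duplicate the last hash
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

Given the new block's header and this list of hashes, a node can check that the header's Merkle root matches and start mining on top immediately.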

Unless, of course, the block contains lots of transactions that are not in your memory pool; in that case you'll have to download those unknown transactions.

And there you have it: an easy way to detect whether a spamming attempt is in progress. If a sizable share of the transactions in the new block was not present in your memory pool, you should consider that block a spamming attempt by a miner and refuse to mine on top of it, unless of course it's already more than X blocks deep, in which case you accept it (soft limits).
If the spamming miner decides to broadcast his spamming transactions instead, he'd hit anti-spam fee policies and end up needing to pay other miners in the network to include his spam.

Just to clarify, I'm not opposed to an increase in block size as long as decentralisation is not compromised, by ensuring that the block size remains small enough for average residential broadband connections/commodity PCs to mine with.

Almost everybody agrees with that. The argument is between those who think an arbitrary formula should be invented and imposed via the protocol, and those who believe that spontaneous order (or p2p, free market, freedom - pick your term) can implement better and safer control of the block size without a centralized formula. Well, there's also a third group that thinks the 1MB limit should be kept, but I can't take them seriously...
Not only do I believe spontaneous order would reach better results, I also agree with D&T when he says that setting a formula is technically (and politically) complicated, and potentially error-prone (it might introduce bugs).
donator
Activity: 1218
Merit: 1079
Gerald Davis
I guess it depends on what you mean by microtransactions. I mean, look at the tempest in a teacup about setting the default dust limit at 5430 satoshis (~0.5 cents). If you mean <$0.10 (in 2013 USD), then probably not. If you mean $0.10 to a couple of bucks, then it will likely be some time before those transactions are not economically viable.
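
For reference, the arithmetic behind that "~0.5 cents", at an assumed exchange rate of roughly $92/BTC (the exact rate is an assumption, not stated above):

Code:
DUST_SATOSHIS = 5430
SATOSHIS_PER_BTC = 100_000_000
BTC_USD = 92                       # assumed exchange rate

cents = DUST_SATOSHIS / SATOSHIS_PER_BTC * BTC_USD * 100
print(round(cents, 2))             # ~0.5 cents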

BTW, I am not opposed to solving the "block size" problem; I am just pointing out that the situation is slightly more complex than some make it out to be. There is always some level of centralization; an unlimited blockchain simply pushes us towards a different kind of centralization.

How many tx the network eventually handles, what the costs will be, and how big Bitcoin gets are all unknown. My guess is likely no better than anyone else's. I do believe an alt-coin built from the ground up around low-cost microtransactions could carve out a niche. I also think off-blockchain transactions aren't that scary. I don't like the idea of web wallets holding massive wealth, but I couldn't care less about the security implications of using off-blockchain transactions to buy a cup of coffee or some discounted Steam game.
sr. member
Activity: 294
Merit: 250
This bull will try to shake you off. Hold tight!
Thank you DeathAndTaxes and all the others for your technical explanations.

Do you believe that bitcoin will continue to be used for microtransactions as it is today? Or do you think this will fade out over time due to technical limitations?

If you believe it will fade out, do you think another cryptocurrency will be used for microtransactions, or do you think a top layer built on Bitcoin, with off-chain transactions, will win out?
donator
Activity: 1218
Merit: 1079
Gerald Davis
Let's talk bandwidth then... It seems people in Kansas City already have 1 Gbit/s available in their homes, up and down. Assuming the 400-byte average bitcoin transaction that I read about somewhere, that's more than 300K tps if I'm not mistaken. That's a shitload of transactions. Even if transaction sizes were to multiply by 3 due to more usage of multi-signature features (something that I hope will happen), that would still be more than 100K tps. What's the average number of times a full node has to upload the same transaction? It shouldn't be many, due to the high connectivity of the network. But even if you have to upload the same transaction 10 times, Google Fiber would probably allow you to handle more transactions than Visa and Mastercard combined! We're obviously not hitting such numbers anytime soon. Until then, there might be much more than 1 Gbit/s available on residential links.

Agreed, higher-bandwidth connections will be more common in the future; however, if 1% of potential users have a 1 Gbps connection and that becomes the minimum, then you have reduced the potential full nodes to <1% of the planet. The numbers also aren't as rosy as they seem at first glance. A node by definition needs connections to multiple peers, so a node connected to 8 peers will rebroadcast a tx it receives to 7 peers. Now, 8 is the minimum; for network security we really want a huge number of nodes with high levels of connectivity (20, 30, 500+ connections). So let's look at 20: 1 Gbps / (400 bytes per tx * 19 peers to relay * 8 bits per byte) = ~16,000 tps. Now 16,000 tps is still a huge number, but it would limit full-node participation to those with 1 Gbps. And the real problem is real bandwidth vs marketing.
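
That relay arithmetic, as a small sanity check:

Code:
# Max tx throughput if every tx must be re-uploaded to (connections - 1)
# peers, per the reasoning above.
TX_BYTES = 400                     # average tx size assumed in this thread
BITS_PER_BYTE = 8

def max_tps(link_bps, connections):
    return link_bps / (TX_BYTES * BITS_PER_BYTE * (connections - 1))

print(max_tps(1e9, 20))            # ~16,400 tps on 1 Gbps, as computed above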

1 Gbps sounds great until you saturate your uplink at 1 Gbps 24/7, continually, every second. In no time flat the ISP is going to cut you off or throttle you. Even if they have no hard bandwidth caps, all ISP agreements have "reasonable usage" guidelines. Residential bandwidth is shared. No company could offer (even at cost) 1 Gbps for $100 per month if every user (or even a small minority) actually maxed it out. If you want real pricing, take a look at what most datacenters charge for bandwidth. 1 Gbps is going to cost a LOT more than $100 per month, probably more than $1,000 per month (although the cost does get cut in half every 12-18 months).

The last issue is what Nagato mentions above (although his numbers are low due to the need to broadcast to multiple peers). For miners, outgoing bandwidth is "bursty". A miner needs to broadcast his found block very quickly to as much of the network as possible; every 6 seconds of delay increases the orphan rate by ~1%. If targeting a 3-second window to send a 10 MB block to 50 connected peers, we are looking at 10 MB * 8 bits per byte * 50 peers / 3 seconds = ~1,300 Mbps. Lower connectivity will put a miner at a disadvantage to better-connected miners. If this barrier is too high you will see even more migration to the largest pools, as they can afford the high levels of connectivity needed. Slower pools will essentially carry a "hidden" orphan tax of 1% to 3% or more, and as miners discover that, they will migrate to the better-paying pools.
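
And the burst figure:

Code:
# Upload rate needed to push a freshly found block to all peers within
# the target window, per the paragraph above.
def broadcast_mbps(block_mb, peers, window_seconds):
    return block_mb * 8 * peers / window_seconds

print(broadcast_mbps(10, 50, 3))   # ~1333 Mbps for 10 MB, 50 peers, 3 s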

Quote
All these desperate attempts to hold the block limit become ridiculous when we look at the numbers.

When the average user has "true" 1 Gbps connectivity at a reasonable cost and the average miner can obtain "true" 10 Gbps connectivity, then maybe. BTW, despite the post, I am bullish on Bitcoin; solutions can be found. However, those advocating dropping all limits because of "centralization" need to realize that at the far extreme it just leads to another form of centralization. When only a tiny number of players can afford the cost of running a mining pool (and 1, 10, 50 Gbps low-latency connectivity) or running a full node, you have turned the p2p network into a group of a few hundred highly connected peers. Guess what: modern banking IS a peer-to-peer network of a few hundred highly connected peers. The fact that you can't be a peer on the interbank networks doesn't mean the network doesn't exist. The barriers (legal, regulatory, and cost) just prevent you from becoming a peer.

Quote
The greatest issue for new users is having to wait for the initial sync. If the client were to operate as an SPV node in the meantime, switching to full mode once the initial sync is complete, I guess many more people would be OK with running a full node. Well, some would complain about how slow their computer got after they installed this bitcoin-thing, and might be turned off. But not as much as today.

Today, maybe. But let's look at just a 10 MB block for a node with only 8 peers (dangerously low, IMHO). That requires about 64 Mbps sustained. Due to the bursty nature, for this peer to provide any value relaying blocks, the peak bandwidth would need to be 10x higher (640 Mbps). The larger obstacle isn't sustained or peak speeds (more than achievable from a technical standpoint). The larger obstacle is how much burden it would put on ISP networks (which are generally massively oversubscribed). Total bandwidth used by this peer is ~350 GB per month. Most ISPs will cap a user long before that. The biggest ISP, Comcast, IIRC starts throttling at ~200 GB per month (less on cheaper plans). The first time a casual user either has his download speeds cut 80% or gets a warning from his ISP about having to pay overage fees, he is likely going to pull the plug. Maybe not every user, but at least some users will.
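
The monthly figure checks out directly; the 64 Mbps figure can be reproduced assuming the block is relayed to all 8 peers within a ~10-second window (an assumed window, for illustration):

Code:
BLOCK_MB = 10
PEERS = 8
BLOCKS_PER_MONTH = 6 * 24 * 30     # one block per ~10 minutes

print(BLOCK_MB * PEERS * BLOCKS_PER_MONTH / 1000)  # ~345.6 GB/month
print(BLOCK_MB * 8 * PEERS / 10)   # 64 Mbps, assuming a 10 s relay window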
legendary
Activity: 1400
Merit: 1009
1 MB block size produced ONCE every 10 minutes, NOT over 10 minutes.
This could be solved by pre-announcing blocks: as soon as a miner decides on a list of transactions to include in a block, they start broadcasting this list in parallel with hashing, and then broadcast the nonce once they've found it.
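
A sketch of that two-phase idea; the Peer class and message names are made up for illustration, not part of any real protocol:

Code:
class Peer:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, msg):
        self.inbox.append(msg)

def announce_template(peers, txids):
    # Phase 1: the bulky tx list goes out while the miner is still hashing,
    # so peers can prefetch any txs they're missing.
    for p in peers:
        p.send(("BLOCK_TEMPLATE", tuple(txids)))

def announce_solution(peers, header_bytes):
    # Phase 2: once a nonce is found, only the tiny header propagates;
    # peers rebuild the full block from the pre-announced template.
    for p in peers:
        p.send(("BLOCK_SOLVED", header_bytes))

peers = [Peer("a"), Peer("b")]
announce_template(peers, ["txid1", "txid2"])
announce_solution(peers, b"\x00" * 80)   # an 80-byte header placeholder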
full member
Activity: 150
Merit: 100
Just to clarify, I'm not opposed to an increase in block size as long as decentralisation is not compromised, by ensuring that the block size remains small enough for average residential broadband connections/commodity PCs to mine with.
full member
Activity: 150
Merit: 100
Let's talk bandwidth then... It seems people in Kansas City already have 1 Gbit/s available in their homes, up and down. Assuming the 400-byte average bitcoin transaction that I read about somewhere, that's more than 300K tps if I'm not mistaken. That's a shitload of transactions. Even if transaction sizes were to multiply by 3 due to more usage of multi-signature features (something that I hope will happen), that would still be more than 100K tps. What's the average number of times a full node has to upload the same transaction? It shouldn't be many, due to the high connectivity of the network. But even if you have to upload the same transaction 10 times, Google Fiber would probably allow you to handle more transactions than Visa and Mastercard combined! We're obviously not hitting such numbers anytime soon. Until then, there might be much more than 1 Gbit/s available on residential links.

All these desperate attempts to hold the block limit become ridiculous when we look at the numbers.

What many people don't realise is that the bandwidth numbers quoted on the wiki and by you only cover keeping up with the block generation rate. An independent miner will need 100x-1000x more bandwidth to mine at all.

1 MB block size produced ONCE every 10 minutes, NOT over 10 minutes.
If I'm a miner, I want to download that new block as fast as possible to reduce my idle time.
Let's use 1% idle time as your target (meaning your entire mining farm sits idle for ~6 s while you download the block).
To download 1 MB over 6 s, you need about a 1.7 Mbps connection (seems reasonable for most people in developed countries).
10 MB block size: 17 Mbps (even I do not have a 17 Mbps connection at home, though it is affordable enough if I need it).
100 MB block size: 170 Mbps (most countries are at least 5-10 years away from having affordable fibre internet).

And that is assuming 1% is the market-determined edge you can afford to lose and remain profitable.
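
The figures above can be reproduced assuming ~10 bits per byte on the wire to allow for protocol overhead (an assumption implied by the rounding):

Code:
def required_mbps(block_mb, idle_seconds, wire_bits_per_byte=10):
    # 10 bits/byte is an assumed allowance for transport overhead
    return block_mb * wire_bits_per_byte / idle_seconds

for size_mb in (1, 10, 100):
    print(size_mb, "MB ->", round(required_mbps(size_mb, 6), 1), "Mbps")
# 1 MB -> 1.7, 10 MB -> 16.7, 100 MB -> 166.7, matching ~1.7/17/170 above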
legendary
Activity: 1106
Merit: 1004
The bottleneck is more in this order (from most critical to least critical):
bandwidth (for residential connections the upload segment tends to be rather constrained)
memory (to quickly validate txs & blocks the UTXO set needs to be kept in memory; sure, pulls from disk are possible and... painfully slow)
storage (as much as people worry about storage, it is a free market, unlike residential last mile, and HDD capacities already have a massive "headstart")
CPU (with Moore's law I don't see this ever being a problem, but as pointed out, non-CPU solutions are possible)

I agree with your bottleneck order. Bandwidth will probably be the first, particularly with SSDs getting cheaper (you can store the UTXO set on an SSD for better I/O performance). CPU can be dramatically improved, as you say. Storage is not such a big deal. And if memory becomes a big deal, good caching strategies together with SSDs could circumvent it.

Let's talk bandwidth then... It seems people in Kansas City already have 1 Gbit/s available in their homes, up and down. Assuming the 400-byte average bitcoin transaction that I read about somewhere, that's more than 300K tps if I'm not mistaken. That's a shitload of transactions. Even if transaction sizes were to multiply by 3 due to more usage of multi-signature features (something that I hope will happen), that would still be more than 100K tps. What's the average number of times a full node has to upload the same transaction? It shouldn't be many, due to the high connectivity of the network. But even if you have to upload the same transaction 10 times, Google Fiber would probably allow you to handle more transactions than Visa and Mastercard combined! We're obviously not hitting such numbers anytime soon. Until then, there might be much more than 1 Gbit/s available on residential links.

All these desperate attempts to hold the block limit become ridiculous when we look at the numbers.

The average Joe hasn't even started using Bitcoin today.  The requirements of a full node will only increase.  Today users are already encouraging new/casual users towards lite nodes and eWallets.  That trend will only accelerate. 

I'm not sure. The greatest issue for new users is having to wait for the initial sync. If the client were to operate as an SPV node in the meantime, switching to full mode once the initial sync is complete, I guess many more people would be OK with running a full node. Well, some would complain about how slow their computer got after they installed this bitcoin-thing, and might be turned off. But not as much as today.
donator
Activity: 1218
Merit: 1079
Gerald Davis
The average Joe is not running a qt client.

Then what are they using? I think most people google for bitcoin and end up on bitcoin.org where they see this:

The average Joe hasn't even started using Bitcoin today. The requirements of a full node will only increase. Today users are already encouraging new/casual users towards lite nodes and eWallets. That trend will only accelerate. The demands on a full node are certainly manageable, but there will always be a cost to being an equal peer in a global transaction network. Many users will opt out of that cost by using lighter solutions.