
Topic: How a floating blocksize limit inevitably leads towards centralization - page 26. (Read 71613 times)

legendary
Activity: 1120
Merit: 1164
RE: lots of code to write if you can't keep up with transaction volume:  sure.  So?

Well, one big objection is the code required is very similar to that required by fidelity-bonded bank/ledger implementations, but unlike the fidelity stuff, because it's consensus screwing it up creates problems that are far more difficult to fix and far more widespread in scale.


Transaction volume itself leads to centralization too, simply by ensuring that only a miner able to keep up with the large volume of low-fee transactions can make a profit.

I really don't understand this logic.

Yes, it is a fact of life that if you have a system where people are competing, the people who are less efficient will be driven out of business. So there will be fewer people in that business.

You seem to be saying that we should subsidize inefficient miners by limiting the block size, therefore driving up fees and making users pay for their inefficiency.

"This mining this is crazy, like all that work when you could just verify a transaction's signatures, and I dunno, ask a bunch of trusted people if the transaction existed?"

So, why do we give miners transaction fees anyway? Well, they are providing a service of "mining a block", but the real service they are providing is being independent from other miners, and we value that because we don't want >50% of the hashing power to be controlled by any one entity.

When you say these small miners are inefficient, you're completely ignoring what we actually want miners to do, and that is to provide independent hashing power. The small miners are the most efficient at providing this service, not the least.

The big issue is that the cost of being a miner comes in two forms: hashing power and overhead. The former is what makes the network secure. The latter is a necessary evil, and costs the same for every independent miner. Fortunately with 1MiB blocks the overhead is low enough that individual miners can profitably mine on P2Pool, but with 1GiB blocks P2Pool mining just won't be profitable. We already have 50% of the hashing power controlled by about three or four pools; if running a pool requires thousands of dollars' worth of equipment, the situation will get even worse.

Of course, we've also been focusing a lot on miners, when the same issue applies to relay nodes too. Preventing DoS attacks on the flood-fill network is going to be a lot harder when most nodes can't verify blocks fast enough to know if a transaction is valid or not, and hence whether the limited resource of priority or fees is being expended by broadcasting it. Yet if the "solution" is fewer relay nodes, you've broken the key security assumption that information is easy to spread and difficult to stifle.

All in the name of vague worries about "too much centralization."

Until Bitcoin has undergone a serious attack we just aren't going to have a firm idea of what's "too much centralization".
legendary
Activity: 1652
Merit: 2311
Chief Scientist
RE: lots of code to write if you can't keep up with transaction volume:  sure.  So?

Transaction volume itself leads to centralization too, simply by ensuring that only a miner able to keep up with the large volume of low-fee transactions can make a profit.

I really don't understand this logic.

Yes, it is a fact of life that if you have a system where people are competing, the people who are less efficient will be driven out of business. So there will be fewer people in that business.

You seem to be saying that we should subsidize inefficient miners by limiting the block size, therefore driving up fees and making users pay for their inefficiency.

All in the name of vague worries about "too much centralization."
cjp
full member
Activity: 210
Merit: 124
I think we need to have a block size limit. My original objection to removing the block size limit was that, as the number of new coins per block drops to zero, the mining incentive will also drop to zero if there is nothing to keep transaction fees above zero (transaction capacity has to be "scarce"). The OP showed an entirely new way things can go wrong if there is no block size limit.

I don't see how making the block size limit "auto-adjustable" is different in this respect from having no block size limit at all.

In my opinion, the future block size limit can be very high, to allow for very high (but not unlimited) transaction volume. But it has to be low enough to prevent all the problems related to unlimited block sizes.

See the paper I presented in this thread: https://bitcointalksearch.org/topic/combining-bitcoin-and-the-ripple-fast-scalable-decentralized-and-more-94674. In chapter 3, it contains some estimates of the scalability of different concepts. I mention it here because it contains some estimates of the number of transactions needed for different technologies, when used worldwide for all transactions. Assuming 2 transactions per person per day for 10^10 people, these are some conclusions:
  • normal Bitcoin system: 1e8 transactions/block
  • when my proposed system is widely used: 1e5 transactions/block
That should give you an idea of how high the block size limit should be. Maybe it should even be a bit lower, to increase scarcity a bit, and for the current level of technology, to allow normal-PC users to verify the entire block chain. For comparison: the current limit is around 1e3 transactions/block.
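The estimates above are easy to check with back-of-the-envelope arithmetic. A minimal sketch, using only the figures quoted in the post (10^10 people, 2 transactions per person per day) plus Bitcoin's ~144 blocks per day:

```python
# Back-of-the-envelope check of the transactions-per-block estimates above.
# Assumed inputs are taken from the post: 10^10 people, 2 tx/person/day.
PEOPLE = 10**10
TX_PER_PERSON_PER_DAY = 2
BLOCKS_PER_DAY = 24 * 60 // 10  # one block per ~10 minutes -> 144/day

tx_per_block = PEOPLE * TX_PER_PERSON_PER_DAY / BLOCKS_PER_DAY
print(f"{tx_per_block:.2e}")  # ~1.4e8, on the order of the quoted 1e8
```

This lands within a factor of 1.5 of the "1e8 transactions/block" figure, which is as close as an order-of-magnitude estimate needs to be.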

So, as I've said before:  we're running up against the artificial 250K block size limit now, I would like to see what happens. There are lots of moving pieces here, so I don't think ANYBODY really knows what will happen (maybe miners will collectively decide to keep the block size low, so they get more fees.  Maybe they will max it out to force out miners on slow networks.  Maybe they will keep it small so their blocks relay through slow connections faster (maybe there will be a significant fraction of mining power listening for new blocks behind tor, but blasting out new blocks not via tor)).
I'd like to see that too, since it's IMHO such an important piece of Bitcoin, and I'd rather have it tested now than when the whole world starts using Bitcoin; after successful halving of the block reward, this is the next big step.

I think we should put users first. What do users want? They want low transaction fees and fast confirmations. Let's design for that case, because THE USERS are who ultimately give Bitcoin value.

I think the users want more than that, at least in the current Bitcoin community. Bitcoin's most distinctive characteristics come from its decentralized nature; if you lose that, everything else is in danger. If you just want low fees and fast confirmations, Bitcoin is not the right technology: it would be far more efficient to have a couple of centralized debit card issuers who issue properly secured cards without chargebacks. Every transaction would only need to be verified and stored once or twice, so there would be almost no costs (and hence almost no transaction fees), and confirmation would be near-instantaneous.
legendary
Activity: 1120
Merit: 1164
Half-baked thoughts on the O(N) problem:

So, we've got O(T) transactions that have to get verified.

And, right now, we've got O(P) full nodes on the network that verify every single transaction.

So, we get N verifications, where N = T*P.

The observation is that if both T and P increase at the same rate, that formula is O(N^2).

... and at this point your (and gmaxwell's) imagination seems to run out, and you throw up your hands and say "We Must Limit Either T or P."

Really?

If we have 20,000 full nodes on the network, do we really need every transaction to be verified 20,000 separate times?

I think as T and P increase it'd be OK if full nodes with limited CPU power or bandwidth decide to only fetch and validate a random subset of transactions.


Well you'll have to implement the fraud proofs stuff d'aniel talked about and I later expanded on. You'll also need a DHT so you can retrieve arbitrary transactions. Both require a heck of a lot of code to be written, working UTXO for fraud proofs in particular; random transaction verification is quite useless without the ability to tell everyone else that the block is invalid.

Things get ugly though... block validation isn't deterministic anymore: I can have one tx out of a million invalid, yet it still makes the whole block invalid. You better hope someone is in fact running a full-block validator and the fraud proof mechanism is working well or it might take a whole bunch of blocks before you find out about the invalid one with random sampling. The whole fraud proofs implementation is also now part of the consensus problem; that's a lot of code to get right.

In addition partial validation still doesn't solve the problem that you don't know which tx's in your mempool are safe to include in the next block unless you know which ones were spent by the previous block. Mining becomes a game of odds, and the UTXO tree proposals don't help.  A UTXO bloom filter might, but you'll have to be very careful that it isn't subject to chosen key attacks. Transaction volume itself leads to centralization too, simply by ensuring that only a miner able to keep up with the large volume of low-fee transactions can make a profit.

I've already thought of your idea, and I'm sure gmaxwell has too... our imagination didn't "run out"
full member
Activity: 154
Merit: 100
Likewise, miners have all kinds of perverse incentives in theory that don't seem to happen in practice. Like, why do miners include any transactions at all? They can minimize their costs by not doing so. Yet, transactions confirm. You really can't prove anything about miners' behaviour, just guess at what some of them might do.

The fact that miners include transactions at all is a great example of how small the block limit is. Right now the risk of orphans due to slow propagation is low enough that the difference between a 1KiB block and a 250KiB block is so inconsequential that pools just run the reference client code and don't bother tweaking it.
I don't think that was the point Mike was making. Rather, the cost of computing the hash of a block is directly proportional to the size of the block, so doubling the blocksize is like halving the hashrate for a miner. Thus, while rewards for finding blocks are large compared to fees, it is more profitable for a miner to mine a block as small as possible because his effective hashrate increases and he is more likely to find blocks.

What this says about miners (or really, pool operators) is that either:
 - they're too lazy to change the code
 - they're not arseholes / aren't purely motivated by short term profit
 - they realise that by mining empty blocks, the usability of bitcoin will reduce, hence the market price, hence their profits
legendary
Activity: 1120
Merit: 1164
I agree with Gavin, and I don't understand what outcome you're arguing for.

You want to keep the block size limit so Dave can mine off a GPRS connection forever? Why should I care about Dave? The other miners will make larger blocks than he can handle and he'll have to stop mining and switch to an SPV client. Sucks to be him.

I primarily want to keep the limit fixed so we don't have a perverse incentive. Ensuring that everyone can audit the network properly is secondary.

If there was consensus to, say, raise the limit to 100MiB that's something I could be convinced of. But only if raising the limit is not something that happens automatically under miner control, nor if the limit is going to just be raised year after year.

Your belief we have to have some hard cap on the N in O(N) doesn't ring true to me. Demand for transactions isn't actually infinite. There is some point at which Bitcoin may only grow very slowly if at all (and is outpaced by hardware improvements).

Yes, there will likely only be around 10 billion people on the planet, but that's a hell of a lot of transactions. At one transaction per person per day you've got 115,700 transactions per second. Sorry, but there are lots of reasons to think Moore's law is coming to an end, and in any case the issue I'm most worried about is network scaling, and network scaling doesn't even follow Moore's law.

Making design decisions assuming technology is going to keep getting exponentially better is a huge risk when transistors are already only a few orders of magnitude away from being single atoms.
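The throughput figure quoted above is just division, but it's worth seeing spelled out. A minimal sketch of the arithmetic (one transaction per person per day for 10 billion people, spread over the 86,400 seconds in a day):

```python
# Checking the "115,700 transactions per second" figure above.
people = 10**10
seconds_per_day = 24 * 60 * 60  # 86,400

tx_per_second = people * 1 / seconds_per_day
print(round(tx_per_second))  # -> 115741, i.e. roughly 115,700 tx/s
```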

Likewise, miners have all kinds of perverse incentives in theory that don't seem to happen in practice. Like, why do miners include any transactions at all? They can minimize their costs by not doing so. Yet, transactions confirm. You really can't prove anything about miners' behaviour, just guess at what some of them might do.

The fact that miners include transactions at all is a great example of how small the block limit is. Right now the risk of orphans due to slow propagation is low enough that the difference between a 1KiB block and a 250KiB block is so inconsequential that pools just run the reference client code and don't bother tweaking it. I wouldn't be the slightest bit surprised to be told that there aren't any pools with even a single full-time employee, so why would I expect people to really put in the effort to optimize revenue, when it'll probably lead to a bunch of angry forum posts and miners leaving because they think the pool will damage Bitcoin?

I don't personally have any interest in working on a system that boils down to a complicated and expensive replacement for wire transfers. And I suspect many other developers, including Gavin, don't either. If Gavin decides to lift the cap, I guess you and Gregory could create a separate alt-coin that has hard block size caps and see how things play out over the long term.

I don't have any interest in working on a system that boils down to a complicated and expensive replacement for PayPal.

Decentralization is the fundamental thing that makes Bitcoin special.
legendary
Activity: 1652
Merit: 2311
Chief Scientist
Half-baked thoughts on the O(N) problem:

So, we've got O(T) transactions that have to get verified.

And, right now, we've got O(P) full nodes on the network that verify every single transaction.

So, we get N verifications, where N = T*P.

The observation is that if both T and P increase at the same rate, that formula is O(N^2).

... and at this point your (and gmaxwell's) imagination seems to run out, and you throw up your hands and say "We Must Limit Either T or P."

Really?

If we have 20,000 full nodes on the network, do we really need every transaction to be verified 20,000 separate times?

I think as T and P increase it'd be OK if full nodes with limited CPU power or bandwidth decide to only fetch and validate a random subset of transactions.
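The counting argument above can be sketched numerically. A minimal sketch with illustrative figures (the node and transaction counts here are hypothetical, not measurements):

```python
# Sketch of the O(N^2) observation: if transactions T and full nodes P
# both grow by the same factor k, total verification work T*P grows by k^2.
def total_verifications(T: int, P: int) -> int:
    """Every one of P full nodes verifies every one of T transactions."""
    return T * P

base = total_verifications(1_000, 20_000)       # 20 million verifications
scaled = total_verifications(10_000, 200_000)   # both T and P grew 10x
print(scaled // base)  # -> 100, i.e. 10^2: quadratic growth
```

Random-subset validation, as suggested above, attacks the P factor: each node verifies only T/k transactions, so total work stays closer to linear in T.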
legendary
Activity: 1526
Merit: 1134
I agree with Gavin, and I don't understand what outcome you're arguing for.

You want to keep the block size limit so Dave can mine off a GPRS connection forever? Why should I care about Dave? The other miners will make larger blocks than he can handle and he'll have to stop mining and switch to an SPV client. Sucks to be him.

Your belief we have to have some hard cap on the N in O(N) doesn't ring true to me. Demand for transactions isn't actually infinite. There is some point at which Bitcoin may only grow very slowly if at all (and is outpaced by hardware improvements).

Likewise, miners have all kinds of perverse incentives in theory that don't seem to happen in practice. Like, why do miners include any transactions at all? They can minimize their costs by not doing so. Yet, transactions confirm. You really can't prove anything about miners' behaviour, just guess at what some of them might do.

I don't personally have any interest in working on a system that boils down to a complicated and expensive replacement for wire transfers. And I suspect many other developers, including Gavin, don't either. If Gavin decides to lift the cap, I guess you and Gregory could create a separate alt-coin that has hard block size caps and see how things play out over the long term.
legendary
Activity: 1120
Merit: 1164
So...  I start from "more transactions == more success"

I strongly feel that we shouldn't aim for Bitcoin topping out as a "high power money" system that can process only 7 transactions per second.

Hey, I want a pony too. But Bitcoin is an O(n) system, and we have no choice but to limit n.

I agree with Stephen Pair-- THAT would be a highly centralized system.

A "highly centralized" system where anyone can get a transaction confirmed by paying the appropriate fee? A fee that would be about $20 (1) for a typical transaction even if $10 million a day, or $3.65 billion a year, goes to miners keeping the network secure for everyone?

I'd be very happy to be able to wire money anywhere in the world, completely free from central control, for only $20. Equally I'll happily accept more centralized methods to transfer money when I'm just buying a chocolate bar.


1) $10,000,000 / 144 blocks ≈ $69,444/block
     / 1MiB/block ≈ $69.44/KiB

A two-in, two-out transaction with compressed keys is about 300 bytes, thus $20.35 per transaction.
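The footnote's arithmetic, carried through in one place (all inputs are the figures quoted in the post; the exact result shifts slightly depending on whether you treat 1 MiB as 1000 or 1024 KiB, but lands around $20 either way):

```python
# Reproducing the footnote's fee arithmetic: $10M/day to miners,
# 144 blocks/day, 1 MiB blocks, ~300-byte two-in two-out transactions.
DAILY_REVENUE_USD = 10_000_000
BLOCKS_PER_DAY = 144
BLOCK_SIZE_BYTES = 1024 * 1024   # 1 MiB
TX_SIZE_BYTES = 300              # compressed keys, per the footnote

usd_per_block = DAILY_REVENUE_USD / BLOCKS_PER_DAY   # ~$69,444
usd_per_byte = usd_per_block / BLOCK_SIZE_BYTES
fee_per_tx = usd_per_byte * TX_SIZE_BYTES
print(f"${fee_per_tx:.2f}")  # -> $19.87, i.e. roughly $20 per transaction
```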

So, as I've said before:  we're running up against the artificial 250K block size limit now, I would like to see what happens. There are lots of moving pieces here, so I don't think ANYBODY really knows what will happen (maybe miners will collectively decide to keep the block size low, so they get more fees.  Maybe they will max it out to force out miners on slow networks.  Maybe they will keep it small so their blocks relay through slow connections faster (maybe there will be a significant fraction of mining power listening for new blocks behind tor, but blasting out new blocks not via tor)).

That sounds like a whole lot of "maybe". I agree that we need to move cautiously, but fundamentally I've shown why a purely profit-driven miner has an incentive to create blocks large enough to push other miners out of the game, and gmaxwell has made the point that a purely profit-driven miner has no incentive not to add an additional transaction to a block if the transaction fee is greater than the cost, in terms of decreased block propagation leading to orphans. The two problems are complementary in that decreased block propagation actually increases revenue up to a point, and the effect is most significant for the largest miners. Unless someone can come up with a clear reason why gmaxwell and I are both wrong, I think we've shown pretty clearly that floating blocksize limits will lead to centralization.

Variance has already caused the number of pools out there to be fairly limited; we really don't want more incentives for pools to get larger.

I think we should put users first. What do users want? They want low transaction fees and fast confirmations. Let's design for that case, because THE USERS are who ultimately give Bitcoin value.

They want something impossible from an O(n) system without making it centralized. We've already got lots of centralized systems; creating another one doesn't do the world any good. We've only got one major decentralized payment system, Bitcoin, and I want to keep it that way. Users can always use centralized systems for low-value transactions, and if block sizes are limited they'll even be able to very effectively audit the on-chain transactions produced by those centralized systems. Large blocks do not let you do that.

Ultimately, the problem is the huge amount of expensive infrastructure built around the assumption that transactions are nearly free. Businesses make decisions based on what will happen at most 3-5 years in the future, so naturally the likes of Mt. Gox, BitInstant, SatoshiDice and others have every reason to want the block size limit to be lifted. It'll save them money now, even if it will lead to a centralized Bitcoin five or ten years down the road.
legendary
Activity: 1120
Merit: 1164
Wouldn't already a valid header (or even just the hash of that header) be enough to start mining at least an empty block?

Yes, but an empty block doesn't earn you any revenue as the block reward drops, so mining is still pointless. You still need the full block to know what transactions were mined, and thus what transactions in the mempool are safe to include in the block you want to attempt to mine.

Additionally, without the full block you don't know if the block behind a header is valid, so you are vulnerable to miners feeding you invalid blocks. Of course, someone has to create those invalid blocks, but the large miners are the only ones who can validate them, so if smaller miners respond by taking risks like mining blocks even when they don't know what transactions have already been mined, the larger miners can run lots of nodes to find those blocks as fast as possible and distribute them to other small miners without the equipment to validate the blocks.

Also if you produce blocks large and fast enough to drive someone out of mining you'd also drive a lot more full clients off the network.

Sure, but all the scenarios where extremely large blocks are allowed also assume that most people only run mini-SPV clients at best; if one of the smaller "full-node transaction feed" services gets put off-line, your customers, that is, transaction creators, will just move to a larger service for their tx feed.

Miners already have quite high incentives to DDoS (or otherwise break) all other pools that they are not part of, no matter the block size. I think there are more effective, less disruptive for users and cheaper ways of driving competing miners off the grid than a bandwidth war.

Yes, but DoSing nodes with packet floods is illegal; DoSing full nodes by just making larger blocks isn't. For the largest miner/full-node service the cost of launching such an attack is zero: they've already paid for the extra hardware capacity, so why not use it to its full advantage? So what if doing so causes 5% of the network to drop out?

The most dangerous part of this scenario is that you don't need miners to even act maliciously for it to happen. The miner with the largest investment in fixed costs (network capacity and CPU power) has a profit motive to use that expensive capacity to the fullest extent possible. The fact that doing so happens to push the miner with the smallest investment in fixed costs off the network, furthering the largest miner's mining profits, is inevitable. Furthermore, the process is guaranteed to repeat, because the largest miner has no reason not to take those further mining profits and invest in yet more network capacity and CPU power.

Again, remember that those fixed costs do not make the network more secure. A 51% attacker doesn't care about valid transactions at all; they're trying to mine blocks that don't have the transactions that the main network does, so they don't need to spend any money on their network connection.

Every cent that miners spend on internet connections and fast computers because they need to process huge blocks is money that could have gone towards securing the network with hashing power, but didn't.
legendary
Activity: 1652
Merit: 2311
Chief Scientist
So...  I start from "more transactions == more success"

I strongly feel that we shouldn't aim for Bitcoin topping out as a "high power money" system that can process only 7 transactions per second.

I agree with Stephen Pair-- THAT would be a highly centralized system.

Oh, sure, mining might be decentralized.  But who cares if you either have to be a gazillionaire to participate directly on the network as an ordinary transaction-creating customer, or have to have your transactions processed via some centralized, trusted, off-the-chain transaction processing service?

So, as I've said before:  we're running up against the artificial 250K block size limit now, I would like to see what happens. There are lots of moving pieces here, so I don't think ANYBODY really knows what will happen (maybe miners will collectively decide to keep the block size low, so they get more fees.  Maybe they will max it out to force out miners on slow networks.  Maybe they will keep it small so their blocks relay through slow connections faster (maybe there will be a significant fraction of mining power listening for new blocks behind tor, but blasting out new blocks not via tor)).


I think we should put users first. What do users want? They want low transaction fees and fast confirmations. Let's design for that case, because THE USERS are who ultimately give Bitcoin value.
legendary
Activity: 2618
Merit: 1007
Wouldn't already a valid header (or even just the hash of that header) be enough to start mining at least an empty block?
Also if you produce blocks large and fast enough to drive someone out of mining you'd also drive a lot more full clients off the network.

Miners already have quite high incentives to DDoS (or otherwise break) all other pools that they are not part of, no matter the block size. I think there are more effective, less disruptive for users and cheaper ways of driving competing miners off the grid than a bandwidth war.
legendary
Activity: 1120
Merit: 1164
This is a re-post of a message I sent to the bitcoin-dev mailing list. There has been a lot of talk lately about raising the block size limit, and I fear very few people understand the perverse incentives miners have with regard to blocks large enough that not all of the network can process them, in particular the way these incentives inevitably lead towards centralization. I wrote the below in terms of block size, but the idea applies equally to ideas like Gavin's maximum block validation time concept. Either way miners, especially the largest miners, make the most profit when the blocks they produce are large enough that less than 100%, but more than 50%, of the network can process them.



Quote
One of the beauties of bitcoin is that the miners have a very strong incentive to distribute as widely and as quickly as possible the blocks they find...they also have a very strong incentive to hear about the blocks that others find.

The idea that miners have a strong incentive to distribute blocks as widely and as quickly as possible is a serious misconception. The optimal situation for a miner is if they can guarantee their blocks would reach just over 50% of the overall hashing power, but no more. The reason is orphans.

Here's an example that makes this clear: suppose Alice, Bob, Charlie and David are the only Bitcoin miners, and each of them has exactly the same amount of hashing power. We will also assume that every block they mine is exactly the same size, 1MiB. However, Alice and Bob both have pretty fast internet connections, 2MiB/s and 1MiB/s respectively. Charlie isn't so lucky; he's on an average internet connection for the US, 0.25MiB/second. Finally, David lives in a country with a failing currency, and his local government is trying to ban Bitcoin, so he has to mine behind Tor and can only reliably transfer 50KiB/second.

Now the transactions themselves aren't a problem, 1MiB/10minutes is just 1.8KiB/second average. However, what happens when someone finds a block?

So Alice finds one, and with her 2MiB/second connection she simultaneously transfers her newfound block to her three peers. She has enough bandwidth that she can do all three at once, so Bob has it in 1 second, Charlie in 4 seconds, and finally David in 20 seconds. The thing is, David has effectively spent those 20 seconds doing nothing. Even if he found a new block in that time he wouldn't be able to upload it to his other peers fast enough to beat Alice's block. In addition, there was also a probabilistic time window before Alice found her block where, even if David found a block, he couldn't get it to the majority of hashing power fast enough to matter. Basically we can say David spent about 30 seconds doing nothing, and thus his effective hash power is now down by 5%.


However, it gets worse. Let's say a rolling-average mechanism to determine maximum block sizes has been implemented, and since demand is high enough that every block is at the maximum, the rolling average lets the blocks get bigger. Let's say we're now at 10MiB blocks. Average transaction volume is now 18KiB/second, so David has just 32KiB/second left, and a 10MiB block takes 5.3 minutes to download. Including the time window when David finds a new block but can't upload it, he's down to doing useful mining a bit over 3 minutes/block on average.
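The bandwidth arithmetic in this example can be sketched in a few lines. The inputs are the figures assumed in the example (David's 50 KiB/s Tor link, ~18 KiB/s of transaction traffic at 10 MiB blocks), not real measurements:

```python
# Sketch of David's dead time per block in the example above.
def download_seconds(block_kib: float, link_kib_s: float,
                     tx_kib_s: float = 0.0) -> float:
    """Time to fetch a new block over the bandwidth left after tx traffic."""
    return block_kib / (link_kib_s - tx_kib_s)

# 1 MiB block over an otherwise idle 50 KiB/s link:
print(download_seconds(1024, 50))                 # -> 20.48 seconds
# 10 MiB block, with 18 KiB/s of tx traffic eating into the link:
print(download_seconds(10 * 1024, 50, 18) / 60)   # ~5.3 minutes
```

At 10 MiB blocks the download alone eats more than half of the average 10-minute block interval, which is the basis for the "a bit over 3 minutes/block of useful mining" figure.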

Alice, on the other hand, now has 15% less competition, so she has clearly benefited from the fact that her blocks can't propagate quickly to 100% of the installed hashing power.


Now I know you are going to complain that this is BS because obviously we don't need to actually transmit the full block; everyone already has the transactions, so you just need to transfer the tx hashes, roughly a 10x reduction in bandwidth. But it doesn't change the fundamental principle: instead of David being pushed off-line at 10MiB blocks, he'll be pushed off-line at 100MiB blocks. Either way, the incentives are to create blocks so large that they only reliably propagate to a bit over 50% of the hashing power, *not* 100%.
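The "roughly 10x" figure follows from the sizes involved, assuming ~300-byte transactions as in the fee footnote earlier in the thread and 32-byte transaction hashes:

```python
# Rough check of the "~10x reduction" from relaying tx hashes instead of
# full transactions. Sizes are the thread's assumptions, not measurements.
TX_BYTES = 300    # typical two-in, two-out transaction
HASH_BYTES = 32   # SHA-256 transaction id

reduction = TX_BYTES / HASH_BYTES
print(f"{reduction:.1f}x")  # -> 9.4x, i.e. roughly an order of magnitude
```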

Of course, who's to say Alice and Bob are mining blocks full of transactions known to the network anyway? Right now the block reward is still high and tx fees are low. If there isn't actually 10MiB of transactions per block on the network, it still makes sense for them to pad their blocks to that size anyway to force David out of the mining business. They would gain from the reduced hashing power, and get the tx fees he would have collected. Finally, since there are now just three miners, for Alice and Bob whether or not their blocks ever get to Charlie is now totally irrelevant; they have every reason to make their blocks even bigger.

Would this happen in the real world? With pools chances are people would quit mining solo or via P2Pool and switch to central pools. Then as the block sizes get large enough they would quit pools with higher stale rates in preference for pools with lower ones, and eventually the pools with lower stale rates would probably wind up clustering geographically so that the cost of the high-bandwidth internet connections between them would be cheaper. Already miners are very sensitive to orphan rates, and will switch pools because of small differences in that rate.

Ultimately the reality is that miners have very, very perverse incentives when it comes to block size. If you assume malice, these perverse incentives lead to nasty outcomes. Even if you don't assume malice, for pool operators the natural cycle of slightly reduced profitability, leading to less ability to invest in and maintain fast network connections, leading to more orphans, fewer miners, and further reduced profitability due to higher overhead, will inevitably lead to centralization of mining capacity.