
Topic: How a floating blocksize limit inevitably leads towards centralization - page 25. (Read 71590 times)

legendary
Activity: 1064
Merit: 1001
You have to carefully choose the goal you want to achieve with the "auto-adjusting"

Here's a starting point.

A periodic block size adjustment should occur that:

- Maintains some scarcity to maintain transaction fees
- But not too much scarcity (i.e. 1MB limit forever)
- Doesn't incentivize miners to game the system
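
As a purely illustrative sketch (not a concrete proposal; the window, target fullness, and step cap are all assumptions), a rule in that spirit could look something like this:

```python
# Hypothetical periodic block size adjustment, for discussion only.
# Assumed parameters: adjust every 2016 blocks, aim for blocks ~75% full on
# average (preserving some fee pressure), and clamp each step to +/-10% so
# no single period can swing the limit dramatically.

ADJUST_INTERVAL = 2016      # blocks per adjustment period (assumed)
TARGET_FULLNESS = 0.75      # desired average block fullness (assumed)
MAX_STEP = 0.10             # maximum relative change per period (assumed)

def next_size_limit(current_limit_bytes, recent_block_sizes):
    """Compute the limit for the next period from the last period's blocks."""
    avg_fullness = (sum(recent_block_sizes) / len(recent_block_sizes)) / current_limit_bytes
    factor = avg_fullness / TARGET_FULLNESS                     # push fullness toward the target
    factor = max(1.0 - MAX_STEP, min(1.0 + MAX_STEP, factor))   # but never too fast
    return int(current_limit_bytes * factor)
```

The below-100% target fullness is what preserves scarcity, the upward steps prevent a permanent 1MB ceiling, and the step cap is only a partial answer to the third point; as the next post shows, a rule like this can still be gamed.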

full member
Activity: 154
Merit: 100
But how many of all transactions should on average fit into a block? 90%? 80%? 50%? Can anyone come up with some predictions and estimates of how various auto-adjusting rules could potentially play out?
If you want the worst case then consider this:

Some set of miners decide, as Peter suggests, to increase the blocksize in order to reduce competition. Thinking longterm, they decide that a little money lost now is worth the rewards of controlling a large portion of the mining of the network.
1) The miners create thousands of addresses and send funds between them as spam (this is the initial cost)
  a) optional - add enough transaction fee so that legitimate users get upset about tx fees increasing and call for blocksize increases
2) The number of transactions is now much higher than the blocksize allows, forcing the auto-adjust to increase blocksize
3) while competition still exists, goto step 1
4) Continue sending these spam transactions to maintain the high blocksize. Added bonus: as the transaction fee you pay goes to yourself, the transaction is free!
5) Profit!
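
To make the incentive concrete, here is a toy run of that loop against the illustrative rule sketched a few posts up (all numbers are arbitrary assumptions; the point is only that spam fees in blocks the attacker mines itself come straight back, so the real cost is just the fraction collected by other miners):

```python
# Toy model of the spam attack: the attacker keeps every block padded to the
# limit with transactions between its own addresses, so a fullness-based
# auto-adjust rule raises the limit by its maximum step every period.
# All parameters below are illustrative assumptions.

limit = 1_000_000             # starting limit, bytes (1 MB)
attacker_share = 0.3          # fraction of blocks the attacker mines
feerate_btc_per_kb = 0.0001   # fee attached to the spam
blocks_per_period = 2016

total_cost_btc = 0.0
for period in range(12):
    spam_bytes = limit * blocks_per_period               # pad every block in the period
    fees_paid = (spam_bytes / 1000) * feerate_btc_per_kb
    total_cost_btc += fees_paid * (1 - attacker_share)   # only fees to other miners are lost
    limit = int(limit * 1.10)                            # the rule hits its +10% cap each period
    print(f"period {period}: limit {limit:,} bytes, cumulative cost ~{total_cost_btc:.0f} BTC")
```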
legendary
Activity: 1078
Merit: 1003
Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Sure, there is some risk, but Kickstarter is showing that the general concept can indeed fund public goods.

Off topic:
It's interesting you say that; I'm guessing you don't think there was a need for a Bitcoin Foundation then, do you?
legendary
Activity: 1078
Merit: 1003
In the light of this, and because the need for bitcoins primarily comes from the need for a decentralized, no-point-of-control system, I think it's not sufficient to call worries about centralization "vague": you have to clearly defend why this particular form of centralization can not be dangerous. The default is "centralization is bad".

It is a technical decision, not political. The block size can not be determined on the basis of political beliefs. I'm pretty sure about this.

If we are talking about centralization we should focus on Mt. Gox, but that's a different story.

It is a technical decision, but a technical decision about how to provide both scalability and security. And like it or not, decentralization is part of the security equation and must be taken into account when changing anything that would diminish it.
cjp
full member
Activity: 210
Merit: 124
It is a technical decision, not political. The block size can not be determined on the basis of political beliefs. I'm pretty sure about this.

I disagree. Any decision that has political consequences is a political decision, whether you deny/ignore it or not. I even doubt whether technical, non-political decisions actually exist. You develop technology with a certain goal in mind, and the higher goal is usually of a political nature. So, when you propose a decision, please explicitly list the (political) goals you want to achieve, and all the expected (desired and undesired) (political) side-effects of your proposal. That way, the community might come to an informed consensus about your decision.

Regarding the transaction limit, I see the following effects (any of which can be chosen as goals / "antigoals"):
  • Increasing/removing the limit can lead to centralization of mining, as described by the OP (competition elimination by bandwidth)
  • Increasing/removing the limit can lead to reduced security of the network (no transaction scarcity->fee=almost zero->difficulty collapse->easy 51% attack and other attacks). I think this was a mistake of Satoshi, but it can be solved by keeping a reasonable transaction limit (or keep increasing beyond 21M coins, but that would be even less popular in the community).
  • Centralization of mining can lead to control over mining (e.g. 51% attack, but also refusal to include certain transactions, based on arbitrary policies, possibly enforced by governments on the few remaining miners)
  • Increasing/removing the limit allows transaction volume to increase
  • Increasing/removing the limit allows transaction fees to remain low
  • Increasing/removing the limit increases hardware requirements of full nodes

Also: +100 for Pieter Wuille's post. It's all about community consensus. And my estimate is that 60MiB/block should be sufficient for worldwide usage, if my Ripple-like system becomes successful (otherwise it would have to be 1000 times more). I'd agree with a final limit of 100MiB, but right now that seems way too much, considering current Internet speeds and storage capacity. So I think we will need to increase it in at least two steps.
legendary
Activity: 1072
Merit: 1181
First of all, my opinion: I'm in favor of increasing the block size limit in a hard fork, but very much against removing the limit entirely. Bitcoin is a consensus of its users, who all agreed (or will need to agree) to a very strict set of rules that would allow people to build a global decentralized payment system. I think very few people understand a forever-limited block size to be part of these rules.

However, with no limit on block size, it effectively becomes miners who are in control of _everyone_'s block size. As a non-miner, this is not something I want them to decide for me. Perhaps the tragedy of the commons can be avoided, and long-term rational thinking will kick in, and miners can be trusted with choosing an appropriate block size. But maybe not, and if just one miner starts creating gigabyte blocks while all the rest agree on 10 MiB blocks, ugly block-shunning rules will be necessary to keep such blocks from filling everyone's hard drive (yes, larger blocks' slower relay will make them unlikely to be accepted, but it just requires one lucky fool to succeed...).

I think retep raises very good points here: the block size (whether voluntary or enforced) needs to result in a system that remains verifiable for many. Who those many are will probably change gradually. Over time, more and more users will probably move to SPV nodes (or more centralized things like e-wallet sites), and that is fine. But if we give up the ability for non-megacorp entities to verify the chain, we might as well be using a central clearinghouse. There is of course a wide spectrum between "I can download the entire chain on my phone" and "Only 5 bank companies in the world can run a fully verifying node", but I think it's important that we choose which point in between is acceptable.

My suggestion would be a one-time increase to perhaps 10 MiB or 100 MiB blocks (to be debated), and after that an at-most slow exponential further growth. This would mean no for-eternity limited size, but also no way for miners to push up block sizes to the point where they are in sole control of the network. I realize that some people will consider this an arbitrary and unnecessary limit, but others will probably consider it dangerous already. In any case, it's a compromise and I believe one will be necessary.

Great posts from Mike and Gavin in this thread. There's indeed no reason to panic over "too much centralization". Actually, setting an arbitrary limit (or an arbitrary formula to set the limit) is the very definition of "central planning", while letting it get spontaneously set is the very definition of "decentralized order".

Then I think you misunderstand what a hard fork entails. The only way a hard fork can succeed is when _everyone_ agrees to it. Developers, miners, merchants, users, ... everyone. A hard fork that succeeds is the ultimate proof that Bitcoin as a whole is a consensus of its users (and not just a consensus of miners, who are only given authority to decide upon the order of otherwise valid transactions).

Realize that Bitcoin's decentralization only comes from very strict - and sometimes arbitrary - rules (why this particular 50/25/12.5 payout scheme, why ECDSA, why only those opcodes in scripts, ...) that were set right from the start and agreed upon by everyone who ever used the system. Were those rules "central planning" too?
legendary
Activity: 1120
Merit: 1152
Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Sure, there is some risk, but Kickstarter is showing that the general concept can indeed fund public goods.

For consumer products where you get a tangible object in return. Security through hashing power is nothing like Kickstarter.

1.2 megabytes a second is only ~10 megabits per second - pretty sure my parents' house has more bandwidth than that. Google is wiring Kansas City with gigabit fibre right now, and we're not running it as a charity. So network capacity doesn't worry me a whole lot. There's plenty of places in the world that can keep up with a poxy 10 megabits.

No, that's 1.2MiB average; you need well above that to keep your orphan rate down.

Again, you're making assumptions about the hardware available in the future, and big assumptions. And again you are making it impossible to run a Bitcoin node in huge swaths of the world, not to mention behind Tor.

You also have to ask the question, what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even at just 1% volume growth, you're looking at 3GiB/month growth in your requirement for fast-random-access memory.

How did you arrive at 3GB/month? The entire UTXO set currently fits in a few hundred megs of RAM.

I'm assuming 1% of transactions per month get added to the UTXO set. With cheap transactions, increased UTXO set consumption for trivial purposes, like satoshidice's stupid failed-bet messaging and timestamping, becomes more likely, so I suspect 1% is reasonable.
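
For scale, the disputed number comes down to what fraction of those relayed bytes persist as unspent outputs; plugging a few fractions into the 3TiB/month traffic figure from earlier in the thread (rough arithmetic only, ignoring that a UTXO entry is smaller than the transaction that created it):

```python
# Back-of-the-envelope UTXO growth as a function of how many relayed bytes
# persist as unspent outputs. 3 TiB/month is the traffic estimate quoted
# above; the persistence fractions are assumptions for illustration.

monthly_traffic_bytes = 3 * 1024**4    # ~3 TiB of transaction data per month

for persist_fraction in (0.001, 0.01, 0.05):
    growth_gib = monthly_traffic_bytes * persist_fraction / 1024**3
    print(f"{persist_fraction:.1%} persisting -> ~{growth_gib:.0f} GiB/month of UTXO growth")
```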

Again, other than making old UTXOs eventually become unspendable, I don't see any good solutions to UTXO growth.

All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing.

Why? Hashing happens in parallel to checking transactions and recalculating the merkle root.

I mean proof of work hashing for mining. If you don't know which transactions were spent by the previous block, you can't safely create the next block without accidentally including a transaction already spent by the previous one, and thus invalidating your block.

Your example has nothing to do with Bitcoin. Even in the early days it would be obvious to anyone who understood comp-sci that static websites are O(1) scaling per client so there isn't any reason to think you couldn't create websites for as much load as you wanted.

Nobody in 1993 could build a website that the entire world used all the time (like Google or Wikipedia). The technology did not exist.

Don't be silly. Even in 1993 people knew that you would be able to do things like have DNS servers return different IPs each time - Netscape's 1994 homepage used hard-coded client-side load-balancing implemented in the browser, for instance.

DNS is another good example: the original hand-maintained hosts.txt file was unscalable, and sure enough it was replaced by the hierarchical and scalable DNS system in the mid-80s.

Or what about the global routing table? Every backbone router needs a complete copy of the routing table. BGP is a broadcast network. How can the internet backbone scale? Perhaps we should only allow people to access the internet at universities to avoid uncontrollable growth of the routing table.

...and what do you know, one of the arguments for IPv6 back in the early 90's was that the IPv4 routing space wasn't very hierarchical and would lead to scaling problems for routers down the line. The solution implemented has been to use various technological and administrative measures to keep top-level table growth in control. In 2001 there were 100,000 entries, and 12 years later in 2013 there are 400,000 - nearly linear growth. Fortunately the nature of the global routing table is that linear top-level growth can support quadratic and more growth in the number of underlying nodes; getting access to the internet does not contribute to the scaling problem of the routing table.

On the other hand, getting provider-independent address space, a resource that does increase the burden on the global routing table, gets harder and harder every year. Like Bitcoin it's an O(n^2) scaling problem, and sure enough the solution followed has been to keep n as low as possible.

The way the internet has actually scaled is more like what I'm proposing with fidelity-bonded chaum banks: some number n of banks, each using up some number of transactions per month, but in turn supporting a much larger number m of clients. The scaling problem is solved hierarchically, and thus becomes tractable.

Heck, while we're playing this game, find me a single major O(n^2) internet scaling problem that's actually been solved by "just throwing more hardware at it", because I sure can't.

I just don't see scalability as ever being a problem, assuming effort is put into better software. Satoshi didn't think this would be a problem either; it was one of the first conversations we ever had. These conversations have been going around and around for years. I am unconvinced we're developing better insight into it anymore. Satoshi's vision was for the block limit to be removed. So let's do it.

Appeal to authority. Satoshi also didn't make the core and the GUI separate, among many, many other mistakes and oversights, so I'm not exactly convinced I should assume that just because he thought Bitcoin could scale it actually can.
full member
Activity: 154
Merit: 100
It seems like the requirements for full verification and those for mining are being conflated. Either way, I see the following solutions.

If you need to run a full verification node, then, as Mike points out, you can rent the hardware to do it along with enough bandwidth. If full verification becomes too much work for a general purpose computer, then we'll begin to see 'node-in-a-box' set-ups where the network stuff is managed by an embedded processor and the computation farmed out to an FPGA/ASIC. Alternatively, we'll see distributed nodes among groups of friends, where each verifies some predetermined subset. This way, you don't have to worry about random sampling. You can then even tell the nodes upstream of you to only send the transactions which are within your subset, so your individual bandwidth is reduced to 1/#friends.
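
A rough sketch of how that "each friend verifies a subset" split could be made deterministic (hypothetical, not an existing protocol): assign each transaction to exactly one member by hashing its txid, so everyone agrees on who checks what without any coordination:

```python
import hashlib

# Hypothetical deterministic work-splitting among a fixed group of friends.
# Each transaction id maps to exactly one member, so each member can ask its
# upstream peers for only its own slice, cutting bandwidth to ~1/len(FRIENDS).

FRIENDS = ["alice", "bob", "carol", "dave"]    # illustrative group

def assigned_verifier(txid: bytes) -> str:
    digest = hashlib.sha256(txid).digest()
    return FRIENDS[int.from_bytes(digest[:4], "big") % len(FRIENDS)]

def my_subset(my_name: str, txids: list[bytes]) -> list[bytes]:
    """Transactions this member is responsible for fully verifying."""
    return [t for t in txids if assigned_verifier(t) == my_name]
```

The obvious trade-off is that each member now has to trust the others not to skip their slice, which is exactly the kind of trust question the rest of the thread is arguing about.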

If you want to be a miner, you can run a modified node on rented hardware as above, which simply gives you a small number of transactions to mine on, rather than having to sort them out yourself. This way, you can reduce your bandwidth to practically nothing - you'd get a new list each time a block is mined.
legendary
Activity: 1526
Merit: 1134
Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Sure, there is some risk, but Kickstarter is showing that the general concept can indeed fund public goods.

I don't see any reason to think CPU power will be the issue. It's network capacity and disk space that is the problem.

1.2 megabytes a second is only ~10 megabits per second - pretty sure my parents' house has more bandwidth than that. Google is wiring Kansas City with gigabit fibre right now, and we're not running it as a charity. So network capacity doesn't worry me a whole lot. There's plenty of places in the world that can keep up with a poxy 10 megabits.

3T per month of transfer is again, not a big deal. For a whopping $75 per month bitvps.com will rent you a machine that has 5TB of bandwidth quota per month and 100mbit connectivity.

Lots of people can afford this. But by the time Bitcoin gets to that level of traffic, if it ever does, it might cost more like $75 a year.

You also have to ask the question, what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even at just 1% volume growth, you're looking at 3GiB/month growth in your requirement for fast-random-access memory.

How did you arrive at 3GB/month? The entire UTXO set currently fits in a few hundred megs of RAM.

All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing.

Why? Hashing happens in parallel to checking transactions and recalculating the merkle root.

Your example has nothing to do with Bitcoin. Even in the early days it would be obvious to anyone who understood comp-sci that static websites are O(1) scaling per client so there isn't any reason to think you couldn't create websites for as much load as you wanted.

Nobody in 1993 could build a website that the entire world used all the time (like Google or Wikipedia). The technology did not exist.

Following your line of thinking, there should have been some way to ensure only the elite got to use the web. Otherwise how would it work? As it got too popular all the best websites would get overloaded and fall over. Disaster.

Or what about the global routing table? Every backbone router needs a complete copy of the routing table. BGP is a broadcast network. How can the internet backbone scale? Perhaps we should only allow people to access the internet at universities to avoid uncontrollable growth of the routing table.

I just don't see scalability as ever being a problem, assuming effort is put into better software. Satoshi didn't think this would be a problem either; it was one of the first conversations we ever had. These conversations have been going around and around for years. I am unconvinced we're developing better insight into it anymore. Satoshi's vision was for the block limit to be removed. So let's do it.
legendary
Activity: 1792
Merit: 1059
I think we should put users first. What do users want? They want low transaction fees and fast confirmations.

This comes down to Bitcoin as a payment network versus Bitcoin as a store of value. I thought it was already determined that there will always be better payment networks that function as alternatives to Bitcoin. A user who cares about the store-of-value use case is going to want the network hash rate to be as high as possible. This is at odds with low transaction fees and fast confirmations.

People invest because others do the same. Money follows money. The larger the user base, the higher the value of the bitcoin. This has nothing to do with the hash rate. Nevertheless, the hash rate will be gigantic.
full member
Activity: 154
Merit: 100
That's cool. Please core devs, consider studying what other hard fork changes would be interesting to put in, because we risk hitting the 1MB limit quite soon.
Seems they've read your mind: https://en.bitcoin.it/wiki/Hardfork_Wishlist ;)
legendary
Activity: 1106
Merit: 1004
Great posts from Mike and Gavin in this thread. There's indeed no reason to panic over "too much centralization". Actually, setting an arbitrary limit (or an arbitrary formula to set the limit) is the very definition of "central planning", while letting it get spontaneously set is the very definition of "decentralized order".

Also, having fewer participants in a market because these participants are good enough to keep aspiring competitors at bay is not a bad thing. The problem arises when barriers of entry are artificial (legal, bureaucratic etc), not when they're part of the business itself. Barriers of entry as part of the business means that the current market's participants are so advanced that everybody else wanting to enter will have to get at least as good as the current participants for a start.

Removing the block cap means a hard fork, and once we decided to do that we may as well throw in some "no brainer" upgrades as well, like supporting ed25519 which is orders of magnitude faster than ECDSA+secp256k1. Then a single strong machine can go up to hundreds of thousands of transactions per second.

That's cool. Please core devs, consider studying what other hard fork changes would be interesting to put in, because we risk hitting the 1MB limit quite soon.
legendary
Activity: 1064
Merit: 1001
I think we should put users first. What do users want? They want low transaction fees and fast confirmations.

This comes down to Bitcoin as a payment network versus Bitcoin as a store of value. I thought it was already determined that there will always be better payment networks that function as alternatives to Bitcoin. A user who cares about the store-of-value use case is going to want the network hash rate to be as high as possible. This is at odds with low transaction fees and fast confirmations.
legendary
Activity: 1792
Merit: 1059
In the light of this, and because the need for bitcoins primarily comes from the need for a decentralized, no-point-of-control system, I think it's not sufficient to call worries about centralization "vague": you have to clearly defend why this particular form of centralization can not be dangerous. The default is "centralization is bad".

It is a technical decision, not political. The block size can not be determined on the basis of political beliefs. I'm pretty sure about this.

If we are talking about centralization we should focus on Mt. Gox, but that's a different story.
legendary
Activity: 1120
Merit: 1152
In the absence of a block size cap miners can be supported using network assurance contracts. It's a standard way to fund public goods, which network security is, so I am not convinced by that argument.

Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Perhaps I've been warped by working at Google so long but 100,000 transactions per second just feels totally inconsequential. At 100x the volume of PayPal each node would need to be a single machine and not even a very powerful one. So there's absolutely no chance of Bitcoin turning into a PayPal equivalent even if we stop optimizing the software tomorrow.

But we're not going to stop optimizing the software. Removing the block cap means a hard fork, and once we decided to do that we may as well throw in some "no brainer" upgrades as well, like supporting ed25519 which is orders of magnitude faster than ECDSA+secp256k1. Then a single strong machine can go up to hundreds of thousands of transactions per second.

I don't see any reason to think CPU power will be the issue. It's network capacity and disk space that is the problem. Your 100x the volume of PayPal is 4000 transactions a second, or about 1.2MiB/second, and you'll want to be able to burst quite a bit higher than that to keep your orphan rate down when new blocks come in. Like it or not that's well beyond what most internet connections in most of the world can handle, both in sustained speed and in quota (that's 3TiB/month). Again, P2Pool will look a heck of a lot less attractive.
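
For reference, the arithmetic behind those figures, assuming an average transaction size of roughly 300 bytes (which is about what the 1.2MiB/s number implies):

```python
# Reproducing the post's bandwidth figures. The ~300 bytes/transaction
# average is an assumption, roughly consistent with the numbers quoted.

tx_per_second = 4000        # "100x the volume of PayPal"
bytes_per_tx = 300          # assumed average transaction size

rate_mib_s = tx_per_second * bytes_per_tx / 2**20
rate_mbit_s = tx_per_second * bytes_per_tx * 8 / 1e6
monthly_tib = rate_mib_s * 60 * 60 * 24 * 30 / 2**20

print(f"~{rate_mib_s:.1f} MiB/s, ~{rate_mbit_s:.0f} Mbit/s, ~{monthly_tib:.0f} TiB/month")
```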

You also have to ask the question, what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even at just 1% volume growth, you're looking at 3GiB/month growth in your requirement for fast-random-access memory. That's an ugly, ugly requirement - after all, if a block has n transactions, your average access time per transaction must be limited to 10 minutes/n to even just keep up.
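
Translating that 10 minutes/n budget into numbers, with the same ~300 bytes per transaction assumption as above:

```python
# Per-transaction validation time budget needed just to keep up with the
# chain, as a function of block size. ~300 bytes/tx is an assumed average.

BYTES_PER_TX = 300
BLOCK_INTERVAL_S = 600      # ten minutes

for block_mib in (1, 10, 100, 700):     # ~700 MiB is what a 1.2 MiB/s stream fills per block
    n_tx = block_mib * 2**20 / BYTES_PER_TX
    budget_us = BLOCK_INTERVAL_S / n_tx * 1e6
    print(f"{block_mib:>4} MiB block: ~{n_tx:,.0f} txs, ~{budget_us:,.0f} microseconds per tx")
```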

EDIT: also, it occurs to me that one of the worst things about the UTXO set is the continually increasing overhead it implies. You'll probably be lucky if cost/op/s scales by even something as good as log(n) due to physical limits, so you'll gradually be adding more and more expensive constantly on-line hardware for less and less value. All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing. In addition your determinism goes down because inevitably the UTXO set will be striped across multiple storage devices, so at worst every tx turns out to be behind one low-bandwidth connection. God help you if an attacker figures out a way to find the worst sub-set to pick. UTXO proofs can help a bit - a transaction would include its own proof that it is in the UTXO set for each txin - but that's a lot of big scary changes with consensus-sensitive implications.

Again, keeping blocks small means that scaling mistakes, like the stuff Sergio keeps on finding, are far less likely to turn into major problems.

The cost of a Bitcoin transaction is just absurdly low and will continue to fall in future. It's like nothing at all. Saying Bitcoin is going to get centralized because of high transaction rates is kinda like saying in 1993 that the web can't possibly scale because if everyone used it web servers would fall over and die. Well yes, they would have done, in 1993. But not everyone started using the web overnight and by the time they did, important web sites were all using hardware load balancers and multiple data centers and it was STILL cheap enough that Wikipedia - one of the world's top websites - could run entirely off donations.

Your example has nothing to do with Bitcoin. Even in the early days it would be obvious to anyone who understood comp-sci that static websites are O(1) scaling per client so there isn't any reason to think you couldn't create websites for as much load as you wanted. Meanwhile unlike Wikipedia Bitcoin requires global shared state that must be visible to, and mutable by, every client. Comparing the two ignores some really basic computer science that was very well understood even when the early internet was created in the 70's.
cjp
full member
Activity: 210
Merit: 124
I feel these debates have been going on for years. We just have wildly different ideas of what is affordable or not.

I don't think the most fundamental debate is about how high the limit should be. I made some estimates about how high it would have to be for worldwide usage, which is quite a wild guess, and I suppose any estimation about what is achievable with either today's or tomorrow's technology is also a wild guess. We can only hope that what is needed and what is possible will somehow continue to match.

But the most fundamental debate is about whether it is dangerous to (effectively) disable the limit. These are some ways to effectively disable the limit:
  • actually disabling it
  • making it "auto-adjusting" (so it can increase indefinitely)
  • making it so high that it won't ever be reached

I think the current limit will have to be increased at some point in time, requiring a "fork". I can imagine you don't want to set the new value too low, because that would make you have to do another fork in the future. Since it's hard to know what's the right value, I can imagine you want to develop an "auto-adjusting" system, similar to how the difficulty is "auto-adjusting". However, if you don't do this extremely carefully, you could end up effectively disabling the limit, with all the potential dangers discussed here.

You have to carefully choose the goal you want to achieve with the "auto-adjusting", and you have to carefully choose the way you measure your "goal variable", so that your system can control it towards the desired value (similar to how the difficulty adjustment steers towards 10 minutes/block).
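
For comparison, the control loop the difficulty adjustment uses has roughly this shape (simplified; the real implementation works on compact target values and has further rules, but the rescale-and-clamp structure is the point):

```python
# Simplified shape of Bitcoin's difficulty retargeting: measure how long the
# last 2016 blocks actually took, rescale toward the 10-minute-per-block
# target, and clamp the adjustment to a factor of 4 in either direction.

TARGET_TIMESPAN = 2016 * 600    # two weeks, in seconds

def retarget(old_difficulty, actual_timespan_seconds):
    clamped = max(TARGET_TIMESPAN // 4, min(TARGET_TIMESPAN * 4, actual_timespan_seconds))
    return old_difficulty * TARGET_TIMESPAN / clamped
```

That loop works because its goal variable, the average time between blocks, is trivially measurable from the chain itself; the hard part of an auto-adjusting block size is finding a goal variable that is both measurable and actually worth steering towards.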

One "goal variable" would be the number of independent miners (a measure of decentralization). How to measure it? Maybe you can offer miners a reward for being "non-independent"? If they accept that reward, they prove non-independence of their different mining activities (e.g. different blocks mined by them); the reward should be larger than the profits they could get from further centralizing Bitcoin. This is just a vague idea; naturally it should be thought out extremely carefully before even thinking of implementing this.
legendary
Activity: 1792
Merit: 1059
So, as I've said before: we're running up against the artificial 250K block size limit now, and I would like to see what happens. There are lots of moving pieces here, so I don't think ANYBODY really knows what will happen.

A rational approach.

I think we should put users first. What do users want? They want low transaction fees and fast confirmations. Let's design for that case, because THE USERS are who ultimately give Bitcoin value.

I agree. I'm a user. :-)
legendary
Activity: 1526
Merit: 1134
In the absence of a block size cap miners can be supported using network assurance contracts. It's a standard way to fund public goods, which network security is, so I am not convinced by that argument.

I feel these debates have been going on for years. We just have wildly different ideas of what is affordable or not.

Perhaps I've been warped by working at Google so long but 100,000 transactions per second just feels totally inconsequential. At 100x the volume of PayPal each node would need to be a single machine and not even a very powerful one. So there's absolutely no chance of Bitcoin turning into a PayPal equivalent even if we stop optimizing the software tomorrow.

But we're not going to stop optimizing the software. Removing the block cap means a hard fork, and once we decided to do that we may as well throw in some "no brainer" upgrades as well, like supporting ed25519 which is orders of magnitude faster than ECDSA+secp256k1. Then a single strong machine can go up to hundreds of thousands of transactions per second.
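
To put a rough number on that, a microbenchmark of Ed25519 verification is easy to run (this assumes the PyNaCl library, i.e. libsodium bindings, is installed; throughput varies a lot by machine, and it says nothing about secp256k1 on its own):

```python
# Rough single-core microbenchmark of Ed25519 signature verification using
# PyNaCl (libsodium). Purely illustrative; results depend on the machine.

import time
from nacl.signing import SigningKey

sk = SigningKey.generate()
vk = sk.verify_key
msg = b"x" * 250                        # roughly transaction-sized payload
sig = sk.sign(msg).signature

N = 10_000
start = time.perf_counter()
for _ in range(N):
    vk.verify(msg, sig)                 # raises an exception on an invalid signature
elapsed = time.perf_counter() - start
print(f"~{N / elapsed:,.0f} Ed25519 verifications/second on one core")
```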

The cost of a Bitcoin transaction is just absurdly low and will continue to fall in future. It's like nothing at all. Saying Bitcoin is going to get centralized because of high transaction rates is kinda like saying in 1993 that the web can't possibly scale because if everyone used it web servers would fall over and die. Well yes, they would have done, in 1993. But not everyone started using the web overnight and by the time they did, important web sites were all using hardware load balancers and multiple data centers and it was STILL cheap enough that Wikipedia - one of the world's top websites - could run entirely off donations.
cjp
full member
Activity: 210
Merit: 124
I really don't understand this logic.

Yes, it is a fact of life that if you have a system where people are competing, the people who are less efficient will be driven out of business. So there will be fewer people in that business.

You seem to be saying that we should subsidize inefficient miners by limiting the block size, thereby driving up fees and making users pay for their inefficiency.

All in the name of vague worries about "too much centralization."

It's interesting, and a bit worrying too, to see the same ideological differences of the "real" world come back in the world of Bitcoin.

In my view, the free market is a good, but inherently unstable, system. Economies of scale and network effects favor large parties, so large parties can get larger and small parties will disappear, until only one or just a few parties are left. You see this in nearly all markets nowadays. Power also speeds up this process: more powerful parties can eliminate less powerful parties; less powerful parties can only survive if they subject themselves to more powerful parties, so the effect is that power tends to centralize.

For the anarchists among us: this is why we have governments. It's not because people once thought it was a good idea, it's because that happens to be the natural outcome of the mechanisms that work in society.

In the light of this, and because the need for bitcoins primarily comes from the need for a decentralized, no-point-of-control system, I think it's not sufficient to call worries about centralization "vague": you have to clearly defend why this particular form of centralization can not be dangerous. The default is "centralization is bad".