Network assurance contracts are far from a sure thing. They're basically an attempt to solve the tragedy of the commons, and society's success rate there is pitiful, even with strong central authorities. Assuming they will work is a big risk.
Sure, there is some risk, but Kickstarter is showing that the general concept can indeed fund public goods.
For consumer products where you get a tangible object in return. Security through hashing power is nothing like Kickstarter.
1.2 megabytes a second is only ~10 megabits per second - pretty sure my parents' house has more bandwidth than that. Google is wiring Kansas City with gigabit fibre right now, and we're not running it as a charity. So network capacity doesn't worry me a whole lot. There are plenty of places in the world that can keep up with a poxy 10 megabits.
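To put the arithmetic in one place (a quick sketch; the only input is the 1.2 MB/s figure above):

    # Back-of-envelope: what does 1.2 megabytes/second of transaction data add up to?
    rate_mb_per_s = 1.2                      # the figure quoted above (MB/s)
    print(rate_mb_per_s * 8)                 # ~9.6 Mbit/s sustained
    seconds_per_month = 30 * 24 * 3600       # ~2.6 million seconds in a month
    bytes_per_month = rate_mb_per_s * 1e6 * seconds_per_month
    print(bytes_per_month / 2**40)           # ~2.8 TiB of transaction data per month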
No, that's 1.2MiB/s on average; you need well above that to keep your orphan rate down.
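To make the orphan-rate point concrete, a rough sketch: 1.2 MB/s averaged over a 10-minute block interval means blocks in the region of 700 MB, and each one has to propagate to and be verified by other miners in a small fraction of that interval. The 30-second propagation target below is an assumption picked purely for illustration:

    # Illustrative only: burst bandwidth implied by a given average rate.
    avg_rate_mb_s = 1.2                     # sustained average from above
    block_interval_s = 600                  # one block roughly every 10 minutes
    block_size_mb = avg_rate_mb_s * block_interval_s    # ~720 MB per block
    propagation_target_s = 30               # assumed target to keep orphans rare
    burst_mb_s = block_size_mb / propagation_target_s   # ~24 MB/s
    print(block_size_mb, burst_mb_s * 8)    # ~720 MB blocks, ~190 Mbit/s bursts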
Again, you're making assumptions about the hardware available in the future, and big assumptions. And again you are making it impossible to run a Bitcoin node in huge swaths of the world, not to mention behind Tor.
You also have to ask the question: what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even if just 1% of those transactions leave behind unspent txouts, you're looking at roughly 3GiB/month of growth in your requirement for fast-random-access memory.
How did you arrive at 3GB/month? The entire UTXO set currently fits in a few hundred megs of RAM.
I'm assuming 1% of transactions per month get added to the UTXO set. With cheap transactions, increased UTXO set consumption for trivial purposes, like SatoshiDice's stupid failed-bet messaging and timestamping, becomes more likely, so I suspect 1% is reasonable.
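A back-of-envelope consistent with that, where the average transaction size and per-entry UTXO size are my own assumptions rather than figures anyone quoted:

    # Rough UTXO-growth estimate under stated assumptions.
    tx_volume_bytes_per_month = 3 * 2**40   # ~3 TiB/month of transaction data
    avg_tx_size = 300                       # assumed average transaction size (bytes)
    utxo_entry_size = 35                    # assumed size of one unspent txout entry (bytes)
    txs_per_month = tx_volume_bytes_per_month / avg_tx_size
    new_unspent_fraction = 0.01             # the 1% assumption above
    utxo_growth = txs_per_month * new_unspent_fraction * utxo_entry_size
    print(utxo_growth / 2**30)              # ~3.6 GiB of UTXO set growth per month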
Again, other than making old UTXOs eventually become unspendable, I don't see any good solutions to UTXO growth.
All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing.
Why? Hashing happens in parallel to checking transactions and recalculating the merkle root.
I mean proof-of-work hashing for mining. If you don't know which transactions the previous block spent, you can't safely create the next block: you risk including a transaction already spent by the previous one, and thus invalidating your own block.
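A minimal sketch of the point, with made-up names rather than anything from Bitcoin's actual code:

    from collections import namedtuple

    # Illustrative sketch: a miner cannot safely build the next block template
    # without knowing which outpoints the block it just received has spent.
    Tx = namedtuple("Tx", ["txid", "inputs"])   # inputs = outpoints this tx spends

    def build_template(mempool, spent_by_prev_block):
        template = []
        for tx in mempool:
            # Including a transaction whose input was just spent would make the
            # whole new block invalid, wasting all hashing done on top of it.
            if any(outpoint in spent_by_prev_block for outpoint in tx.inputs):
                continue
            template.append(tx)
        return template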
Your example has nothing to do with Bitcoin. Even in the early days it would be obvious to anyone who understood comp-sci that static websites are O(1) scaling per client, so there isn't any reason to think you couldn't create websites for as much load as you wanted.
Nobody in 1993 could build a website that the entire world used all the time (like Google or Wikipedia). The technology did not exist.
Don't be silly. Even in 1993 people knew that you would be able to do things like have DNS servers return different IPs each time; Netscape's 1994 homepage used hard-coded client-side load-balancing implemented in the browser, for instance.
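The trick itself is trivial; a toy version of that kind of client-side load balancing (the mirror list is invented) looks like this:

    import random

    # Toy client-side load balancing: the client ships with a hard-coded list
    # of mirror addresses and picks one at random, so no single server sees
    # all the load and no central coordinator is needed.
    MIRRORS = ["198.51.100.1", "198.51.100.2", "198.51.100.3"]

    def pick_mirror():
        return random.choice(MIRRORS)

    print(pick_mirror())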
DNS is another good example: the original hand-maintained hosts.txt file was unscalable, and sure enough it was replaced by the hierarchical and scalable DNS system in the mid-80s.
Or what about the global routing table? Every backbone router needs a complete copy of the routing table. BGP is a broadcast network. How can the internet backbone scale? Perhaps we should only allow people to access the internet at universities to avoid uncontrollable growth of the routing table.
...and what do you know, one of the arguments for IPv6 back in the early 90s was that the IPv4 routing space wasn't very hierarchical and would lead to scaling problems for routers down the line. The solution implemented has been to use various technological and administrative measures to keep top-level table growth in check. In 2001 there were 100,000 entries, and 12 years later in 2013 there are 400,000, nearly linear growth. Fortunately the nature of the global routing table is that linear top-level growth can support quadratic and greater growth in the number of underlying nodes; getting access to the internet does not contribute to the scaling problem of the routing table.
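Putting the two quoted figures side by side:

    # The quoted top-level routing table figures, expressed as a growth rate.
    entries_2001, entries_2013 = 100_000, 400_000
    print((entries_2013 - entries_2001) / (2013 - 2001))   # ~25,000 entries/year, steady and linear-ish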
On the other hand, getting provider-independent address space, a resource that does increase the burden on the global routing table, gets harder and harder every year. Like Bitcoin it's an O(n^2) scaling problem, and sure enough the solution followed has been to keep n as low as possible.
The way the internet has actually scaled is more like what I'm proposing with fidelity-bonded Chaumian banks: some number n of banks, each consuming some number of transactions per month, but in turn supporting a much larger number m of clients. The scaling problem is solved hierarchically, and thus becomes tractable.
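To spell out what "solved hierarchically" buys you, here's a sketch where every number is an illustrative assumption, not a proposal:

    # Illustrative: n intermediary banks settle on-chain, each serving m clients off-chain.
    n_banks = 1_000                       # assumed number of fidelity-bonded banks
    clients_per_bank = 100_000            # assumed clients served off-chain by each bank
    settlements_per_bank_per_day = 10     # assumed on-chain settlement transactions per bank
    onchain_txs_per_day = n_banks * settlements_per_bank_per_day   # 10,000/day on-chain
    users_served = n_banks * clients_per_bank                      # 100 million users
    # On-chain load grows with the number of banks (n), not with the total
    # number of users (n * m), which is the whole point of the hierarchy.
    print(onchain_txs_per_day, users_served)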
Heck, while we're playing this game, find me a single major O(n^2) internet scaling problem that's actually been solved by "just throwing more hardware at it", because I sure can't.
I just don't see scalability as ever being a problem, assuming effort is put into better software. Satoshi didn't think this would be a problem either; it was one of the first conversations we ever had. These conversations have been going around and around for years. I am unconvinced we're developing better insight into it anymore. Satoshi's vision was for the block limit to be removed. So let's do it.
Appeal to authority. Satoshi also didn't make the core and the GUI separate, among many, many other mistakes and oversights, so I'm not exactly convinced I should assume that just because he thought Bitcoin could scale it actually can.