Right. The plan as I understand it is that miners will announce blocks as a list of tx, and the rest of the network (who already have most of these tx in their memory pools) will assemble the blocks themselves and store them in their local blockchains.
That reduces the size of blocks (for propagation purposes only) by 90%, making the inclusion of a particular tx only about 10% as expensive in orphan costs as it is now. It has no effect on the size of the stored blockchain, unfortunately.
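To put rough numbers on that, here's a back-of-envelope sketch; the 1 MB block and the 10% announcement size are my illustrative assumptions, not Gavin's exact figures:

    # Back-of-envelope: orphan risk scales with propagation time, which
    # scales with the bytes a miner must broadcast. All numbers assumed.
    BLOCK_BYTES    = 1_000_000   # a full 1 MB block today (assumption)
    ANNOUNCE_BYTES = 100_000     # tx-list announcement, ~10% of full size

    # If orphan probability is roughly proportional to bytes on the wire,
    # announcing tx instead of full blocks cuts the cost proportionally:
    relative_orphan_cost = ANNOUNCE_BYTES / BLOCK_BYTES
    print(relative_orphan_cost)  # 0.1 -> about 10% of today's orphan cost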
And what it fixes was a straight-up waste of bandwidth: as matters stand, every client downloads every transaction twice (once when it's broadcast and once when it's included in a block), so something like 40% of bandwidth is wasted. Kudos to Gavin for working on fixing that, but he can only fix it once.
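Here's where a "something like 40%" figure could come from, assuming some protocol overhead on top of the two copies (all numbers illustrative):

    # Every tx crosses the wire twice: once at broadcast, once in a block.
    TX_BYTES = 500    # an average transaction (assumed)
    COPIES   = 2      # broadcast copy + in-block copy
    OVERHEAD = 0.2    # inv messages, headers, etc. (assumed)

    total_bytes  = TX_BYTES * COPIES * (1 + OVERHEAD)
    wasted_bytes = TX_BYTES           # the duplicate second copy
    print(wasted_bytes / total_bytes) # ~0.42, i.e. roughly 40% wasted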
It's not worth it to them to include tx above that limit without fees of $3.30 per tx, according to Gavin's figures.
Whoa, that's new. Source? From what I understand, miners will have to take the fees they are given, and if the fees don't pay their bills, shut down until difficulty decreases enough to make mining worthwhile again.
3.3 millibitcoins per kilobyte:
https://bitcointalksearch.org/topic/m.3648359 He estimates that it ought to be easy to get a factor of 10 or 20 by optimizing the protocol. But that won't change the way it scales; it only changes the constant factor between scale and performance. And we need more than one order of magnitude in performance to get where we need to be, so the scaling itself is what's critical.
This is the cost to them of the increased chance of losing the 25 BTC block reward. Reduce the reward by half, and that reduces their lost opportunity by half. As Moonshadow said, it's based on a lot of assumptions - but the assumptions amount to "a typical miner today."
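For illustration, here's the shape of that arithmetic as I read it; the orphan-risk-per-kB number is back-derived from the 3.3 mBTC figure, not taken from Gavin's post:

    # Expected loss from adding 1 kB to a block = block reward * the extra
    # orphan probability that kB causes. Back-derived, not measured:
    ORPHAN_RISK_PER_KB = 0.0033 / 25.0   # implied by 3.3 mBTC/kB at 25 BTC

    def breakeven_fee_per_kb(reward_btc):
        return reward_btc * ORPHAN_RISK_PER_KB

    print(breakeven_fee_per_kb(25.0))   # 0.0033 BTC/kB, i.e. 3.3 mBTC
    print(breakeven_fee_per_kb(12.5))   # halve the reward, halve the cost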
Changing the scaling means redesigning. If bandwidth, storage, and compute requirements grow linearly at *EVERY* client as the network grows, the scaling will fail. And with every node checking every transaction, that's what's happening now. We need a different design where the growth of bandwidth, storage, and compute requirements at each client is either constant, or sublinear (like log N) in the growth of the network. Right now that is fundamentally impossible unless Bitcoin changes its operation completely.
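A toy comparison of the two growth curves (numbers purely illustrative; the sublinear design is a hypothetical target, not anything that exists today):

    import math

    # Per-client cost when every node checks every transaction (today)
    # versus a hypothetical design with logarithmic per-client growth.
    for n_tx in (10**6, 10**9):
        linear    = n_tx              # today: cost tracks the whole network
        sublinear = math.log2(n_tx)   # target: stays ~20-30 even at a billion
        print(n_tx, linear, round(sublinear))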
So he's working on a plan to reduce block size (on the wire) by 90%, which ought to reduce miners' disincentive to include tx in a block. That would reduce the risk of an orphaned block by 90%, but it wouldn't change the tx-per-second limit.
Regarding the 7-per-second limit, from what I understand, that is there ONLY to keep hard drive storage requirements in check. Reducing the size of a block by a factor of 10 should allow the transaction limit to be increased by a factor of 10, too.
Nope. The size of the blockchain on local storage would remain the same. We're only talking about eliminating some wasted bandwidth in the protocol.
The 7 tps limit *is* there mainly to keep people from abusing the blockchain as data storage and a transport layer for other protocols. But they're doing it anyway. Further, the 7 tps limit cannot just be "switched off" to allow scaling to VISA levels. Lifting that limit involves changing the block size, changing the rate at which the blockchain grows (eating local storage and bandwidth), and convincing miners that including these transactions in blocks won't reduce their profits.
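For reference, the ~7 tps figure falls straight out of the block size limit and the block interval; the average tx size here is the commonly cited ballpark, not an exact measurement:

    MAX_BLOCK_BYTES = 1_000_000   # the 1 MB block size limit
    AVG_TX_BYTES    = 250         # commonly cited average tx size (assumed)
    BLOCK_INTERVAL  = 600         # seconds; one block every ~10 minutes

    tps = MAX_BLOCK_BYTES / AVG_TX_BYTES / BLOCK_INTERVAL
    print(tps)   # ~6.7, usually rounded to "7 transactions per second"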
The block chain is already a barrier to entry for running a full node. It's huge. Downloading it takes a week with the current protocol. You might get lucky and find a high-bandwidth peer to download from, but most people don't.
I know they are working on two parallel chains, one consisting of headers, the other of the actual blocks, so a new full node will essentially start working the same way as Multibit: ready for use within a minute or two once it downloads the headers, and then fetching the rest of the blockchain in the background.
Yes, and that's a good idea too. It enables people to check the blockchain from the most recent block back, rather than from the genesis block forward. It doesn't change the size of the locally stored blockchain, nor the bandwidth required to download it. It greatly enhances convenience, but doesn't address the underlying limits on scaling.
Even better: right now you are forced to download and verify blocks sequentially, one by one, but with the new system you will download the tiny-by-comparison headers, verify them, and then download the blocks themselves asynchronously from many sources at once, like a torrent, making the download MUCH faster (likely an hour or two).
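Roughly, the flow would look like this (a sketch with hypothetical stubs; fetch_headers, verify_header_chain, and fetch_block are placeholders, not real bitcoind calls):

    from concurrent.futures import ThreadPoolExecutor

    def fetch_headers(peer):             # stub: ~80 bytes per block header
        return ["hash%d" % i for i in range(10)]

    def verify_header_chain(headers):    # stub: would check PoW + prev links
        assert headers, "empty chain"

    def fetch_block(block_hash):         # stub: would fetch the full block
        return ("block", block_hash)

    def sync(peers):
        headers = fetch_headers(peers[0])  # tiny, verifiable on its own
        verify_header_chain(headers)
        # Full blocks can now come from many peers at once, out of order,
        # since each one is checked against its already-verified header:
        with ThreadPoolExecutor(max_workers=8) as pool:
            return list(pool.map(fetch_block, headers))

    print(len(sync(["peer1", "peer2"])))   # 10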
All true, and all good, and it will make a much faster, better-behaved client. And so will pruning transactions after all their txouts are spent. But unless we can get it down to where it's under the curve of Moore's Law, it's going to continue getting harder instead of easier.
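The pruning rule is simple in principle (a sketch; these data structures are illustrative, not Bitcoin's actual ones):

    from dataclasses import dataclass

    @dataclass
    class Tx:
        txid: str
        n_outputs: int

    def prunable(tx, utxo_set):
        # A stored tx can be dropped once none of its outputs remain
        # unspent, since nothing in a future block can reference it.
        return all((tx.txid, i) not in utxo_set
                   for i in range(tx.n_outputs))

    utxo = {("a1", 0)}                    # output 0 of tx "a1" still unspent
    print(prunable(Tx("a1", 2), utxo))    # False -- must keep it
    print(prunable(Tx("b2", 1), utxo))    # True  -- safe to prune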
In the long term, though, I agree: full nodes will be a rare specialty, same as mining.
And in the long term, I think that both mining and running a full node need to be easy, and should give no one any reason not to do them.