
Topic: proposal: delete blocks older than 1000

sr. member
Activity: 405
Merit: 255
@_vjy
July 14, 2013, 11:51:09 PM
#29
First or oldest coins must be valued differently, as they are somehow special. I'd like to buy bitcoins mined from the first 10 blocks, for as much as 1 bit cent for 1 BTC.

Maybe such markets haven't opened up yet.. Smiley
legendary
Activity: 1400
Merit: 1009
Any proposed actions need to be connected to solving actual problems (or at least ones that are reasonably and justifiably anticipated).   What you're suggesting— to the extent that it's even concrete enough to talk about the benefits or costs— would likely _decrease_ the scaling over the current and/or most obvious designs by at least a constant factor, and more probably a constant plus a logarithmic factor. Worse, it would move costs from storage, which appears to have the best scaling 'law', to bandwidth, which has the worst empirical scaling.
I have pretty modest aspirations for Bitcoin: I just want it to be as successful in the currency world as TCP/IP has been in the networking world; i.e. I'm looking forward to a future in which there are no Bitcoin currency exchanges because there are no longer any currencies to exchange with.

The reason I like the distributed filesystem approach is that the storage requirements of a universal currency are going to be immense, and loosening the requirement that every node maintain a full copy of everything at the same time makes the problem easier to solve. Freenet has a self-assembling datastore where each node specializes in terms of which keys it stores, and while it doesn't guarantee that keys will be retrievable forever, it does a good job in practice (subject to certain caveats).

That makes it a good starting point to design a system that could scale up to the kind of storage system Bitcoin would need a decade from now if it's still on the road to becoming a universal currency.

On the other hand there's no guarantee that the Dollar and the Euro are going to make it to 2025, so it's always possible that we'd need to scale up very quickly much sooner than anyone could anticipate. It certainly wouldn't be the first time for that to happen to Bitcoin.
legendary
Activity: 3920
Merit: 2349
Eadem mutata resurgo
I'm pretty sure that the NSA will keep a full copy of the blockchain forever ... and probably all traffic that was ever on the bitcoin network.

Maybe we can just ask them to keep the "good" copy always available for new client downloads when anyone needs it?
staff
Activity: 4172
Merit: 8419
You just start up a node and it bootstraps and specializes without any user intervention at all. This is something that other distributed storage systems, like Tahoe-LAFS, don't have.
Sure, and nothing interesting or fancy is required for that to work.  Our blockchain space is _well defined_, not some effectively infinite sparse state space. The access patterns to it are also well defined:  all historical data is accessed with equal/flat small probability, and accessed sequentially. Recent blocks are accessed with an approximately exponential decay.  Data needed to validate a new block or transaction is always available from the party that gave you that block or transaction.

So, a very nice load-balancing architecture falls right out of that.  Everyone keeps recent blocks with an exponentially distributed window size. Everyone selects a uniform random hunk of the history, its size determined by their contributed storage and available bandwidth.  This should result in nearly optimal traffic distribution, and it is highly attack resistant in a way seriously stronger than freenet's node swapping, without the big bandwidth overheads of having to route traffic through many homes to pick up data that's ended up far from its correct specialization as IDs have drifted.
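
A minimal sketch of that assignment policy; the chain height, the mean window size, and the storage_assignment helper are all invented here for illustration, not a real spec:

Code:
import random

# Sketch of the storage policy described above: every node keeps a
# recent-block window whose size is exponentially distributed, plus one
# uniformly random contiguous hunk of history sized by the storage it
# contributes.

CHAIN_HEIGHT = 245_000   # example chain height (mid-2013)
MEAN_WINDOW = 2_016      # assumed mean size of the recent-block window

def storage_assignment(contributed_blocks, seed=None):
    """Return the two block ranges this node volunteers to serve."""
    rng = random.Random(seed)
    # Recent blocks: exponentially distributed window ending at the tip.
    window = max(1, int(rng.expovariate(1.0 / MEAN_WINDOW)))
    recent = (max(0, CHAIN_HEIGHT - window), CHAIN_HEIGHT)
    # History: a uniform random contiguous hunk, length set by the
    # node's contributed storage and available bandwidth.
    start = rng.randrange(0, max(1, CHAIN_HEIGHT - contributed_blocks))
    return recent, (start, start + contributed_blocks)

recent, historic = storage_assignment(contributed_blocks=10_000)
print("serving recent blocks", recent, "and historic blocks", historic)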

Quote
Nielsen's Law of Internet Bandwidth suggests that high-end home broadband users will have 10 gbit/sec connections by 2025. Does it not make sense to plan ahead?
Arguing "Does it not make sense to plan ahead" here sounds like some kind of cargo cult engineering:  "Planing ahead must be done. This is a plan. Then it must be done."

Any proposed actions need to be connected to solving actual problems (or at least ones that are reasonably and justifiably anticipated).   What you're suggesting— to the extent that it's even concrete enough to talk about the benefits or costs— would likely _decrease_ the scaling over the current and/or most obvious designs by at least a constant factor, and more probably a constant plus a logarithmic factor. Worse, it would move costs from storage, which appears to have the best scaling 'law', to bandwidth, which has the worst empirical scaling.

If you scale things based on the scaling laws you're assuming, then nothing further is required. If you strap on all the nice and pretty empirically observed exponential trends, then everything gets faster and everything automatically scales up no worse than the most limiting scaling factor (which has been bandwidth historically and looks like it will continue to be)—  assuming no worse than linear performance. There is no worse-than-linear behavior in the Bitcoin protocol that I'm aware of. Any in the implementations are just that, and can be happily hammered out asynchronously over time. Given computers and bandwidth that are ~10e6 better (up to a factor of 4 or so in either direction), you can have your 10e6 transactions/s. Now— I'm skeptical that these exponential technology trends will hold even just in my lifetime. But if they don't, that results in a ceiling on what you can do in a decentralized system, one that twiddling around with the protocols can't get past without tossing the security model/decentralization.

Maybe people will want to toss the decentralization of Bitcoin in order to scale it further than the technology supports. If so, I would find that regrettable, since if you want to do that you could just do it in an external system.  But until I'm the boss of everything I suspect some people will continue to do things I find regrettable from time to time— I don't, however, see much point in participating in discussions about those things, since I certainly won't be participating in them.
 
legendary
Activity: 1400
Merit: 1009
Quote
Yes, freenet's alpha and omega is its privacy model, but again, that's why its architecture teaches us relatively little of value for the Bitcoin ecosystem.
I disagree with that, because their privacy model required them to make everything work automatically. You just start up a node and it bootstraps and specializes without any user intervention at all. This is something that other distributed storage systems, like Tahoe-LAFS, don't have.

Why not specify 10e60000 transactions per second while you're making up random numbers?   Bitcoin is a decentralized system; that's its whole reason for existence.  There aren't non-linear costs that inhibit its scaling, at least not in the system itself, just linear ones— but they're significant.  Positing 10e6 transactions per second directly inside Bitcoin is _ludicrous_ (you're talking about every full node needing to transfer 80 tbytes per day just to validate, with peak data in excess of 10 gbit/sec required to obtain reliable convergence) and not possible without completely abandoning decentralization— unless you also assume a comparable increase in bandwidth/computing power, in which case it's trivial. Or if you're willing to abandon decentralization, again it's trivial. Or if you move that volume into an external system— it's a question of the design of that system and not Bitcoin.
Nielsen's Law of Internet Bandwidth suggests that high-end home broadband users will have 10 gbit/sec connections by 2025. Does it not make sense to plan ahead?
staff
Activity: 4172
Merit: 8419
A Bitcoin-specific storage system could do better than this, for example by dropping prunable transactions before unspent transactions.
Uh. The whole point of the discussion here is providing historic transactions in order to autonomously validate the state.

There is absolutely no reason to use a DHT-like system to provide UTXO data: even if you've chosen to make a storage/bandwidth trade-off where you do not store the UTXO yourself, you would simply fetch them from the party providing you with the transaction/block, as they must already have that information in order to have validated and/or produced the transaction.

In any case, this is serious overkill.
What kind of storage architecture will ultimately be needed if Bitcoin is going to scale as far as 10^6 transactions per second? Laying the groundwork for a network of that capacity is not overkill IMHO.
Why not specify 10e60000 transactions per second while you're making up random numbers?   Bitcoin is a decentralized system; that's its whole reason for existence.  There aren't non-linear costs that inhibit its scaling, at least not in the system itself, just linear ones— but they're significant.  Positing 10e6 transactions per second directly inside Bitcoin is _ludicrous_ (you're talking about every full node needing to transfer 80 tbytes per day just to validate, with peak data in excess of 10 gbit/sec required to obtain reliable convergence) and not possible without completely abandoning decentralization— unless you also assume a comparable increase in bandwidth/computing power, in which case it's trivial. Or if you're willing to abandon decentralization, again it's trivial. Or if you move that volume into an external system— it's a question of the design of that system and not Bitcoin.
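
For reference, the 80-tbyte figure is consistent with assuming an average transaction size of roughly 1,000 bytes; that per-transaction size is an assumption added here, not a number stated in the thread:

Code:
# Back-of-the-envelope check of the bandwidth claim above.
TX_PER_SEC = 10**6
BYTES_PER_TX = 1_000   # assumed average transaction size

bytes_per_day = TX_PER_SEC * BYTES_PER_TX * 86_400
print(f"{bytes_per_day / 1e12:.1f} TB/day")                 # 86.4 TB/day
print(f"{TX_PER_SEC * BYTES_PER_TX * 8 / 1e9:.1f} Gbit/s")  # 8.0 sustained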

Regardless: achieving high scale by first dramatically _increasing_ the required bandwidth by interposing a multihop DHT in the middle— when bandwidth has generally been scaling much slower than computation and storage— isn't a good start.
legendary
Activity: 1400
Merit: 1009
The freenet model does not provide for reliability, however.
That's true. The cost of strong anonymity is that storage nodes are dealing with encrypted blobs whose contents they know nothing about, so they have to drop keys randomly when an individual node runs out of space.

A Bitcoin-specific storage system could do better than this, for example by dropping prunable transactions before unspent transactions.

In any case, this is serious overkill.
What kind of storage architecture will ultimately be needed if Bitcoin is going to scale as far as 10^6 transactions per second? Laying the groundwork for a network of that capacity is not overkill IMHO.
sr. member
Activity: 406
Merit: 251
http://altoidnerd.com
Agreed. I also propose redesigning 3-stage rockets so that the top two stages carry payload and only the bottom stage carries fuel. That way twice as much could be carried into orbit for half the fuel. I am surprised it wasn't done like that for the manned space program.
We better make the speed of light higher so that optic fibers can allow much faster data transfers
I think we can achieve both of these by first making space-time Riemannian instead of pseudo-Riemannian. With Euclidean space-time there should be no need for pesky limits like a constant speed of light, and the extra payload mass should be offset-able by simply moving some of the fuel you didn't need into the past.


ObOntopic:  While not all nodes need to constantly store the complete history— it is not so simple as waving some hands and saying "just keep X blocks": access to historical data is important to Bitcoin's security model. Otherwise miners could invent coins out of thin air or steal coins, and later-attaching nodes would know nothing about it and couldn't prevent it.   There is a careful balancing of motivations here: part of the reason someone doesn't amass a bunch of computing power to attack the system is because of how little they can get away with if they try.


To achieve all the aforementioned goals in one fell swoop, and then some, we should simply nullify the 2nd law of thermodynamics.  Without this pesky restriction, we would not be faced with mortality and therefore would feel no need to rush to any accomplishments at all.

Since we do have the second law, however, people will also tend to lose track of their wallets, ensuring that even if hoarding were somehow discouraged or even eliminated, there would be accidental unspent coins.
staff
Activity: 4172
Merit: 8419
The freenet model does not provide for reliability, however. My past experience was that individual keys in freenet were fairly unreliable, especially if they were just community-of-interest keys and not linked from the main directories. It also lacked any kind of Sybil resistance on opennet, and darknet has the unfortunate bootstrapping usability issues.  Perhaps things have changed in the last couple of years? (If so— anyone have citations I can read about what's changed in freenet land?)   An obvious proof of concept would be to insert the bitcoin blockchain as-is and to provide a tool to fetch the blocks that way. If nothing else, another alternative transport is always good.

In any case, this is serious overkill.  We already have an address rumoring mechanism that works quite well, and the protocol already handles fetching and validation in an attack-resistant manner.  If we supplement it with additional fields on what range(s) nodes are willing to serve, with some authentication to prevent flag malleability, that should be adequate for our needs from a protocol perspective... and considerably simpler than a multihop DHT.
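
A toy sketch of such a supplemental field; the wire layout and helper names are invented for illustration and are not an actual Bitcoin protocol message:

Code:
import struct

# Hypothetical extension of an addr-style announcement: the peer also
# rumors which contiguous block range it is willing to serve.

def encode_serve_range(first_height: int, last_height: int) -> bytes:
    """Pack the advertised block range as two little-endian uint32s."""
    return struct.pack("<II", first_height, last_height)

def decode_serve_range(payload: bytes):
    first, last = struct.unpack("<II", payload[:8])
    if first > last:
        raise ValueError("malformed range advertisement")
    return first, last

# A node keeping blocks 200,000-245,000 would rumor this with its address:
payload = encode_serve_range(200_000, 245_000)
print(decode_serve_range(payload))   # (200000, 245000)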
legendary
Activity: 1400
Merit: 1009
The Freenet project has managed to create a 30+ terabyte distributed, redundant, content-addressed filesystem. It works in a plug-and-play fashion, where individual nodes automatically specialize, both in routing and in which keys they choose to store, without requiring any explicit configuration. Nodes can enter and leave the network randomly without having much of an impact on key retrievability.

Their design would be a good starting point for building a distributed datastore for Bitcoin. All the objects that we care about are already identified by their hashes, so it would be easy to adapt their storage model for Bitcoin.
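
As a minimal illustration of why the fit is natural: blocks and transactions are already named by double-SHA256 of their serialization, so a Freenet-style key-to-blob store maps onto them directly. The HashStore class below is a hypothetical toy, not Freenet's datastore:

Code:
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

class HashStore:
    """Toy content-addressed store keyed by double-SHA256."""
    def __init__(self):
        self._blobs = {}

    def put(self, raw: bytes) -> bytes:
        key = double_sha256(raw)
        self._blobs[key] = raw
        return key

    def get(self, key: bytes) -> bytes:
        raw = self._blobs[key]
        # Self-certifying: a fetched blob can be verified against its key.
        assert double_sha256(raw) == key
        return raw

store = HashStore()
key = store.put(b"raw serialized block bytes")
assert store.get(key) == b"raw serialized block bytes"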
staff
Activity: 4172
Merit: 8419
I think the proposal to use bittorrent for that is best.  A tracker could be started for the blockchain state every 10k blocks.
Our effort to use a trackerless torrent was a failure— it was basically unusable. So you'd be introducing a critical-path point of failure both at the trackers and in the torrent files.  Beyond that, even with the infrequently updated bootstrap we only get a handful of seeds; I can't imagine _more_ frequent torrents improving that.  Now multiply that by the fact that bittorrent is another network-facing protocol with similar software complexity to the whole of the Bitcoin reference software, and one with a less impressive security history, so anyone who participates in it is increasing their security-critical surface area. Then exponentiate that by the fact that bittorrent is _forbidden_ on many networks, both because massively parallel TCP connections are hostile to the network (they use an unfair share of capacity) and because of the associations with illicit file transfer; even where it isn't outright forbidden it often gets detected by IDS and can draw unwanted adverse attention. The association of IRC with botnets (and the resulting blocking and confused "you've been hacked" warnings from network admins) is one of the reasons we removed the original IRC introduction method.

Providing the data necessary for the operation of the network is something the network ought to do. It provides a 1:1 mapping between interested parties and available parties. It also makes some kinds of attack resistance easier, because Bitcoin software can actually validate the information without any centrally controlled official torrents, whereas using torrents would ignore all the special structure our data has that makes DoS attacks difficult.

I believe that using bittorrent for this is unnecessary, and that using it as a primary method (as opposed to a backup or alternative option) would be bad and dangerous.  Also, you're a bad bad person for proposing it, and you smell too. Tongue (no, not really. But I'm finding it a little difficult to state how thoroughly bad I think that idea is without being gratuitously insulting— it's not intended that way, and it's nothing personal. Perhaps some awkward humor will help? Smiley).
legendary
Activity: 1232
Merit: 1084
The reason this is not yet implemented is that we first need to make some protocol changes to help new nodes that start up find peers to download the blockchain from. If a majority of nodes were to delete their old blocks, new nodes would otherwise have a very hard time finding a peer that still has those blocks.

I think the proposal to use bittorrent for that is best.  A tracker could be started for the blockchain state every 10k blocks.

Does bittorrent do merkle-like hashing?  It sounds like it just uses a list of hashes per piece?

The file could be defined to exclude all orphans.  This gives a consistent target for each tracker.

Since all blocks that are included in the snapshot have 10k confirms, it would be clear which blocks are orphans by the time the tracker is created.

You could add the sha256 hash of the blockchain from 0 to 229999 (excluding orphans) into the coinbase for block 235000.  To save coinbase space you could spread it out over multiple blocks.

This gives miners 5000 blocks to compute the next hash.
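
A sketch of that commitment, assuming a hypothetical get_block_bytes() accessor for raw main-chain block data; the heights match the example above:

Code:
import hashlib

SNAPSHOT_END = 229_999   # last block covered by the snapshot
COMMIT_HEIGHT = 235_000  # coinbase that would carry the hash

def snapshot_hash(get_block_bytes) -> bytes:
    """SHA-256 over serialized main-chain blocks 0..SNAPSHOT_END.

    Orphans are excluded by definition: every included block already
    has 10k+ confirmations by the time the snapshot is taken.
    """
    h = hashlib.sha256()
    for height in range(SNAPSHOT_END + 1):
        h.update(get_block_bytes(height))  # stream; no need to hold the
                                           # whole chain in memory
    return h.digest()

# Miners would then place snapshot_hash(...) into the coinbase of block
# 235,000, or spread it across several coinbases to save space.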
legendary
Activity: 1072
Merit: 1174
Deleting old blocks is perfectly possible - this was a design goal when the 0.8 database changes (called "ultraprune" for a reason, back then) were developed.

Old transactions are not affected, as the set of unspent transaction outputs is maintained separately from the block database. Their unspent outputs remain in the database even if the block that contained the transaction itself is no longer stored.

The reason this is not yet implemented is that we first need to make some protocol changes to help new nodes that start up find peers to download the blockchain from. If a majority of nodes were to delete their old blocks, new nodes would otherwise have a very hard time finding a peer that still has those blocks.
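
A toy model of that separation (not the actual 0.8 database layout): the UTXO set is keyed by (txid, output index) and survives deletion of the raw blocks that originally carried those transactions:

Code:
utxo_set = {}      # (txid, vout) -> amount in satoshis
block_files = {}   # height -> raw block bytes

def connect_block(height, raw, created, spent):
    """Apply a block: store it, add its new outputs, remove spent ones."""
    block_files[height] = raw
    for txid, vout, amount in created:
        utxo_set[(txid, vout)] = amount
    for txid, vout in spent:
        del utxo_set[(txid, vout)]

def prune_block(height):
    """Delete the raw block only; the UTXO set is untouched."""
    block_files.pop(height, None)

connect_block(1, b"...", created=[("aa" * 32, 0, 50 * 100_000_000)], spent=[])
prune_block(1)
assert ("aa" * 32, 0) in utxo_set   # output still spendable after pruning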
legendary
Activity: 2142
Merit: 1009
Newbie
This was a very odd account hack...this guy made posts like this in my name all day yesterday...I hope no one was scammed.

bastard.

This reminded me of Fight Club...
hero member
Activity: 511
Merit: 500
Hempire Loading...
This was a very odd account hack...this guy made posts like this in my name all day yesterday...I hope no one was scammed.

bastard.
legendary
Activity: 4060
Merit: 1303
I propose that we make P = NP. 

Oh wait, then we have a real problem.  :-)


Agreed. I also propose redesigning 3-stage rockets so that the top two stages carry payload and only the bottom stage carries fuel. That way twice as much could be carried into orbit for half the fuel. I am surprised it wasn't done like that for the manned space program.
We better make the speed of light higher so that optic fibers can allow much faster data transfers
I think we can achieve both of these by first making space-time Riemannian instead of pseudo-Riemannian. With Euclidean space-time there should be no need for pesky limits like a constant speed of light, and the extra payload mass should be offset-able by simply moving some of the fuel you didn't need into the past.

ObOntopic:  While not all nodes need to constantly store the complete history— it is not so simple as waving some hands and saying "just keep X blocks": access to historical data is important to Bitcoin's security model. Otherwise miners could invent coins out of thin air or steal coins, and later-attaching nodes would know nothing about it and couldn't prevent it.   There is a careful balancing of motivations here: part of the reason someone doesn't amass a bunch of computing power to attack the system is because of how little they can get away with if they try.

staff
Activity: 4172
Merit: 8419
Agreed. I also propose redesigning 3-stage rockets so that the top two stages carry payload and only the bottom stage carries fuel. That way twice as much could be carried into orbit for half the fuel. I am surprised it wasn't done like that for the manned space program.
We better make the speed of light higher so that optic fibers can allow much faster data transfers
I think we can achieve both of these by first making space-time Riemannian instead of pseudo-Riemannian. With Euclidean space-time there should be no need for pesky limits like a constant speed of light, and the extra payload mass should be offset-able by simply moving some of the fuel you didn't need into the past.

ObOntopic:  While not all nodes need to constantly store the complete history— it is not so simple as waving some hands and saying "just keep X blocks": access to historical data is important to Bitcoin's security model. Otherwise miners could invent coins out of thin air or steal coins, and later-attaching nodes would know nothing about it and couldn't prevent it.   There is a careful balancing of motivations here: part of the reason someone doesn't amass a bunch of computing power to attack the system is because of how little they can get away with if they try.
legendary
Activity: 1078
Merit: 1002
100 satoshis -> ISO code
I propose deleting all but the last 1000 blocks. This will keep the blockchain small and lean. As blocks build upon each other, older blocks are not needed. The only problem with this is if there is a 1001-block chain fork, which should not happen.

Learn more about how a system works before you start guessing at ways to "improve" it.  Otherwise, your guesses aren't likely to make very much sense.

I propose reducing the energy released in a fission reaction.  This will keep the temperature of the reaction down, and make nuclear energy far safer.  The only problem is if there isn't enough energy released to be cost effective.

Agreed. I also propose redesigning 3-stage rockets so that the top two stages carry payload and only the bottom stage carries fuel. That way twice as much could be carried into orbit for half the fuel. I am surprised it wasn't done like that for the manned space program.
sr. member
Activity: 403
Merit: 251
The problem is that hoarders still have coins in very old blocks, so pruning will only work on blocks where every single transaction is 'spent'.

Won't help with hoarders in very old blocks, but... the one transaction that makes an entire block 'spent', or rather the last x transactions for each block, could get incentivized.

Like, another free 27 kB/block reserved only for these transactions. Easy and painless soft fork.

Or even a small negative fee. Would miners do this? (for the greater good)  Roll Eyes
legendary
Activity: 1176
Merit: 1233
May Bitcoin be touched by his Noodly Appendage
We better make the speed of light higher so that optic fibers can allow much faster data transfers