1. Factom is geared more toward indexing data than storing it. In some cases they are one and the same, but the system makes no long-term promise about quickly serving your data back to you whenever you request it. Think of it like pruning in Bitcoin. Something the size of an Omni or Counterparty transaction would easily fit in an entry.
Now if you store a local copy of the data you need, the Factom data structures let you provide a proof of it to peers later on.
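To make that concrete, here is a minimal sketch of the idea: keep your documents locally, anchor only a Merkle root of their hashes, and later hand a peer a short inclusion proof. This is an illustration of Merkle proofs in general, not Factom's actual tree layout or hashing rules.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Reduce a level of hashes pairwise until one root remains."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to rebuild the root from one leaf."""
    proof, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i ^ 1                 # the paired node at this level
        proof.append((level[sibling], sibling < i))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf, proof, root):
    acc = leaf
    for sibling, is_left in proof:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root

# You store the documents; only `root` would be anchored publicly.
entries = [b"doc-1", b"doc-2", b"doc-3", b"doc-4", b"doc-5"]
leaves = [h(e) for e in entries]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
print(verify(leaves[2], proof, root))   # True: doc-3 is provably in the set
```

A peer who trusts the anchored root can check the proof without downloading any of the other documents.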
1.1 The current target price is about $0.001 per KiB, which works out to roughly $1000/GiB. Once the system goes decentralized, though, there is no way to know what price the Federated Servers will set.
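The arithmetic behind the $1000/GiB figure is just the per-KiB target scaled up, assuming binary units throughout:

```python
PRICE_PER_KIB = 0.001            # current target: $0.001 per KiB of entry data
KIB_PER_GIB = 1024 * 1024        # 2**20 KiB in one GiB

cost_per_gib = PRICE_PER_KIB * KIB_PER_GIB
print(f"${cost_per_gib:,.2f} per GiB")   # $1,048.58, i.e. about $1000/GiB
```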
2. It has more to do with a holistic approach to access control. If an element of immutable time is required for access, then it gives sysadmins time to respond to certain types of attacks.
3. Two points.
a. We expect that communities will share their data subsets amongst each other freely. If property records are secured for some country, it makes sense for citizens of that country to download and seed the public data. People seed BitTorrent data for free to members of their communities who want those particular datasets.
b. This is all public data, and someone will record it. Take centuries-old newspapers, for example. They can still be found, just not at the corner shop; you would need to go to a library, or maybe pay a fee. I don't think the data will ever go away as long as there is some chance it may be valuable, but it may become harder to get.
4. Countrywide would sign the hashes before placing them into Factom. Spam would be unsigned and could be ignored.
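The filtering logic is simple: drop any entry whose signature doesn't verify against the publisher's known key. The sketch below is a toy: it uses an HMAC with a made-up shared key as a stand-in for a real public-key signature (a real deployment would use something like Ed25519), and the entry format is invented for illustration.

```python
import hashlib
import hmac

# Toy stand-in: HMAC with a demo key models "signed by a known publisher".
# Real verification would use an asymmetric scheme such as Ed25519.
PUBLISHER_KEY = b"countrywide-demo-key"     # hypothetical key, for illustration

def sign(payload: bytes) -> bytes:
    return hmac.new(PUBLISHER_KEY, hashlib.sha256(payload).digest(),
                    hashlib.sha256).digest()

def is_trusted(payload: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(payload), signature)

entries = [
    (b"mortgage record 17", sign(b"mortgage record 17")),  # legitimate entry
    (b"spam spam spam", b"\x00" * 32),                     # unsigned junk
]
kept = [payload for payload, sig in entries if is_trusted(payload, sig)]
print(kept)   # only the signed record survives the filter
```

An application following the chain can apply this check and never even look at unsigned entries.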
5. Paying for upload bandwidth in a distributed way is a genuinely tricky, unsolved problem. Both Storj and MaidSafe are exploring solutions to it. Every scheme Paul and I explored could be gamed one way or another. If the system is as successful as you are claiming, then the inflation subsidy would more than pay for upload services. There is no mining in Factom, so inflation does not get dissipated in electricity bills.
I ran some back-of-the-envelope numbers, which Paul will present in Miami, about how the data is segregated. I guess it isn't really an announcement, just an analysis, so I'll share it here. There are lots of wild guesses, but it gives you an idea of how the system will scale.
This is data per year.
Assume:
50 million shipping containers, each with 10 entries per year
117 million mortgages with 12 entries per year
5 million mortgage originations/title transfers with 30 entries per year
300 million health records with 10 entries per year
1 stock exchange with 1500 companies with 10 million entries per day each
Assume 1 KiB per entry.
http://i.imgur.com/5M0r611.png
Total data in all the layers per year is 26.4 petabytes.
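Counting raw entries only (the byte totals in the chart also include the block layers on top of the payloads), the assumptions above break down like this; the single stock exchange dominates everything else:

```python
# Annual entry counts implied by the assumptions above (wild guesses, as noted).
sources = {
    "shipping containers":  50_000_000 * 10,
    "mortgage servicing":  117_000_000 * 12,
    "originations/titles":   5_000_000 * 30,
    "health records":      300_000_000 * 10,
    "stock exchange":      1_500 * 10_000_000 * 365,   # 1500 companies, daily
}
total = sum(sources.values())
for name, n in sources.items():
    print(f"{name:>20}: {n:>16,d}  ({n / total:6.2%})")
print(f"{'total':>20}: {total:>16,d}")
```

The exchange alone accounts for over 99.9% of the roughly 5.5 trillion entries per year, which is why segregating data by application matters so much.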
The shared part in blue is the directory blocks. This comes to 313 gigabytes.
The Entry Credit payment overhead (yellow) exists only to prevent spam in the present. Most people can discard it and store just the balance info (think UTXO set vs. full chain in Bitcoin). Notice Factoids do not even show up on the graph; they are just a way to get Entry Credits in the first place, and there would be something like a 10,000-100,000:1 ratio of EC commits to Factoid transfers.
Only small subsets of the red Entry Block data are needed by applications to prove their state; your application would only need a small sliver of the red.
If the annual user data is 19.5 petabytes, you would only need to sift through the roughly 300 gigabytes of blue directory blocks to find the data for your application.
5.1 Data in Factom is separated into chains, which are a clean way to segregate applications and the data you are storing. I imagine that in the future you will only store and upload the data that is important to your application. There are also plans to segment the network, so peers form subnets which only relay some of the data.
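A miniature of that layered lookup, assuming a simplified model where each directory block lists (chain ID, entry-block hash) pairs; the field names and shapes here are illustrative, not the actual Factom wire format:

```python
# Hypothetical directory layer: each directory block lists which entry
# blocks exist and which chain each belongs to. An application scans only
# this thin layer, then fetches just its own chain's entry blocks.
directory_blocks = [
    [("mortgage-chain", "eb-001"), ("shipping-chain", "eb-002")],
    [("health-chain", "eb-003")],
    [("mortgage-chain", "eb-004"), ("exchange-chain", "eb-005")],
]

def entry_blocks_for(chain_id, dir_blocks):
    """Walk the directory layer, keeping only our chain's entry-block hashes."""
    return [eb for block in dir_blocks
               for cid, eb in block if cid == chain_id]

print(entry_blocks_for("mortgage-chain", directory_blocks))
# -> ['eb-001', 'eb-004']
```

Everything outside your chain is identified and skipped at the directory layer, which is what keeps the per-application download to the "small sliver of red" described above.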
6. Because of Sybil attacks. We get advantages from having a predefined authority set. As that set gets bigger, it gets harder to manage. There also need to be few enough servers that voting one out matters; see Dunbar's number.
7. Yes, sorry, we tweaked the protocol between publishing the whitepaper and launch. They are indeed every 10 minutes. We have a few other mistakes in the paper:
https://github.com/FactomProject/FactomDocs/blob/master/Factom_Whitepaper_Errata.md
8. That is where we started, but that one party could censor an individual while leaving the rest of the network working. Would all of Bitcoin switch over to a new network just because the miners were censoring one particular person? The next step is to allow free entry, but then, to prove the negative, an application would need to download data from all possible sources, making spam trivial.
more thoughts here:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-March/007721.html

Thanks for the fantastic reply! And thanks for those links as well. I still have a few concerns and inquiries.
2. Would you care to give an example?
3. What about private data?
5. I either don't understand or I didn't phrase my question properly. Do full nodes hold the same amount of information as Federated Servers? Federated Servers would be required to store everything in blue and everything in red, right? Would full nodes have to do the same? How much space is that?
6. Frankly, I'm not much of a security expert and while I have a vague idea of how a Sybil attack works I don't understand how one could be performed without a fixed number of Federated and Audit Servers. If you would care to explain, it would be much appreciated.
8. I'm beginning to grasp the censorship-resistant yet spam-resistant concept. However, Factom seems like an expensive solution to prevent the occasional censor, and in a free market I feel like a trusted third party who engaged in such practices would die off while other blockchain stamping services took its place. Is this a bad assumption?
And if it's not too much trouble, I'd like to add a couple more questions...
9. The factoid's real value, speculation aside, is based entirely on the number of entries entered into the protocol. Currently fewer than 2,000 entries are made a day, and when milestone 3 is achieved, which is expected in the coming months, 73,000 factoids will be generated each month. Given all that, the current price for a factoid seems vastly overvalued. Is this all just speculation, or am I missing something?
10. Is there anything I can do to help out?
Once again, thanks for the helpful response.