P.S. I see we have more n00bs voting "No", and understandably they are too timid to reveal their myopic or irrationally biased justifications. Perfect. The record will be here for when they get to eat humble pie.
It's not only about cloud storage, it can also replace the web and all the other use cases of the internet today (mail, messaging, media streaming, etc). It's just a question of time really until this "new web" goes viral and the old web will become pretty obsolete, much like the BBSs of yore, simply because it will be much easier and secure to use (first and foremost, no passwords and single-point-of-failure web-servers anymore). Conceptually, encrypted P2P clouds will abstract content and services away from location. This also will enable many use cases we can't even think of yet today. It's the next logical and necessary step in the evolution of networked computing. It'll also scale much better, and bandwidth and computation can be economically incentivized just as well.
http://www.youtube.com/watch?v=Wtb6L7Bg3zY

Slick marketing has caused you to think these systems will be able to do things which they cannot possibly do, given fundamental technical issues in their designs.
I saw that video nearly a year ago. My point remains from my prior reply to you upthread: MaidSafe and Storj are based on minimizing the amount of duplicate hard disk space employed for redundancy, and their economic unit is backed by hard disk space. Afaik, they have no mechanism to charge for bandwidth used[1], only a quid pro quo exchange of equivalent proof-of-storage (via their verify algorithm, I presume), thus they are not applicable for serving files to the public-at-large. They simply are not designed for hosting web sites; that is not their target market. Their design and target market is personal storage for your (or your enterprise's) files, possibly shared with a few other people but not the public-at-large, which you access perhaps up to several times per day, not thousands of times per second. And afaics, they don't solve the anonymity of the requester's IP address for data that is public to the public-at-large.
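Their verify algorithm isn't published in enough detail for me to cite it, but a minimal challenge-response sketch of proof-of-storage (all function names here are mine, purely illustrative, not their actual protocol) shows the shape of such an exchange:

```python
import hashlib
import os

def make_challenge() -> bytes:
    # A fresh random nonce per challenge prevents replaying old answers.
    return os.urandom(32)

def prove(fragment: bytes, nonce: bytes) -> bytes:
    # The prover must hold the entire fragment to compute this digest.
    return hashlib.sha256(nonce + fragment).digest()

def verify(expected_fragment: bytes, nonce: bytes, proof: bytes) -> bool:
    # The verifier recomputes the digest from its own copy and compares.
    return proof == hashlib.sha256(nonce + expected_fragment).digest()

fragment = b"an encrypted chunk held by a storage node"
nonce = make_challenge()
assert verify(fragment, nonce, prove(fragment, nonce))
```

Note this proves storage only; nothing in such an exchange meters or charges for the bandwidth consumed by serving the fragment.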
You miss a fundamental economic point: bandwidth is orders of magnitude more costly than hard disk space at any scale above a few accesses per day on average. Yet bandwidth is also amazingly inexpensive per unit, which makes it very hard to pay per packet: you need some form of sub-micro-payments system, or else trade bandwidth-for-bandwidth in kind.
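To make that concrete, here is a back-of-envelope calculation under assumed round-number prices (purely illustrative, not real market rates):

```python
# Assumed prices, for illustration only:
STORAGE_PER_GB_MONTH = 0.02   # $/GB-month of disk
TRANSFER_PER_GB = 0.01        # $/GB of bandwidth served

def monthly_cost(file_gb: float, downloads_per_month: int):
    storage = file_gb * STORAGE_PER_GB_MONTH
    bandwidth = file_gb * downloads_per_month * TRANSFER_PER_GB
    return storage, bandwidth

for downloads in (1, 10, 1_000, 100_000):
    s, b = monthly_cost(1.0, downloads)
    print(f"{downloads:>7} downloads/month: storage ${s:,.2f}  bandwidth ${b:,.2f}")

# Bandwidth matches storage at just 2 downloads per month and is
# orders of magnitude larger at public-at-large traffic levels.
```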
The solution for anonymity of the requester of public files, and for anonymity of the server, is to make the improvements to onion routing that I have laid out in the OP, to build out the Stateful Web, and for the servers to be hidden services.
For MaidSafe, I suppose clients could pay per access directly to nodes on the DHT, but the technical design for where data is stored appears to be based on randomness combined with a ranking algorithm. That design does not factor in the incentive to compete for the orders-of-magnitude greater revenue from selling bandwidth, nor, in the absence of payment-per-access (as in the current design), the incentive to game the system by not storing fragments that are accessed frequently. A very complex game-theory analysis of the system is needed; in short, I suspect MaidSafe's model is too complex to fully model and characterize. They can't be using a quid pro quo exchange of bandwidth, because the reciprocal exchanges would not be fungible in real time, i.e. the other party may not possess a data fragment that the counterparty needs at that instant (and given the randomized placement of fragments, which is the fundamental feature of the system, there is no way this could be made fungible). I suppose that since only the owner of the original data can determine the address of a fragment, a DDoS on a fragment will only cause that fragment to get blocked.
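On the fungibility point, a tiny Monte Carlo sketch (with assumed network parameters, not MaidSafe's real numbers) shows why real-time bandwidth-for-bandwidth barter almost never clears when fragment placement is randomized:

```python
import random

# Assumed network parameters, for illustration only.
FRAGMENTS = 1_000_000   # fragments in the DHT address space
HELD = 1_000            # fragments a typical node stores
TRIALS = 100_000

my_fragments = set(random.sample(range(FRAGMENTS), HELD))

# Each trial: a peer I owe bandwidth to needs one random fragment right now.
hits = sum(random.randrange(FRAGMENTS) in my_fragments for _ in range(TRIALS))

print(f"chance I can repay in kind at that instant: {hits / TRIALS:.3%}")
# ~0.100% under these assumptions, so the barter debt can't settle in real time.
```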
The concept of storing a file on multiple servers, so we don't have to depend on just one, is orthogonal. It has always been the case with corporate-scale servers, and such algorithms can be adapted to different architectures, such as multiple hidden services or sitting behind one hidden service.
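To illustrate why it is orthogonal: the availability arithmetic is identical whether the redundant pieces sit on corporate mirrors, DHT nodes, or hidden services. A sketch, assuming 90% node uptime (an illustrative figure, not measured data):

```python
from math import comb

def availability(n: int, k: int, p: float) -> float:
    # Probability at least k of n independent nodes (each up with
    # probability p) are reachable, so a k-of-n coded file is servable.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Assumed 90% node uptime, for illustration:
print(availability(3, 1, 0.9))    # ~0.9990  (3 full replicas, 3x storage)
print(availability(16, 10, 0.9))  # ~0.9995  (10-of-16 coding, only 1.6x storage)
```

Nothing in that calculation cares what kind of server holds each piece.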
MaidSafe's self-encryption, and splitting a file up into sand-grain-sized fragments (in terms of DHT address-space collision probability), is useful because in theory it means even the server nodes have no way to know anything about the content of the data they are serving, and thus cannot discriminate against it. And in theory the user doesn't have to evaluate the reputation of any server node, as would be necessary for a traditional server. In theory this feature could be very valuable for files shared with the public-at-large. But traffic analysis can be used to correlate these fragments for files with high-traffic public-at-large access.
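Here is a toy sketch of the self-encryption idea as I understand it (my illustration, not MaidSafe's exact scheme, and the SHA-256 XOR keystream is for exposition only, not security):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Expository stream cipher: SHA-256 in counter mode. Not secure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def self_encrypt(data: bytes, chunk_size: int = 1024):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    plain_hashes = [hashlib.sha256(c).digest() for c in chunks]
    stored, data_map = {}, []
    for i, chunk in enumerate(chunks):
        # Key each chunk off the hash of a neighbouring chunk, so a node
        # holding one stored fragment learns nothing without the data map.
        key = plain_hashes[(i + 1) % len(chunks)]
        pad = keystream(key, len(chunk))
        cipher = bytes(a ^ b for a, b in zip(chunk, pad))
        name = hashlib.sha256(cipher).hexdigest()   # content address on the DHT
        stored[name] = cipher
        data_map.append((name, key))
    # Only the holder of the data map can locate and decrypt the fragments.
    return stored, data_map
```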
But I think that valuable portion of the system design can be split from the portion that tries to manage how many copies are stored on the system. Instead, a flat economic model could be employed to incentivize multiple nodes to store the fragments, with the decision of this choice and the algorithm strategy placed in the hands of the client. This would be much less complex (easier to prove formally), much more decentralized, and much simpler. Then each node could sit behind a hidden service for sufficient IP-address-level anonymity.
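A minimal sketch of what I mean, assuming a flat per-fragment rate and a client-pluggable placement strategy (all names and prices here are hypothetical):

```python
from dataclasses import dataclass

FLAT_RATE = 0.0001   # assumed $/fragment-month paid to each storing node

@dataclass
class ClientStoragePlan:
    fragment_names: list      # content addresses of the client's fragments
    replication: int          # redundancy is the client's choice, not the network's

    def monthly_cost(self) -> float:
        # Flat pricing: the cost is transparent and trivial to reason about.
        return len(self.fragment_names) * self.replication * FLAT_RATE

    def place(self, candidate_nodes: list) -> dict:
        # Placement strategy is pluggable by the client; naive round-robin here.
        return {
            frag: [candidate_nodes[(i + r) % len(candidate_nodes)]
                   for r in range(self.replication)]
            for i, frag in enumerate(self.fragment_names)
        }
```

The point of the design choice is that there is no global replication-management state machine left to attack or to game; each client bears the consequences of its own strategy.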
The technical specifics of MaidSafe's design are very sketchy and vague at this point. I seriously doubt whether they can handle the complexity and deliver the reliability they claim. The MaidSafe system is a complex state machine (layered on top of a DHT), and I have not seen a formal analysis of all possible states, including DDoS and the game theory of attacks on it. To debate this would go into more technical detail than I care to enter right now on this forum.
In terms of David Irvine's analogy to an ant colony, note each ant has 250,000 brain cells of entropy with which to configure itself uniquely within the colony (not just the 4 categories Irvine wants to presume), and the collective state machine of the ant colony has on the order of the 10 billion brain cells of a human brain.
Also, listening to David ramble on, he is not a person who can hit directly at the generative essence of an algorithm with precision; rather, he rambles on in vague analogies. He appears to be smart (a salesman with some technical acumen), but he comes across tousled and lacking the eloquent, sharp precision of a highly accomplished engineer. For example, he never addressed bandwidth economics, which should be one of the first things out of his mouth in a presentation.
Accomplished engineers can readily detect bullshit. I smell some, but I think he believes in his work even if he can't quite pull it all together beyond vague explanations. About a year ago I tried to read the technical descriptions at their website, and it was a maze of vagueness and technobabble without complete formalization or citations.
[1] G. Paul and J. Irvine, "A Protocol For Storage Limitations and Upgrades in Decentralised Networks," Proc. SIN '14, Sep 9-11 2014, Glasgow, Scotland, UK. http://dx.doi.org/10.1145/2659651.2659724 http://strathprints.strath.ac.uk/49515/1/Paul_Irvine_SIN14_upgrades_in_decentralised_networks.pdf
…which showed that only 1% of network users were providing 73% of the network bandwidth needed to share files. In an earlier study carried out on the Gnutella peer-to-peer network, Adar et al. presented in [1] that 70% of users were not sharing any files, and that 1% of all users were actually providing responses to over half of network requests. Early peer-to-peer networks were more focused on conserving bandwidth, since they were typically designed for the purpose of sharing popular files between users quickly, without relying on a central server. The MaidSafe decentralised network, while focused more on the provision of storage, faces similar challenges, where malicious users could potentially use up all of the storage capacity of the network.
Some significant incoherence in the OP has been corrected. Please try re-reading it.