Suppose someone decides to upload their smut collection into the address space. What incentive does anyone else have to host the data? What means do they have to remove this data locally?
As in quantum physics, there is no concept of "local" in MerkleWeb. Each MerkleWeb network acts as 1 computer which many people and other technology can use at the same time.
Can nodes hold incomplete portions to be recreated by the requester?
Yes. Theoretically every bit could come from a different node, or, more extreme than that, every bit could be compressed and be the result of some calculation spread across many nodes, so the bits don't have to exist directly anywhere in MerkleWeb as long as they can be quickly recalculated in any part of the network.
If every node purged specific data, would the network become invalidated?
...
Suppose someone uploaded false slanderous material. Could it be removed, modified?
The rules of any specific MerkleWeb network could be set up to allow deletions. As long as the state of a MerkleWeb is the result of a previous state of that MerkleWeb as defined by the rules encoded into its starting state, it will be a valid MerkleWeb. The rules can be anything since it's a general computer. The deletion would become permanent if the network is set up to store only the previous states of the last few minutes, years, or whatever time range, with SVN-like compression that stores only the "choice bits" plus partial, overlapping snapshots of the calculations taken less often.
Suppose someone uploaded a service. Would its availability be proportional to its popularity? (something like reinforced weighting of paths within a neural network [each node hop adds data to a leaky bucket proxy cache]).
Yes. Self-Tuning Load Balancing at all levels is a core feature of MerkleWeb.
How could we address the 'latest' version of some material? (I assume data does not change in time and space)
In the 320 bit version of MerkleWeb, there are up to 2^256 virtual bits and up to 2^63 time rows of them. One way to do it would be to use each public key as a variable name and put its updated values at the current 64 bit time row and the same 256 bit address of the key. The data would always be a 320 bit address pointing at the "latest version of some material". Those 320 bits could extend 319 more bits into the space of "2^256 virtual bits" (allowing new broadcasts each cycle but possibly risking overlap with rare adjacent public keys), or in the other direction, up 319 more time rows (allowing new broadcasts every 320 cycles). Or some sections of the "2^256 virtual bits" could be allocated for such new material and optionally deallocated later when the sequence of "material" is deleted or moved. There are many ways to do it. Can anyone offer some ideas?
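One concrete reading of the public-key-as-variable-name scheme, sketched in Python. The layout, the `address_320` helper, and the key bytes are all hypothetical illustrations of the idea, not a defined MerkleWeb format:

```python
import hashlib

def address_320(public_key: bytes, time_row: int) -> int:
    """Compose a 320-bit address (hypothetical layout):
    high 256 bits = SHA256 of the public key (used as a variable name),
    low 64 bits   = the time row as an unsigned 64-bit integer."""
    assert 0 <= time_row < 2**64
    space = int.from_bytes(hashlib.sha256(public_key).digest(), "big")
    return (space << 64) | time_row

# The same key re-broadcast at a later time row lands at a new address
# sharing the same 256-bit prefix, so a "latest version" lookup can scan
# forward along the 64-bit time dimension under one key.
a0 = address_320(b"example-public-key", 1000)
a1 = address_320(b"example-public-key", 1001)
assert a0.bit_length() <= 320
assert a0 >> 64 == a1 >> 64                             # same variable name
assert (a1 & 0xFFFFFFFFFFFFFFFF) - (a0 & 0xFFFFFFFFFFFFFFFF) == 1
```

This is only one of the "many ways to do it"; the allocated-sections variant would replace the hash prefix with an explicitly reserved region of the 2^256 bit space.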
Do you have thoughts on purely off-network code execution? Rather than defining an interpreter built up from MerkleAxioms (NAND, 321-bit addresses), a block of data could declare "this is an ambiguous version of javascript" followed by code including predictable public global variables (same address space, sequential time space). As long as some node runs the code locally and writes the outputs, there isn't much to verify. We can argue about whether it is truthy, but it's no longer deterministic as far as MerkleWeb.
Whether something runs inside or outside of MerkleWeb does not affect whether it can be mapped into the 320 bit address space and used as a part of MerkleWeb.
Example: Bitcoin's existing user accounts (public keys) and history of blocks could be used as a part of MerkleWeb's address space and seamlessly work with existing Bitcoin software and networks. That's because Bitcoin's behaviors are well-defined.
If something is not deterministic, it can only connect through a public-key which is used as the name of its input and output bits, called "choice bits", as it interacts with MerkleWeb. Example: People are often unpredictable, but they still need a way to interact with MerkleWeb.
It's not a problem. MerkleWeb wouldn't be useful if it couldn't interact with our chaotic world.
What prevents MerkleWeb from becoming primarily a global public copy-on-write versioned filesystem? Could that swamp the benefits or would it only make the network more robust?
http://en.wikipedia.org/wiki/Apache_Subversion (SVN) acts like a copy-on-write versioned filesystem but does it much more efficiently by only storing the changes and a little extra information for how to assemble all the changes into whole files again when an old version is requested. MerkleWeb is like SVN for the states of a general computer, while SVN is only for states of files and not the calculations.
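The SVN-style trade-off described above (store only the changes, plus occasional full snapshots so any old version can be rebuilt quickly) can be sketched like this. `DeltaStore`, `SNAPSHOT_EVERY`, and the list-of-values state are toy stand-ins, not SVN's or MerkleWeb's actual formats:

```python
# Toy version store: full snapshots only every SNAPSHOT_EVERY revisions,
# deltas (changed positions) in between -- roughly how SVN avoids storing
# a whole copy of every file for every revision.
SNAPSHOT_EVERY = 4

class DeltaStore:
    def __init__(self, initial: list):
        self.snapshots = {0: list(initial)}   # revision -> full state
        self.deltas = {}                      # revision -> {index: new_value}
        self.head = 0

    def commit(self, changes: dict):
        self.head += 1
        self.deltas[self.head] = dict(changes)
        if self.head % SNAPSHOT_EVERY == 0:   # occasional full snapshot
            self.snapshots[self.head] = self.checkout(self.head)

    def checkout(self, rev: int) -> list:
        base = max(r for r in self.snapshots if r <= rev)
        state = list(self.snapshots[base])
        for r in range(base + 1, rev + 1):    # replay deltas forward
            for i, v in self.deltas[r].items():
                state[i] = v
        return state

store = DeltaStore([0, 0, 0])
store.commit({0: 1})
store.commit({2: 7})
assert store.checkout(1) == [1, 0, 0]
assert store.checkout(2) == [1, 0, 7]
```

MerkleWeb would apply the same idea to whole computation states rather than files, with the "choice bits" playing the role of the deltas.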
If I recall from ZFS, addressing the full 128-bit address space would require more energy than required to boil the oceans. Does MerkleWeb require vastly more space to minimize random allocation collisions?
The most practical size for MerkleWeb to fit with today's technology is 320 bits because 256 are for the SHA256 algorithm (which runs reasonably fast on normal computers, and some hardware is designed specifically for SHA256) and 64 are for a 64 bit integer (which most programming languages and hardware calculate efficiently). As explained at
http://en.wikipedia.org/wiki/SHA-2 , SHA-2 is 1 of a few industry standards (each a different bit size and strength) for secure-hashing arbitrary bits.
224 is the smallest bit size which has no known security vulnerabilities. If SHA256 is cracked, it doesn't break MerkleWeb, it just weakens its security, since if 2 pieces of data that have the same SHA256 hashcode ever meet each other on the same computer (an expected cost around the square root of the whole calculation cost of the system, the birthday bound), then the contradiction would ripple out, causing recursive rollbacks until the cause was found and fixed. It could be done more efficiently than the square root of total network cost by making Merkle Trees refer to each other in the constants section in many overlapping ways, so even if SHA256 is cracked, an attacker would have to crack it from many Continuation states at once, which is an exponentially harder problem. If the security is tuned up that high, at the cost of efficiency, MerkleWeb's data-integrity (not secrecy) would be exponentially more secure than Bitcoin's.
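The square-root cost of two matching hashcodes meeting is the birthday bound. A toy demonstration with SHA256 truncated to 16 bits (the truncation and the search loop are illustrative only; at the full 256 bits this search is infeasible):

```python
import hashlib

def truncated_hash(data: bytes, bits: int) -> int:
    """SHA256 truncated to the top `bits` bits (weakened on purpose)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") >> (256 - bits)

# Birthday search on a 16-bit truncation: a collision is expected after
# roughly sqrt(2^16) = 256 random inputs, far below the 2^16 brute-force
# bound -- the "square root of the whole calculation cost" above.
seen = {}
tries = 0
for i in range(10**6):
    tries += 1
    h = truncated_hash(str(i).encode(), 16)
    if h in seen:
        break                      # two inputs with the same hashcode met
    seen[h] = i
assert tries < 5000                # collision found near the sqrt bound
```

For 256-bit hashes the same argument gives about 2^128 work, which is why a crack of SHA256 degrades rather than instantly destroys the scheme.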
If that kind of security is used, it could theoretically be done with a 64+64 (instead of 256+64) bit address space since collisions would have selection-pressure against their continued existence by the extra layer of security, and the first data to be created would usually win and prevent the use of any data whose hashcode collides with it. I don't want to get into that complication yet (leave it for when or if SHA256 is cracked and if we don't upgrade to SHA512 at that time), so I recommend 256+64 bit addresses.
Furthermore, if you expect the time component to represent cycles (rather than seconds or blocks), perhaps 64 bits is not sufficient? At a trillion calculations per second, 2^64 cycles run out in less than a year; at a billion calculations per second, in a few hundred years.
Right. But most people don't want to pay more now and think ahead, so let's go for the 320 bit version and upgrade to "the 1024 bit version" (as I wrote above) later, which has an address space big enough to fit an entire 320 bit MerkleWeb in 1 of its constant bit strings.
If "the full 128-bit address space would require more energy than required to boil the oceans", then the full use of a 1024 bit address space would take more energy than a "big bang".
Optional possible use of MerkleWeb on more advanced hardware later: If the infinite many-worlds multiverse theory is true, then Schrodinger's Cat is both alive and dead at the same time, in different branches of reality, and we would have enough computers (on parallel mostly-overlapping Earths) to use the full address space. This is supported by empirical observations in quantum experiments where objects big enough to see with the naked eye are in 2 or more places at once. The fact that Humans evolved to understand Newtonian physics does not change the fact that physics only appears to be Newtonian until you measure it with enough accuracy.

This is relevant to MerkleWeb because if many-worlds multiverse theory is true, then an infinite number of Earths, each running variations of the same MerkleWeb calculations, will always calculate the same 320 bit address for the same calculation, because SHA256 is consistent that way. So if a way to communicate between these parallel mostly-overlapping Earths was later found, MerkleWeb would already be designed to take full advantage of an Internet where packets are routed statistically in wave-interference patterns between parallel Earths. I wrote about "wave interference" in MerkleWeb's network routing behaviors being similar to quantum physics. If that theory and many-worlds multiverse theory are both true, and if MerkleWeb is run on a grid of wireless routers (because they transmit light and all light is quantum), then MerkleWeb would literally be a quantum computer.

If any of those theories are not true, then MerkleWeb would only change the world with "unity mode" for the whole Internet. It's important that MerkleWeb not depend on any of these unproven theories, just have the potential to use them in more advanced hardware later. I'm a Mad-Scientist, but I'm still a scientist and go about things in a logical way.
When I say MerkleWeb would be a 320 or 1024 bit global computer, I don't mean we have to have hardware for the whole address space. It's mostly for avoiding collisions of hashcodes and for future expansion as technology continues advancing at exponential speed. It doesn't cost much to use a bigger address space with the same hardware, but it is very expensive to fix old systems. How much money was spent fixing the 2-digit numbers in year-2000 upgrades?
To everyone... I could use some help creating some example uses of MerkleWeb, and the technical details of what makes those examples work, showing how MerkleWeb could do each "design pattern" in
http://en.wikipedia.org/wiki/Software_design_pattern
Those "design patterns" are the most common patterns observed in software designs. If we can explain how MerkleWeb can do these patterns better than existing systems, it will be taken much more seriously.
That doesn't include simple data structures, but we need some examples of that too.
A list is easy. In the constant bit string section, a list could be n bits where n is a multiple of 320, and each group of 320 bits is a pointer to the thing in the list. The list would be at the 320 bit address of the first bit of the first pointer in the list.
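A minimal sketch of that list encoding. The helper names are made up; only the fixed 320-bit pointer width comes from the text:

```python
ADDR_BITS = 320
ADDR_BYTES = ADDR_BITS // 8   # 40 bytes per pointer

def encode_list(pointers: list[int]) -> bytes:
    """Concatenate 320-bit pointers into one constant bit string."""
    return b"".join(p.to_bytes(ADDR_BYTES, "big") for p in pointers)

def decode_list(blob: bytes) -> list[int]:
    """Split a constant bit string back into its 320-bit pointers."""
    assert len(blob) % ADDR_BYTES == 0
    return [int.from_bytes(blob[i:i + ADDR_BYTES], "big")
            for i in range(0, len(blob), ADDR_BYTES)]

ptrs = [3, 2**300, 42]        # any values below 2^320 fit
assert decode_list(encode_list(ptrs)) == ptrs
```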
A set is an unordered list. It can be stored to allow binary searches by ordering all pointers in the set as 320 bit unsigned integers. To check if a certain pointer is in the set, do the binary search. To combine 2 sets, do one merge pass of the mergesort algorithm, which is very fast.
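The sorted-set operations above, sketched in Python with the standard library's binary search and merge (function names are illustrative; the pointers stand in for 320-bit unsigned integers):

```python
from bisect import bisect_left
import heapq

def set_contains(sorted_ptrs: list[int], p: int) -> bool:
    """Binary-search membership test on pointers sorted as unsigned ints."""
    i = bisect_left(sorted_ptrs, p)
    return i < len(sorted_ptrs) and sorted_ptrs[i] == p

def set_union(a: list[int], b: list[int]) -> list[int]:
    """One merge pass over two sorted sets, dropping duplicates."""
    out = []
    for p in heapq.merge(a, b):
        if not out or out[-1] != p:
            out.append(p)
    return out

s1, s2 = [1, 5, 9], [2, 5, 10]
assert set_contains(s1, 5) and not set_contains(s1, 4)
assert set_union(s1, s2) == [1, 2, 5, 9, 10]
```

Membership is O(log n) and union is O(n), matching the binary-search and single-merge-pass claims.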
A map/hashtable is like a set except each entry is twice as big: 2 pointers, where the first works the same as in the set (the key) and the second is the value for that key.
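A sketch of such a map as sorted (key, value) pointer pairs, reusing binary search on the key half (names and layout are illustrative):

```python
from bisect import bisect_left

def map_get(entries: list[tuple[int, int]], key: int):
    """entries: (key_pointer, value_pointer) pairs sorted by key --
    each entry is twice the size of a set entry, as described above.
    Returns the value pointer, or None if the key is absent."""
    keys = [k for k, _ in entries]
    i = bisect_left(keys, key)
    if i < len(entries) and entries[i][0] == key:
        return entries[i][1]
    return None

m = [(1, 100), (7, 700), (9, 900)]
assert map_get(m, 7) == 700
assert map_get(m, 8) is None
```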
Lists, sets, and maps can contain each other. Since SHA256 is a SECURE-hash algorithm, there is no direct way to have a map, set, or list that contains itself or has any path back to itself. It's an acyclic network of these programming objects. Example: Java's list, set, and map objects don't allow cycles either, since cycles cause errors in the Object.hashCode() and Object.equals(Object) functions. Java allows you to design lists, sets, and maps that can have cycles of containing themselves, but normally it's considered an error and can result in an infinite loop when hashCode() or equals(Object) is called. MerkleWeb could allow cycles of objects containing themselves by using Public Keys as variables: a list, set, or map contains the Public Key, and the Public Key later broadcasts a pointer to that list, set, or map. That doesn't interfere with the secure-hashing, so it works.
We could create a JVM on top of MerkleWeb, but it would be very complex. Instead, implementing the Scheme language is closer to what MerkleWeb naturally does. But before integration with other languages, we should create a new language from the bit and NAND level up, using lists, sets, maps, and other simple data-structures and calculations, as a proof-of-concept. Then on top of that, create the examples of how to use MerkleWeb to do
http://en.wikipedia.org/wiki/Software_design_pattern

Why help? Bitcoin, at its best so far, was a 100 million dollar economy, created out of nothing, taking value from all dollars and moving that value into bitcoins. 100 million dollars is small compared to the trillions in the total Earth economy. I mean no disrespect to Bitcoin since it's a very innovative and useful program, but at its core Bitcoin is a calculator that does plus and minus, with a spread-out way of creating the numbers based on "proof of work". Bitcoin is a calculator. MerkleWeb will be a computer. MerkleWeb is based on the Bitcoin security model and other similarities, but is far more generally useful. If Bitcoin can move 100 million dollars, then MerkleWeb can move trillions of dollars. If you or your business helps develop the MerkleWeb prototypes and/or ideas, or helps in any other way you know, then when it gets big your investment will pay off as people know you helped create it and will be very interested to work with you on other big projects. All the MerkleWeb design documents I've written at
http://sourceforge.net/projects/merkleweb and here in this thread are public-domain, which means I'm not taking a cut and nobody owns it and everyone has the right to use it, but still it's an investment that is likely to indirectly move money toward anyone who helps in a big way, and will improve the world overall. "Unity mode" for the whole Internet, like VMWare did for programs of different operating systems on 1 computer at a time, is a game-changer. Will you help?