
Topic: MerkleWeb - statistical Gödel-like secure-but-not-perfect global Turing Machine

legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.

I will respond here for the moment because I don't feel like registering yet another password to keep track of.

My most important question is: how essential is the walkie-talkie format?

As Asimov1 pointed out, you are going to find yourself duplicating the existing cell-phone network in many ways.

I can confirm that it will be difficult to broadcast 10 miles. CB radios can do it, but they use about 4 watts of power. The USB interface supplies only 2.5 watts.

Highly directional antennas will only work on a base station (read: one that is not constantly moving around), also known as a "radio tower" when mounted high off the ground for good coverage.

Patents came up. The 802.11n standard uses spatial-division multiplexing: multiple antennas are driven slightly out of phase (glossing-over-details-I-don't-know alert), allowing the beam pattern to be "steered" somewhat, all without moving parts. Any radio technology from the last 20 years is likely dripping with patents.

Finding spectrum will be a problem. During the last spectrum auction in Canada, the incumbents doubled their amortized infrastructure costs by bidding something stupid like $4 billion. They expect to make that back by charging the average cell-phone user $40/month. You will have trouble getting spectrum cheaper because the incumbents won't sell spectrum for less than they paid at auction. So, you are limited to the very limited "free for all" spectrum slices. These typically restrict transmit power, such that your range is likely to be closer to 1 mile than 10. Using the visible spectrum is a possibility for line-of-sight hops.

Finally, I would advise you not to mix the MerkleWeb proposal with the IP-Walkietalkie proposal. Both are complicated enough on their own that the chance of failure is high. If both projects are successful, you can combine them later.
legendary
Activity: 1372
Merit: 1002
sr. member
Activity: 316
Merit: 250
(I'll respond to the last comments later)

James Jaeger (director of Fiat Empire - The Federal Reserve Is Unconstitutional and other documentary films, which you can watch on Youtube) and I are talking about forming a business together to create "IP Walkie Talkies", a wireless mesh network of cell-phone-like devices that form a free peer to peer Internet that's off the grid: no monthly fees, no central control, freedom of speech. It may have some similarities to MerkleWeb. We're still talking about how to build it. Soon we may be looking for employees and business partners. Anyone interested? Join the talk here:

http://www.kurzweilai.net/forums/topic/a-technological-renaissance-peer-to-peer-non-internet-ip-walkie-talkie-nets

We have the right to freedom of speech, and therefore also the right to create and use communication systems which are free of anyone controlling that communication.

We could put every Internet Service Provider and cell-phone business out of business, since we would undercut their prices at ZERO per month with no censoring or central control. Don't worry about us becoming a monopoly, since my condition for helping with the project is that it moves toward open-source hardware and software, though not necessarily starting that way. It will also prove to everyone that peer to peer Internet can work on a large scale, so others will start work on other forms of large-scale peer to peer communication even if something goes wrong with this plan.

It's not hard to obsolete an infrastructure that's designed to crawl slowly as it waits for permission from a hierarchical organization. Peer to peer is naturally faster. For example, Apache Cassandra has a mostly peer to peer core and is used by Netflix to stream their videos, but peer to peer is only possible between server connections, while most Internet connections are clients that can't connect directly to each other (a form of censoring). Peer to peer is the future. Want to free the Internet and get rich at the same time?

Technical estimation, quoted from that thread:
Quote
The technical parts can be done, period. Here's why:

The following is true of IP Walkie Talkies and the existing Internet. It's a property of communication through space, not of any specific communication system, unless that system can communicate faster than light. The existing Internet can be modeled using similar equations by counting the Internet backbone as a large number of connected devices. These calculations are an estimate of what we need, to be calculated more exactly later. Not all of the variables below are used. Some of them are there for us to talk about later in this thread, to have standard words for these things.

networkArea = flat area the devices cover.

widthOfNetwork = squareRoot(networkArea). We're not going to consider the corners yet. It's approximate.

deviceQuantity = how many devices spread evenly in networkArea.

deviceWidthOfNetwork = squareRoot(deviceQuantity)

distanceBetweenDevices = widthOfNetwork / deviceWidthOfNetwork

radioDistance = how far the devices can reliably transmit a signal to other devices.

radioHopsWidthOfNetwork = widthOfNetwork / radioDistance

inefficiencyMultiplier = how many times more bandwidth than the perfect straight line path is actually used.

connectionsPerDevice = how many near devices each device communicates with.

bandwidthMultiplierForLongestPath = radioHopsWidthOfNetwork*inefficiencyMultiplier. This is how many times more total bandwidth the network uses to send some bytes from one side of the network to the other, relative to a single direct transmission of those bytes.

averagePathLength = average distance between the 2 endpoint devices of a communication, as directed by the people using them.

bandwidthMultiplierForAveragePath = bandwidthMultiplierForLongestPath * averagePathLength / widthOfNetwork

maxBitsPerSecondPerDevice = bandwidth of each device. This is not about how much bandwidth it can actually use in the network.

fractionOfTimePeopleActivelyUseIt = the fraction of time people actively use the device. People don't use their Internet connections or phones all the time. Even when reading a webpage, the computer is mostly idle; it uses the connection when you click or stream. This is about the times people normally use it, not the middle of the night or early morning. Sitting at the computer isn't using it; actively downloading or talking is using it. People will be motivated not to download constantly because the algorithms will give you only as much bandwidth as your hardware puts in.

averageInternetSpeedBitsPerSecondPerDevice = maxBitsPerSecondPerDevice / bandwidthMultiplierForAveragePath

activeInternetSpeedBitsPerSecondPerDevice = averageInternetSpeedBitsPerSecondPerDevice / fractionOfTimePeopleActivelyUseIt. This is how much speed people will actually see in the IP Walkie Talkie Internet and voice communication system, where voice is transmitted digitally as bits, so it's simply a realtime Internet at the core.

bitsPerSecondForAudio = how many bits per second are used for people talking to each other through the IP Walkie Talkies.

Example:

networkArea = 9,826,675 square kilometers (the area of the USA). http://en.wikipedia.org/wiki/USA

widthOfNetwork = 3135 kilometers. Remember these calculations are approximate.

radioDistance = 5 kilometers.

radioHopsWidthOfNetwork = 627 hops.

inefficiencyMultiplier = 4, since it's not going to find the exact shortest path, and we need some redundancy so there's no single point of failure. A single point of failure is often ok in a hierarchy-based Internet, since there are far fewer and longer hops, like 20, but not in a peer to peer system, which must have many more, shorter, faster hops.

bandwidthMultiplierForLongestPath = 2508

averagePathLength = 1000 kilometers.

bandwidthMultiplierForAveragePath = 800.

maxBitsPerSecondPerDevice = 8,000,000. This is 1 megabyte per second. Most network cards advertise at least 100 megabits (12.5 megabytes) per second. We're estimating lower because we have to do it on rechargeable batteries and over a longer distance to the next device.

averageInternetSpeedBitsPerSecondPerDevice = 10,000

fractionOfTimePeopleActivelyUseIt = 1/20. Remember this is only when you click or stream, not when you're just reading a webpage.

activeInternetSpeedBitsPerSecondPerDevice = 200,000 bits per second.

That is 25 kilobytes per second. CD quality audio is 88 kilobytes/second per channel. 25 is more than enough for phone audio (very long range Walkie Talkies, not necessarily connecting into the phone network, and not providing 911 service) and is also enough for very low quality streaming video and to load most webpages in a few seconds. Don't forget there is no monthly fee. Just buy the IP Walkie Talkie and plug its USB wire into your computer for free Internet service for life, and talk to people across the country with no monthly fee. The 25 kilobytes per second (if the research finds the radioDistance and other numbers match this example) is paid for by your device contributing to the wireless network while you're not using it. It's slower than "high speed Internet", but as technology advances this freedom Internet and voice communication will get faster.

...

Constant data, like Bittorrent files or data on the Bitcoin network, could be handled in a different and much more efficient way, especially if we included hardware to quickly calculate a few common secure-hash algorithms, but those improvements for specific types of content can be done later.
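
To sanity-check the quoted arithmetic, here is the same estimate as a runnable Java sketch. The numbers are the example values above, not measurements, and the variable names mirror the definitions.

Code:
public class MeshEstimate {
    public static void main(String[] args) {
        double networkArea = 9_826_675;                  // km^2, USA land area
        double widthOfNetwork = Math.sqrt(networkArea);  // ~3135 km
        double radioDistance = 5;                        // km per hop
        double radioHopsWidthOfNetwork = widthOfNetwork / radioDistance;  // ~627
        double inefficiencyMultiplier = 4;               // redundancy + non-shortest paths
        double bandwidthMultiplierForLongestPath =
                radioHopsWidthOfNetwork * inefficiencyMultiplier;         // ~2508
        double averagePathLength = 1000;                 // km
        double bandwidthMultiplierForAveragePath =
                bandwidthMultiplierForLongestPath * averagePathLength / widthOfNetwork; // ~800
        double maxBitsPerSecondPerDevice = 8_000_000;    // 1 megabyte per second
        double averageSpeed = maxBitsPerSecondPerDevice / bandwidthMultiplierForAveragePath; // ~10,000
        double fractionOfTimePeopleActivelyUseIt = 1.0 / 20;
        double activeSpeed = averageSpeed / fractionOfTimePeopleActivelyUseIt;               // ~200,000
        System.out.printf("active speed: %.0f bits/s = %.0f kilobytes/s%n",
                activeSpeed, activeSpeed / 8 / 1000);    // prints 200000 bits/s = 25 kilobytes/s
    }
}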
legendary
Activity: 1372
Merit: 1002
That claim in the Wikipedia article was cited:
Quote from: Wikipedia
Thank you.
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
I don't know much about homomorphic encryption either.


By the way, can someone explain to me how homomorphic encryption can be used to "create secure voting systems"?


That claim in the Wikipedia article was cited:

Quote from: Wikipedia

Even if it does work as described, I would be concerned by the lack of a paper trail. It is very hard to trust that a computer is doing what it says it is doing. Even if the vote tabulation is beyond reproach (thanks to the proposed distributed Turing machine), the computer being used for voting may be able to do statistical vote substitution (even in the face of vote receipts).
legendary
Activity: 1372
Merit: 1002
I think I now get what you mean (although I'm a homomorphic encryption newbie).

And, yes, I guess you could transfer part of the algorithm to data. For example, a trick that comes to mind...
You could train a neural network to be equivalent to a given function or part of an algorithm, then use the weights of the network as part of the input (although constant) and be able to also encrypt them, obscuring the algorithm.
Probably not very attractive for MerkleWeb if the number of inputs is a limitation.

By the way, can someone explain to me how homomorphic encryption can be used to "create secure voting systems"?
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
What I think jtimon is getting at, using the Socratic method, is that the data is secret, not the algorithm. That probably has all kinds of interesting implications for data leakage or for separating data and executable code.
sr. member
Activity: 322
Merit: 251
FirstBits: 168Bc
Your link is fishy, try this: http://eprint.iacr.org/

Also: http://css.csail.mit.edu/cryptdb/

Quote
CryptDB ... manages to emulate fully homomorphic encryption for most of the functions of the SQL databases used in many applications, computing only with encrypted data and adding just 15% to 26% to those applications’ computing time. ... RSA scheme, for instance, lets you multiply encrypted numbers, and another called the Paillier scheme lets you add them. ... plenty of schemes allow you to compare encrypted information and see whether the scrambled codes are equal ... SQL queries in a database are composed of relatively few types of operations: equal to, less than, summing up, sorting ... we were able to find an encryption scheme that is quite efficient at computing on encrypted data. ... switch between crypto systems on the fly depending on the operation. The data in the database is encrypted in multiple layers of different encryption, what the researchers call an “onion” of encryption ... it does “leak” information about the underlying data when enough outer layers of encryption are removed

--- Andy Greenberg, Forbes
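
To make the algebra concrete: here is a toy Java sketch of the multiplicative property the quote attributes to RSA, using textbook (unpadded) RSA. Unpadded RSA is insecure in practice; this only demonstrates that multiplying ciphertexts multiplies the plaintexts.

Code:
import java.math.BigInteger;
import java.security.SecureRandom;

public class RsaHomomorphism {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(512, rnd);
        BigInteger q = BigInteger.probablePrime(512, rnd);
        BigInteger n = p.multiply(q);
        BigInteger phi = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
        BigInteger e = BigInteger.valueOf(65537); // assumes gcd(e, phi) == 1, true for almost all random primes
        BigInteger d = e.modInverse(phi);

        BigInteger a = BigInteger.valueOf(6), b = BigInteger.valueOf(7);
        BigInteger ca = a.modPow(e, n), cb = b.modPow(e, n);  // encrypt separately
        BigInteger cProduct = ca.multiply(cb).mod(n);         // multiply the ciphertexts only
        System.out.println(cProduct.modPow(d, n));            // decrypts to 42 = 6 * 7
    }
}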
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
...the faster algorithm only has to be written once (unless encrypted).

Wait a minute...You mean the network can run programs without knowing what they do?
I don't get it. Can you explain it?


It is possible that I made a slight cognitive leap based on my limited understanding of Homomorphic Encryption.

In practice, (depressingly) few file formats clearly distinguish between executable code and data, often due to programmer laziness or feature creep from management [citation needed].

I was making the apparently false assumption that the key-renewal algorithm described in the Damien Stehle and Ron Steinfeld paper was Turing-complete and encrypted. However, skimming the paper without understanding most of the math, it looks like the renewal algorithm is known, but additional "hints" are stored with the public key. This extra information is used to re-encrypt/error-correct the private key stored with the encrypted data.

Edit: the one time I don't check links... or were you concerned by the lack of peer review?
legendary
Activity: 1372
Merit: 1002
...the faster algorithm only has to be written once (unless encrypted).

Wait a minute...You mean the network can run programs without knowing what they do?
I don't get it. Can you explain it?

legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
Testing on a LAN should be possible, but there will be so much overhead that it would only work as a proof-of-concept.

One thing that needs to be clarified is how you force people to write using "intelligent programming" rather than "brute force". I think it is obvious that injecting "choice bits" into the system will be more expensive than spending bitcoin. Brute-force algorithms may even involve fewer "choice bits" than much faster algorithms. Though, as you pointed out, the faster algorithm only has to be written once (unless encrypted).

I have no opinion about the viability of homomorphic encryption. I expect that for many public-interest calculations, keeping them public may be a good thing. For example, climate modeling: critics would be able to see exactly how much the data was "fudged."

full member
Activity: 215
Merit: 100
Shamantastic!
What would the time frame be for a full implementation to reach alpha, and are there any precursors or valid test cases that we can implement before a full product is available? Is the light version of this idea just as valid, or does it need to meet some threshold value between inception and full run?

FF
sr. member
Activity: 322
Merit: 251
FirstBits: 168Bc
Like quantum physics, there is no concept of "local" in MerkleWeb. Each MerkleWeb network acts as 1 computer which many people and other technology can use at the same time.

As long as we are not using quantum computers, data is stored (or its XOR RAID-5-like difference can be calculated from data stored) somewhere, no? If enough data is missing (if enough RAID-5 disks catch on fire), is the entire address space corrupted?


The rules of any specific MerkleWeb network could be set up to allow deletions.

Merkle-tree-pruning, yes. But if some code today addresses data that later 'disappears', doesn't the result of the computation become invalidated (not possible to be validated) at some point in the future?


Quote
How could we address the 'latest' version of some material? (I assume data does not change in time and space)

... There are many ways to do it. Can anyone offer some ideas?

Quote
What prevents MerkleWeb from becoming primarily a global public copy-on-write versioned filesystem? Could that swamp the benefits or would it only make the network more robust?

... (SVN) acts like a copy-on-write versioned filesystem but does it much more efficiently by only storing the changes and a little extra information for how to assemble all the changes into whole

If every address bit is { 1 | 0 | null }, then like SVN, ZFS, Btrfs, and other CoW systems, every bit in time is a cumulative difference of all historical changes from its preceding past, and 'now' is the same as TIME = ( MAX_64_BIT_INT - 1 ). I don't think this will scale at all, but it's theoretically solid.
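
A minimal sketch of that cumulative-difference reading, under my own simplifying assumptions (a map per time row holding only the bits that changed; a read walks backward to the most recent non-null write):

Code:
import java.util.HashMap;
import java.util.Map;

public class VersionedBits {
    // time row -> (address -> bit). Absent entries are the "null" (unchanged) case.
    private final Map<Long, Map<Long, Boolean>> rows = new HashMap<>();

    void write(long time, long address, boolean bit) {
        rows.computeIfAbsent(time, t -> new HashMap<>()).put(address, bit);
    }

    // The bit visible at `time`, or null if never written. The linear backward
    // scan is exactly the scaling problem mentioned above; a real store would index it.
    Boolean read(long time, long address) {
        for (long t = time; t >= 0; t--) {
            Map<Long, Boolean> row = rows.get(t);
            if (row != null && row.containsKey(address)) return row.get(address);
        }
        return null;
    }
}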


I find the idea impressive, but I don't think it will actually work. ... Even with the incredible compression you expect to achieve, I doubt a global Turing machine will need fewer than 2000 transactions per second. ... (such as generating vanity bitcoin addresses) will be possible without fully homomorphic encryption.

Some types of verified work, particularly when you wish to keep secrets from the worker, may require cutting edge encryption, but I would think most repetitive tasks can be verified through random reprocessing checks.

Many of BenRayfield's abstractions are beyond me, though fascinating. As far as I am concerned, a truly distributed generic computer (or several) is very useful and would work. It would be very nice to be able to store data in the network (unlocked by local private keys) as well as distribute processing and pay only for cycles used. In fact, the user of services, rather than the producer of services, may actually pay.

I would be more than willing to offer my local processing time, storage space, and bandwidth in exchange for credits for future distributed computation, storage, and bandwidth. I don't particularly care if those data and services are within one single MerkleWeb, or one of numerous addressable distributed grid computers. Those credits could either replace bitcoin, give it inherent value, or complement bitcoin nicely.
sr. member
Activity: 316
Merit: 250
The core design of MerkleWeb does not support privacy or encryption, but since it's a general computer those things can be layered on top.

http://en.wikipedia.org/wiki/Homomorphic_encryption
Quote
Homomorphic encryption is a form of encryption where a specific algebraic operation performed on the plaintext is equivalent to another (possibly different) algebraic operation performed on the ciphertext.

The Wikipedia page also says that a proof-of-concept of homomorphic encryption exists.

Quote
Craig Gentry[4] using lattice-based cryptography showed the first fully homomorphic encryption scheme as announced by IBM on June 25, 2009.[5][6] His scheme supports evaluations of arbitrary depth circuits. His construction starts from a somewhat homomorphic encryption scheme using ideal lattices that is limited to evaluating low-degree polynomials over encrypted data. (It is limited because each ciphertext is noisy in some sense, and this noise grows as one adds and multiplies ciphertexts, until ultimately the noise makes the resulting ciphertext indecipherable.) He then shows how to modify this scheme to make it bootstrappable—in particular, he shows that by modifying the somewhat homomorphic scheme slightly, it can actually evaluate its own decryption circuit, a self-referential property. Finally, he shows that any bootstrappable somewhat homomorphic encryption scheme can be converted into a fully homomorphic encryption through a recursive self-embedding. In the particular case of Gentry's ideal-lattice-based somewhat homomorphic scheme, this bootstrapping procedure effectively "refreshes" the ciphertext by reducing its associated noise so that it can be used thereafter in more additions and multiplications without resulting in an indecipherable ciphertext. Gentry based the security of his scheme on the assumed hardness of two problems: certain worst-case problems over ideal lattices, and the sparse (or low-weight) subset sum problem.
Regarding performance, ciphertexts in Gentry's scheme remain compact insofar as their lengths do not depend at all on the complexity of the function that is evaluated over the encrypted data. The computational time only depends linearly on the number of operations performed. However, the scheme is impractical for many applications, because ciphertext size and computation time increase sharply as one increases the security level. To obtain 2^k security against known attacks, the computation time and ciphertext size are high-degree polynomials in k. Recently, Stehle and Steinfeld reduced the dependence on k substantially.[7] They presented optimizations that permit the computation to be only quasi-k^3.5 per boolean gate of the function being evaluated.

Do you have a reason to think faster kinds of homomorphic encryption won't be found later?

Also, the binary form of software can be generated in such a confusing way that it's similar to encryption. That's why it's usually impractical to modify software without having the source code, especially for C and other low-level languages. Some ways of compiling make it easy to translate the binary back to readable form, while obfuscators are more like encryption. Combining that with just a little homomorphic encryption (which is practical speed at lower security levels) may be enough for practical privacy while the bottom layer of MerkleWeb is publicly visible and searchable.

Also, I wrote a few things about "privacy" earlier in this thread.

The idea for MerkleWeb started as a way to have reliable calculations in a global network. Privacy was an afterthought. It's not a core feature and would need to be built as a higher layer, like we can still have privacy on the Internet today while the lower layers of http://en.wikipedia.org/wiki/OSI_model are not secure.

I know the first (and maybe the newest?) version of Bitcoin isn't scalable, but the math design of Bitcoin is scale-free since each computer could do only a small part of the calculations and store only some of the blocks, making sure it all overlaps correctly.

Quote from Bitcoin's design doc http://bitcoin.org/bitcoin.pdf
Quote
If two nodes broadcast different versions of the next block simultaneously, some nodes may receive one or the other first.  In that case, they work on the first one they received, but save the other branch in case it becomes longer.  The tie will be broken when the next proof-of-work is found and one branch becomes longer; the nodes that were working on the other branch will then switch to the longer one.

New transaction broadcasts do not necessarily need to reach all nodes.  As long as they reach many nodes, they will get into a block before long.  Block broadcasts are also tolerant of dropped messages

...

7. Reclaiming Disk Space
Once the latest transaction in a coin is buried under enough blocks, the spent transactions before it can be discarded to save disk space.  To facilitate this without breaking the block's hash, transactions are hashed in a Merkle Tree [7][2][5], with only the root included in the block's hash. Old blocks can then be compacted by stubbing off branches of the tree.  The interior hashes do not need to be stored.

...

8. Simplified Payment Verification
It is possible to verify payments without running a full network node.  A user only needs to keep a copy of the block headers of the longest proof-of-work chain, which he can get by querying network nodes until he's convinced he has the longest chain, and obtain the Merkle branch linking the transaction to the block it's timestamped in.  He can't check the transaction for himself, but by linking it to a place in the chain, he can see that a network node has accepted it, and blocks added after it further confirm the network has accepted it.

As such, the verification is reliable as long as honest nodes control the network, but is more vulnerable if the network is overpowered by an attacker.  While network nodes can verify transactions for themselves, the simplified method can be fooled by an attacker's fabricated transactions for as long as the attacker can continue to overpower the network.  One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user's software to download the full block and alerted transactions to confirm the inconsistency.  Businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification.

Those parts of Bitcoin's design doc describe paths of expanding Bitcoin's design so each computer only needs to use a small part of the network. MerkleWeb will be a continuation of those ideas optimized for most calculations being deterministic and a very small fraction of calculations being digitally-signed nondeterministic "choice bits".
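
For example, the Simplified Payment Verification section quoted above reduces to recomputing a Merkle root from a transaction hash plus its branch of sibling hashes, then comparing against a trusted block header. A minimal Java sketch of that check, not Bitcoin's exact byte layout (Bitcoin double-SHA256s and has its own endianness rules):

Code:
import java.security.MessageDigest;
import java.util.Arrays;

public class MerkleBranch {
    // siblings[i] is the hash paired with ours at level i;
    // siblingOnLeft[i] says which side the sibling sits on.
    static byte[] computeRoot(byte[] txHash, byte[][] siblings, boolean[] siblingOnLeft)
            throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] h = txHash;
        for (int i = 0; i < siblings.length; i++) {
            sha.reset();
            if (siblingOnLeft[i]) { sha.update(siblings[i]); sha.update(h); }
            else                  { sha.update(h); sha.update(siblings[i]); }
            h = sha.digest();  // climb one level toward the root
        }
        return h;
    }

    static boolean verify(byte[] txHash, byte[][] siblings, boolean[] siblingOnLeft,
            byte[] headerMerkleRoot) throws Exception {
        return Arrays.equals(computeRoot(txHash, siblings, siblingOnLeft), headerMerkleRoot);
    }
}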

I'll have to think about this some more, but so far I'm very confident that MerkleWeb's design ranges from scale-free down to the square root of the efficiency you'd expect, depending on how much reliability you need, how much of the network runs on each computer, what fraction of the computers can be hacked while you still need MerkleWeb to work reliably with the remaining parts, and other things to tune.

When they created http://en.wikipedia.org/wiki/Arpanet it was slower than a dial-up modem, but its purpose was reliable communications even if much of the communications grid goes down. Since then, the Internet has come under centralized control and become a hierarchy of servers and mostly client connections, where no 2 normal Internet connections can contact each other without going through server computers, which are mostly owned by businesses. Regardless of how slow it may be, MerkleWeb would be what Arpanet was supposed to become: a decentralized, reliable communication grid that works even in severe battle conditions. In MerkleWeb, if half the Earth is nuked, the computers in the other half would experience a few seconds of delay and lose the last few seconds of clicks and button presses, but mostly continue without interruption at half speed. In general, though not taken to that extreme, that's what Arpanet was supposed to become.

Quote
I find the idea impressive, but I don't think it will actually work. As far as I can tell, the proposal is technically sound, but abstracts away real-world concerns like bandwidth and CPU time. It is like you took Computing Science to its logical conclusion.

Most digital technology is designed for use with brute-force computing strategies. MerkleWeb's design sacrifices brute-force speed for unlimited abstraction potential. An analogy: Scheme is to Common Lisp as MerkleWeb is to VMware. Scheme supports unlimited levels of recursion through http://en.wikipedia.org/wiki/Tail-call_optimization as the normal way to do control-flow, a style that would crash languages like Java with a StackOverflowError. You can run operating systems inside other operating systems inside operating systems... through VMware, for example, but it gets slower each time. Because of its potential for unlimited NAND-level abstraction of the whole network as 1 system, MerkleWeb has the potential to run virtual/emulated systems inside each other to any depth without losing any speed, like recursive functions don't get slower at deeper recursion levels. It comes down to a choice between brute-force speed and the potential for more intelligent programming. Without using the Internet as a Universal Turing Machine, that potential is not available. Example: billions of dollars could be spent to "rage against the machine" about how the Internet is centrally controlled, or we could offer a much better way to organize it, public-domain and free to everyone, as MerkleWeb. Which strategy is more effective in that case: brute force, or slower but more intelligent abstractions? Humans can think more abstractly than monkeys and tigers and elephants, which is why we are the dominant species. We're certainly not the strongest or fastest. So why continue using brute-force strategies in technology?
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
I find the idea impressive, but I don't think it will actually work. As far as I can tell, the proposal is technically sound, but abstracts away real-world concerns like bandwidth and CPU time. It is like you took Computing Science to its logical conclusion.

I think scalability has the potential to be a real problem for bitcoin. In the example given, a "VISA" level of transaction processing (2000 transactions per second) is expected to produce blocks over 1GB in size. I expect this will force many miners on home or even business Internet connections off the network (colocation with fiber access would be needed instead).

Even with the incredible compression you expect to achieve, I doubt a global Turing machine will need fewer than 2000 transactions per second. This implies a huge amount of bandwidth usage and local storage demand (for saving every fork).

I also don't think confidential processing (such as generating vanity bitcoin addresses) will be possible without fully homomorphic encryption. I found out about that concept reading this very forum :)


sr. member
Activity: 316
Merit: 250
Suppose someone decides to upload their smut collection into the address space. What incentive does anyone else have to host the data? What means do they have to remove this data locally?

Like quantum physics, there is no concept of "local" in MerkleWeb. Each MerkleWeb network acts as 1 computer which many people and other technology can use at the same time.

Quote
Can nodes hold incomplete portions to be recreated by the requester?

Yes. Theoretically every bit could come from a different node, or, more extreme than that, every bit could be compressed and be the result of some calculation spread across many nodes, so the bits don't have to exist directly anywhere in MerkleWeb as long as they can be quickly recalculated in any part of the network.

Quote
If every node purged specific data, would the network become invalidated?

...

Suppose someone uploaded false slanderous material. Could it be removed, modified?

The rules of any specific MerkleWeb network could be set up to allow deletions. As long as the state of a MerkleWeb is the result of a previous state of that MerkleWeb, as defined by the rules encoded into its starting state, it will be a valid MerkleWeb. The rules can be anything since it's a general computer. The deletion would become permanent if the network is set up to only store previous states from the last few minutes or years or whatever time range, with SVN-like compression only storing the "choice bits" and, less often, partial and overlapping snapshots of the calculations.

Quote
Suppose someone uploaded a service. Would its availability be proportional to its popularity? (something like reinforced weighting of paths within a neural network [each node hop adds data to a leaky bucket proxy cache]).

Yes. Self-Tuning Load Balancing at all levels is a core feature of MerkleWeb.

Quote
How could we address the 'latest' version of some material? (I assume data does not change in time and space)

In the 320 bit version of MerkleWeb, there are up to 2^256 virtual bits and up to 2^63 time rows of them. One way to do it would be to use each public key as a variable name and put its updated values at the current 64 bit time row and the same 256 bit address as the key. The data would always be a 320 bit address pointing at the "latest version of some material". Those 320 bits could extend 319 more bits into the space of "2^256 virtual bits" (allowing new broadcasts each cycle but possibly risking overlap with rare adjacent public keys) or, in the other direction, up 319 more time rows (allowing new broadcasts every 320 cycles). Or some sections of the "2^256 virtual bits" could be allocated for such new material and optionally deallocated later when the sequence of "material" is deleted or moved. There are many ways to do it. Can anyone offer some ideas?

Quote
Do you have thoughts on purely off-network code execution? Rather than defining an interpreter built up from MerkleAxioms (NAND, 321-bit addresses), a block of data could declare "this is an ambiguous version of javascript" followed by code including predictable public global variables (same address space, sequential time space). As long as some node runs the code locally and writes the outputs, there isn't much to verify. We can argue about whether it is truthy, but it's no longer deterministic as far as MerkleWeb is concerned.

Whether something runs inside or outside of MerkleWeb does not affect whether it can be mapped into the 320 bit address space and used as a part of MerkleWeb.

Example: Bitcoin's existing user accounts (public keys) and history of blocks could be used as a part of MerkleWeb's address space and seamlessly work with existing Bitcoin software and networks. That's because Bitcoin's behaviors are well-defined.

If something is not deterministic, it can only connect through a public key, which is used as the name of its input and output bits, called "choice bits", as it interacts with MerkleWeb. Example: People are often unpredictable, but they still need a way to interact with MerkleWeb.

It's not a problem. MerkleWeb wouldn't be useful if it couldn't interact with our chaotic world.

Quote
What prevents MerkleWeb from becoming primarily a global public copy-on-write versioned filesystem? Could that swamp the benefits or would it only make the network more robust?

http://en.wikipedia.org/wiki/Apache_Subversion (SVN) acts like a copy-on-write versioned filesystem but does it much more efficiently by only storing the changes and a little extra information for how to assemble all the changes into whole files again when an old version is requested. MerkleWeb is like SVN for the states of a general computer, while SVN is only for the states of files, not the calculations.

Quote
If I recall from ZFS, addressing the full 128-bit address space would require more energy than required to boil the oceans. Does MerkleWeb require vastly more space to minimize random allocation collisions?

The most practical size for MerkleWeb to fit with today's technology is 320 bits: 256 for the SHA256 algorithm (which is reasonably fast on normal computers, and there is some hardware designed specifically for SHA256) and 64 for a 64 bit integer (which most programming languages and hardware calculate efficiently). As explained at http://en.wikipedia.org/wiki/SHA-2 it is 1 of a few industry standards (each a different bit size and strength) for secure-hashing arbitrary bits.

224 is the smallest SHA-2 bit size with no known security vulnerabilities. If SHA256 is cracked, it doesn't break MerkleWeb, just weakens its security, since if 2 pieces of data that have the same SHA256 hashcode ever meet each other on the same computer (which has a calculation cost of roughly the square root of the whole calculation cost of the system, the birthday-paradox bound), then the contradiction would ripple out, causing recursive rollbacks until the cause was found and fixed. It could be done more efficiently than the square root of total network cost by making Merkle Trees refer to each other in the constants section in many overlapping ways, so even if SHA256 is cracked, an attacker would have to crack it from many Continuation states at once, which is an exponentially harder problem. If the security is tuned up that high, at the cost of efficiency, MerkleWeb's data-integrity (not secrecy) would be exponentially more secure than Bitcoin's.

If that kind of security is used, it could theoretically be done with a 64+64 (instead of 256+64) bit address space, since collisions would face selection-pressure against their continued existence from the extra layer of security, and the first data to be created would usually win and prevent the use of any data whose hashcode collides with it. I don't want to get into that complication yet (leave it for when or if SHA256 is cracked, and if we don't upgrade to SHA512 at that time), so I recommend 256+64 bit addresses.
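
As a concrete sketch of one plausible encoding of those 256+64 bit addresses (the field order, time row first, is my assumption; the posts above only fix the sizes):

Code:
import java.nio.ByteBuffer;
import java.security.MessageDigest;

public class Address320 {
    // 40 bytes = 320 bits: an 8-byte (64 bit) time row followed by the
    // 32-byte (256 bit) SHA256 hash of the addressed bits.
    static byte[] address(long timeRow, byte[] content) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(content);
        return ByteBuffer.allocate(40).putLong(timeRow).put(hash).array();
    }
}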

Quote
Furthermore, if you expect the time component to represent cycles (rather than seconds or blocks), perhaps 64 bits is not sufficient? Running a trillion calculations per second exhausts it in less than a year, or a few hundred years at a billion calcs per second.

Right. But most people don't want to pay more now and think ahead, so let's go for the 320 bit version and upgrade later to "the 1024 bit version" (as I wrote above), which has an address space big enough to fit an entire 320 bit MerkleWeb in 1 of its constant bit strings.

If "the full 128-bit address space would require more energy than required to boil the oceans", then the full use of a 1024 bit address space would take more energy than a "big bang".

Optional possible use of MerkleWeb on more advanced hardware later: If the infinitely-many-worlds multiverse theory is true, then Schrodinger's Cat is both alive and dead at the same time, in different branches of reality, and we would have enough computers (on parallel, mostly overlapping Earths) to use the full address space. This is supported by empirical observations in quantum experiments where objects big enough to see with the naked eye are in 2 or more places at once. The fact that Humans evolved to understand Newtonian physics does not change the fact that physics only appears to be Newtonian until you use physics with enough accuracy. This is relevant to MerkleWeb because, if many-worlds multiverse theory is true, then an infinite number of Earths, each running variations of the same MerkleWeb calculations, will always calculate the same 320 bit address for the same calculation, because SHA256 is consistent that way, so if a way to communicate between these parallel, mostly overlapping Earths was later found, MerkleWeb would already be designed to take full advantage of an Internet where packets are routed statistically in wave-interference patterns between parallel Earths. I wrote about "wave interference" in MerkleWeb's network routing behaviors being similar to quantum physics. If that theory and many-worlds multiverse theory are both true, and if MerkleWeb is run on a grid of wireless routers (because they transmit light, and all light is quantum), then MerkleWeb would literally be a quantum computer. If any of those theories are not true, then MerkleWeb would only change the world with "unity mode" for the whole Internet. It's important that MerkleWeb not depend on any of these unproven theories, just have the potential to use them on more advanced hardware later. I'm a Mad-Scientist, but I'm still a scientist and go about things in a logical way.

When I say MerkleWeb would be a 320 or 1024 bit global computer, I don't mean we have to have hardware covering the whole address space. It's mostly for avoiding collisions of hashcodes and for future expansion as technology continues advancing at exponential speed. It doesn't cost much to use a bigger address space with the same hardware, but it is very expensive to fix old systems. How much money was spent fixing 2-digit years in the year-2000 upgrades?



To everyone...

I could use some help in creating some example uses of MerkleWeb, and the technical details of what makes those examples work: how MerkleWeb could be used to do each "design pattern" in http://en.wikipedia.org/wiki/Software_design_pattern . Those "design patterns" are the most common patterns observed in software designs. If we can explain how MerkleWeb can do these patterns better than existing systems, it will be taken much more seriously.

That doesn't include simple data structures, but we need some examples of that too.

A list is easy. In the constant bit string section, a list could be n bits where n is a multiple of 320, and each group of 320 bits is a pointer to an item in the list. The list would be at the 320 bit address of the first bit of the first pointer in the list.

A set is an unordered list. It can be made to allow binary searches by ordering all pointers in the set as 320 bit unsigned integers. To check if a certain pointer is in the set, do a binary search. To combine 2 sets, do 1 merge pass of the mergesort algorithm, which is very fast.

A map/hashtable is like a set except twice as big. Each entry would be 2 pointers: the first pointer works the same as in the set, and the second is the value for that key.

Lists, sets, and maps can contain each other. Since SHA256 is a SECURE-hash algorithm, there is no direct way to have a map, set, or list that contains itself or has any path back to itself. It's an acyclic network of these programming objects. Example: Java's list, set, and map objects don't allow cycles either, since cycles cause errors in the Object.hashCode() and Object.equals(Object) functions. Java allows you to design lists, sets, and maps that can have cycles of containing themselves, but normally it's considered an error and can result in an infinite loop when hashCode() or equals(Object) is called. MerkleWeb could allow cycles of objects containing themselves by using Public Keys as variables: a list, set, or map contains the Public Key, and the Public Key later broadcasts a pointer to that list, set, or map. That doesn't interfere with the secure-hashing, so it works.
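
Here is a rough Java sketch of those flat structures, treating each 320 bit pointer as an unsigned BigInteger (my own simplification; in MerkleWeb they would be raw bits in the constant section). A set keeps its pointers sorted, so membership is a binary search and combining 2 sets is one merge pass:

Code:
import java.math.BigInteger;
import java.util.Arrays;

public class Pointers320 {
    static boolean setContains(BigInteger[] set, BigInteger pointer) {
        return Arrays.binarySearch(set, pointer) >= 0;  // set is kept sorted
    }

    // One mergesort-style pass over 2 sorted sets, dropping duplicates.
    static BigInteger[] setUnion(BigInteger[] a, BigInteger[] b) {
        BigInteger[] out = new BigInteger[a.length + b.length];
        int i = 0, j = 0, k = 0;
        while (i < a.length && j < b.length) {
            int cmp = a[i].compareTo(b[j]);
            if (cmp < 0) out[k++] = a[i++];
            else if (cmp > 0) out[k++] = b[j++];
            else { out[k++] = a[i++]; j++; }
        }
        while (i < a.length) out[k++] = a[i++];
        while (j < b.length) out[k++] = b[j++];
        return Arrays.copyOf(out, k);
    }
}

A map would be the same sorted array with entries of 2 pointers each, key then value.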

We could create a JVM on top of MerkleWeb, but it would be very complex. Instead, implementing the Scheme language is closer to what MerkleWeb naturally does. But before integration with other languages, we should create a new language from the bit and NAND level up, using lists, sets, maps, and other simple data-structures and calculations, as a proof-of-concept. Then, on top of that, create the examples of how to use MerkleWeb to do the patterns in http://en.wikipedia.org/wiki/Software_design_pattern

Why help? Bitcoin, at its best so far, was a 100 million dollar economy, created out of nothing, taking value from all dollars and moving that value into bitcoins. 100 million dollars is small compared to the trillions in the total Earth economy. I mean no disrespect to Bitcoin, since it's a very innovative and useful program, but at its core Bitcoin is a calculator that does plus and minus, with a spread-out way of creating the numbers based on "proof of work". Bitcoin is a calculator. MerkleWeb will be a computer. MerkleWeb is based on the Bitcoin security model and other similarities, but is far more generally useful. If Bitcoin can move 100 million dollars, then MerkleWeb can move trillions of dollars. If you or your business helps develop the MerkleWeb prototypes and/or ideas, or helps in any other way you know, then when it gets big your investment will pay off, as people will know you helped create it and will be very interested to work with you on other big projects. All the MerkleWeb design documents I've written at http://sourceforge.net/projects/merkleweb and here in this thread are public-domain, which means I'm not taking a cut, nobody owns it, and everyone has the right to use it, but still it's an investment that is likely to indirectly move money toward anyone who helps in a big way, and it will improve the world overall. "Unity mode" for the whole Internet, like VMware did for programs of different operating systems on 1 computer at a time, is a game-changer. Will you help?
sr. member
Activity: 322
Merit: 251
FirstBits: 168Bc
I trust you have a grasp on these low-level implementation details, which if possible might as well be magic to me until I can sink my teeth into some tech specs. Now, for higher level concerns:

Suppose someone decides to upload their smut collection into the address space. What incentive does anyone else have to host the data? What means do they have to remove this data locally? Can nodes hold incomplete portions to be recreated by the requester? If every node purged specific data, would the network become invalidated?

Suppose someone uploaded a service. Would its availability be proportional to its popularity? (something like reinforced weighting of paths within a neural network [each node hop adds data to a leaky bucket proxy cache]).

Suppose someone uploaded false slanderous material. Could it be removed, modified? How could we address the 'latest' version of some material? (I assume data does not change in time and space)

Do you have thoughts on purely off-network code execution? Rather than defining an interpreter built up from MerkleAxioms (NAND, 321-bit addresses), a block of data could declare "this is an ambiguous version of javascript" followed by code including predictable public global variables (same address space, sequential time space). As long as some node runs the code locally and writes the outputs, there isn't much to verify. We can argue about whether it is truthy, but it's no longer deterministic as far as MerkleWeb is concerned. Would that be a problem? What prevents MerkleWeb from becoming primarily a global public copy-on-write versioned filesystem? Could that swamp the benefits or would it only make the network more robust?

If I recall from ZFS, addressing the full 128-bit address space would require more energy than required to boil the oceans. Does MerkleWeb require vastly more space to minimize random allocation collisions? Furthermore, if you expect the time component to represent cycles (rather than seconds or blocks), perhaps 64 bits is not sufficient? Running a trillion calculations per second exhausts it in less than a year, or a few hundred years at a billion calcs per second.
sr. member
Activity: 316
Merit: 250
Reminds me of something MerkleWeb is partially a continuation of, "An Idea Worth Spreading - The Future Is Networks", and the idea is spreading.
http://emergentbydesign.com/2010/03/16/an-idea-worth-spreading-the-future-is-networks

Social networks of people. Merkle Tree networks of calculations and data storage. Open source networks of software. Mind maps linking ideas together. MerkleWeb could bring it all together in a public-domain unbiased way.
legendary
Activity: 3066
Merit: 1147
The revolution will be monetized!
This is an idea worth watching. One can envision many uses for such a system. Great job man!
sr. member
Activity: 316
Merit: 250
Speculative evaluation (prediction, with rollback if wrong) is needed if cycles run faster than electricity can travel from one side of Earth to the other.

I wrote in the first post:
Quote
here's the rule that makes it practical for realtime speeds: Since we only store MerkleTrees that, proven to exponential certainty, have no contradictions between their child MerkleTrees recursively, we only need to keep track of which child MerkleTrees are compatible with which other MerkleTrees, and search any MerkleTrees we receive from the peer to peer network, recursively comparing their children to the children we already have, and use only the parts of them we can prove, to exponential certainty, do not contradict each other.

That's how it works if all computers calculate the whole network. Loosening that constraint, and adding more reliability by continuously trying to deceive the network to make sure the network can handle it, we can have each computer only run a small fraction of MerkleWeb's calculations. I later wrote:

Quote
Since my design requirement is that MerkleWeb will still be secure even if 99% of the computers have been hacked into, I add the following to the MerkleWeb specification: All computers will lie in half of their broadcasts. Half will be what each computer honestly thinks is accurately part of the next state of MerkleWeb. The other half will be very close to that, except with some bit, or multiple bits, set incorrectly, in a way that is very deceptive and hard to verify is wrong. That way, when a hacker gets into MerkleWeb, it's no big deal, since all of MerkleWeb is already doing the same thing, has been expecting it, and has learned to overcome it.

Why would that work? Each contradiction added to a branch (like SVN has recursive branching) of MerkleWeb causes that branch to run anywhere between normal speed and half speed. On average, as the number of contradictions in a branch of MerkleWeb increases linearly, the speed of that branch will decrease exponentially.

So it's simple to figure out which branches of MerkleWeb have contradictions: broadcast half of your communications honestly and the other half with some hard-to-detect error, and see which of your communications come back to you after they've flowed around the network for some time. Statistically more often, trust the communications that come back with your honest broadcasts, and distrust communications that come back with your dishonest broadcasts, because dishonesty will be exponentially time-dilated until it falls into a black hole of contradictions that can't be calculated at any practical speed. In MerkleWeb, hackers fall into black holes of time-dilation.
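
A rough sketch of that scoring heuristic, under my own simplifying assumptions (a node remembers which of its broadcasts were honest and which carried a planted error, then adjusts per-peer trust by which ones come back):

Code:
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class BroadcastTrust {
    private final Set<String> honestIds;     // ids of our honest broadcasts
    private final Set<String> deceptiveIds;  // ids of our deliberately flawed ones
    private final Map<String, Integer> peerScore = new HashMap<>();

    BroadcastTrust(Set<String> honestIds, Set<String> deceptiveIds) {
        this.honestIds = honestIds;
        this.deceptiveIds = deceptiveIds;
    }

    void onEcho(String peer, String broadcastId) {
        if (honestIds.contains(broadcastId))
            peerScore.merge(peer, 1, Integer::sum);    // echoed truth: trust more
        else if (deceptiveIds.contains(broadcastId))
            peerScore.merge(peer, -1, Integer::sum);   // echoed our planted error: trust less
    }

    int score(String peer) { return peerScore.getOrDefault(peer, 0); }
}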

So yes, the design does allow Denial of Service attacks in some branches, but other branches where such DoS attacks do not happen will also exist and will be much faster, so most of the network will continue on those paths. To allow that, I add to the MerkleWeb design: any state of the system where a DoS attack is possible in the future is an error state, therefore all previous states leading to a DoS state are errors and must be traced back to an incorrect NAND calculation. This will require that the "driver" of each MerkleWeb be written in a way that does not allow DoS attacks, else that driver will be automatically rolled back, forcing programmers to write a better driver that does not allow DoS attacks or "deadlocks" (as I originally wrote this strategy for).


In general, the cost of dealing with these things is that MerkleWeb will run approximately log(number of bits in MerkleWeb) times slower: the cost of a binary search through Merkle Trees to find which trees are compatible with which other trees, using local caches of the chances of such compatibility per Merkle Tree node. It's not exactly a binary search, but the calculation is something like that: it compares the compatibility with each Merkle Tree and sometimes recurses to subsets of children. For example, if there are 10^15 bits (the amount of information in a Human brain) in a MerkleWeb network, it would run approximately 50 times slower than the normal way of running programs, since 2^50 is approximately 10^15. That 50x slowdown will be far more than paid for by new kinds of optimizations and the ability to scale up to any number of computers in the same peer to peer network, including grids of GPUs (in graphics cards), wireless mesh networks, cell phones, and other devices. That's the cost of protecting MerkleWeb from DoS attacks and keeping the network state globally consistent.

MerkleWeb is very different from how the Internet works today. Its other protection against DoS attacks, which so far have all been at a certain Internet address or part of the Internet, is that MerkleWeb has no addresses for specific hardware. There is 1 global 320 bit address space that represents CPU registers, memory, hard drive, network storage, packets flowing through routers, and all other patterns of information. There is no way to target a certain part of MerkleWeb with a DoS attack. Only the content on MerkleWeb can be targeted, and it's all spread across the Earth.