
Topic: MerkleWeb - statistical Godel-like secure-but-not-perfect global Turing Machine - page 2. (Read 12245 times)

sr. member
Activity: 322
Merit: 251
FirstBits: 168Bc
Quote
Quote
Using this strategy, it may be practical to only verify 1 in every million bits, as long as they're randomly...

Verifying data (with hash checksums) is trivial. Verifying x inputs, y algorithms, and z results is not trivial.

Changing the world is rarely easy, but MerkleWeb is worth the effort.

A bitcoin reorg occurs because of rare race conditions or an expensive malicious attack. But shouldn't lazy evaluation produce situations that make rollback more likely, more frequent, and of unpredictable depth? Intentionally producing inconsistent state (Gödel) would be an obvious DDoS attack vector, no?
sr. member
Activity: 316
Merit: 250
I believe trying to synchronize each individual cycle of all threads to a unique sequential index will be prohibitively slow, hardly useful for even highly specialized scenarios. You can't seriously expect 'real-time' interrupts anyway.

Instead, I propose a much larger sequence of cycles per block, perhaps hundreds (or millions) of cycles. Then, exactly like bitcoin, each node atomically submits a pre-computed bunch of bits as a transaction, which other nodes can verify. Bundled with other transactions, blocks are added to the public blockchain. Only the blocks are synchronized, and only blocks increment the 2^63 time rows. And just like bitcoin, blocks cannot confirm much faster than a few minutes; otherwise your latency is greater than your error/synchronization rate, and the network will spend most of its resources thrashing.

We're thinking mostly the same thing, except I want to count 1 million time rows if there are 1 million cycles per global heartbeat, so if an error is found it can be rolled back and recalculated at the cycle granularity instead of the global-heartbeat granularity. It's important to be able to roll back 1 level of parallel NANDs at a time. Rolling back NANDs normally has exponential complexity, but since we have snapshots of the past we can do it like a binary search in Merkle Tree space, actually doing it as jumping back and rolling forward, which has linear complexity, as I explained in the first post as the "bloom filter".
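The jump-back-and-roll-forward idea can be sketched as a binary search over recorded per-cycle Merkle roots for the first cycle whose root disagrees with honest recomputation. Everything here (`step`, `h`, the list of roots) is a hypothetical stand-in for MerkleWeb's actual NAND-level state transitions; the search also assumes that once an error is introduced it propagates to all later roots, and for clarity it recomputes from the start at each probe, where the linear-time version described above would roll forward from the nearest stored snapshot instead:

```python
import hashlib

def h(state: bytes) -> bytes:
    """Stand-in for the Merkle root of a state snapshot."""
    return hashlib.sha256(state).digest()

def step(state: bytes) -> bytes:
    """One simulated cycle (hypothetical: here, just a hash step)."""
    return hashlib.sha256(b"cycle" + state).digest()

def first_bad_cycle(initial: bytes, recorded_roots: list[bytes]) -> int:
    """Binary search for the earliest cycle whose recorded Merkle root
    disagrees with honest recomputation from the initial state.
    Returns len(recorded_roots) if every root checks out.
    Assumes an error, once introduced, persists in all later roots."""
    lo, hi = 0, len(recorded_roots)       # invariant: cycles before lo are good
    while lo < hi:
        mid = (lo + hi) // 2
        # "jump back and roll forward": recompute from the start up to mid
        s = initial
        for _ in range(mid + 1):
            s = step(s)
        if h(s) == recorded_roots[mid]:
            lo = mid + 1                  # prefix up to mid is consistent
        else:
            hi = mid                      # error is at or before mid
    return lo
```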

Quote
How you get from NANDs and BITs to registers, addresses, words, gotos, loops, comparisons, stacks, and arithmetic is beyond me, but I trust self-hosting/bootstrapping has been done for decades and can be done again.

I can build that, but I don't know when I'll finish. I got this far by designing it like a work of art and will continue that way. Maybe somebody with massively parallel computer-hardware design skills will help later. I'm more of a software and math person, but I can figure it out. There's a lot of overlap.

Quote
I posit that you can not have both synchronized cycles and voluntary execution.

...

On the other hand, if you expect all cycles to be synchronized, then you must require that for each delta bit in the entire address space, at least some node actually executes/verifies it (which is probably impossible).

You're right, but I don't mean it that way. When I say 1 of 2^63 possible time cycles, each 1 row of up to 2^256 virtual bits, I mean simulated time, which can be calculated later as long as such delayed calculation has no effect on the timeline. Example: there may be a variable x which is set to 5 at time cycle 1000, but you don't yet know it's 5. You set variable y to x+1 at time cycle 2000. You know you're going to read the variable y at time cycle 2010, so you have 10 more cycles to calculate x and everything that happens in the 1010 cycles before you read y. This is http://en.wikipedia.org/wiki/Lazy_evaluation and is a common optimization that allows things to be calculated only when, or if, they are needed. Other parts of the system may have read and written x and y, and that may depend on some permissions system we would build on top of the public-key system, which is built on top of the NAND system. Network communications are therefore part of the lazy evaluation and may push the evaluation farther into the future before the true state of the system can be known, but it will converge because of the "bloom filter" explained in the first post of this thread.
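The x/y example above can be sketched with ordinary thunks; a minimal sketch, with the cycle numbers from the text shown only as comments:

```python
class Thunk:
    """Lazily evaluated cell: the expression is stored at 'write time'
    but only computed when (and if) some later cycle reads it."""
    def __init__(self, expr):
        self._expr = expr          # zero-argument function
        self._value = None
        self._done = False

    def force(self):
        if not self._done:
            self._value = self._expr()
            self._done = True
        return self._value

# cycle 1000: x is set to 5, but nothing is computed yet
x = Thunk(lambda: 5)
# cycle 2000: y = x + 1 is recorded, still without computing x
y = Thunk(lambda: x.force() + 1)
# cycle 2010: y is finally read, which forces x and then y
print(y.force())   # → 6
```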

As the technical documents and this thread already describe how to build these parts, MerkleWeb is Turing Complete http://en.wikipedia.org/wiki/Turing_complete, massively multithreaded http://en.wikipedia.org/wiki/Parallel_computing, and has database-strength concurrent integrity of data http://en.wikipedia.org/wiki/Database, but it can't Deadlock http://en.wikipedia.org/wiki/Deadlock, because at the moment of deadlocking it will roll back to a previous state (as close to the deadlocked state as possible) and try absolutely every possible permutation of the system until it finds one that does not deadlock, with optimizations for which paths to search first so it runs fast. It may run incredibly slowly if you try to deadlock it, but it will never get stuck.

The rule for avoiding deadlocks is very simple: any state of the system that can possibly lead to a deadlock is an error. Therefore all previous states of a deadlocked state are errors and must be rolled back until an incorrect NAND calculation is found; then recalculate that NAND and proceed on the correct path. (This does leave the "choice bits" in an uncertain state, and I'm not sure what to do about that, but in the worst case we leave them unchosen.)
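That rule amounts to backtracking: treat any state from which only deadlocks are reachable as an error and roll it back. A toy sketch over a hypothetical state graph (the names `transitions`, `deadlocks`, and the lock-ordering states are invented for illustration; a visited set would be needed for cyclic graphs):

```python
def deadlock_free_path(state, transitions, deadlocks, goal):
    """Depth-first backtracking: any state from which every continuation
    deadlocks is treated as an error and rolled back, and the next
    permutation is tried, until a deadlock-free path is found."""
    if state in deadlocks:
        return None                       # this branch is an error: roll back
    if state == goal:
        return [state]                    # survived: accept this path
    for nxt in transitions.get(state, []):
        tail = deadlock_free_path(nxt, transitions, deadlocks, goal)
        if tail is not None:
            return [state] + tail
    return None                           # every permutation deadlocks

# Toy lock-ordering graph: the path through 'ab' deadlocks, 'ba' does not.
transitions = {"start": ["ab", "ba"], "ab": ["dead"], "ba": ["done"]}
path = deadlock_free_path("start", transitions, {"dead"}, "done")
# → ["start", "ba", "done"]
```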

Quote
Verifying data (with hash checksums) is trivial. Verifying x inputs, y algorithms, and z results is not trivial.

Changing the world is rarely easy, but MerkleWeb is worth the effort.

Quote
It seems to me that each block (time slice) must be verifiable in its entirety by any single node (just like all bitcoin transactions in a single block need to be verified by the rewarded miner).

It will be. Any part of the system, including past states and SVN-like branches with their own past states that branch and merge, can be requested by any computer as the 320-bit address of a Merkle Tree root whose leaves represent that state of MerkleWeb. That is exponentially optimized by each computer remembering things about many Merkle Trees between the roots and leaves of the many Merkle Trees (a merkle forest).
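A minimal sketch of that addressing scheme: a state is a list of leaf bit strings, its root hash addresses the whole state, and a node caches trees it has already verified in a "forest" so overlapping requests are served incrementally. The leaf contents and the `forest` dict are illustrative only:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root of a binary Merkle tree; the root alone addresses the whole state."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A "merkle forest" cache: remember states already verified, keyed by root,
# so any computer can request a state by its root address alone.
forest: dict[bytes, list[bytes]] = {}

state = [b"addr-range-0:0101", b"addr-range-1:1100", b"addr-range-2:0011"]
root = merkle_root(state)
forest[root] = state                       # serve this state when asked by root
```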

But that's not the difficulty. We need a way to make sure every part of the network is verified, directly or indirectly (as I explained above with the "deceptive" bits), by enough computers that we can be certain at least 1 of them is not an attacker.

Quote
It's the past history that we can relax. We can assume that since no one has complained thus far, then we can be confident that the cumulative hash value of the previous ten years up to last month is probably true, but we still need to hash each current state in its entirety and hash it again against the previous cumulative state.

Yes, like in Bitcoin.
sr. member
Activity: 322
Merit: 251
FirstBits: 168Bc
I believe you would need 'big heartbeats' of arbitrary (though constrained) duration.

Not duration. It has to be a predictable number of cycles per global heartbeat so the computers can align each of the 2^63 possible time rows. ... ... ... unless it's tuned toward lower accuracy for more speed.

I believe trying to synchronize each individual cycle of all threads to a unique sequential index will be prohibitively slow, hardly useful for even highly specialized scenarios. You can't seriously expect 'real-time' interrupts anyway.

Instead, I propose a much larger sequence of cycles per block, perhaps hundreds (or millions) of cycles. Then, exactly like bitcoin, each node atomically submits a pre-computed bunch of bits as a transaction, which other nodes can verify. Bundled with other transactions, blocks are added to the public blockchain. Only the blocks are synchronized, and only blocks increment the 2^63 time rows. And just like bitcoin, blocks cannot confirm much faster than a few minutes; otherwise your latency is greater than your error/synchronization rate, and the network will spend most of its resources thrashing.


Quote
We can't use any high level programming abilities without building them first. It is circular-logic, and that's why the requirement is that we build the first layer as a quine.

You've sold me on the elegance of the absolute fewest axioms. Nearly all applications will first shebang some interpreter (or decompressor) followed by a script and data. How you get from NANDs and BITs to registers, addresses, words, gotos, loops, comparisons, stacks, and arithmetic is beyond me, but I trust self-hosting/bootstrapping has been done for decades and can be done again.


Quote
Quote
Inclusion in a block asserts valid code execution and resultant state. However, unlike bitcoin, I don't think the MerkelWeb needs to verify the ENTIRE address space.

Each computer in MerkleWeb will only verify a small part of the address space by broadcasting their best knowledge of the correct state of part of the address space and other times broadcasting that same state but with some incorrect bits.

I posit that you can not have both synchronized cycles and voluntary execution.

If every submitted transaction represents some execution performed by a node voluntarily, and a block represents a collection of transactions verified by some miner voluntarily, then you can synchronize only those voluntarily executed blocks of cycles.

On the other hand, if you expect all cycles to be synchronized, then you must require that for each delta bit in the entire address space, at least some node actually executes/verifies it (which is probably impossible).

I don't see any way around it: Unloved code can not be synchronized. If no node bothers to execute a thread, it is a cryogenic zombie and can not be retro-actively executed.


Quote
Using this strategy, it may be practical to only verify 1 in every million bits, as long as they're randomly

Verifying data (with hash checksums) is trivial. Verifying x inputs, y algorithms, and z results is not trivial.


Quote
but I'm not sure what the scaling cost is for each computer running only a small fraction of the program and still guaranteeing no bit or NAND calculation will ever be done wrong and assuming most of the network may be hostile computers. If we relax some of these constraints, it will scale better. There's lots of tradeoffs to tune.

It seems to me that each block (time slice) must be verifiable in its entirety by any single node (just like all bitcoin transactions in a single block need to be verified by the rewarded miner). It's the past history that we can relax. We can assume that since no one has complained thus far, then we can be confident that the cumulative hash value of the previous ten years up to last month is probably true, but we still need to hash each current state in its entirety and hash it again against the previous cumulative state.
sr. member
Activity: 316
Merit: 250
I believe you would need 'big heartbeats' of arbitrary (though constrained) duration.

Not duration. It has to be a predictable number of cycles per global heartbeat so the computers can align each of the 2^63 possible time rows, or in the 1024-bit version, each of the 2^511 possible time rows (no year-2000-type bugs to fix here, and we get constant bit strings or files exceeding 1 googolbit, which can be derived at runtime instead of stored). We would obsolete the 320-bit MerkleWeb, but the 1024-bit version will last us all the way up to a technology singularity, since its address space is big enough to represent every particle/wave in the entire part of the universe we've seen so far, which is billions of lightyears wide, plus SVN-like branching of all that, if we knew how to program that into it and had enough hardware to run all those calculations and store all that data. Let's plan ahead.

Also, I extend my theory that physics may operate similarly to MerkleWeb with the following: parity http://en.wikipedia.org/wiki/Parity_%28physics%29, like one direction of rotating electricity causing perpendicular electricity through the circle in one direction instead of the other, may be caused by something like the bit rotations and other nonsymmetric math in http://en.wikipedia.org/wiki/Sha512 or other secure-hash algorithms, but in a more analog way, like we observe in qubits.

Back to practical technical stuff...

Quote
I should be able to set some data and code in time and space, and then after sufficient validation, to share that data and code as a library.

Such libraries would be stored the same way as emulated hardware definitions: as bit strings in the constants section. We need some way to apply its 320-bit address to another 320-bit address like a function call, and it needs to be defined in terms of NAND. We can't use any high-level programming abilities without building them first. It is circular logic, and that's why the requirement is that we build the first layer as a quine. http://en.wikipedia.org/wiki/Quine_(computing)

Quote
Using the (three?) axioms: BIT, NAND, and a 320-bit POINTER, I begin by defining a basic assembly language with XOR, IOR, AND, NOT, multiplication, bit shifts, loops, comparisons, etc.

Ok. The standard low level calculations from programming languages.

Quote
This would require enormous amounts of data (because each address is 1/3 kibibit - I think you need much smaller relative address 'words').

No. You only spend the data to create each operator as a function once. Then you create more complex functions using those simpler functions. Each function can cost many bits to create, but once they're created, calling a huge function to do many calculations is very cheap: some small multiple of 320 bits. Run completely in emulated mode it would be that slow, but more frequently used parts of MerkleWeb could, for example, compile down to Java bytecode at runtime, so they use 32- or 64-bit native addresses and native integer and floating-point math. There are many possible optimizations.
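This build-once-reuse-everywhere pattern is the standard NAND bootstrap. A sketch in Python, where each operator is defined once in terms of simpler ones and then reused; the full adder stands in for the "huge function" that becomes cheap to call once its parts exist:

```python
def NAND(a: int, b: int) -> int:
    """The single primitive; everything else is composed from it."""
    return 1 - (a & b)

# Each operator is defined once in terms of simpler ones, then reused.
def NOT(a):      return NAND(a, a)
def AND(a, b):   return NOT(NAND(a, b))
def OR(a, b):    return NAND(NOT(a), NOT(b))
def XOR(a, b):   return AND(OR(a, b), NAND(a, b))

# A 1-bit full adder built only from the functions above.
def full_adder(a, b, carry_in):
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out
```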

Quote
I would expect its inclusion in the MerkelWeb to be atomic (like your version control analogies).

Yes. The whole system will have the concurrency ability of a database by default, unless it's tuned toward lower accuracy for more speed.

Quote
Each heartbeat is synonymous with a verified bitcoin-esque block consisting of numerous cycles.

...

it takes a few blocks to unambiguously confirm global state.

Yes. The number of heartbeats in the time it takes information to travel from all computers in the network to all other computers is the average verification window.

Quote
Inclusion in a block asserts valid code execution and resultant state. However, unlike bitcoin, I don't think the MerkelWeb needs to verify the ENTIRE address space.

Each computer in MerkleWeb will only verify a small part of the address space, by broadcasting their best knowledge of the correct state of part of the address space and other times broadcasting that same state but with some incorrect bits. As the 2 versions (1 true and 1 deceptive) flow through the network and eventually come back, statistically, new data containing the locally-true broadcast will come back more often than new data containing the locally-deceptive broadcast. This is because contradictions get rolled back, and deception causes contradictions, so the correctness of calculations can be measured by how well they survive as they flow through long paths in the network. We don't need to know what paths they took through the network, just that they come back in many different Merkle Trees that each cover a different subset (overlapping each other in many ways) of the address space. You're right that the entire address space doesn't need to be verified. It's verified indirectly.
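The claim that deceptive broadcasts die out can be illustrated with a toy Monte Carlo run. Everything here is invented for illustration: an 8-bit "state", peers that each know a random subset of bits and reject mismatches, and survival measured over random 5-hop paths:

```python
import random

random.seed(7)

TRUE_STATE = 0b1011_0110            # the locally known correct bits

def verifier_accepts(state: int, known_mask: int) -> bool:
    """An honest peer checks only the bits it happens to know
    (known_mask) against the true state; mismatches are rejected."""
    return (state & known_mask) == (TRUE_STATE & known_mask)

def survival_rate(state: int, hops: int, trials: int) -> float:
    """Fraction of random network paths along which the state survives
    every verifier's partial check."""
    survived = 0
    for _ in range(trials):
        ok = True
        for _ in range(hops):
            known_mask = random.getrandbits(8)   # each peer knows random bits
            if not verifier_accepts(state, known_mask):
                ok = False
                break
        survived += ok
    return survived / trials

deceptive = TRUE_STATE ^ 0b0000_0100             # flip one bit
true_rate = survival_rate(TRUE_STATE, hops=5, trials=2000)
fake_rate = survival_rate(deceptive, hops=5, trials=2000)
# the true broadcast survives long paths far more often than the deceptive one
```

Per hop, a single flipped bit is caught with probability about 1/2, so surviving 5 hops happens only about (1/2)^5 of the time, while the true state always survives.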

Using this strategy, it may be practical to only verify 1 in every million bits, as long as they're randomly chosen in many different ways by many different computers and verifying many different SVN-like branches of the network to cover cases where certain code does and does not run.

I said earlier "the algorithm is nonlocal, exponentially error tolerant, and scale-free." That's true if every bit is verified and all computers run the whole program, like all computers in a Bitcoin network (at least in earlier versions; did they change that yet?), so any number of computers can agree on a shared calculation that way. But I'm not sure what the scaling cost is when each computer runs only a small fraction of the program while still guaranteeing no bit or NAND calculation will ever be done wrong, assuming most of the network may be hostile computers. If we relax some of these constraints, it will scale better. There are lots of tradeoffs to tune. For a global voting system, it's a small enough number of calculations that we can verify all the bits, and that was the original use case.

Quote
Perhaps you might add a third dimension, addressing isolated MerkelBranches, each with its own manageable blockchain. Personally, I don't want to spend a single one of my local cycles thrashing on some idiot's infinite loop.

Each MerkleWeb network can have its own way of allocating computing resources. Supply and demand will push it toward getting out as much as you put in, so that won't be a problem.

Quote
I prefer a DVCS analogy. I can arbitrarily clone and pull from any MerkleBranch (repo). MerkleBranches can reference across other MerkleBranches.

Yes. That's a core feature of MerkleWeb.

Quote
In theory my fibonacci routine, once executed, will run forever, unless it segfaults on pre-allocated finite memory. However, I rather think that some node must actively execute the code and submit the state as a transaction; otherwise, if no node applies cycles, a program halts indefinitely.

...

execution should be every node's prerogative.

In an open source system, nobody can force computers to continue calculating anything they don't want to. Of course they could all agree to calculate something else, but if it breaks the rules defined in the NANDs and previous bit states, that would not be a MerkleWeb. Only "choice bits" can change the path of calculations. If you're worried about not having a preferred future available, build in a MerkleWeb kill-switch made of choice bits that a majority agreement of public keys (as defined in the NAND logic) can activate to change the path. That way, when all those people and their computers choose to calculate something different, it's still a MerkleWeb and is just another branch.

Quote
What's the incentive? https://bitcointalksearch.org/topic/bitcoin-as-computational-commodity-49029 Verification of code and execution state included in a block is undeniable proof of local resource allocation. The miner can be rewarded with computational and storage credit, which like bitcoin can be used as fee, to inject new code and state into the MerkleWeb.

It would be useful to have a lower-level similarity to Bitcoin than a "driver" (as I wrote above) written on top of the NAND level. Allocation of computing resources is important enough that maybe we should make an exception and put it at the core, parallel in importance to NAND and bits, but only if it can be done without complicating the design much. As a driver, it would add complexity in the form of more needed optimizations, but the core would be simpler, which is very valuable.

I read that thread. It's a good strategy to boot up this system as a way to buy and sell (or, for purposes of avoiding the obligation to pay taxes, the moneyless trading of) anonymous computing power and network storage. Let's do it.

How should we get started?
sr. member
Activity: 322
Merit: 251
FirstBits: 168Bc
I believe you would need 'big heartbeats' of arbitrary (though constrained) duration. I should be able to set some data and code in time and space, and then, after sufficient validation, to share that data and code as a library. Using the (three?) axioms: BIT, NAND, and a 320-bit POINTER, I begin by defining a basic assembly language with XOR, IOR, AND, NOT, multiplication, bit shifts, loops, comparisons, etc. This would require enormous amounts of data (because each address is 1/3 kibibit - I think you need much smaller relative address 'words'). I would expect its inclusion in the MerkelWeb to be atomic (like your version control analogies).

Each heartbeat is synonymous with a verified bitcoin-esque block consisting of numerous cycles. Inclusion in a block asserts valid code execution and resultant state. However, unlike bitcoin, I don't think the MerkelWeb needs to verify the ENTIRE address space. Perhaps you might add a third dimension, addressing isolated MerkelBranches, each with its own manageable blockchain. Personally, I don't want to spend a single one of my local cycles thrashing on some idiot's infinite loop. I prefer a DVCS analogy. I can arbitrarily clone and pull from any MerkleBranch (repo). MerkleBranches can reference across other MerkleBranches.


Quote
Is each 'transaction' essentially a single memory space, or stack, and entire execution history recorded as a merkle tree since some earlier verified (fork) point in time/memory?

Yes, like a simplified version of http://en.wikipedia.org/wiki/Apache_Subversion that versions and branches the whole memory and computing state and virtual hardware design at the NAND level.

I imagine a transaction as a single node's assertion of code and data. I might allocate 100n merkleBits of a fibonacci calculator and submit that code as a single atomic transaction to the MerkleWeb. Since it is static code/data, it should have no trouble getting verified and included in a MerkleBlock by peer miner nodes. Next, I set a 'thread pointer' to the top of my routine and calculate twenty iterations of the series and submit the state and output register as my second transaction. Now a peer miner would need to re-execute my code and verify that yes indeed, he agrees with the state, before including the data, code, and execution state into another block.

In theory my fibonacci routine, once executed, will run forever, unless it segfaults on pre-allocated finite memory. However, I rather think that some node must actively execute the code and submit the state as a transaction; otherwise, if no node applies cycles, a program halts indefinitely.

What's the incentive? Verification of code and execution state included in a block is undeniable proof of local resource allocation. The miner can be rewarded with computational and storage credit, which like bitcoin can be used as fee, to inject new code and state into the MerkleWeb.


MerkleWeb will handle programs that do not halt (including infinite loops) by running them 1 cycle at a time.

I disagree with this. As written above, I believe execution should be every node's prerogative. A thread is indefinitely halted if no node bothers to continue executing it (or pays credits for other nodes to execute it). Each transaction is an assertion of an arbitrary number of cycles and/or injection of data/code.


Theres still some things to figure out about how to rollback [...] It always runs in debug mode.

Quote
How do we reconcile multiple executions of the same 'program' with different inputs/race conditions?

If any 2 MerkleTrees are both thought to be probably correct by the local computer (or wireless Turing Complete router), then it must trace the contradiction back [...] This is the part that's similar to quantum physics. [...] "different inputs/race conditions".

This is perfectly analogous to a double-spend, mining blocks, and reorg. There is only one longest chain of events, but it takes a few blocks to unambiguously confirm global state. It would require enormous computational power to change reality, "deja vu" Matrix-style.
sr. member
Activity: 316
Merit: 250
Should we understand the MerkleWeb as a shared memory heap?

Most software divides memory into mostly stack and heap. In MerkleWeb, there is no such division. Code, data, and control flow are done through many continuations http://en.wikipedia.org/wiki/Continuation implemented as acyclic forests of Merkle Trees where the leaves specify a subset of address ranges and their contents as 0s and 1s. Other data in the constants section includes NAND definitions of the logic to be implemented between snapshots of the bit states, or any other data you want to put in MerkleWeb.

Using this strategy, a large word doc, for example, would compress down to a little more than the buttons you pushed to create it, minus their similarity to the buttons other people push to create similar word docs.

The memory space is a 2D matrix, 2^256 bits wide and 2^64 bits in the time dimension. The first 2^63 rows are time cycles. The first cycle of the network starts at row 0 and proceeds in units of "global heartbeats" (or many cycles inside each "global heartbeat", which is faster but harder to sync) which all computers in a MerkleWeb time-synchronize on. The next 2^63 rows are for constant bit strings. Every bit in every constant bit string has its own address. Like strings in C (but with bit addresses instead of byte addresses), the address of a constant bit string is the address of its first bit. The address of bit b in a bit string with SHA256 hash h is the bits of h followed by the bits of b, as a 320-bit address. The addresses in the Turing Complete section and the constants section can each point into the complete address space.
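A minimal sketch of that constant-section addressing, assuming the layout stated above (256 hash bits followed by a 64-bit bit offset; the helper name is invented):

```python
import hashlib

def constant_bit_address(data: bytes, bit_index: int) -> int:
    """320-bit address of bit b in a constant bit string:
    the 256 bits of the string's SHA256 hash h, followed by
    the 64-bit index b of the bit within the string."""
    assert 0 <= bit_index < 2 ** 64
    h = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return (h << 64) | bit_index

# The address of the whole string is the address of its first bit (b = 0).
string_address = constant_bit_address(b"some constant bit string", 0)
```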

Quote
Does the MerkleWeb have 'blocks' a la bitcoin?

All addresses refer to a specific 1 of 2^320 virtual bits. It's not byte or word addressing like most hardware uses. In that system, blocks are addressed by their first bit and extend forward in the time dimension until they roll over to time 0.

MerkleWeb doesn't have a specific block data format. It supports all possible kinds of Merkle Trees in the constants section, where they are made of some combination of 320-bit addresses (or compressions of them?) and each have a constant 320-bit address. Bitcoin's existing blocks could be mapped into this address space using a map from the kinds of secure-hash addressing Bitcoin uses to SHA256 secure-hash addressing. The simplest kind of Merkle Tree node would be 640 bits: 2 pointers of 320 bits each. Its leaves would be a different data format, each specifying an address range (often partially overlapping other Merkle Trees' address ranges) and its contents as 0s and 1s.
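The simplest 640-bit node can be sketched directly from the layout above: two 320-bit child pointers concatenated, with the node itself addressed as a constant bit string (hypothetical helper names; leaf encodings are left out):

```python
import hashlib

POINTER_BITS = 320

def node_bits(left_ptr: int, right_ptr: int) -> int:
    """The simplest interior node: two 320-bit child pointers
    concatenated into one 640-bit value."""
    assert left_ptr < 2 ** POINTER_BITS and right_ptr < 2 ** POINTER_BITS
    return (left_ptr << POINTER_BITS) | right_ptr

def node_address(node: int) -> int:
    """A node is itself a constant bit string, so its 320-bit address is
    the SHA256 hash of its 640 bits followed by a 64-bit offset of 0
    (the address of its first bit)."""
    raw = node.to_bytes(640 // 8, "big")
    h = int.from_bytes(hashlib.sha256(raw).digest(), "big")
    return h << 64
```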

Quote
Is each 'transaction' essentially a single memory space, or stack, and entire execution history recorded as a merkle tree since some earlier verified (fork) point in time/memory?

Yes, like a simplified version of http://en.wikipedia.org/wiki/Apache_Subversion that versions and branches the whole memory and computing state and virtual hardware design at the NAND level. All that is a state of the program, and since it's all virtual, MerkleWeb is completely stateless. Even the hardware it runs on (whatever it is) must be defined in terms of NANDs in a constant bit string (whose address is used as the name of that calculation), so the specific kind of hardware is part of the state and will sometimes change between 2 "global heartbeat" cycles - for example, if an evolvable hardware system printed new NAND gates onto itself in the middle of adding 2 numbers, or if MerkleWeb is told how to use your CPU's native ability to add numbers, so the last half of that math is done more efficiently. All these things are transactions, and they can be done in hardware if it's there or fall back to software emulation if the hardware fails in the middle of the transaction.

Quote
Any node can voluntarily recreate any execution history?

Yes, on the condition that the network is still storing the Merkle Trees whose leaves are that part of the history. Which Merkle Trees exist at a certain time is also part of the history, so they can point at each other in many overlapping ways (sometimes, and mostly for older, more important parts of the history, to add more reliability against the possibility of SHA256 being cracked) and into the Turing Complete section.

Quote
Does each program/merkle tree run until exit, or can its entirely deterministic state and history be recorded in a block mid-execution?

Mid-execution. MerkleWeb will handle programs that do not halt (including infinite loops) by running them 1 cycle at a time. Each MerkleWeb network will have a constant integer number of cycles to run each Global Heartbeat. Example: if MerkleWeb is running mostly on grids of GPUs (like many people use for mining Bitcoins), then that number of cycles would be the approximate number of graphics card cycles done in the time it takes to do an average network hop, minus a little extra time for the slower hops to catch up and to rollback and recalculate any who do not stay in sync.

It doesn't just store the previous state. It stores many previous states, knowing some part of some states and filling in the gaps from the NAND definitions of how calculations must proceed, except for "choice bits", which have no deterministic definition since they are for user interaction. There's still some things to figure out about how to roll back "choice bits" (does it erase to unknown, stay the same, predict what the user would do in the new state, or some combination?). Every state is actually many states at different times and addresses. Continuing from a state "mid-execution", or from an overlapping combination of many states, is a continuation http://en.wikipedia.org/wiki/Continuation. MerkleWeb is a debugger of MerkleWeb. There is no production or normal mode. It always runs in debug mode.

Quote
How do we reconcile multiple executions of the same 'program' with different inputs/race conditions?

If any 2 Merkle Trees are both thought to be probably correct by the local computer (or wireless Turing Complete router), then it must trace the contradiction back, on a path of NAND calculations and previous snapshots of parts of the address space, to a state of MerkleWeb where no contradictions existed. Any parts of the program needed to do that which it doesn't have can be requested by their addresses: if they are in the constants section, they are verified with SHA256 and their bit length to be at the requested 320-bit address; if they are in the Turing Complete section, they are calculated from the smaller amount of information given in the constants section about the NAND calculations and previous states. Rollbacks will flow through the network until all contradictions are eventually resolved, except there will always be many new contradictions in the last few seconds.

This is the part that's similar to quantum physics. http://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser is a more complex form of the quantum double-slit experiment where observing which of some future paths a particle/wave takes causes a wave-interference pattern (after an earlier double-slit) to change to more like the sum of 2 bell-curves (no wave interference). How could what happens in the future determine the past of a different branch? One of my theories is reality operates a little like MerkleWeb, rolling back changes on mostly the same path (of network hops in MerkleWeb or in physics I'd say networks of peer-to-peer associated "virtual particles") they proceeded when a contradiction is observed. Since Internet wires or wireless routers have some maximum speed (which is the speed of light in the case of wireless routers since they transmit light), accessing "choice bits" or other unknown information that are written farther away than information could travel there in time, causes a "race condition", which causes global rollbacks and recalculations of all affected calculations (but not of anything which wasn't affected, so it scales up well), which would appear to a simulated life form in such calculations that the other simulated thing which is moving faster than light is moving backward in time, and when it moves near the speed of light it gets time-dilated toward not experiencing time at all. Also similar to relativity, mass increases toward infinity as speed approaches light speed, if we define number of calculations needed to rollback and recalculate in the whole network as mass. That is only statistically true since going faster than light tends to create contradictions which would have to be rolled back to whatever created them (which would be the rule in the program that allows you to communicate faster than light in a way that creates contradictions) and we'd have to take a different Continuation path. 
This suggests, depending on how similar to physics MerkleWeb really is, that in real physics we could move some things faster than light as long as we don't create any contradictions. In that sense MerkleWeb is similar to relativistic quantum physics. That's how MerkleWeb deals with "different inputs/race conditions".
sr. member
Activity: 322
Merit: 251
FirstBits: 168Bc
Should we understand the MerkleWeb as a shared memory heap? Is each 'transaction' essentially a single memory space, or stack, and entire execution history recorded as a merkle tree since some earlier verified (fork) point in time/memory? Any node can voluntarily recreate any execution history? Does the MerkleWeb have 'blocks' a la bitcoin? Does each program/merkle tree run until exit, or can its entirely deterministic state and history be recorded in a block mid-execution? How do we reconcile multiple executions of the same 'program' with different inputs/race conditions? Have I processed an invalid path of confusion?

As for monetization/incentive: processing cycles, memory storage, and bandwidth are each quantifiable and valuable commodities.
sr. member
Activity: 316
Merit: 250
MerkleWeb will be an open source tool that anyone can use to do whatever they want. There will be many competing MerkleWeb networks, which will probably eventually converge into 1 big MerkleWeb network that runs all the others as software controlled by whatever democratically chosen rules we globally agree on. The ability to create your own network if you don't like existing networks will keep the total value of each MerkleWeb positive, unlike existing governments and other organizations which are forced on us in a monopoly way. In this environment of many competing MerkleWebs, some will have private-keys associated with certificate-authorities (like those that web browsers come with built in), peer-to-peer "web of trust" systems for collectively agreeing on whose private-key is associated with identities or other things, optional USB identity sticks given out with government ID cards, whatever other kinds of identity systems people come up with, or anonymous private-keys like Bitcoin uses. Private-key/public-key pairs are tools to build things with. As a general computer, MerkleWeb could use public-keys, and the data published with the private-key, like any other variable in software. All possible uses of private-keys.
legendary
Activity: 1652
Merit: 2311
Chief Scientist
The only way to convince MerkleWeb to calculate anything different is with "choice bits" (as I wrote above), which are kind of like network ports mapped into certain global 320-bit addresses. Each choice bit can only be written by a certain private-key. Each group of choice bits has a public-key that everyone can see to verify the digitally-signed bits from the private-key.

Ok.  So who controls the private keys?
sr. member
Activity: 316
Merit: 250
For the decentralized voting, the anonymity is still a problem. Whoever distributes the votecoins can know exactly how each participant votes. I still don't see a way around it.

Encryption is made of NAND calculations, like everything else. MerkleWeb will also support private-keys that are not stored in the system (for digital-signatures, like in Bitcoin). My intuition is that some recursive combination of those will be able to generate privacy algorithms, but we'll have to think about it.

MerkleWeb will support emulators inside emulators to unlimited depth, with paravirtual optimizations (like VMWare uses to speed up emulated Windows, for example). Recursive emulation levels will be as easy and efficient as recursive function calls since it's designed in from the start. This is an unprecedented level of emulation flexibility. I think we can figure out a way to have privacy with the bottom layer of bits and NANDs being visible to everyone.

Even if it turns out MerkleWeb can't support any kind of privacy, isn't it better to have a working unhackable peer to peer voting system where everyone can see your private-key's votes, than a private vote where we have to trust governments with our votes and to tell us what we're allowed to vote on?


If you can find G. Spencer Brown's "Laws of Form" see if you can read it and make sense of it.

It is an excellent, extremely minimalistic "form" that treats the empty page as "or", while "the mark" (the label, the distinction, that which makes distinguishable marks upon the page, that which is distinguishable from the emptiness that is taken as "or") is taken as "not".

Thus if we use "being in parentheses" as "the mark", (a) (b) would represent (not a) or (not b).

He proves that all lemmas can be stated in forms not more than two deep, such as ((a)(b)).

It is a very useful notation because you can cancel things out very easily, and all that remains is the solution to the lemma.

My intuition is http://en.wikipedia.org/wiki/Laws_of_Form is closely related to NAND and MerkleWeb's core design, but I'll wait until I read more about it before answering your posts.
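In the meantime, the correspondence the notation suggests can be brute-force checked. Under the quoted reading (empty space is OR, the mark is NOT), the two-deep form really is NAND, which is a quick truth-table exercise:

```python
from itertools import product

def NAND(a, b):
    return not (a and b)

# Laws of Form, per the quoted reading: juxtaposition in the empty page is OR,
# and enclosing in "the mark" (parentheses) is NOT.
for a, b in product([False, True], repeat=2):
    # (a)(b)  =  (not a) or (not b)  =  NAND(a, b)
    assert ((not a) or (not b)) == NAND(a, b)
    # ((a)(b))  =  not((not a) or (not b))  =  a and b   (De Morgan)
    assert (not ((not a) or (not b))) == (a and b)
```

So the two-deep form ((a)(b)) the book proves sufficient is exactly AND, and one level of nesting less gives NAND.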

In a zen way, I think the kolmogorov-complexity http://en.wikipedia.org/wiki/Kolmogorov_complexity of the universe is 0. That means the universe contains no information overall, but every smaller subset of it contains information. "All integers" has more integers and less information than "all integers except 42". Similarly, the universe overall can contain less information than its parts individually. "The kolmogorov complexity of the universe is 0" is the simplest way to say the universe is all possibilities you can write in math, so it's equivalent to Max Tegmark's "Mathematical Universe Hypothesis" (also known as the Ultimate Ensemble multiverse theory).

In the Laws Of Form, "The kolmogorov complexity of the universe is 0" is the "void", "unmarked state", and "nothing". I also call it "unity". It's simultaneously everything (all possibilities you can write in math) and nothing, since "everything" and "nothing" are isomorphic except for the labels "everything" and "nothing", and labels don't change content, so they are equal. This is the fundamental paradox of all reality. laws_of_form_cross(laws_of_form_void) means everything except laws_of_form_void, but since unity (kolmogorov complexity 0) is simultaneously everything and nothing, everything except laws_of_form_void equals laws_of_form_void.

That's why consciousness exists. Consciousness (or call it the experience of reality or quantum wavefunction) is laws_of_form_cross, and unity (kolmogorov complexity 0) is laws_of_form_void. Supporting that view is this quote from Wikipedia and the book:

re: voting,

if a trusted third party is willing to sign your key to prove that you're a human being, you can participate in a particular vote. (i.e. the US govt (or my company) can have a known key. they sign your key. you authenticate to the voting server by decrypting a gpg-encrypted challenge string, proving ownership of your private key. if the government's key signature checks out, you're allowed to vote.)

I don't see why we should involve governments in our elections. Democracy tells government what to do. Let's set up our own elections through the MerkleWeb secure global computer, figure out what people agree on in a way people can trust the free open source system, and then together tell governments to do their job, which is whatever democracy tells them to do.

MerkleWeb is not about any specific computing task or kind of elections, but you can build what you said if you like. MerkleWeb will be a general computer that runs on a network. It's about giving people an open source, unbiased, secure computing platform they can trust, so we can experiment with many kinds of election systems and other ways of organizing the world.

For now, it's more important to help design the low level details of the system than to speculate about what kind of voting systems people will build on top of it.


Neat idea!

I don't get the economics of it, though-- how do I convince the network to run/check the calculations I want the MerkleWeb to run rather than somebody else's?  And does anything prevent me from asking the MerkleWeb to do some useless busy-work that is just designed to chew up computing resources? Who decides what gets run on the Turing machine, when?

All calculations that run will be checked. Each MerkleWeb network will have the appearance of being single-threaded for the whole network while actually being massively multithreaded and using rollbacks of the relevant bits when predictions fail. Any contradictions cause recursive rollbacks to a state of the system where no contradictions exist. The calculations MerkleWeb will do will be well-defined.

The only way to convince MerkleWeb to calculate anything different is with "choice bits" (as I wrote above), which are kind of like network ports mapped into certain global 320-bit addresses. Each choice bit can only be written by a certain private-key. Each group of choice bits has a public-key that everyone can see to verify the digitally-signed bits from the private-key.

The ways we allocate computing power will have the flexibility of a Turing Machine. Each MerkleWeb network will need to have some kind of driver written to do such allocations, written on top of the NAND and bit level.

We could, for example, allocate global calculating power and memory through Bitcoin. Since MerkleWeb is based on a variation of the Bitcoin security model, we could map Bitcoin's existing history and user accounts into MerkleWeb's 320 bit global address space (in the constants section, "Name constants by their SHA256 secure-hash" and 2^64 minus their bit length). It would be many small Bitcoin transactions, except running much faster than the existing Bitcoin network, since no "proof of work" is needed. Instead, large amounts of actual work (NAND calculations and large functions made of NANDs) are verified by all computers sometimes throwing in some incorrect bits to see who catches their intentional mistake ("will have some bit, or multiple bits, set incorrectly, in a way that is very deceptive and hard to verify is wrong"). I said they will do that half the time, but that may be too much. We can tune it after it's up and running, based on statistical proofs.
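As a back-of-the-envelope sketch of why intentional-mistake auditing can work (my own illustrative numbers, not part of any specification): if each verifier independently spot-checks a random fraction of the bits, the chance that a planted wrong bit escapes everyone shrinks geometrically in the number of verifiers:

```python
# If each of k independent verifiers spot-checks a random fraction p of the
# bits in a broadcast, a single flipped bit escapes all of them with
# probability (1 - p)**k.  The numbers below are illustrative only.
def p_caught(p_sample: float, k_verifiers: int) -> float:
    return 1 - (1 - p_sample) ** k_verifiers

# Even verifying only 1 bit in a million, 10 million independent verifiers
# catch a planted error with probability about 1 - e**-10, i.e. > 0.9999.
print(p_caught(1e-6, 10_000_000))
```

This is the same reason "it may be practical to only verify 1 in every million bits, as long as they're randomly" chosen: the network's aggregate sampling rate, not any one node's, sets the detection probability.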

The simplest way to control MerkleWeb is a democracy of computing power that can opt out of any calculation it doesn't like by choosing not to continue that branch. MerkleWeb will be an SVN-like versioning and recursive branching system for all calculations, computing states, and memory states of the whole network, at the bit and NAND granularity. It stores snapshots of the bits sometimes but mostly uses emulation as compression: if you know a past state and the NAND definition of how calculations proceed, you can calculate the next states, combine calculations between branches that overlap or are similar through some translation function (it's Turing Complete), and get compression ratios so high they can only be expressed in scientific notation. This is possible because a Turing Machine compresses down to its state transition definitions and the data it starts with, regardless of how complex a program it becomes. Also, we must store the "choice bits", since they aren't necessarily deterministic.
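The "emulation as compression" claim can be sketched directly: a deterministic machine's whole history follows from its transition rule plus a start state, so sparse snapshots suffice to recover any state by rolling forward. The transition function below is an arbitrary 64-bit mixer standing in for one heartbeat of NAND calculations:

```python
# "Emulation as compression": a deterministic machine's whole history is
# recoverable from its transition rule plus a start state, so only sparse
# snapshots need storing; anything between them is recomputed on demand.
# step() is an arbitrary 64-bit mixer standing in for one global heartbeat.
def step(s: int) -> int:
    return (s * 6364136223846793005 + 1442695040888963407) % 2**64

def run(seed: int, n: int) -> int:
    """Replay n heartbeats forward from a known state."""
    s = seed
    for _ in range(n):
        s = step(s)
    return s

seed, total, every = 42, 10_000, 1_000
snapshots, s = {0: seed}, seed
for t in range(1, total + 1):
    s = step(s)
    if t % every == 0:
        snapshots[t] = s                      # keep 1 state in 1000

# Any state is recovered by jumping to the nearest earlier snapshot and
# rolling forward, instead of storing all 10,000 states.
t = 7_777
base = (t // every) * every
assert run(snapshots[base], t - base) == run(seed, t)
```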

The democratic opt-out ability, through the SVN-like versioning and branching at the bit and NAND level, is our protection in case anything goes wrong. We can always restore MerkleWeb to any previous state, unless we choose not to store (with unprecedented levels of compression, as I explained above) all previous states.

The best way to keep the system consistent may be to create a simulation of the existing Internet infrastructure (not the whole thing, just the NAND behaviors of the routers and the distance and bandwidth between routers in a directed network), and to cap their bandwidth and delay others to keep everything running at the "global heartbeat" speed, because it has to be completely deterministic except for choice bits. By simulating the real bottlenecks, we can avoid letting software try to jump around faster than light through the real infrastructure, which would slow the system down badly: the real light speed limit still applies, and the rest of MerkleWeb (only the parts that affect or can be affected by the calculations that jump around too fast) gets time-dilated. But at least it's 1 consistent system, and we get the SVN-like versioning and branching and global security. As we create hardware designed specifically for MerkleWeb, we advance past those bottlenecks and no longer need to simulate them. We would instead simulate something more like light-cones representing the maximum speed light can move between routers in a wireless mesh network, more like a peer-to-peer liquid than a hierarchy with an Internet backbone. These are the physical constraints any software must obey.


To everyone...
MerkleWeb is a new idea and I'm figuring out the details as I go. Please help unify our technology and ways we organize the world by contributing your ideas to the MerkleWeb public-domain where everyone has the right to use them.

There's lots of money to be made in creating "unity mode" for the whole Internet (VMWare got rich on single-computer operating systems, while MerkleWeb will be a seamless network nanokernel operating system). Like Bitcoin, MerkleWeb will be a mix of business and free open source systems (including GNU GPL software, since it has an operating system exception), but nobody will ever have to pay to join MerkleWeb since the core idea is open source.

If you want to help, the first steps are to implement integer math, 320 bit addresses, virtual CPU operations, an address bus, virtual memory storage, and peer to peer link layer transport strategies, all through a quine of NAND logic that defines itself as a bit string in the constants section. The quine self-definition will be done the same way as "Supports evolvable hardware as paravirtual hook into global address space of SHA256 hash of NAND definition of new hardware." We have to keep it absolutely as simple as possible, or it won't work. NAND is already extremely complicated to do in a global network this way.

Anyone who has lots of money invested in Bitcoin should be selfishly interested, in a long-term way, in helping the MerkleWeb project succeed, since it's very similar to Bitcoin while much more flexible in what it can calculate (Turing Complete). If MerkleWeb succeeds, Bitcoin will increase extremely in value and probably transform the global economy into mostly different kinds of cryptocurrencies, finally defeating the central banks' bottleneck on the Human species' progress.

If you want to improve the world and make lots of money in the process, MerkleWeb is a good long-term investment. I'm not taking a cut. It's public-domain. Will you help?


Also, in my meetup group http://meetup.com/Technology-Singularity-Detonator in Silicon Valley, I'm including debates and technical talks, and hopefully later prototype development of MerkleWeb (and my AI research). It's only 21 people so far, and only a few have come, but it's slowly expanding.
legendary
Activity: 2940
Merit: 1090
The vote tickets could be blinded cash, so the issuer can tick off that a ticket has been used to prevent its being re-used but cannot tell which issued ticket it was that just got ticked off.
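The blinded-ticket idea described here is essentially a Chaum blind signature. A toy sketch over textbook RSA (tiny illustrative key, no padding, nothing like production crypto) shows the issuer signing a ticket it never sees, so a spent ticket can be ticked off as used but never linked to the person it was issued to:

```python
# Toy Chaum blind signature over textbook RSA (tiny key, no padding; nothing
# like production crypto).  The issuer signs a ticket it never sees, so a
# spent ticket can be ticked off as used but never linked to who received it.
n, e, d = 3233, 17, 2753                     # toy RSA: n = 61*53, e*d = 1 mod 3120

ticket = 1234                                # the voter's secret ticket (< n)
r = 7                                        # blinding factor, coprime with n

blinded = (ticket * pow(r, e, n)) % n        # voter sends m * r^e mod n
blind_sig = pow(blinded, d, n)               # issuer signs blindly: m^d * r mod n
sig = (blind_sig * pow(r, -1, n)) % n        # voter unblinds: divides out r

assert sig == pow(ticket, d, n)              # identical to a direct signature
assert pow(sig, e, n) == ticket              # anyone can verify; issuer can't link
```

The unblinding works because (m * r^e)^d = m^d * r^(e*d) = m^d * r (mod n), so multiplying by r's modular inverse leaves a plain signature on the ticket.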

Thinking about Laws of Form's representation where the default/empty space is taken as or, compared to your NOT and AND system (NAND), I wonder if your representation would basically have space be taken as AND.

A possible advantage of OR over AND might be that if the nearby data for the OR is true, you don't care about the far away part, you can proceed in the knowledge that regardless of what the far away part says, the OR evaluates to TRUE, so you can go ahead without waiting to hear what the actual value of the far away part is.

With AND, if your nearby data input for it is FALSE you know the whole thing is FALSE, but unless your goal is to pursue falseness in preference to pursuing truth, that might not be as useful as being able to proceed on the basis of TRUTH.

That might end up with everyone having to pose all important questions to your universal machine in terms that make FALSE be the important result they are hoping for / pursuing.

Basically just a semantic thing but I think aesthetically a system designed to optimise the pursuit of truth might sound more appealing than a system that optimises the pursuit of falseness?

Hey, maybe run two universal turing machines, one built the Laws of Form way to pursue truth, another built your way to check the results / error-correct by pursuing falseness?

Maybe it's all just semantic, after all one could just reverse true and false any time one displays values/results to humans? Hmm...

Maybe space taken as OR and space taken as AND differ in "timelike versus spacelike" way(s)?

-MarkM-
legendary
Activity: 1372
Merit: 1002
re: voting,

if a trusted third party is willing to sign your key to prove that you're a human being, you can participate in a particular vote. (i.e. the US govt (or my company) can have a known key. they sign your key. you authenticate to the voting server by decrypting a gpg-encrypted challenge string, proving ownership of your private key. if the government's key signature checks out, you're allowed to vote.)

But the voting is still public and the third party knows who owns which keys.
So the third party knows what each individual has voted.
legendary
Activity: 1652
Merit: 2311
Chief Scientist
Neat idea!

I don't get the economics of it, though-- how do I convince the network to run/check the calculations I want the MerkleWeb to run rather than somebody else's?  And does anything prevent me from asking the MerkleWeb to do some useless busy-work that is just designed to chew up computing resources? Who decides what gets run on the Turing machine, when?
sr. member
Activity: 493
Merit: 250
Don't trust "BBOD The Best Futures Exchange"
re: voting,

if a trusted third party is willing to sign your key to prove that you're a human being, you can participate in a particular vote. (i.e. the US govt (or my company) can have a known key. they sign your key. you authenticate to the voting server by decrypting a gpg-encrypted challenge string, proving ownership of your private key. if the government's key signature checks out, you're allowed to vote.)
legendary
Activity: 2940
Merit: 1090
Nice exposition, thanks.

If you can find G. Spencer Brown's "Laws of Form" see if you can read it and make sense of it.

It is an excellent, extremely minimalistic "form" that treats the empty page as "or", while "the mark" (the label, the distinction, that which makes distinguishable marks upon the page, that which is distinguishable from the emptiness that is taken as "or") is taken as "not".

Thus if we use "being in parentheses" as "the mark", (a) (b) would represent (not a) or (not b).

He proves that all lemmas can be stated in forms not more than two deep, such as ((a)(b)).

It is a very useful notation because you can cancel things out very easily, and all that remains is the solution to the lemma.

It uses not and or instead of not and and, so I guess it would be using nor gates instead of nand gates, but either pair of primitives suffices; it's like whether to use one or zero as true or false, or whether to use NPN or PNP transistors maybe in a way. Anything that can be represented using one can also be represented using the other.
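That equivalence is easy to verify mechanically: NOT, AND, and OR can each be built from NAND alone, and symmetrically from NOR alone. A brute-force truth-table check (a sketch, with obvious made-up gate names):

```python
from itertools import product

def nand(a, b): return not (a and b)
def nor(a, b):  return not (a or b)

# Every Boolean connective from NAND alone...
def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))
def OR(a, b):  return nand(nand(a, a), nand(b, b))

# ...and symmetrically from NOR alone.
def NOT2(a):    return nor(a, a)
def OR2(a, b):  return nor(nor(a, b), nor(a, b))
def AND2(a, b): return nor(nor(a, a), nor(b, b))

for a, b in product([False, True], repeat=2):
    assert NOT(a) == NOT2(a) == (not a)
    assert AND(a, b) == AND2(a, b) == (a and b)
    assert OR(a, b) == OR2(a, b) == (a or b)
```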

It's a pretty small book but with very amazing contents, I expect you will find it quite interesting if it is not already part of the bread and butter that got you this far.

-MarkM-
legendary
Activity: 1372
Merit: 1002
Thank you. Not sure but I think I understand it better now.

For the decentralized voting, the anonymity is still a problem. Whoever distributes the votecoins can know exactly how each participant votes. I still don't see a way around it.
sr. member
Activity: 316
Merit: 250
jtimon
Quote
I'm getting lost in your long explanations.
Can you post a summary without design details such as the size of the addresses?
Only to have a general idea about how this would work.

If you have a computer, and software running on it, that you trust (and you should never trust the existing infrastructure, since getting hacked into now and then is accepted as an inevitable part of that system), then you could connect from that system into a MerkleWeb network by running a MerkleWeb server on that computer and using it as a proxy through some local port, like a web browser's proxy settings in the advanced options. You would put your private key(s) in that one MerkleWeb server, and it would digitally sign everything you broadcast to MerkleWeb. It could also be done through a USB wireless network router designed just for MerkleWeb, which would come with its own private-key/public-key pair built into the hardware and just work in any computer you trust and plug it into. If there is a virus in that computer, it will have full control over what you do in MerkleWeb. MerkleWeb will not make existing technology more secure. It's a platform for building secure technology on top of, which you may choose to access using less secure technology at your own risk. Bitcoin is very secure, but not if you run it on Windows and a virus designed just for stealing bitcoins gets onto your Windows. That virus can't infect the Bitcoin network, but it can infect your Windows and get to your bitcoins and nobody else's.

Quote
Maybe by just explaining how the voting application would work helps.
How do you prevent the authority that provides the certificates from knowing what each person has voted?

If you want PRIVACY in voting, a simple Bitcoin-like network could be set up inside MerkleWeb for that, or just use Bitcoin, to distribute voting credits (votecoins) to everyone who is allowed to vote on a specific question; they then spend their votecoins toward whatever they want to vote on. It's actually simpler than Bitcoin, since you can only spend a votecoin once. Nobody needs to know the identity of who spent a votecoin to count the votecoins for each subject. The part Bitcoin will not do is distribute the votecoins to whoever they need to go to before the vote, and set up the various things to vote on. For tasks like that, and the other complexities that will come from the new society we're building in the Internet, a global secure peer to peer Turing Machine (general computer) is what we need.
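A minimal spend-once tally might look like the following sketch (my own illustrative code, not a protocol from this thread): each votecoin is an opaque token id, and counting needs no voter identities, only a record of which tokens have already been spent:

```python
from collections import Counter

def tally(ballots, issued):
    """Count spend-once votecoin ballots.

    ballots: iterable of (token_id, choice) pairs.
    issued:  set of valid token ids handed out before the vote.
    No voter identity is needed; double-spends and unknown tokens are ignored.
    """
    spent, counts = set(), Counter()
    for token, choice in ballots:
        if token in issued and token not in spent:   # valid and not yet spent
            spent.add(token)
            counts[choice] += 1
    return counts

issued = {"t1", "t2", "t3"}
ballots = [("t1", "yes"), ("t2", "no"), ("t1", "no"), ("t9", "yes")]
print(tally(ballots, issued))   # Counter({'yes': 1, 'no': 1})
```

The t1 double-spend and the never-issued t9 are both rejected; how tokens reach eligible voters anonymously is the hard part the sketch deliberately leaves out.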

MerkleWeb supports the NAND operation, since all other logic can be derived from it. For PRIVACY, write an existing encryption algorithm (after making sure laws in your country allow that) as combinations of NANDs, and run part of MerkleWeb on top of that encryption layer, encrypted with your private key at the root of the encryption tree. Form some complex network of encryption keys so that, while it's all out in the open, nobody except you can track down where all the encryption is coming from and going to... kind of like the Tor network, depending on how much privacy you need. MerkleWeb doesn't care what software you run. Its only goal is to make sure it's all globally consistent and the NAND calculations work. The calculations to hook in the public-key security (for digital-signatures) are also built on NAND calculations. All this can be optimized if hardware is available that does those exact NAND calculations, as we will find ways to align hardware and software after MerkleWeb is up and running and we can experiment with it. It's a Universal Turing Machine. It can calculate anything.

The whole MerkleWeb would be deterministic, and therefore highly optimizable, except for CHOICE BITS, which are where people connect into the system. Those bits can be 0 or 1, and can be proven only by the digital-signature associated with each group of choice bits. Choice bits slow the system down incredibly, so they should be compressed and used as little as possible.


markm, if you don't want to be predicted, go live by yourself deep in the woods. Getting to know someone means to predict them, and when they're gone you predict them but do not find the expected behavior; therefore you "miss" them, your prediction misses its target.

It will never get to the level of "every verifiable fact that ever will be verified, when it will be verified, who by, and what would that cause". Gödel proved that's impossible.



Some technical stuff...

Since my design requirement is that MerkleWeb will still be secure even if 99% of the computers have been hacked into, I add the following to the MerkleWeb specification: all computers will lie in half of their broadcasts. Half will be what each computer honestly thinks is accurately part of the next state of MerkleWeb. The other half will be very close to that, except it will have some bit, or multiple bits, set incorrectly, in a way that is very deceptive and hard to verify is wrong. That way, when a hacker gets into MerkleWeb, it's no big deal, since all of MerkleWeb is already doing the same thing, has been expecting it, and has learned to overcome it.

Why would that work? Each contradiction added to a branch of MerkleWeb (like SVN, it has recursive branching) causes that branch to run anywhere between normal speed and half speed. On average, as the number of contradictions in a branch of MerkleWeb increases linearly, the speed of that branch decreases exponentially.

So it's simple to figure out which branches of MerkleWeb have contradictions: broadcast half of your communications honestly and the other half with some hard-to-detect error, and see which of your communications come back to you after they've flowed around the network for some time. Statistically, trust the communications that come back with your honest broadcasts more often, and distrust communications that come back with your dishonest broadcasts, because dishonesty will be exponentially time-dilated until it falls into a black-hole of contradictions that can't be calculated at any practical speed. In MerkleWeb, hackers fall into black-holes of time-dilation.

MerkleWeb can only practically verify calculations that have an effect on the future state of the system, so the security does not cover "quantum eraser" types of interactions. Because of the requirement to backtrack all observed contradictions, MerkleWeb will reproduce the behaviors of the "quantum eraser" and "delayed choice quantum eraser" experiments in quantum physics, but for Turing Machine calculations instead of quantum physics specifically, where calculations flow in waves, and have wave-interference with each other, through hops in the network.

http://en.wikipedia.org/wiki/Double-slit_experiment#Delayed_choice_and_quantum_eraser_variations

Calculations that use pointers to access parts of the system farther away than the speed of network wires (which is limited by the speed of light) allows risk backtracking to solve any resulting contradictions. As long as such calculations have been designed to keep the whole network consistent, there will be no resulting backtracking. But as CHOICE BITS move near the speed of internet wires (or the speed of light), their mass (total calculations needed to solve resulting contradictions) increases toward infinity, they experience time dilation, and the whole network slows down to handle the increase in global contradictions. MerkleWeb will therefore put selection-pressure on programmers not to slow the network down like that, and to create consistent programs that do not move CHOICE BITS near the speed of internet wires. Relativistic speeds of CHOICE BITS are handled the same way as hackers: they time-dilate and fall into black holes, and faster branches of MerkleWeb continue without such inefficiency.

Network routing will be handled by statistical clustering of which parts of the program tend to affect which other parts of the program (whatever is running on the global Turing Machine), in a way that's kind of like holographic entropy-based gravity. As the NAND calculations, and the bit states of the system representing such NAND calculations, propagate outward in circles (light cones) through network hops, and inward on those same circles, and back and forth in bell-curve patterns, any 2 parts of the program that are near but should become closer will have their circles of influence overlap as they spread, so that information will come to the attention of other computers in the network at about the same time, first X then Y on the left side, and first Y then X on the right side. As it reflects back toward the center, the holographic pressure will push them either toward each other or apart, depending on their NAND relationships. If you sum 100 numbers which are each -1 or 1, then the standard deviation is 10, the square root of 100. It's the same way the circles of influence get bigger and smaller at the same time just by mostly random movements of the data transmitted. That's all I meant by holographic.
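The square-root claim in that paragraph is easy to check numerically: the sum of n independent +/-1 steps has standard deviation sqrt(n). A quick Monte-Carlo sketch (seed and trial count are arbitrary):

```python
import random
import statistics

# Checking the claim above: the sum of n independent +/-1 steps has standard
# deviation sqrt(n), so 10 for n = 100.  Seed and trial count are arbitrary.
random.seed(1)
n, trials = 100, 20_000
sums = [sum(random.choice((-1, 1)) for _ in range(n)) for _ in range(trials)]
print(statistics.pstdev(sums))   # close to 10
```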

These behaviors, in this analogy between MerkleWeb and quantum and relativistic physics, are not extra parts of the system. They are what I expect from the simple NAND design as it applies to network hops in a peer-to-peer network trying to synchronize the state of MerkleWeb globally. It's still the simplest nanokernel design, without 1 extra bit of complexity. Maybe physics also formed through very simple processes.
legendary
Activity: 2940
Merit: 1090
The novel "When H.A.R.L.I.E. was one" by David Gerrold seems very relevant to where this has been going.

18,000 stacked feet of printout, for the board members to read in preparation for the upcoming board-meeting, detailed the proposal for how the "Human Analogue Robot Logic Induction Engine" could be useful (thus shouldn't have its plug pulled): by writing predictive programs, faster and better than human programmers could, that take into account every verifiable fact ever verified, basically. (And of course proceeding on from that to every verifiable fact that ever will be verified, when it will be verified, who by, and what that would cause, etc etc, the usual.)

Was there a "catch"? Read the novel and see... Wink

-MarkM-
legendary
Activity: 1372
Merit: 1002
I've been reading the proposal again but I still don't get it.
I know Godel, Turing and Merkle but I'm getting lost in your long explanations.
Can you post a summary without design details such as the size of the addresses?
Only to have a general idea about how this would work.

If this can really work, its implications are probably much bigger than bitcoin's.
A decentralized computer which its users can trust (unlike the cloud) to execute the algorithms they want but without making public its internal state at every step.

Maybe by just explaining how the voting application would work helps.
How do you prevent the authority that provides the certificates from knowing what each person has voted?

I want to believe, but I can't.
sr. member
Activity: 316
Merit: 250
Ok, let's compare voting systems (which is one thing you could build in MerkleWeb, since it will be a general computer)...

MerkleWeb, as I've technically defined it within certain logical bounds, can be trusted by average people because of 1 simple fact: There will never be 1 bit in the entire MerkleWeb, for the entire history and future of running MerkleWeb, calculated incorrectly unless it is the result of a weak secure-hash or digital-signature algorithm or a whole network running only incorrect implementations of MerkleWeb (all made by Hacker Joe's half-ass voting Incorporated) that someone claims to be a MerkleWeb but did not implement the specification to the letter. If 99% of a MerkleWeb is taken over by hackers, the remaining 1% will know this (since the use of their data causes at least 1 bit to calculate wrong in combination with local data that has been proven correct to exponential accuracy) and reject their offers of further data until they start obeying the MerkleWeb specification. If you trust SHA256 and your public-key and whoever created your specific MerkleWeb-compatible system to be secure, and you trust the state of the system you're starting from (which may have been loaded from a USB stick, for example), then MerkleWeb does not reduce that security. If your implementation of MerkleWeb correctly takes the time to verify enough of the past calculations, you don't have to trust the USB stick. If your implementation of MerkleWeb takes the time to create Merkle Trees of enough of the bits in the constants section (not just the Turing Complete area), then you don't have to trust SHA256 since its hashed bits are Merkle Treed from many angles at once. You may run Windows on a much later version of MerkleWeb and Windows gets hacked into, the same as it gets hacked into on a normal computer, but that is the exact correct behavior of Windows that MerkleWeb will emulate, exactly as the NAND/bit level description of Windows and the hardware Windows runs on says to do. 
If you don't want your computer to be hacked, run completely in MerkleWeb and use only programs built with correct math inside MerkleWeb. If you command MerkleWeb to let your section of the data be hacked by anyone, MerkleWeb will do exactly that, down to the last bit of accuracy. If you command MerkleWeb, through the correct use of math and logic, to count 7 billion votes, it will give you the exact integer, and people can trust that integer as the number of global votes on a certain subject. All of this follows from the core logical design of MerkleWeb, as explained in my posts above in this thread.
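To make the verification idea concrete, here is a minimal Python sketch, my own illustration rather than anything from a MerkleWeb spec, of how a Merkle proof lets a node check a single piece of state against a trusted root without downloading any of the other leaves:

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA256, the hash MerkleWeb assumes throughout."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf values up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to re-derive the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))   # (sibling, am-I-the-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Check one leaf against the root using only log-many sibling hashes."""
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

state = [bytes([b]) for b in range(8)]           # 8 toy pieces of state
root = merkle_root(state)
proof = merkle_proof(state, 5)
assert verify(state[5], proof, root)             # honest leaf checks out
assert not verify(b"\xff", proof, root)          # a tampered leaf fails
```

The proof is logarithmic in the number of leaves, which is what makes "verify a few bits here and there" cheap even over an enormous shared bit space.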

The security of MerkleWeb is also enforced by the fact that MerkleWeb does not offer bailouts if you predict wrong. All affected calculations must be rolled back and recalculated, at the bit/NAND level if necessary, followed by everything after that which changes as a result, and so on. The risk of an extremely expensive recalculation, if an error goes unnoticed for a long time, is exactly what motivates many computers in the network to redundantly verify, to exponential certainty, that no such error has occurred, and to keep checking a few bits here and there in past calculations, since that is far cheaper than a deep rollback later. When a rollback is done, nothing is lost. Both versions are kept, like branches in SVN, but the corrected branch becomes the more popular one, since it is the correct calculation. The global economy crashed primarily because people think they'll get bailed out if they screw up badly enough. MerkleWeb will let everyone pay the full price of their mistakes, so they're motivated to prove, to exponential certainty, that mistakes are found and fixed quickly and don't expand into bigger problems. For example, in the USA the income tax was not voted in by enough states to pass, but through military force it was done anyway, and only later was the income tax voted in, under threat, by the other states. If that were a calculation in MerkleWeb instead of politics, you would roll back to the "no income tax" state of the system and recalculate, since the change was not made by the rules of the system. No bailouts, like a government continuing to collect such a tax merely because it has been collected for a long time. This is why MerkleWeb can be trusted to converge toward every bit being calculated correctly: it doesn't just let things go because they're too hard to fix.
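The "jump back and roll forward" rollback from the first post can be sketched roughly like this, assuming a toy step function standing in for one level of NANDs, and assuming we can cheaply test whether a snapshot matches the honest calculation (in MerkleWeb that test would be a Merkle root comparison against proven data):

```python
def step(state: int) -> int:
    """Toy deterministic transition standing in for one level of NANDs."""
    return (state * 31 + 7) % (2**32)

def first_divergence(claimed, is_correct):
    """Binary-search the earliest cycle whose snapshot is wrong.
    is_correct(i) must be monotone: once a snapshot is wrong, every
    later one is too, because a bad bit propagates forward."""
    lo, hi = 0, len(claimed) - 1
    if is_correct(hi):
        return None                          # whole chain checks out
    while lo < hi:
        mid = (lo + hi) // 2
        if is_correct(mid):
            lo = mid + 1                     # error lies strictly after mid
        else:
            hi = mid                         # error at mid or earlier
    return lo

# Build an honest chain, then a claimed chain with one bit flipped at cycle 600.
honest = [12345]
for _ in range(999):
    honest.append(step(honest[-1]))
claimed = honest[:600] + [honest[600] ^ 1]
for _ in range(399):
    claimed.append(step(claimed[-1]))

bad = first_divergence(claimed, lambda i: claimed[i] == honest[i])
assert bad == 600
# rollback = recompute forward from snapshot 599; everything before it is kept
```

Naively unwinding NANDs backward is exponential, but because snapshots of the past are kept, the search over Merkle Tree space is logarithmic and the recompute forward is linear, which is the point made in the first post.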

Your turn. Explain why we should trust that voting system or any other.

http://en.wikipedia.org/wiki/Sun_tzu
Quote
One of the more well-known stories about Sun Tzu, taken from the Shiji, illustrates Sun Tzu's temperament as follows: Before hiring Sun Tzu, the King of Wu tested Sun Tzu's skills by commanding him to train a harem of 180 concubines into soldiers. Sun Tzu divided them into two companies, appointing the two concubines most favored by the king as the company commanders. When Sun Tzu first ordered the concubines to face right, they giggled. In response, Sun Tzu said that the general, in this case himself, was responsible for ensuring that soldiers understood the commands given to them. Then, he reiterated the command, and again the concubines giggled. Sun Tzu then ordered the execution of the king's two favored concubines, to the king's protests. He explained that if the general's soldiers understood their commands but did not obey, it was the fault of the officers. Sun Tzu also said that once a general was appointed, it was their duty to carry out their mission, even if the king protested. After both concubines were killed, new officers were chosen to replace them. Afterwards, both companies performed their maneuvers flawlessly.

I'm not advocating violence. I work in very abstract, indirect ways, looking for the causes of the causes... which is what led me to understand the need for MerkleWeb. If Sun Tzu were alive today, I doubt he would use any violence, since there are so many other tools available to change the world. The same general strategy as in that quote is MerkleWeb's strategy for causing programmers not to write bugs and junk software. That way a later version of MerkleWeb, running on huge grids of GPUs and wireless mesh networks at incredible speeds, will have so few rollbacks/recalculations that we can make effective and simultaneously secure use of the advanced technology we waste today.

Similarly, once MerkleWeb and many competing global voting systems (which people choose to use) are up and running, we can point the world's attention at whichever few specific politicians or corporations a majority agrees deserve investigation into their possible crimes. After we continue that until a few who really are guilty of serious crimes actually go to jail for it, in public view, without getting to buy their way out, most of the other criminals in governments and corporations will start obeying the laws that we the people choose through democracy, rather than the laws they choose through buying them. One of the most useful things we can build on MerkleWeb is a system that makes everyone accountable by directing public attention at what a majority of people agrees on, agreement we can trust as real democratic agreement since criminals can't hack into it like they can the existing Internet. In a real democracy, people would celebrate the laws and the way government works, because it would be only what a majority of us prefers it to be.

So let's make it happen... This is what I could use help designing:

Quote
The requirement for the CPU registers, address bus, memory, and other emulated hardware components, all combined into one kind of NAND-based bit space instead of many separate components as computers are built today... the requirement for that to be a quine made of NANDs that generate their own definition in a loop is very important. As a quine, it can rebuild itself if parts of it get broken or computers disagree on what kind of logic or state of bits is in the system
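As a toy illustration of the NAND-only substrate (not the quine itself, which is the hard part), here is a Python sketch of a netlist of NANDs computing XOR, with the full wire state of a cycle hashed as a leaf for the Merkle chain; the netlist encoding and helper names are my own invention for this example:

```python
import hashlib

def nand(a: int, b: int) -> int:
    """The single universal gate everything reduces to."""
    return 1 - (a & b)

# A netlist: each gate reads two earlier wire indices and appends one new wire.
# These four NANDs compute XOR of wires 0 and 1 (standard construction).
XOR_NETLIST = [(0, 1), (0, 2), (1, 2), (3, 4)]

def run(netlist, inputs):
    """Evaluate one cycle of the netlist; returns the full wire state."""
    wires = list(inputs)
    for a, b in netlist:
        wires.append(nand(wires[a], wires[b]))
    return wires

def state_hash(wires) -> str:
    """Hash a cycle's full wire state, usable as a Merkle Tree leaf."""
    return hashlib.sha256(bytes(wires)).hexdigest()

for x in (0, 1):
    for y in (0, 1):
        wires = run(XOR_NETLIST, [x, y])
        assert wires[-1] == x ^ y            # last wire is x XOR y
```

Because the netlist is itself just data in the same bit space, a node that disagrees about a past state can re-run the NANDs between two snapshots and compare `state_hash` values, which is how emulation doubles as compression and as proof.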


Like in Bitcoin, it's very easy to verify the network is what you think it is if at least half the computers are honestly doing the protocol (and even if it's less than half, majority attackers still can't change the Merkle Trees you already have and trust). But for MerkleWeb, half isn't good enough. I need it to work even if all the other computers in the network are creating fake data to fool us (in that case, we would have to emulate all the data locally, very very slowly), fake data designed to overlap our real local data, since the attackers know our broadcasts. But we have full access to the entire history of every bit in the network, stored partly as literal bits and partly as NAND definitions of the emulation that calculates between such partial snapshots, so emulation is used as a kind of compression in MerkleWeb, since you can always emulate a past state again. In that Turing Complete environment, it's certain that there are many ways to ask a question that costs you little to ask and verify, but costs an attacker exponentially more time to answer, up to the size of the whole network in bits including its history (which is impractically large, and since it's a scale-free algorithm, the cost is exponential), because the attacker does not know which parts of the network you've been calculating from, among all the various parts you receive and continue calculations on, including combinations of them. And if you have a Merkle Tree leading to a certain bit in the network, then you also have a way to trace that bit back, one NAND at a time, to the start of the whole network.
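A minimal sketch of that kind of cheap-to-ask, expensive-to-fake question, with the same toy step function as before and a hypothetical `audit` helper of my own invention: the verifier recomputes a few randomly chosen single transitions, so an attacker who fabricated any fraction of the history must make every transition consistent, since they cannot predict which ones will be sampled.

```python
import random

def step(s: int) -> int:
    """Toy deterministic transition standing in for one level of NANDs."""
    return (s * 31 + 7) % (2**32)

def audit(transitions, k, rng):
    """Recompute k randomly chosen transitions out of len(transitions).
    Cost to us: k steps. Cost to a faker: the whole history, since any
    invented transition risks being sampled. Returns a caught index or None."""
    for i in rng.sample(range(len(transitions)), k):
        before, after = transitions[i]
        if step(before) != after:
            return i
    return None

honest = [(s, step(s)) for s in range(1000)]
# A lazy faker invents half the "after" states without computing them.
faked = [(s, (step(s) ^ 1) if s % 2 == 0 else step(s)) for s in range(1000)]

rng = random.Random(42)
assert audit(honest, 50, rng) is None        # honest history always passes
# audit(faked, 50, rng) catches the faker except with probability ~2^-50
```

With a fraction f of transitions faked, k samples miss the fraud with probability about (1 - f)^k, which is the "exponential certainty" claimed above: certainty grows exponentially in the verification work you spend.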
There's also the ability to define arbitrary functions, make SVN-like branches of the network where those functions were run, and make Godel-like statements (see the 00, 01, 10, and 11 leaf codes about bits in the first post of this thread) about the return values of those functions. We could also make Godel-like statements about other Godel-like statements, since the constants section where they're located (in Merkle Trees) is addressable the same as everywhere else. Certainly we can get proof to exponential certainty (but not Godel-like certainty, unless we locally compute the entire network from the start, which is an option for small networks) that the network is what we think it is. The open question is how deep into abstraction and game theory we have to go to prove it for each new Merkle Tree we accept as having an exponential proof of accuracy, and how deep into its children, recursively, that proof has to reach.