Author

Topic: Tau-Chain and Agoras Official Thread: Generalized P2P Network - page 156. (Read 309768 times)

hero member
Activity: 897
Merit: 1000
http://idni.org
hero member
Activity: 897
Merit: 1000
http://idni.org
MasterXchange is going to close on 15 November. How can I withdraw my MSC-TauAgoras, perhaps by depositing them in omniwallet?

Yes, you can store your coins in any Omni wallet. Omni has a local client, and there are also online wallets like omniwallet.org or holytransaction.com.

Moreover (advanced usage): you can send Omni coins to any Bitcoin address. The Bitcoin wallet won't know how to parse them, but you can always import that address's private key into an Omni wallet at any future time.
full member
Activity: 203
Merit: 100
MasterXchange is going to close on 15 November. How can I withdraw my MSC-TauAgoras, perhaps by depositing them in omniwallet?
sr. member
Activity: 434
Merit: 250
Hi Klosure!

To clarify, what I was talking about in my git analogy was graph contexts, that is to say the semantic space denoted by a named graph or the fourth element of NQuads or the bNode of a reified statement. These are key to enable the "anyone can say anything about anything" vision because what is stated in a context isn't being stated in the default / most general / objective context.

Yes, Ohad's reply was a bit unclear on this point.  I think he has been trying to keep the discussion quite simple, when the answers are unfortunately not quite so.  (This is also why I generally prefer our real-time discussions in the irc channel.  We find ourselves re-answering the same questions more often, but it also takes less total time in "back-and-forth" to communicate all of the relevant particulars of a topic.)

Tauchain operates over RDF quads, so context is as you describe, named hyper-graphs.  However, we make a few particular constraints beyond the standard RDFS (really "LBase") semantics.  Every context in the system carries not only a name (which may be "local", as in a bNode) but also a parent context reference.  These parent context references form an (acyclic) hierarchy up to the "root context", which is the global background graph of the root chain's logic.  We make this additional constraint over the normal semantic web model in order to align our data storage model to our type system's logical module model (cumulative, dependently typed records), which I'll say more on below.
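A very rough sketch of that parent-reference shape, just for intuition (the names here are made up for illustration and are not tau's actual data model):

Code:
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Context:
    name: str                     # may be a "local" name, like a bNode label
    parent: Optional["Context"]   # None only for the root context

def root_of(ctx: Context) -> Context:
    # walk the (acyclic) parent references up to the root chain's background graph
    while ctx.parent is not None:
        ctx = ctx.parent
    return ctx

root  = Context("root", None)
alice = Context("_:aliceRepo", Context("apps", root))
assert root_of(alice) is root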

Ok, it seems to be named graphs you are talking about here. You could consider IRIs as pointers that either point to actual data (a URL) or to a non-existent / out-of-bounds location (an IRI without an actual web resource), and throw an exception upon trying to access the resource (you'd need to determine how to represent an exception, though).

Tauchain employs a two-phase reasoning cycle.  In the first phase the reasoning is closed-world and dependently typed.  Anything inaccessible is considered false, and any failed dereference is considered an inconsistency.  This phase is used for checking block validity, reorgs, peer relations, authenticated logic, etc. and is at the heart of tauchain's operation.  The second phase, used for reasoning operations that are "off-chain," for handling side-band communications/processes, or for anything simply "unsafe" (like FFI), uses a more "traditional" (for semweb) open-world and simply typed logic.

You could also keep the POST / PUT / GET / DELETE semantics of HTTP to allow modifying resources when writable. This will be needed when a context is used to store or manipulate program state (which will be the case all the time). The nice thing about URLs is that, as your Tahoe-LAFS example shows, they can be made to point to anything, so Tau would be able to work with stuff stored anywhere: the SAFE network, the upcoming Ethereum Swarm, HTTP, etc.

We don't exactly want HTTP semantics, mainly because of a lack of referential transparency.  We desire our kb to have certain properties: referential transparency, controlled partial-immutability, monotonicity, and strict positivity within the data model.  Again, this is to support a consistent type system.  Instead of "crud operations" we use a stable assertion semantics.  New data/logic is asserted into a context as facts/rules.  Consistency of these assertions is enforced, meaning an assertion must be consistent with ("allowed by") the earlier definitions within that context and its parent contexts' established logic.  (In other words, whenever a transaction means to insert new data into a context, that transaction itself must first be deemed "OK to insert" by a query to the existing rules of that context, as a challenge to the transaction.)

We do not allow any explicit retraction, and do not allow any direct negative assertion.  Instead, we use (weak) negation-as-failure for both constraint and consistency maintenance.  When a new incoming rule both explicitly "implies false" and is successfully allowed into a context by the established rules of that context, any direct contradiction to the implication already existing in that context is considered as having been removed from the context from that transaction on.  In other words, asserting a new constraint that is accepted into the context serves as a facility for "deletion" of any existing data that violates the new constraint.

By this mechanism we do not need separate "CRUD operations" and can keep the knowledge-base as simply a tree of append logs, with nodes per-context.  This simplifies the merkle processes for both the chain management, and the authentication model.
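To illustrate the shape of this, a toy sketch (hypothetical names; Python callables standing in for quads and rules) of an append-only context whose new assertions are challenged against its established rules and whose history is authenticated by hashing the append log:

Code:
import hashlib

class ContextLog:
    """Append-only context: a new assertion is first 'challenged' against the
    rules already established in this context and its ancestors; an accepted
    constraint then shadows earlier data that violates it.  Nothing is deleted."""

    def __init__(self, parent=None):
        self.parent = parent
        self.entries = []                          # append-only: facts and rules

    def allows(self, assertion) -> bool:
        # stand-in for the real challenge query against the established rules
        ctx = self
        while ctx is not None:
            for rule in (e for e in ctx.entries if callable(e)):
                if not rule(assertion):
                    return False
            ctx = ctx.parent
        return True

    def assert_(self, assertion) -> bool:
        if not self.allows(assertion):
            return False                           # refused: inconsistent with the context
        self.entries.append(assertion)
        return True

    def log_hash(self) -> str:
        # hash chain over the append log (stands in for the per-context merkle process)
        h = b""
        for entry in self.entries:
            h = hashlib.sha256(h + repr(entry).encode()).digest()
        return h.hex()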

I think we need to stop using that "it will be up to user-defined rules" narrative because although it is true, it doesn't change the fact that the initial Tau Chain will have to be a fully specified and functional system long before the envisioned power-users come and change everything.

HEH.  Ohad's response here was a bit of a social side-effect of one of our (perhaps seemingly "strange") beliefs about our work.  We very strongly desire to avoid biasing our "experiment" at all costs, and this means that we intentionally avoid public speculation or commentary on certain aspects of how we "expect" the chain to be managed.  Further, we strongly believe that it is neither our place nor our right to make these decisions prior to genesis.  These concerns are for the "future tauchain user people" to address, and indeed finding the "best way" forward with this is central to the whole intent of the tau "experiment" itself.  However, you are correct that we do need a fully specified (and not just any specification, but a stable and sustainable one) system at genesis.  There are, however, two aspects to this concern: the core protocol itself (reference client) and the initial logical state encoded into the data/rules found in the genesis block.

We only intend to "prescribe" from our ivory tower the former, the protocol itself.  We feel it is important that for the genesis block initial logic we do not simply dictate down some initial rules-from-on-high, but instead we hope to gather insights, suggestions, proposals, alternatives, from as broad a community involvement as we can manage to assemble at that time to hopefully reach some initial informal consensus around our initial formal consensus model.  Wink

At the beginning, it will be you, HMC (is he still involved btw? Haven't seen a single post of him since you created this thread and you seem to be running the git repo as a one-man-show these days, would be nice if he could chime in and say "Hi"!) and a handful of early users who will be busy enough trying to understand the thing and rather unlikely to have much to say about the way the original network should be bootstrapped.

Hi!  I am still involved.  You probably won't see me on the github (for a variety of reasons) but I regularly code up samples, examples, explanations, etc for the participants in the irc channel.  Mostly, however, what I do is answer questions and resolve confusions, it seems.

The git repo is not quite as much a one-man-show as it might appear.  We basically have 3 active developers, not counting myself, attempting to finish an as-close-to-ideal-as-we-can-get implementation of the core logic.  We have several "draft passes" at the fundamentals of the inferencer, but are still working to meet the requirements necessary to support our primary use cases.

It is sometimes a flurry of developments and sometimes almost-painfully slow going.  Anyone reading this who feels that they might have something to contribute to development should feel encouraged to stop in and try to help out.

I understand that future plasticity of the protocol is important and probably one of the key strengths of Tau, but it's difficult enough to understand what the actual plan is to start with, so I (and I guess, given the level of participation so far, most people reading this thread) would be content working on the approximation / fallacy that the initial network will be a centrally designed thing, so that we can spend more time understanding what will be there at launch, and less time trying to get around a non-formalized specification that feels a bit like quicksand at the moment.

Although it seems to be the "most often proposed approach" we really hope to avoid launching with a "fully centralized" genesis.  Even though it seems that almost everyone involved says (jokingly? seriously? who knows..) they'd be perfectly fine with an initial rule-set along the lines of "any future rule change just needs to be signed by both Ohad and HMC" we really find this to be quite undesirable and quite unfair, both for ourselves and for our hypothetical future users.

The total opposite proposal, a system of "full anarchy" (where initially anyone may simply mutate any rule) has also been brought up a few times, but we recognize that such an approach is trivially unstable and self-defeating.

I can't say what our genesis rules will look like.  I can tell you anything you'd want to know about the protocol itself, the core logic and type theory, what ultimately could/couldn't be done with the system, and any other aspect or concern leading right up *to* the moment of genesis, but "what genesis will be like" is neither something that I will speculate about, nor something worth any speculation, at least not until it is time to actually sit down and write up real (codified in the logic) proposals for genesis.  That being something that I firmly believe should NOT be done by one or two people in a vacuum, anyway.

Ok, I'll probably work on that app when the core is ready.

We have a series of "proof-of-concept" tests planned for tau, which we call "litmus tests" and are designed to verify (in stages) that our construction of the system does support the intended use cases in a stable way.  Many of these tests would have quite a bit of overlap with something like "git on tau" although tauchain itself is not very "git like" in many ways. 

Regarding the question of how to incentivize DHT participants, I think you'd probably do yourself a great service by starting with an existing DHT network and adding a native one later, once other higher-priority items have been dealt with. SAFENet is nearing completion and should be live by the time Tau Chain is ready to launch. There is also Ethereum Swarm in the pipeline. That could be a good temporary solution, and it will allow Tau to ride on the coat-tails of SAFE and Ethereum. Once the Zennet side of things is live, you can create a DHT over Zennet. I think Tau Chain shouldn't try to reinvent the wheel and should instead use other platforms as much as possible to reduce the time to market. As you say, nothing is set in stone, so such decisions will be easily reversible later down the line. If you create built-ins to allow DHT-as-a-service, Blockchain-as-a-service, Storage-as-a-service, Computing-as-a-service etc., all it will take to add new services is to implement clients in Tau or bridge native clients in using the FFI, and have them register with the service manager underlying the "as-a-service" built-ins. Tau Chain can be the glue that binds existing resources on the Internet: a badly needed thing! That's also more in the spirit of the original Zennet.

It should be noted that the DHT and blockchain semantics used by tau are to be defined in the on-chain logic.  In a sense, tauchain is self-describing.  Only the necessary interface signatures for both the dht primitive operations and the blockchain primitive operations are specified in the core protocol, their implementations themselves are left to the on-chain logic.  This precludes using "an existing DHT network" explicitly.
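Very loosely, the shape of that split (hypothetical Python signatures, not the actual protocol definition): only the primitive interfaces are fixed by the core, while everything about how they behave comes from on-chain rules.

Code:
from abc import ABC, abstractmethod

class DHTPrimitives(ABC):
    """Only the signatures live in the core protocol; how peers actually
    store, replicate, and get rewarded is defined by the on-chain logic."""
    @abstractmethod
    def put(self, key: bytes, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: bytes) -> bytes: ...

class ChainPrimitives(ABC):
    @abstractmethod
    def tip(self) -> bytes: ...                     # hash of the latest block
    @abstractmethod
    def submit(self, payload: bytes) -> bytes: ...  # returns a transaction reference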

Further, it should be noted that we have no desire to "ride" anyone's "coat tails" anywhere.  Our work is independent, should stand for itself, and should succeed or fail of its own merits.

Finally, while interoperability with other networks/services is desirable and possible under our design, this is an aspect that has to be treated with some specific care.  The tauchain system itself can only operate and communicate within its logical framework, which means that the only protocol it can actually speak is rdf.  For many networks/services some extra "bridge" will be required, and such a bridge necessitates certain trade-offs with regard to the assurances and capabilities that the system provides.

Now, as for the original point of incentive, this is a bit tricky...  We only care directly about incentive for the root chain, and assume any other contextual incentive would be defined contextually, of course.  However, the root chain is, in a funny sort of way, "self-incentivizing by default."  The root chain should never need to offer an explicit reward for resource contributions, because its continuing operation *is* itself the reward!  The continued existence of the root chain should be enough, alone, to give sufficient incentive for resource contribution to secure its continued existence.  Because the system as a whole "lives or dies" by its root chain, and because in order for any contexts to inter-operate they must do so through consensus on the root chain, it is in everyone's best interest to contribute.

I can work through a more detailed example of how this incentive model works out in practice, but this would probably be another case where the discussion would best be had in the irc.

Following the discussion in point 2, I think the only thing that would be specific about how to write or read in a context is the transport that is used to access this context. If a URL is http://, the thing (whatever it is) that handles access to that resource will have to offer a server socket, handle HTTP over that socket, and accept HTTP verbs like GET, PUT, POST, DELETE. This is more than enough to deal with the context. If the transport is something else, like for instance SQL or the file system, the commands will be different. All that should be wrapped by one more level of abstraction, so that the high-level logic only needs to deal with high-level objects like standard object containers (list, vector, map etc.), standard binary objects (blob) and standard function objects / lambdas.

There are actually two concerns here: interface with the system itself, and interface with/between individual contexts.  The interface with the system itself is defined in terms of the type theory, and has only the "lowest level" objects of dependent records of pi/sigma types.  Interface with individual contexts is done in terms of an effect type system, some "built-ins" logic, and the static dht/blockchain signatures.  Any other form of interface would be context specific, and defined within the logic of those contexts.

That's interesting. I'm not familiar with ML, but are you saying that not only logical terms are typed? Predicates are typed too? As functions that always return the boolean value 1, maybe? And propositions... as 3-tuples of typed objects? And clauses / rules... as disjunctions of propositions, would they be lists of tuples maybe? And graphs, an ordered conjunction of clauses... Another list maybe?

First, here when Ohad says ML he is referring to Martin-Löf Type Theory, which is more commonly abbreviated as MLTT.  (ML is ambiguous as an acronym, so confusion could easily arise.)

Under a dependently typed lambda calculus (such as MLTT) there is little distinction between terms and types.  Terms may be parametrized by terms or types, and types may be parametrized by terms or types.  Indeed, this is where the name comes from: any dependency is allowed, unlike in something like Haskell, where terms may depend on both types and terms but types may only depend on other types, not on terms.

Propositions are not boolean-valued functions; they are types, parametrized by terms and/or types.  You can think of them more like functions that return types, not as functions that return booleans.
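For intuition only, here is how that looks in a Lean-style dependent type theory (this is Lean syntax, not tau's own):

Code:
-- a proposition is a type parametrized by a term; "proving" it means
-- constructing an inhabitant, not computing a boolean
def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

-- and a type may itself depend on a term (here, the length)
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)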


Is it a binary tree, or a more general form of tree?
If it's a binary tree, doesn't that get in the way of code reuse?

It is a general tree.  Further, the leaves of this tree are made ordinal by the cumulative universe hierarchy.  This gets into some of the nitty-gritty details that Ohad was trying to avoid, but we could point you in the direction of quite a few references on the matter if you'd like.  (Our "go-to starting point" lately is "Programming in Martin-Löf's Type Theory" from Chalmers.)

I may repeat myself, but I think we should focus on getting a "good enough" solution for the start, and rely on the universality of the protocol to postpone the headaches of how to converge to the ideal solution later. There are plenty of projects that deal with the issue of incentivizing peers to store data reliably over a DHT. We should start by relying on these and then work on implementing a native alternative if the quorum feels that this is a topmost priority. The other advantage is that it will help Tau, at the beginning, be part of the ecosystem of already-popular networks. Tau is so deeply different that there is nothing to fear and nothing to lose by cutting corners at the beginning.

Ouch, there is indeed "nothing to lose" except for the long-term viability and sustainability of the system itself.  If we don't take some great care "up front" then the system as a whole will tend toward trivial self-destruction.  (We have quite a few thought experiments to such an effect, btw... more good fodder for discussion in the irc!)

I really don't think most early users will have the level of theoretical sophistication necessary to express themselves in Tau logic. This may happen at some point, once people have caught up or the project has gained enough momentum among academic circles, and after enough effort has been poured into creating ontologies to express more elaborate thoughts. But you shouldn't count on that at the beginning. Again, at the beginning, it will be you, HMC, and a handful of early users who will bootstrap the network, and communication will be done much more effectively over a good old forum or slate.

Quite frankly, I hope that you are actually very wrong on this particular point!  Grin

We intend to include, in the core, support for an OWL inference layer as a built-in "starting point" for hopefully *very* high level reasoning work.  In fact, one of our biggest hopes is that we will be able to support, from day 0, specification of tau data and rules from a controlled natural language, such as ACE.  Further, we expect that much of the earliest work on tau will be in the direction of contexts to support social, crowd-sourced models for systematically converting fully natural language knowledge and constraint into tau encoded contexts.  Most of our "long term vision" surrounds this notion of users with low technical/theoretical sophistication being able to express their notions both with and to tauchain, easily and without significant specialized domain knowledge outside of a basic fundamental understanding of tau's mechanics.  Further, our hope is that within "almost-no-time-at-all" tau will be able to support communication and coordination much more effectively on-chain, over itself, than on the "good old forum or slate."  Indeed many of our group's intended "side projects" are along such lines.  (Development tools "of tau, for tau, by tau" on chain.)

So there is nothing preventing us from already discussing what the day-0 Tau Chain will look like and creating a spec.

Only ourselves!  We feel that it is much more important to focus on the protocol core and reference client, since genesis semantics "shouldn't be up to us" anyway, and any attempt to specify this logic before the core is complete is "too early."  Further, we think that imposing any of our own opinions and beliefs into the conversation "before the conversation has begun" is (potentially) downright dangerous to the ultimate outcome of the project.  We'll gladly discuss (to a point) some general ideas of ways that it "could work" but will not go so far as to begin to specify the genesis rules until it is actually time to do so "for real."

I hope this post has served to clear up some of your questions, and has provided a clearer picture of the design and intent of tauchain.  Again I'd like to encourage you to join our irc discussions as there is really no better way to come to an understanding of what we're doing, where we're at with it, and where we intend to go.

Thank you for your interest, and I look forward to further discussion!

--HMC
hero member
Activity: 897
Merit: 1000
http://idni.org
Thanks to Mr. Virgilio Ruilova for a Spanish translation of "Code and Money: Which Comes First"
https://virgilioruilovacastillo.wordpress.com/2015/09/24/code-or-money-tau-chain/
hero member
Activity: 897
Merit: 1000
http://idni.org
There is quite a lot to be said regarding your last comment, but this time HMC will answer.
newbie
Activity: 50
Merit: 0
1. Addressing contexts: at the syntax level, we address contexts as RDF languages do. Let's look at an existing example, like the one from http://www.w3.org/2000/10/swap/test/meet/blue.n3:

@prefix p: <...> .
@prefix m: <...> .

<...>
        p:GivenName     "Fred";
        p:hasEmail      <...> ;
        m:attending     <...> .

<...>
        m:homePage      <...> .

We can see that m:homePage is a shorthand for a full IRI, obtained by expanding the m: prefix. This is declared by n3's @prefix keyword. So contexts refer to each other's resources through prefixed names that expand to full IRIs. This lets the user easily refer to existing code (RDF is all about making it easy to link data [graphs], semantically, as a web). By the way, the implication predicate ":a => :b" (the arrow) is a sugaring of log:implies, and indeed this is how it is represented internally in tau and other RDF reasoners. Everything in RDF has that form.

I'm probably misunderstanding your answer, but aren't you talking about namespace prefixes rather than contexts here? Your prefixed notation really is a QName, where what you call a "context" is what I call a "namespace prefix". You may be talking about context in the semantic sense introduced by the RDF-with-contexts proposal, but that doesn't seem to be the case in your example, as you haven't put any marking in place to inherit and overload the vocabulary of a more generic context.

To clarify, what I was talking about in my git analogy was graph contexts, that is to say the semantic space denoted by a named graph or the fourth element of NQuads or the bNode of a reified statement. These are key to enable the "anyone can say anything about anything" vision because what is stated in a context isn't being stated in the default / most general / objective context.

2. So at the syntax level we can specify a prefix, and of course it had better be content-addressable, i.e., given the prefix, I should be able to fetch the code. With URLs it's easy to see how this can be done over the "regular" web, but how can it be done in a closed system like tau?
Indeed we will have to have another way to address content. The mechanism in mind is tahoe-lafs' one: https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/specifications/uri.rst but note that Kademlia already handles many of the concerns: given the hash of the file part, you can both verify the file's content, and (by the XOR metric) locate nodes storing that file in logarithmic time.
Ok, it seems to be named graphs you are talking about here. You could consider IRIs as pointers that either point to actual data (a URL) or to a non-existent / out-of-bounds location (an IRI without an actual web resource), and throw an exception upon trying to access the resource (you'd need to determine how to represent an exception, though). You could also keep the POST / PUT / GET / DELETE semantics of HTTP to allow modifying resources when writable. This will be needed when a context is used to store or manipulate program state (which will be the case all the time). The nice thing about URLs is that, as your Tahoe-LAFS example shows, they can be made to point to anything, so Tau would be able to work with stuff stored anywhere: the SAFE network, the upcoming Ethereum Swarm, HTTP, etc.
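Just to make the content-addressing idea from the quoted point concrete, a minimal illustrative sketch (not any project's actual code) of the two properties mentioned: the hash both names and verifies the content, and Kademlia's XOR metric picks the nodes responsible for storing it.

Code:
import hashlib

def content_id(data: bytes) -> int:
    # the address *is* the hash of the content, so whatever you fetch can be verified
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia's metric: nodes whose IDs are XOR-closest to the content id store it
    return a ^ b

blob    = b"@prefix p: ..."
cid     = content_id(blob)
nodes   = [0x1234, 0xBEEF, 0xCAFE]
closest = min(nodes, key=lambda n: xor_distance(n, cid))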

3. Human-readable URIs can be thought of and implemented later, in the root's rules or locally in a context, with, say, a Namecoin-like mechanism. Nevertheless, you can always refer to "regular" ("unsafe") http contexts, and whether clients will actually recognize them or not is up to their rules.
I think we need to stop using that "it will be up to user-defined rules" narrative because although it is true, it doesn't change the fact that the initial Tau Chain will have to be a fully specified and functional system long before the envisioned power-users come and change everything. At the beginning, it will be you, HMC (is he still involved btw? Haven't seen a single post of him since you created this thread and you seem to be running the git repo as a one-man-show these days, would be nice if he could chime in and say "Hi"!) and a handful of early users who will be busy enough trying to understand the thing and rather unlikely to have much to say about the way the original network should be bootstrapped.

I understand that future plasticity of the protocol is important and probably one of the key strengths of Tau, but it's difficult enough to understand what the actual plan is to start with, so I (and I guess, given the level of participation so far, most people reading this thread) would be content working on the approximation / fallacy that the initial network will be a centrally designed thing, so that we can spend more time understanding what will be there at launch, and less time trying to get around a non-formalized specification that feels a bit like quicksand at the moment.

4. More to your point, tau's root does not come with a predefined git-like mechanism and versioning. Maybe it's better to look at it as something that you can build git-like repos on top of. The logic of inserting/updating/accessing data over dht/chain is up to the rules: why would some node store my dht data "for nothing"? Tau comes with technological facilities - but not with an agreement on how to use them. So your design of decentralized source control can be done in a context. In this sense, it's like a decentralized appstore, with one app being a decentralized github.
Ok, I'll probably work on that app when the core is ready.

Regarding the question of how to incentivize DHT participants, I think you'd probably do yourself a great service by starting with an existing DHT network and adding a native one later, once other higher-priority items have been dealt with. SAFENet is nearing completion and should be live by the time Tau Chain is ready to launch. There is also Ethereum Swarm in the pipeline. That could be a good temporary solution, and it will allow Tau to ride on the coat-tails of SAFE and Ethereum. Once the Zennet side of things is live, you can create a DHT over Zennet. I think Tau Chain shouldn't try to reinvent the wheel and should instead use other platforms as much as possible to reduce the time to market. As you say, nothing is set in stone, so such decisions will be easily reversible later down the line. If you create built-ins to allow DHT-as-a-service, Blockchain-as-a-service, Storage-as-a-service, Computing-as-a-service etc., all it will take to add new services is to implement clients in Tau or bridge native clients in using the FFI, and have them register with the service manager underlying the "as-a-service" built-ins. Tau Chain can be the glue that binds existing resources on the Internet: a badly needed thing! That's also more in the spirit of the original Zennet.

5. A context is typically self-describing. That means, for example, it has to specify the logic of updating its own code. So if I subscribe to your decentralized git context, it will probably include rules for serving as a dht node in your git network, and that context's rules may also specify the logic of when and how to retrieve and execute a new version of the context's code. It will probably also specify who has permission to change the code, for example.
Maybe the main point I'm trying to make here is that tau speaks about code being executed, while git doesn't care about execution; it only stores code. So once your code contains rules that are being followed by nodes, it specifies how the code itself changes, and your git context does not need some mechanism from above (like a built-in in tau or in the root chain) for managing its own code or the code repositories it keeps. Once I subscribe to your context, I (possibly) take the first version and replay your code from that point.
Following the discussion in point 2, I think the only thing that would be specific about how to write or read in a context is the transport that is used to access this context. If a URL is http://, the thing (whatever it is) that handles access to that resource will have to offer a server socket, handle HTTP over that socket, and accept HTTP verbs like GET, PUT, POST, DELETE. This is more than enough to deal with the context. If the transport is something else, like for instance SQL or the file system, the commands will be different. All that should be wrapped by one more level of abstraction, so that the high-level logic only needs to deal with high-level objects like standard object containers (list, vector, map etc.), standard binary objects (blob) and standard function objects / lambdas.

6. Aside from the contexts mentioned above, there is another structure, which enforces a tree structure over practically all of tauchain's data, and it touches the very root of tau's logic, namely Martin-Löf type theory. There are many theories with dependent types (for example Coquand's Calculus of Constructions, implemented in Coq, vs. Idris, which implements Martin-Löf). The crucial difference is ML's universe hierarchy. You can think of it as types being subtypes of other types, while everything is eventually a type.
That's interesting. I'm not familiar with ML, but are you saying that not only logical terms are typed? Predicates are typed too? As functions that always return the boolean value 1, maybe? And propositions... as 3-tuples of typed objects? And clauses / rules... as disjunctions of propositions, would they be lists of tuples maybe? And graphs, an ordered conjunction of clauses... Another list maybe?

So we get a tree of types.
Is it a binary tree, or a more general form of tree?
If it's a binary tree, doesn't that get in the way of code reuse?

For example, if you have two disjunctions of clauses, say
1) A v B v C
2) E v B v C

Constructing the tree by the left side gives
1) (A v B) v C
2) (E v B) v C

On the other hand, had you associated clauses from the right, you could have reused (B v C) without any change to the semantics or computational complexity of the derivation.
1) A v (B v C)
2) E v (B v C)

How do you deal with that?

2. Storing and processing the dht and chain costs. We do not expect nodes to voluntarily serve arbitrary network requests (except in the first stages). Some logic for accepting hashes, accepting dht parts, or mining rewards must be considered. Which goes back to decentralized democracy - it is vital to have those settings, and even more vital to set a plan for how those settings may change with time. Same for access control and spam. So we do not come up with any predefined currency, and incentives should be determined by the first users, who have the freedom to set any method they find good. Nevertheless, apps like Agoras may incentivize miners etc. on their own, so all applications over tau can independently attract miners for their own sake while all tau users might enjoy it (as with difficulty mining, there is no loss in mining for the root chain, only gain for the incentivizing app and for the whole network).
I may repeat myself, but I think we should focus on getting a "good enough" solution for the start, and rely on the universality of the protocol to postpone the headaches of how to converge to the ideal solution later. There are plenty of projects that deal with the issue of incentivizing peers to store data reliably over a DHT. We should start by relying on these and then work on implementing a native alternative if the quorum feels that this is a topmost priority. The other advantage is that it will help Tau, at the beginning, be part of the ecosystem of already-popular networks. Tau is so deeply different that there is nothing to fear and nothing to lose by cutting corners at the beginning.

Over tau, we will be able to express our thoughts and goals together, in a clear unambiguous way, do automatic reasoning and simulations over them, and prove consequences. Where will it take us? I can't guess even for the short term.
I really don't think most early users will have the level of theoretical sophistication necessary to express themselves in Tau logic. This may happen at some point, once people have caught up or the project has gained enough momentum among academic circles, and after enough effort has been poured into creating ontologies to express more elaborate thoughts. But you shouldn't count on that at the beginning. Again, at the beginning, it will be you, HMC, and a handful of early users who will bootstrap the network, and communication will be done much more effectively over a good old forum or slate. So there is nothing preventing us from already discussing what the day-0 Tau Chain will look like and creating a spec.
hero member
Activity: 897
Merit: 1000
http://idni.org
newbie
Activity: 50
Merit: 0
I have a few more questions regarding the role of the root chain.

Here is what I understood so far: please correct me if I got anything wrong
- the base Tau client is an empty shell that contains only a reasoner / theorem prover with the capability to make lambda-auth traces of the derivation tree, plus a bunch of builtins including DHT, blockchain and FFI APIs. All the logic that will put the client in motion is written as graphs of quads in a bootstrap file.
- depending on the bootstrap file used, the Tau client can bootstrap the "Tau Chain" network, or something else completely, or both at the same time.
- the TauChain network uses two types of distributed network paradigms: a DHT (like bittorrent) and a blockchain (like Bitcoin).
- The TauChain blockchain will be used for two things: as a time reference (one block = one quantum of time) and as a code repo for higher level logic (the applications).
- the blocks of the TauChain root chain won't contain data, but merkle roots of merkelized datasets
- the data itself will have to be retrieved from the DHT by reconstructing recursively the merkle tree top-down starting from the merkle root
Please do confirm if anything above is incorrect as there is no point trying to explore further if I'm not in the right forest.

What I'm less clear about is how the root chain itself works.

When trying to imagine how I would use a blockchain of merkle tree hashes to make a code repo, the most obvious thing that comes to mind is git, because what git does is very similar to a chained list of merkle trees. It hashes every leaf file of the code repo, represents directories as lists of file and subdirectory hashes, which it hashes too, all the way up to the root directory of the repo, then it puts the hash of the root directory in a commit object, which it also hashes and which corresponds to a given snapshot of the repo. For each modification, some node in the merkle tree changes, so the hashes of that node's ancestors are all recomputed upward to the root hash itself, which is stored in a new commit. Git keeps the continuity between the snapshots by chaining the commit objects: each commit contains the hash of the previous commit on the same branch, as well as the hashes of the last commits of any merged branches. A typical git repo has multiple active branches at the same time.
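In rough pseudo-Python, the git mechanics described above look something like this (simplified; real git also stores file modes, object headers, and so on):

Code:
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()          # git uses SHA-1 object hashes

def hash_tree(entry):
    # a file (blob) is hashed directly; a directory is the hash of its children's (name, hash) listing
    if isinstance(entry, bytes):
        return h(entry)
    listing = "".join(f"{name} {hash_tree(child)}\n" for name, child in sorted(entry.items()))
    return h(listing.encode())

def commit(tree_root: str, parents: list, message: str) -> str:
    # each commit chains to its parent commit(s), giving the repo its history
    return h(f"tree {tree_root}\nparents {' '.join(parents)}\n\n{message}".encode())

repo = {"src": {"main.n3": b"@prefix ..."}, "README": b"hello"}
c1 = commit(hash_tree(repo), [], "initial")
repo["README"] = b"hello, tau"
c2 = commit(hash_tree(repo), [c1], "update readme")  # unchanged subtrees keep the same hash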

Now, if the Git analogy is close enough, here is how I imagine the Tau chain would work in that respect. One TauChain block would contain the hashes of the HEAD commits of every branch that changed since the last block (if there are branches in Tau). The HEAD commit objects would contain references to one or several previous commits from previous blocks, as well as the merkle root of a data structure that holds the content. As in git, content could be binary blobs (so that general content like raw data, natural language text, source code files, pictures, etc. can be stored and referred to from within Tau logic), structured objects (frames in our case) like a directory (as in git, to give a hierarchical structure to general content, and also because Tau will need that type of construct to deal with the local filesystem anyway), or, unlike git, a semantic knowledge base. Is that close enough?

Regardless of whether the above analogy is close enough, how do you plan to merkelize the knowledge base? Unlike file system directories, graphs of quads do not have an inherently top-down hierarchical structure. You could try to create one artificially by using a spanning tree of the graph of quads, but then when the graph changes and you recompute the spanning tree, you are likely going to end up with something completely different, with a different root and completely different branches, so you wouldn't be able to reuse subtrees of the previous merkle tree the way git does with the file system hash tree. One way that would work better is to use "delta" semantics, i.e. commit lists of add-quad / remove-quad commands that need to be applied to the previous state of the graph (a bit like SVN does). But then you would have to fetch and store the whole blockchain to be able to get the complete picture, although you could prune all the dead subtrees just like git does. Please confirm.
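A sketch of that "delta" alternative, just to illustrate the trade-off (hypothetical structure, not a proposal): each commit hashes (parent hash, quads added, quads removed), so authentication is cheap, but the full graph is only recovered by replaying the chain.

Code:
import hashlib, json

def delta_commit(parent_hash: str, added: list, removed: list) -> str:
    # the commit commits to its parent and to the exact set of quad changes
    payload = json.dumps({"parent": parent_hash,
                          "add": sorted(map(str, added)),
                          "del": sorted(map(str, removed))}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def replay(deltas):
    # reconstructing the graph means walking the whole chain of deltas
    graph = set()
    for added, removed in deltas:
        graph |= set(added)
        graph -= set(removed)
    return graph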

Another thing I'm not clear about is how context affects the way the knowledge base is merkelized. Clearly, contexts allow making totally disjoint islands of logic, so you could see different contexts as completely different knowledge bases, except that they may refer to one another. With the git analogy, you could have 1 branch = 1 context, but then how do you deal with the huge proliferation of contexts? You can't keep a hash for each context in each block, as blocks would end up being huge, but you can have millions of branches and update only the ones that changed in a new block, so that if no new commit was posted on a branch in a given block, it is assumed that the HEAD of that branch didn't change.

One more thing that I am not clear about is how you control access to the repo. Do you tie contexts to digital signatures, so that only agents that can sign a commit with one of the authorized public keys for the context are allowed to have the commit transaction added to the blockchain?

How do you prevent spam? Do people pay a transaction fee to commit on the chain (I don't see what else you could do, frankly)? If that's the case, you need a currency as a very basic element of the design. What will that currency be?

You say that people will be free to customize the rules so that TauChain isn't tied to a specific currency, but at the beginning you, HMC and the few of us around who will have managed to wrap their heads around the concept will be the only ones able to "customize" the logic, and all of us will have a vested interest in putting Agoras in the center. So we can say that in all likelihood Tau Chain will start with Agoras as its native currency. Is that correct?

Now since we are talking about a distributed consensus started, among other things, around the agreement to use Agoras as a native currency, and around using PoW or PoS to protect the network, any attempt from renegade users to create an alternate set of validation rules (including for instance a change of native currency) would end up being a fork and would therefore no longer be "Tau Chain"... So wouldn't it make things easier to understand to just say that the Tau client and language are currency-agnostic, but the original Tau Chain will rely on Agoras as its native currency, although there is nothing stopping people from copy/pasting the code of the chain and starting a fork using the same DHT network, the same client, and 99% of the same code? Or am I completely off track?
hero member
Activity: 910
Merit: 1000
Decentralized Jihad
Thanks for the updates and keep them coming. Good luck. Smiley
hero member
Activity: 897
Merit: 1000
http://idni.org
I believe I must be missing something here, so let me play devil's advocate for a second.

To continue the UPS example, say you don't know exactly what package to expect and you want to be sure that UPS hasn't tampered with it. So the UPS man can prove to you that he received the package from another UPS man and so forth up to the sender, all with signatures (this authenticates the IO boundary), and that, once it was in the hands of one UPS man, he did only what he should have been doing and did not, for example, replace the package's contents (and this is the execution proof).
Except, the UPS man is supplying the hash along with the package, so you can't prove he didn't replace the package's content and give you a hash of a fake log. (This is because the UPS man authors the log himself without any outside supervision.)

Auth types may imply side effects rather than direct output. Taking the bank example, if you perform a transaction, you'd like to know that the bank did register it correctly in its record, but it's naturally not a part of the output.
You are getting a hash from the bank and the bank alone. It can make that hash from any execution path it chooses, without actually running it. As long as the path matches up with the IO boundaries that the client can verify, it won't be able to tell the difference. Also, even if the bank did give you the hash from the actual execution path, there is nothing which prevents it from altering its record after the fact.

Is this correct? Is there something that I am not seeing here?

You could always hash whatever you wanted and supply proofs even before lambda-auth, but imagine what the code would look like. The data might be complex and might go through complex processing, with sometimes important side effects or outputs in the middle, or recursive types, or dependent types. So if the bank wants to auth my balance, it has to calculate it first, while the proof depends on the calculation's inputs. To implement such a proof, the bank would have to write code that proves and hashes all the relevant data and processing.
Lambda-auth doesn't let you do things that weren't possible before, but it makes the whole thing feasible for complex development. All you need is to mark types as "auth", and the "authness" propagates automatically, together with the proof, of course. So I can hand you a complex program with an execution-proof hash, and neither of us has to bother with how to follow the path to the types in question (the ones we want to auth) and skip the unauth types. It also goes beyond ease of development: without marking types as auth and unauth, tracking a single interesting value might cost infeasible computational complexity, because nothing focuses that search.
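A toy analogy of that "authness propagates" idea, far simpler than the real lambda-auth construction (illustration only): a value marked as authed carries a running hash of everything done to it, so the programmer never hashes anything by hand.

Code:
import hashlib

class Auth:
    """Toy analogy: an 'authed' value carries a hash covering its whole history."""
    def __init__(self, value, trace=b""):
        self.value = value
        self.proof = hashlib.sha256(trace + repr(value).encode()).hexdigest()

    def apply(self, fn):
        # the 'authness' propagates: the result is authed against this proof
        return Auth(fn(self.value), trace=self.proof.encode())

balance       = Auth(100)
after_deposit = balance.apply(lambda b: b + 25)   # the proof now covers both steps
print(after_deposit.value, after_deposit.proof)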
I hope I touched on some "missing piece"; otherwise please say so.
newbie
Activity: 32
Merit: 0
I believe I must be missing something here, so let me play devil's advocate for a second.

To continue the UPS example, say you don't know exactly what package to expect and you want to be sure that UPS hasn't tampered with it. So the UPS man can prove to you that he received the package from another UPS man and so forth up to the sender, all with signatures (this authenticates the IO boundary), and that, once it was in the hands of one UPS man, he did only what he should have been doing and did not, for example, replace the package's contents (and this is the execution proof).
Except, the UPS man is supplying the hash along with the package, so you can't prove he didn't replace the package's content and give you a hash of a fake log. (This is because the UPS man authors the log himself without any outside supervision.)

Auth types may imply side effects rather than direct output. Taking the bank example, if you perform a transaction, you'd like to know that the bank did register it correctly in its record, but it's naturally not a part of the output.
You are getting a hash from the bank and the bank alone. It can make that hash from any execution path it chooses, without actually running it. As long as the path matches up with the IO boundaries that the client can verify, it won't be able to tell the difference. Also, even if the bank did give you the hash from the actual execution path, there is nothing which prevents it from altering its record after the fact.

Is this correct? Is there something that I am not seeing here?
hero member
Activity: 897
Merit: 1000
http://idni.org
I don't understand why anyone would want to ask that question. It would be like asking the UPS man, "which route did you take to get to my house?" rather than, "What's in my package?"

To continue the UPS example, say you don't know exactly what package to expect and you want to be sure that UPS hasn't tampered with it. So the UPS man can prove to you that he received the package from another UPS man and so forth up to the sender, all with signatures (this authenticates the IO boundary), and that, once it was in the hands of one UPS man, he did only what he should have been doing and did not, for example, replace the package's contents (and this is the execution proof).

A question that would be important would be, are the outputs correct? And since a verifier still has to run through the proof regardless of whether they have the hash of it or not, why do this extra work to verify the hash of the execution?

Auth types may imply side effects rather than direct output. Taking the bank example, if you perform a transaction, you'd like to know that the bank did register it correctly in its record, but it's naturally not a part of the output.
newbie
Activity: 32
Merit: 0
Let's take another example. The standard Bitcoin client comes with some logic for ordering transactions by transaction fees when mining. Is there a way to know that a miner indeed ran a standard Bitcoin client? If they provide the hash of the execution tree, we can make sure that the result came from the code it claims to come from. Not all nodes even need to verify that proof; one refutation is enough. The refuting clients' receipts (authenticating the IO boundary) may or may not be found at any given moment, but that's a risk the miner has to take, and the network may set rules for what happens when a refutation is found.
I don't understand why anyone would want to ask that question. It would be like asking the UPS man, "which route did you take to get to my house?" rather than, "What's in my package?"

A question that would be important would be, are the outputs correct? And since a verifier still has to run through the proof regardless of whether they have the hash of it or not, why do this extra work to verify the hash of the execution?

I'm not saying that I don't see the point in lambda auth. I definitely do, since it allows a client to not need access to an entire database of a server it wants to verify. But this is lambda auth of the data, not the execution, so I'm still curious as to why the route taken is as important as the package delivered.

hero member
Activity: 897
Merit: 1000
http://idni.org
Do you mean you merkelize the SAT solver derivation tree? Only the path that led to the empty clause / answer predicate clause, or the whole tree including failed attempts? If you publish only the merkle root, that would mean that someone who wants to verify the proof has to run the resolution again, rebuild the derivation tree, merkelize it and make sure the root hashes match? If that's the case, the verifier needs to have exactly the same knowledge base, or the derivation tree might end up being different. How do you specify the exact applicable knowledge base used to run the query?

BTW what solver are you using? Do you work on conjunctive or disjunctive normal form?

That's all correct, except that tau is basically an SMT solver rather than SAT, and all of it is written from scratch. I don't rule out an implementation that relies on existing/dedicated SMT solvers, but that's the status for now. They all (inference, SAT, SMT) follow the idea of resolution/backtracking/DPLL; all pretty much the same.
So the n3 code is itself the SMT problem, isomorphically, and we don't convert it to some smtlib or dimacs format, but we simply follow the resolution over quads.
The proof tree is rooted at the query, and the next level contains terms derived from the first level, using the rules in the given code. This tree, which is built implicitly during the inference process, is also hashed on the fly, supplying a compact proof that the whole process took place.
As in my last comments, indeed there is no point in authing and verifying whole programs, but only Auth types. The programmer has full control over what is hashed and what isn't (up to type-system constraints; for example, IO cannot be an Auth type).
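A minimal sketch of that on-the-fly proof-tree hashing (hypothetical structure; the real tree is over the quads derived during resolution):

Code:
import hashlib

def node_hash(term: str, child_hashes: list) -> str:
    # each node commits to its derived term and to the hashes of the nodes it was derived from
    data = term.encode() + b"".join(h.encode() for h in sorted(child_hashes))
    return hashlib.sha256(data).hexdigest()

# the query sits at the root; lower levels hold the terms it was derived from
leaf_a = node_hash(":socrates a :Man", [])
leaf_b = node_hash("{ ?x a :Man } => { ?x a :Mortal }", [])
root   = node_hash(":socrates a :Mortal", [leaf_a, leaf_b])
# publishing `root` is the compact proof; a verifier replays the derivation and compares hashes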
newbie
Activity: 50
Merit: 0
Do you mean you merkelize the SAT solver derivation tree? Only the path that led to the empty clause / answer predicate clause or the whole tree including failed attempts? If you publish only the merkle root, that would mean that someone who wants to verify the proof has to run again the resolution, rebuild the derivation tree, merkelize it and make sure the root hashes match? If that's the case, the one who verifies needs to have exactly the same knowlege base or the derivation tree might end up being different. How do you speficy the exact applicable knowledge base used to run the query?

BTW what solver are you using? Do you work on conjonctive or disjunctive normal form?
hero member
Activity: 897
Merit: 1000
http://idni.org
newbie
Activity: 32
Merit: 0
Proof of Code Execution: http[Suspicious link removed]c

This looks interesting, and I think I've learned enough about logic programming to understand it. Just one question, though... (from the blog post)

Quote
Eventually, the hash of the tree's root is a proof that the whole computation was followed, since one can replay the computation and verify the hash.

You cannot be sure the computation wasn't run within some sort of sandbox environment meant to look like the real thing, correct? In fact, in order to verify the proof, you'd have to create such a sandbox to replay it. So, what is the purpose of using this?


Lambda-auth (links below) suggests a framework such that every type can be auth or unauth. We plan to have the same design and distinction in our type system. The point is that you don't have to have your whole program authed; you can specify any "key points" to be authed. Naturally, one usually doesn't care about the whole execution of the program, but only about certain parts of it. An example would be random number generation. Assume one doesn't really care about the functionality of some website because it's all pretty much transparent, except maybe random number generation. So that particular procedure may be authed. Given that, we can collaboratively trust a single point to draw random values that may affect all of us.
Another example would be executing a statement query from some online bank. The bank can auth that query and supply a proof that your reported balance is correct.
Blockchain builtins provide auth types for time ordering, and this is of course a very powerful combination.

http://amiller.github.io/lambda-auth/
http://www.cs.umd.edu/~amiller/gpads/gpads-full.pdf


Ok, I think I get it now. For lack of better terms, let's call the party who wants an operation performed by an untrusted other party the "client", and the other party the "server".

The lambda-auth stuff basically allows the client to hold a small hash corresponding to a chunk of data (probably a huge set) that the server holds, and then the server is able to prove that the operation occurred by looking at the updated hash. In the online bank case, it could be the bank's ledger, including withdrawal and deposit transaction signatures. For the casino, I suppose this could be a record of the executions of their "games" tied to actual payouts in an online currency in Tau.

So, if I got this right, I imagine that a server could prove to any party that it has upheld its obligations as follows:

1. The server publishes the latest signed lambda-auth hash of its data (which would be its entire ledger, etc.) in a timestamped format somewhat like a blockchain.
2. For every incoming client request, both the client and the server sign the client's request which both the client and the server keep as a receipt.
3. Then, for discrepancies, there can be either:
 a. An additional transaction against the client account that the client claims they did not make. If so, and if the server can supply the receipt signed by the client and point to the relevant change in its published blockchain, then it can prove that the transaction was in fact authorized by the client. Otherwise, the absence of such a receipt would show the server cheated.
 b. A transaction the client claims was supposed to be run, but was not. If the client can show the transaction receipt (which is also signed by the server), and then point to the relevant place in the lambda-auth hash in the server's blockchain and show the transaction doesn't exist, then the client can prove the server cheated. Otherwise, the server's lambda-auth blockchain proves that it did run the transaction.
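A toy sketch of the dispute check in step 3 (the signatures and the ledger commitment here are simplistic stand-ins for the real lambda-auth machinery):

Code:
import hashlib, json

def ledger_hash(ledger: list) -> str:
    # stand-in for the published lambda-auth commitment to the server's data
    return hashlib.sha256(json.dumps(ledger, sort_keys=True).encode()).hexdigest()

def resolve_dispute(published_hash: str, ledger: list, receipt: dict) -> str:
    # the server is honest only if its published hash matches the ledger AND
    # the client-signed receipt is actually reflected in that ledger
    if ledger_hash(ledger) != published_hash:
        return "server cheated: ledger does not match its published commitment"
    if receipt in ledger:
        return "transaction was authorized and executed"
    return "server cheated: signed receipt missing from the committed ledger"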

If that is correct, then this can be simplified by just using the Tau blockchain in place of an individual, per-party blockchain for step 1. And then compliance could be automatically enforced by making a rule within the Tau blockchain that any party proved to have cheated has their funds frozen, or faces some other punishment, until the problem is fixed.

Is this about right? Is there any other functionality that I'm missing here?

BTW, I'm becoming more and more impressed with Tau the more I learn about it.

hero member
Activity: 897
Merit: 1000
http://idni.org
Proof of Code Execution: http[Suspicious link removed]c

This looks interesting, and I think I've learned enough about logic programming to understand it. Just one question, though... (from the blog post)

Quote
Eventually, the hash of the tree's root is a proof that the whole computation was followed, since one can replay the computation and verify the hash.

You cannot be sure the computation wasn't run within some sort of sandbox environment meant to look like the real thing, correct? In fact, in order to verify the proof, you'd have to create such a sandbox to replay it. So, what is the purpose of using this?


Lambda-auth (links below) suggests a framework such that every type can be auth or unauth. We plan to have the same design and distinction in our type system. The point is that you don't have to have your whole program authed; you can specify any "key points" to be authed. Naturally, one usually doesn't care about the whole execution of the program, but only about certain parts of it. An example would be random number generation. Assume one doesn't really care about the functionality of some website because it's all pretty much transparent, except maybe random number generation. So that particular procedure may be authed. Given that, we can collaboratively trust a single point to draw random values that may affect all of us.
Another example would be executing a statement query from some online bank. The bank can auth that query and supply a proof that your reported balance is correct.
Blockchain builtins provide auth types for time ordering, and this is of course a very powerful combination.

http://amiller.github.io/lambda-auth/
http://www.cs.umd.edu/~amiller/gpads/gpads-full.pdf
newbie
Activity: 32
Merit: 0
Proof of Code Execution: http[Suspicious link removed]c

This looks interesting, and I think I've learned enough about logic programming to understand it. Just one question, though... (from the blog post)

Quote
Eventually, the hash of the tree's root is a proof that the whole computation was followed, since one can replay the computation and verify the hash.

You cannot be sure the computation wasn't run within some sort of sandbox environment meant to look like the real thing, correct? In fact, in order to verify the proof, you'd have to create such a sandbox to replay it. So, what is the purpose of using this?