Topic: Tau-Chain and Agoras Official Thread: Generalized P2P Network - page 157. (Read 309742 times)

hero member
Activity: 897
Merit: 1000
http://idni.org
Proof of Code Execution: http://www.idni.org/blog/proof-of-exec

klosure: thanks, I'll take a look.
newbie
Activity: 50
Merit: 0
Found that on the Ethereum reddit: Compositional financial contracts
A financial contracts language written as a DSL over Coq that can express any finite-time financial contract in existence, reason about their logic, and compose them with logical connectives to express entire financial portfolios in a structured way that can itself be reasoned about. This is very interesting because it allows, for instance, asserting the delta neutrality of a portfolio in a provable way, or automatically proposing an optimal hedging scheme given specific expected market conditions (which could themselves be statistically extrapolated from history) and the desired outcomes under those conditions.
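To make the idea of composing contracts with logical connectives concrete, here is a minimal Haskell sketch, loosely in the spirit of the Peyton Jones/Eber "composing contracts" work; the constructor names and the zcb helper are my own illustration, not the actual Coq DSL from the talk:

Code:
-- A toy compositional contract language; all names are illustrative.
data Currency = USD | EUR deriving (Show, Eq)

data Contract
  = Zero                    -- no rights, no obligations
  | One Currency            -- receive one unit of a currency now
  | Give Contract           -- swap rights and obligations
  | And Contract Contract   -- hold both contracts
  | Or Contract Contract    -- the holder chooses one of the two
  | Scale Double Contract   -- scale all payments by a constant
  | At Int Contract         -- acquire the underlying contract at time t
  deriving Show

-- A zero-coupon bond: receive `amount` of currency `c` at time `t`.
zcb :: Int -> Double -> Currency -> Contract
zcb t amount c = At t (Scale amount (One c))

-- A portfolio is just the And-composition of its contracts.
portfolio :: [Contract] -> Contract
portfolio = foldr And Zero

main :: IO ()
main = print (portfolio [zcb 30 100 USD, Give (zcb 60 90 EUR)])

Once a portfolio is a term in such a language, a property like delta neutrality becomes a statement one can try to prove about that term, rather than an estimate computed from snapshots.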

That kind of formalism would have made it possible to anticipate and perhaps prevent the financial industry's multiple meltdowns. After the financial crisis, regulators reviewed financial institutions' portfolios and painstakingly benchmarked them against different scenarios. This took years and enormous resources, and yet there is no way to prove that they did the job right and didn't overlook something. The results were obsolete the very moment they were produced, because portfolios are constantly in motion: knowing that an institution was sound at the time its portfolio was snapshotted does nothing to prove it is still sound a couple of days later, when all it takes for a spectacular blow-up is a large naked futures position.

With a language like the one proposed above, every financial institution could model its entire portfolio, including its proprietary book as well as the portfolios it manages for clients, and give the SEC access to it. With that, it would be a breeze for the SEC to test financial institutions' soundness and proper capitalization and to verify they are not playing against their customers. To run an industry-wide benchmark, all the SEC would need to do is compose the portfolios of all the financial institutions and test that synthetic portfolio against a given set of stress constraints to see where the industry as a whole would be heading in extreme market conditions. Actually, someone asks in the Q&A at the end how this relates to the SEC's recent effort to model contracts as a DSL over Python, and the answer pretty much nails it: Python isn't decidable, so the SEC is barking up the wrong tree.
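As a rough illustration of that "compose everything and stress-test the synthetic portfolio" idea, here is a deliberately simplistic Haskell sketch; the cash-flow representation, the Scenario type and passesStress are my own stand-ins, not anything proposed in the talk:

Code:
-- Illustration only: portfolios reduced to dated cash flows, and a
-- "scenario" modelled as a shock applied to each cash flow.  A real
-- regulator would evaluate full contract semantics, not a flat list.
type CashFlow  = (Int, Double)            -- (time in days, amount)
type Portfolio = [CashFlow]
type Scenario  = Int -> Double -> Double  -- shock as a function of time and amount

-- Composing the industry-wide book is just concatenating portfolios.
industry :: [Portfolio] -> Portfolio
industry = concat

-- Value of a portfolio under a scenario (no discounting, for brevity).
valueUnder :: Scenario -> Portfolio -> Double
valueUnder shock p = sum [ shock t a | (t, a) <- p ]

-- A crude stress test: does the composed book stay above a capital floor?
passesStress :: Scenario -> Double -> [Portfolio] -> Bool
passesStress shock minCapital books = valueUnder shock (industry books) >= minCapital

main :: IO ()
main = print (passesStress (\_ a -> 0.7 * a) 0 [[(30, 100), (60, -50)], [(90, 20)]])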

This really sounds like the archetypal Tau-Chain application, and a very simple one at that, yet one with far-reaching consequences. Someone with a shared background and a good sense of pitch (HMC maybe?) should contact them and explain why they should have a look at Tau.
hero member
Activity: 897
Merit: 1000
http://idni.org
I was wondering if anyone could recommend some reading material, or at least a general direction of what to study to help grasp the concepts of Tau better for those with primarily an imperative programming background (C++, Java, Perl, etc.).

I've worked through some of "The Reasoned Schemer", and right now I'm learning Haskell, and planning on moving onto Idris once I get the basics down.

Does this seem like a good path? Other material?

I'd say you're following a very good path. Tau is very different from Haskell, but one had better know Haskell in order to understand much of the type theory literature. Idris is the ultimate entrance to the world of dependent types, and it borrowed its syntax from Haskell. Tau aims to be like Idris, just with RDF syntax plus DHT and chain primitives, etc.
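For a taste of where that path leads: the classic stepping stone from Haskell's type-level features toward Idris-style dependent types is the length-indexed vector. A minimal sketch (my own illustration, not Tau or Idris code):

Code:
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}
-- A length-indexed vector: types track the length, so some errors
-- become impossible to even write down.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Head of a non-empty vector: the empty case is ruled out by the type,
-- so no runtime check and no Maybe is needed.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (1 :: Int) VNil))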
newbie
Activity: 32
Merit: 0
I was wondering if anyone could recommend some reading material, or at least a general direction of what to study to help grasp the concepts of Tau better for those with primarily an imperative programming background (C++, Java, Perl, etc.).

I've worked through some of "The Reasoned Schemer", and right now I'm learning Haskell, and planning on moving onto Idris once I get the basics down.

Does this seem like a good path? Other material?



sr. member
Activity: 329
Merit: 250
Interesting... do you know Bitnation? Maybe the two projects could join forces...
hero member
Activity: 897
Merit: 1000
http://idni.org
Thanks for the replies Ohad.
One earlier question left unanswered

Do you have specs on what exactly Agoras is going to be? I mean, other than the story of Bob and Alice. What kind of assets will Agoras allow to be traded? Will it feature a full-fledged distributed exchange like Ripple? Will it support other market paradigms like auctions or prediction markets?
One more question: there are many references to "chains" in your answers, blog posts and papers.
- There was the blockchain planned for Zennet which was supposed to be used for handling the transactions in Zencoins, was supposed to use PoW, and wasn't supposed to have any inflation, which meant that miners would be rewarded with transaction fees alone.
- In the context of Agoras, since Agoras is a superset of Zennet, I assume the originally planned blockchain comes along, except that instead of being implemented as a native client it will be emulated using Tau logic and network built-ins, and the tokens are now called Agoras coins, everything else being the same (is that right?). But is it still PoW? If it is, do you plan to have Tau open a server socket and let good old mining programs connect to it and do getWork calls?

The workflow of the client can be summarized as follows:
We have a peer-to-peer network in which every peer holds an ontology (its rules). Peers speak among themselves in that same language too - they query each other's reasoners, though indirectly. A wrapper around the reasoner receives an event and queries the reasoner "what should I do now?", together with the event's information.
Therefore, one can quite naturally add rules for what happens when some query arrives, even locally. The rules don't necessarily have to be explicit, as their consequences are calculated by the reasoner.
The query itself can be a computation, some code to run, even native code: I forgot to mention that we do plan to have FFI builtins. Of course, one has to explicitly allow using them, and the type system should handle them correctly, e.g. such types cannot be Auth types (execution verification).
What to do when a query asks to run code, how much to charge, under what conditions - it all amounts to local rules in the client. Therefore, Zennet's design has completely lost its meaning, except for the pricing formula (zennet.sc/zennetpricing.pdf), since it all reduces to a little piece of Tau code. Renting computational power won't need a distinct chain or a sidechain; only the coin it uses has to have its timestamped ledger. Other data transfer can be done using the DHT layer, or by any other p2p rules.
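A rough sketch of that wrapper-around-the-reasoner flow, just to make the shape concrete; the Event, Action and askReasoner names are hypothetical, and the hard-coded cases stand in for what would really be logical inference over the peer's local rules:

Code:
-- Illustrative sketch: an event arrives, the wrapper asks the local
-- reasoner "what should I do now?" together with the event's data,
-- then executes whatever actions the rules entail.
data Event  = QueryArrived String | BlockSeen Int deriving Show
data Action = Reply String | Charge Double | Ignore deriving Show

-- Stand-in for the reasoner: in Tau this would be inference over the
-- peer's ontology (its local rules), not a hard-coded case analysis.
askReasoner :: Event -> [Action]
askReasoner (QueryArrived q) = [Charge 0.01, Reply ("answer to " ++ q)]
askReasoner (BlockSeen _)    = [Ignore]

-- The wrapper: receive an event, consult the reasoner, act on the result.
handle :: Event -> IO ()
handle ev = mapM_ perform (askReasoner ev)
  where
    perform (Reply s)  = putStrLn ("reply: " ++ s)
    perform (Charge x) = putStrLn ("charge: " ++ show x)
    perform Ignore     = pure ()

main :: IO ()
main = handle (QueryArrived "run this code")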

The builtins will have multiple levels of abstraction. DHT primitives (like get/set) and blockchain primitives (like getWork) will be builtins in their own right, alongside more low-level ones like sockets.
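One way to picture those levels, assuming a Haskell-like rendering with hypothetical names (sendBytes, dhtGet, getWork and friends are illustrations, not Tau's actual builtin set):

Code:
-- Hypothetical layering of builtins, purely for illustration.
-- Low-level builtins: raw sockets.
class Monad m => SocketBuiltins m where
  sendBytes :: String -> m ()
  recvBytes :: m String

-- Higher-level builtins exposed in their own right, alongside the
-- low-level ones: DHT and chain primitives.
class Monad m => DhtBuiltins m where
  dhtGet :: String -> m (Maybe String)
  dhtSet :: String -> String -> m ()

class Monad m => ChainBuiltins m where
  getWork     :: m String          -- e.g. a header template to work on
  submitBlock :: String -> m Bool

-- Stub instance so the sketch runs; a real client would bind these
-- operations to native code.
instance DhtBuiltins IO where
  dhtGet k   = putStrLn ("dht get " ++ k) >> pure Nothing
  dhtSet k v = putStrLn ("dht set " ++ k ++ " = " ++ v)

main :: IO ()
main = dhtSet "key" "value"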

Back to what assets Agoras will allow trading: I tend to adhere to the principle of giving users maximum flexibility with minimum intervention. So the goal is to implement a stable coin and a market where participants may place their bids, whether for computational power or coding or anything else, with a user interface. Ready-made rulesets for common operations of this kind will also be supplied. In addition, there are more applications that need a currency besides markets. The one I'm planning to focus on under Agoras is a search engine. Remember that with a large enough network, Tau can crawl the web overnight. And after that night you don't end up with the one-line interface that Google gives you - the whole database is open. Of course, computational costs have to be handled carefully, so we plan to make the search engine self-incentivizing. In general, we'll do our best to make Agoras the best choice as a platform for any application that involves currency.

- In the whitepaper about Tau, you briefly discuss using the block index of a PoW blockchain to serve as the arrow of time. This is a powerful idea, but it raises the question of how fundamental the blockchain is to the design. Would you use a blockchain implemented independently of Tau as the blueprint for time, and allow Tau logic to query time using a hardcoded built-in?

Yes, but the ontologies will have a rich API to the DHT & chain parameters and can have wide control over their flow. A simple example is the conditions for a block to be accepted. Chain builtins will also supply "before" and "after" primitives that can be Auth types yet external, thanks to the chain's timestamping.

Or would you let Tau programs query other Tau programs that implement a blockchain node and use whatever they report as the latest index as a time reference? The former would be a more global, reliable and trustworthy reference, but would also add complexity to the base Tau framework and raise the question of how to incentivize the miners of the fundamental chain.

Not exactly; as above, ontologies will control the chain only to some extent, while the general flow of the DHT & chain is largely hard-coded.

- In your latest blog post, as well as in many of your answers, you seem to be saying that some of the Tau logic will be stored and exchanged over a blockchain, which will allow nodes to share and publish knowledge. So would that chain be a native chain (implemented not in Tau logic but in C++, for instance), or are we talking about a chain that would be created in the Tau bootstrapping file? Is this chain related to Agoras or not? Does this chain have economic incentives? If it does, what are they? If not, how do you ensure that the chain remains consistent?

Information is not stored on the chain - it is stored on the DHT layer, and when a timestamp is needed, putting just a Merkle root into the chain is enough. The root chain has to contain the protocol's definitions (the client's code) and Merkle roots for timestamping.
In any case, we always speak about one chain only; additional chains arise only by being pegged to the root chain, as sidechains are.
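A minimal sketch of "only a Merkle root goes into the chain": the data itself stays on the DHT, and the chain records just a root hash that timestamps it. Here toyHash stands in for a real cryptographic hash, and all names are illustrative rather than Tau's actual primitives:

Code:
-- Illustration only: 'toyHash' stands in for a real cryptographic hash.
toyHash :: String -> String
toyHash = show . foldl (\acc c -> acc * 31 + fromIntegral (fromEnum c)) (7 :: Integer)

-- Merkle root of a list of data items: hash the leaves, then hash
-- pairs of hashes upward until a single root remains.
merkleRoot :: [String] -> String
merkleRoot items = go (map toyHash items)
  where
    go []  = toyHash ""
    go [h] = h
    go hs  = go (pairUp hs)
    pairUp (a:b:rest) = toyHash (a ++ b) : pairUp rest
    pairUp rest       = rest

-- The chain entry carries only the root and a block height; the data
-- itself lives on the DHT layer.
data ChainAnchor = ChainAnchor { blockHeight :: Int, root :: String } deriving Show

main :: IO ()
main = print (ChainAnchor 1000 (merkleRoot ["fact A", "fact B", "fact C"]))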

- Last but not least, you draw a parallel in your last blog article between chains and contexts. For example, you use "main chain" to refer to some sort of default context. Does the term "chain" in that article still refer to the "chain" in blockchain, or is a "chain" a linked list of context elements multiplexed into the physical blocks of a real blockchain?

When I say 'chain' in our scope, I always mean 'blockchain'.

I could go on, but the main point here is that it's not at all clear (not to me at least) at which level(s) you are planning to introduce blockchains, what they will be used for at each specific level, what type of incentive scheme will be introduced to keep these structures consistent across the network, and how all of that relates to Tau and/or Agoras.

An incentive scheme is indeed extremely important (if not vital) to the future of the network. Still, the first users have to define it, or better, define how it can be changed. I do have thoughts and opinions about that, and I will vote and propose accordingly on the system.

Thanks for the good questions; happy to take more.
newbie
Activity: 50
Merit: 0
Thanks for the replies Ohad.
One earlier question left unanswered

Do you have specs on what exactly Agoras is going to be? I mean, other than the story of Bob and Alice. What kind of assets will Agoras allow to be traded? Will it feature a full-fledged distributed exchange like Ripple? Will it support other market paradigms like auctions or prediction markets?

One more question: there are many references to "chains" in your answers, blog posts and papers.
- There was the blockchain planned for Zennet which was supposed to be used for handling the transactions in Zencoins, was supposed to use PoW, and wasn't supposed to have any inflation, which meant that miners would be rewarded with transaction fees alone.
- In the context of Agoras, since Agoras is a superset of Zennet, I assume the originally planned blockchain comes along, except that instead of being implemented as a native client it will be emulated using Tau logic and network built-ins, and the tokens are now called Agoras coins, everything else being the same (is that right?). But is it still PoW? If it is, do you plan to have Tau open a server socket and let good old mining programs connect to it and do getWork calls?
- In the whitepaper about Tau, you briefly discuss using the block index of a PoW blockchain to serve as the arrow of time. This is a powerful idea, but it raises the question of how fundamental the blockchain is to the design. Would you use a blockchain implemented independently of Tau as the blueprint for time, and allow Tau logic to query time using a hardcoded built-in? Or would you let Tau programs query other Tau programs that implement a blockchain node and use whatever they report as the latest index as a time reference? The former would be a more global, reliable and trustworthy reference, but would also add complexity to the base Tau framework and raise the question of how to incentivize the miners of the fundamental chain.
- In your latest blog post, as well as in many of your answers, you seem to be saying that some of the Tau logic will be stored and exchanged over a blockchain, which will allow nodes to share and publish knowledge. So would that chain be a native chain (implemented not in Tau logic but in C++, for instance), or are we talking about a chain that would be created in the Tau bootstrapping file? Is this chain related to Agoras or not? Does this chain have economic incentives? If it does, what are they? If not, how do you ensure that the chain remains consistent?
- Last but not least, you draw a parallel in your last blog article between chains and contexts. For example, you use "main chain" to refer to some sort of default context. Does the term "chain" in that article still refer to the "chain" in blockchain, or is a "chain" a linked list of context elements multiplexed into the physical blocks of a real blockchain?

I could go on, but the main point here is that it's not at all clear (not to me at least) at which level(s) you are planning to introduce blockchains, what they will be used for at each specific level, what type of incentive scheme will be introduced to keep these structures consistent across the network, and how all of that relates to Tau and/or Agoras.

I would appreciate it if you could address the points above one by one in your answer.
hero member
Activity: 897
Merit: 1000
http://idni.org
A metaphor for all the posing, pretending, attention seeking, courting and other superficial and emotionally immature behaviors that this platform encourages.

Can you please cross-post the discussion about the Nomic game here or on Reddit so that the more privacy-inclined among us can participate?


Ron revived a very old Facebook group about Tau, and I thank him for that. It doesn't mean it's going to be the main location of the action. The main locations are still BTT, idni.org, GitHub and IRC.
newbie
Activity: 50
Merit: 0
A metaphor for all the posing, pretending, attention seeking, courting and other superficial and emotionally immature behaviors that this platform encourages.

Can you please cross-post the discussion about the Nomic game here or on Reddit so that the more privacy-inclined among us can participate?
hero member
Activity: 897
Merit: 1000
http://idni.org
It is for the users to decide. See the last paragraph of my last comment; I've also just had a discussion about it here: https://www.facebook.com/groups/870781236307340/permalink/994080593977403/
Facebook? Seriously!? Please don't ask us to partake in this masquerade. Facebook is the complete opposite of everything that most people in the crypto, FOSS and hacking scenes stand for, and the first thing I hope Tau will help us get rid of for the sake of the collective mental health of our society.

Indeed, Tau is the opposite and is meant to replace those systems, but... masquerade??
newbie
Activity: 50
Merit: 0
It is for the users to decide. See the last paragraph of my last comment; I've also just had a discussion about it here: https://www.facebook.com/groups/870781236307340/permalink/994080593977403/
Facebook? Seriously!? Please don't ask us to partake in this masquerade. Facebook is the complete opposite of everything that most people in the crypto, FOSS and hacking scenes stand for, and the first thing I hope Tau will help us get rid of for the sake of the collective mental health of our society.
hero member
Activity: 897
Merit: 1000
http://idni.org
What kind of consensus mechanism will be used for the Tau-Chain?

It is for the users to decide. See the last paragraph of my last comment; I've also just had a discussion about it here: https://www.facebook.com/groups/870781236307340/permalink/994080593977403/
hero member
Activity: 763
Merit: 500
What kind of consensus mechanism will be used for the Tau-Chain?
newbie
Activity: 50
Merit: 0
Well, sometimes, as in this case, I feel the need to step back and give more background info. The numbering has nothing to do with the order of the questions.
Giving an integrated view is useful and provides some perspective, but only if the rest is clear enough to start with and allows people to step back and contemplate the whole thing. For this reason I'd like to keep high-level answers separate from answers to specific questions. Although this will introduce a bit of redundancy, can we try to go with this flow:
- answer questions in quote-and-answer style
- provide some perspective after the questions when applicable

Putting some more structure into this thread is going to cost you a bit of time now, but it's a worthwhile investment, as it will allow potential contributors to jump in, not to mention that many investors are probably waiting on the sidelines until they get some kind of understanding of what Tau is really about. Even for those potential investors who won't understand the details, it still helps to sense that there is a wider understanding in the community and a growing consensus that Tau is achievable and stands on a sound theoretical base.

Besides, when the time comes to write some documentation, you'll have content in this thread ready to copy-paste.

1. Builtins aren't really needed except for IO. Everything else can be described by the logic itself.
This really depends on which context you place yourself in. The logic itself is only concerned with describing and inferring relations between symbols in a mechanical, provable and objective way. It's entirely disconnected from the world. To be able to use the logic to process actual knowledge about the world, you need to add semantics to the non-logical symbols via an interpretation. For that purpose, precisely defining the vocabulary and ontologies is of utmost importance for everyone interested in making any practical real-world use of Tau, even if this has no bearing on the way Tau is going to work at the logical level. Sensor and effector built-ins are even more important because they connect the logic to the state of the world and allow it to query or affect that state.

You could complete the core of Tau without ever worrying about the way it's going to interface with the world, but then what you would release is a solipsistic reasoner with no practical use other than making and verifying constructive proofs. We all know that's not the plan, so you will have to specify, develop and document a standard library for Tau that will allow it to interact with the world in a way that is standardized well enough to allow consensus on its semantics and effects. Until this is done you will have 0% adoption because Tau will be essentially useless. You could wait longer to start worrying about that, but then you lose a lot of benefits, such as showing real-world examples of what Tau can do and having contributors help you develop that layer.

2. Builtins should be quite minimal. The rationale is that a client will have to be able to replay the chain from genesis, so the root chain must contain code that the clients support.
I second that. Built-ins are the weak point in the system because they run as native code (as opposed to Tau logic) that can't be formally tested by the logic. The more builtins there are, the more attack surface there is for introducing backdoors and/or finding zero-day exploits. Built-ins should also be entirely orthogonal, meaning there should be one and only one way to do something, and everybody should use that way. Everything else should be done in Tau. Actually, if you have a good JIT system that can run Tau code at a decent performance level, I would even advise against using built-ins for the DHT: basic network and cryptographic built-ins are sufficient to restore the full DHT functionality intrinsically. Whenever a built-in is needed for performance, you could let people decide in the bootstrap file whether they want to use the native built-in or the pure-logic version. You could also test the native built-ins against a Tau-level model: after sufficient unit-test runs against the boundary and median values of the input parameters' range of definition, as well as randomized tests, one could decide (in the "belief" sense) with enough confidence that the native built-in and the pure logic are indeed equivalent. That would even allow reimplementing as native code pieces of logic that have no side effects but a heavy computational profile (cryptographic operations come to mind).
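As a sketch of that "decide with enough confidence" step, here is how one might exercise a native builtin against its pure-logic reference on boundary, median and randomized inputs. Both functions below are ordinary Haskell stand-ins (in Tau the first would be an FFI call and the second a ruleset), and the little LCG just avoids pulling in a testing library:

Code:
-- Sketch of equivalence testing between a "native" builtin and a
-- pure-logic reference model.  Both are plain Haskell stand-ins here.
nativeAdd :: Int -> Int -> Int
nativeAdd = (+)                       -- stand-in for native/FFI code

pureAdd :: Int -> Int -> Int
pureAdd a b = a + b                   -- stand-in for the pure-logic model

-- Boundary and median values of the input domain.
boundaryInputs :: [Int]
boundaryInputs = [minBound, minBound `div` 2, -1, 0, 1, maxBound `div` 2, maxBound]

-- A tiny linear congruential generator for randomized inputs.
lcg :: Int -> [Int]
lcg = tail . iterate (\x -> 6364136223846793005 * x + 1442695040888963407)

agreeOn :: [(Int, Int)] -> Bool
agreeOn = all (\(a, b) -> nativeAdd a b == pureAdd a b)

main :: IO ()
main = do
  let boundary = [ (a, b) | a <- boundaryInputs, b <- boundaryInputs ]
      sampled  = take 1000 (zip (lcg 1) (lcg 2))
  print (agreeOn (boundary ++ sampled))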

Computation on chain: What HMC mentioned refers to computation that is done on the chain only, i.e. a computation shared by everyone trustlessly. An application that runs locally and uses the chain only for certain operations (like a BTC wallet) simply runs on the local machine and does not need to wait for the next block in order to continue local execution.
Recovering Turing completeness at this scope relies on the notion that Turing completeness can be recovered from a total functional programming language (tfpl) by successive calls to it. Therefore, over time, computation on chain also becomes Turing complete.
This was implied in the question. The context of the question, the whole article, and HMC's answer all assume that a blockchain supporting pseudo-Turing-complete contracts has been implemented over Tau, and we are discussing how contracts that in Ethereum would need to be preempted by the miner upon exhausting their resources could be made (or proven) to yield by themselves in Tau and resume on the next block. So the question is how you do that. Actually, the question is more complex than that, so the best approach would be to quote it and start explaining from there.
hero member
Activity: 897
Merit: 1000
http://idni.org
Thanks for the answers and pointers.

From a stylistic perspective, however, I think the cross-sectional style in which you answer doesn't help clarity, as bits of the answers to every question are laid out in a completely different flow. Although this may seem to make more sense to you, given that you have more perspective on how things fit together, people who are less familiar with the subject will have difficulty understanding how your answers address the questions. The numbering you introduce doesn't help, since it doesn't map to anything in the set of questions.

Well, sometimes, as in this case, I feel the need to step back and give more background info. The numbering has nothing to do with the order of the questions.

Another problem with answering cross-sectionally is that it leaves a lot of things unanswered. For instance, here you haven't replied about:
- the Tau specific builtins that you have already developed and/or specified.
- the discussion about HMC's quote and the way computation is meant to be broken down into code segments whose time and space complexity are bounded and known (see the question again for the details), and the way continuation will be handled between these segments.
- the question of how / whether you will have to prove that the proof itself can be verified in polynomial time and how to avoid this requirement leading to an infinite recursion of proofs.

Let's try to stick to the conventional quote-and-answer style going forward. Thanks.

Indeed I missed a few clarifications, and will address them right now:

Builtins: it is important to bear in mind two things:
1. Builtins aren't really needed except for IO. Everything else can be described by the logic itself.
2. Builtins should be quite minimal. The rationale is that a client will have to be able to replay the chain from genesis, so the root chain must contain code that the clients support.
That said, we will ship Tau with some basic builtins (not to mention DHT and chain), like basic math, logic and string operations, as is common in reasoners. The currently supported "small types" (which, again, exist just for convenience, like builtins) are XML Schema types: XSD_INTEGER, XSD_STRING, etc. So Tau will have builtins to support basic operations on these.
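A rough picture of such "small types" and builtin operations over them; the constructor names echo the XSD_INTEGER / XSD_STRING mentioned above, but the shape of the API is my own illustration:

Code:
-- Illustration of XSD-style "small types" with a few builtin operations.
data Small
  = XsdInteger Integer
  | XsdString  String
  | XsdBoolean Bool
  deriving Show

-- A builtin addition defined only on integers; anything else surfaces
-- here as Nothing (the real type system would reject it statically).
builtinAdd :: Small -> Small -> Maybe Small
builtinAdd (XsdInteger a) (XsdInteger b) = Just (XsdInteger (a + b))
builtinAdd _ _                           = Nothing

-- A builtin string concatenation.
builtinConcat :: Small -> Small -> Maybe Small
builtinConcat (XsdString a) (XsdString b) = Just (XsdString (a ++ b))
builtinConcat _ _                         = Nothing

main :: IO ()
main = do
  print (builtinAdd (XsdInteger 2) (XsdInteger 40))
  print (builtinConcat (XsdString "tau") (XsdInteger 1))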

Computation on chain: What HMC mentioned refers to computation that is done on the chain only, i.e. a computation shared by everyone trustlessly. An application that runs locally and uses the chain only for certain operations (like a BTC wallet) simply runs on the local machine and does not need to wait for the next block in order to continue local execution.
Recovering Turing completeness at this scope relies on the notion that Turing completeness can be recovered from a total functional programming language (tfpl) by successive calls to it. Therefore, over time, computation on chain also becomes Turing complete.
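One way to picture "Turing completeness recovered by successive calls": each block runs a bounded budget of steps and either finishes or hands back its state to be resumed in the next block, so every per-block call is total while unbounded computation emerges only across blocks. The names below are illustrative, not Tau's:

Code:
-- Sketch: a computation that runs at most a fixed budget of steps per
-- block and, if not finished, returns its state to be resumed next
-- block.  Each per-block call terminates; unbounded computation
-- emerges only across successive blocks.
data Step s r = Done r | More s deriving Show

-- One bounded "block" of work: count down n within a step budget.
runBlock :: Int -> Int -> Step Int Int
runBlock budget n
  | n <= 0      = Done 0
  | budget <= 0 = More n            -- out of budget: resume later
  | otherwise   = runBlock (budget - 1) (n - 1)

-- Driving the computation across blocks (in this picture, one call
-- per block).
runAcrossBlocks :: Int -> Int -> (Int, Int)   -- (blocks used, result)
runAcrossBlocks budget = go 1
  where
    go blocks n = case runBlock budget n of
      Done r  -> (blocks, r)
      More n' -> go (blocks + 1) n'

main :: IO ()
main = print (runAcrossBlocks 100 250)   -- needs 3 blocks to finish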

Verification time: Neither the reasoner nor the verifier can ever get into an infinite loop (thanks to Euler path detection). The proof can be verified very quickly since it is written as a list of terms derived from each other, each annotated with the rule and the substitution that yielded the derivation. Verification is therefore simply linear.
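To illustrate the "list of derived terms, each annotated with its rule and substitution" idea, here is a toy checker that walks the proof once and confirms each step follows from earlier ones; Term, ProofStep and the modusPonens rule are my simplifications (and the plain-list lookup would need an index to be strictly linear):

Code:
-- Toy proof checking: each step records the term it derives, the
-- (already substituted) rule used, and the earlier terms it uses.
type Term = String

data ProofStep = ProofStep
  { derived  :: Term
  , rule     :: (Term, Term) -> Maybe Term  -- rule instantiated with its substitution
  , premises :: (Term, Term)
  }

-- A toy rule: from "A" and "A->B", derive "B" (strings, purely illustrative).
modusPonens :: (Term, Term) -> Maybe Term
modusPonens (a, impl) = case break (== '>') impl of
  (lhs@(_:_), '>':rhs) | init lhs == a -> Just rhs
  _                                    -> Nothing

-- Single pass over the proof: every premise must already be known
-- (an axiom or an earlier derivation), and the rule must really yield
-- the claimed term.
checkProof :: [Term] -> [ProofStep] -> Bool
checkProof axioms = go axioms
  where
    go _ [] = True
    go known (ProofStep t r (p1, p2) : rest) =
      p1 `elem` known && p2 `elem` known && r (p1, p2) == Just t && go (t : known) rest

main :: IO ()
main = print (checkProof ["A", "A->B"] [ProofStep "B" modusPonens ("A", "A->B")])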
newbie
Activity: 50
Merit: 0
Thanks for the answers and pointers.

From a stylistic perspective, however, I think the cross-sectional style in which you answer doesn't help clarity, as bits of the answers to every question are laid out in a completely different flow. Although this may seem to make more sense to you, given that you have more perspective on how things fit together, people who are less familiar with the subject will have difficulty understanding how your answers address the questions. The numbering you introduce doesn't help, since it doesn't map to anything in the set of questions.

Another problem with answering cross-sectionally is that it leaves a lot of things unanswered. For instance, here you haven't replied about:
- the Tau specific builtins that you have already developed and/or specified.
- the discussion about HMC's quote and the way computation is meant to be broken down into code segments whose time and space complexity are bounded and known (see the question again for the details), and the way continuation will be handled between these segments.
- how / whether you will have to prove that the proof itself can be verified in polynomial time and how to avoid this requirement leading to an infinite recursion of proofs.

Let's try to stick to the conventional quote-and-answer style going forward. Thanks.