
Topic: Tau-Chain and Agoras Official Thread: Generalized P2P Network - page 96. (Read 309654 times)

hero member
Activity: 1008
Merit: 500
Adding more team members is an issue, because:

1.) It's going to be extremely difficult to find programmers at the same level as Ohad.

2.) There is a possibility they could deviate from the original vision.
ImI
legendary
Activity: 1946
Merit: 1019

As much as I agree with adding other members to the team, I disagree on another point: please don't compare this project with Golem. Golem is a joke compared to this, and not even a functional joke, as they plan to have their final version years from now.
sr. member
Activity: 478
Merit: 250
The thing that strikes me the most about this project is that it's a "one man show".
It's not surprising that this project has been running for about 2.5 years now (counting the Zennet era).
In crypto, time is of the essence, especially now when Golem is making such big noises. I strongly recommend
bringing other members onto the team.
legendary
Activity: 1176
Merit: 1000
Huge price crash on Bittrex, a 3 BTC dump; time to add buy support!
newbie
Activity: 34
Merit: 0
Someone dumped their position. Someone got cheap coins.
hero member
Activity: 1008
Merit: 500
I've been using the wallpaper from the OP on my site for quite a while. It can be found via Google image search, but in the OP it's cropped to ugliness. The OP is the one who cropped it.

It's a real photo, btw: a microscopic scan of human neurons.

This is unrelated to what I said. This photo has been adorning crypto OPs for quite a while. The internet is bigger than Google image search; there are more similar images out there. But the OP put up this one.

It is in the vein of open source; this is also an open-source image. I do like the graphic design behind it.
hero member
Activity: 2147
Merit: 518
I've been using the wallpaper from the OP on my site for quite a while. It can be found via Google image search, but in the OP it's cropped to ugliness. The OP is the one who cropped it.

It's a real photo, btw: a microscopic scan of human neurons.

This is unrelated to what I said. This photo has been adorning crypto OPs for quite a while. The internet is bigger than Google image search; there are more similar images out there. But the OP put up this one.
hero member
Activity: 897
Merit: 1000
http://idni.org
I've been using the wallpaper from the OP on my site for quite a while. It can be found via Google image search, but in the OP it's cropped to ugliness. The OP is the one who cropped it.

It's a real photo, btw: a microscopic scan of human neurons.
hero member
Activity: 2147
Merit: 518
I've been using the wallpaper from the OP on my site for quite a while. It can be found via Google image search, but in the OP it's cropped to ugliness. The OP is the one who cropped it.
sr. member
Activity: 300
Merit: 250
I'm so excited about this project. Just wanted to bump it.
hero member
Activity: 897
Merit: 1000
http://idni.org
Omni's decentralized exchange has begun to function: https://blog.omni.foundation/2016/12/01/omniwallet-updates-omnidex-trading-interface-launched/
Of course, our tokens are traded there as well.
(I wrote to them regarding the old tokens displayed there.)
hero member
Activity: 897
Merit: 1000
http://idni.org
Good answers. And I'm not a Dr. Wink
hero member
Activity: 1008
Merit: 500
Hi Dr. Ohad, great project.

1. Is there a schedule or timeline for this project? When will it be finished, more or less?
2. Where should we safely store IDNI Agoras (AGRS) tokens bought on the Bittrex exchange?

Thanks in advance,


1. It is done when it's done; I think a rough estimate would be summer/fall 2017.
2. You can store AGRS safely by transferring them to any Bitcoin address you hold the private key to. Later, you can import them into omniwallet.org once the swap to the coin on top of Tau-Chain is done.

https://www.omniwallet.org/

You can use this wallet to store your Agoras tokens. Smiley
hero member
Activity: 1039
Merit: 510
Hi Dr. Ohad, great project.

1. Is there a schedule or timeline for this project? When will it be finished, more or less?
2. Where should we safely store IDNI Agoras (AGRS) tokens bought on the Bittrex exchange?

Thanks in advance,


1. It is done when it's done; I think a rough estimate for Tau to reach Genesis would be summer/fall 2017.
2. You can store AGRS safely by transferring them to any Bitcoin address you hold the private key to. Later, you can import them into omniwallet.org once the swap to the coin on top of Tau-Chain is done.
newbie
Activity: 26
Merit: 0
Hi Dr. Ohad, great project.

1. Is there a schedule or timeline for this project? When will it be finished, more or less?
2. Where should we safely store IDNI Agoras (AGRS) tokens bought on the Bittrex exchange?

Thanks in advance,
AEA
newbie
Activity: 36
Merit: 0
Sounds amazing, looking forward to further updates Ohad.
hero member
Activity: 897
Merit: 1000
http://idni.org
Thanks. Some more and less important notes:

Catching up with latest developments. I've read some of the chat logs and the heated exchange between Ohad and HMC.

There are good arguments on both sides. HMC is right that MLTT + TFPL is the more conservative choice, since it's already in use in a host of theorem provers, formal verification frameworks, and languages.

I wouldn't necessarily agree with that, since there are also many MSOL-based model checkers (MONA, libvata, etc.), alongside many other model checkers for subsets of MSOL (LTL, CTL, the mu-calculus [which is equi-expressive with MSOL in some cases and less expressive in others], etc.). Cf. https://en.wikipedia.org/wiki/List_of_model_checking_tools
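To make the decidability point concrete, here is a toy illustration of what model checkers do at their core: exhaustively explore a finite state space and decide a property. This is only a sketch of the idea; real tools like MONA work very differently (compiling MSOL formulas to automata), so nothing below reflects any actual tool's internals.

```python
from collections import deque

def check_safety(initial, transitions, bad):
    """Exhaustively explore a finite transition system and decide whether
    any 'bad' state is reachable. The decision procedure terminates
    because the state space is finite -- the same kind of finiteness
    that makes MSOL model checking decidable."""
    seen = set()
    frontier = deque([initial])
    while frontier:
        s = frontier.popleft()
        if s in seen:
            continue
        seen.add(s)
        if bad(s):
            return False  # safety violated: a bad state is reachable
        frontier.extend(transitions(s))
    return True  # every reachable state explored, none bad

# Toy system: a 3-bit counter that wraps around modulo 8.
trans = lambda s: [(s + 1) % 8]
print(check_safety(0, trans, lambda s: s == 7))  # False: 7 is reachable
print(check_safety(0, trans, lambda s: s == 9))  # True: 9 never occurs
```

The key property is that the check always terminates with a yes/no answer, which is precisely what an undecidable logic cannot guarantee.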

He is also right that MSOL's limitations with respect to arithmetic *could* (theoretically, though no example is known) turn out to be problematic for expressing some categories of computation, although it's unclear whether that would really be a problem for practical use of Tau.

We should distinguish between what we can compute and what we can assert. Indeed, the two are the same under dependent types. But in our MSOL setting, while MSOL indeed cannot express parts of arithmetic, it is just the requirement language. The programming language itself is STLC+Y (equivalently, finitary PCF), and it can express more than all multivariate polynomials over finite sets (for infinite base types *without recursion*, cf. https://pdfs.semanticscholar.org/a2ef/cfb7401b3e87ed3e9574183f47e75ae58617.pdf).
HMC's main objection is, as he said, the "lack of general product in STLC+Y". But product is just currying; STLC supports products just fine.
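The "product is just currying" remark can be illustrated concretely. The sketch below (Python lambdas standing in for typed lambda terms) shows the isomorphism between (A × B) → C and A → (B → C), plus the classic Church encoding of pairs; note that typing the Church encoding actually requires products as primitives or sufficient polymorphism, so this is an illustration of the idea, not a claim about bare STLC.

```python
# The hom-set isomorphism behind "product is just currying":
#   (A x B) -> C   is isomorphic to   A -> (B -> C)
def curry(f):      # ((A, B) -> C)  ->  (A -> B -> C)
    return lambda a: lambda b: f(a, b)

def uncurry(g):    # (A -> B -> C)  ->  ((A, B) -> C)
    return lambda a, b: g(a)(b)

add = lambda a, b: a + b
assert curry(add)(2)(3) == 5
assert uncurry(curry(add))(2, 3) == 5   # round trip is the identity

# Pairs are definable from functions alone (Church encoding):
pair = lambda a: lambda b: lambda select: select(a)(b)
fst  = lambda p: p(lambda a: lambda b: a)
snd  = lambda p: p(lambda a: lambda b: b)
assert fst(pair(1)(2)) == 1 and snd(pair(1)(2)) == 2
```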

Ohad is right that the TFPL constraints and the non-axiomatic LEM make MLTT + TFPL difficult to program intuitively, and somewhat amputate expressiveness in a way that's difficult to fathom and (seemingly) undecidable. Both approaches have shortcomings and advantages, and it's an excellent thing that both are being researched, so that the project won't be stuck in a rut should one of the paradigms turn out to be unsuitable.

Now, where I'm really puzzled is that this difference of views has led to a complete breakdown of the project team, in spite of the fact that no party has been able to properly back their claims. There are good arguments on both sides, but the so-called "critical flaws" pointed out by each side are yet to be proven. Indeed, the limitations of MSOL over trees with respect to arithmetic could be problematic, but is it a real problem? Modern computers use bounded arithmetic operations over 32- or 64-bit registers, all of which can be emulated with basic bitwise logic operations, in a way that can be entirely flattened/unrolled and requires no loop, and can be broken down into several rounds to allow execution over multiple blocks/cycles, as was planned in the original Tau design to recover Turing completeness. If unbounded iteration can be recovered, why would arbitrary-precision arithmetic be unrecoverable, when the very thing to be recovered in both cases is exactly the same: unboundedness? Now, as for the claim that type checking as used in MLTT+TFPL is undecidable, making it very awkward to create proofs: it's also rather unsubstantiated. Is it really undecidable? Without proof, the argument is moot. And even if it really were, does it make it so awkward to create proofs in practice? Examples would be helpful. Surely the fact that it's being used in Idris and Agda should hint that it is at least somewhat usable.

LEM is not so bad; the problem with MLTT is undecidability. BTW, even System F has undecidable typechecking, as mentioned e.g. here: http://cs.stackexchange.com/questions/12691/what-makes-type-inference-for-dependent-types-undecidable
How can one have a sound nomic game without decidability? That's the "critical flaw" with MLTT.
Proofs done with MLTT are very helpful, since the language is consistent: once something is proved, we can rely on it to be true. But not every truth can be proved (or derived) in an undecidable system.

It seems to me that both parties have good arguments, and that at the same time both also jumped the gun and decided that they were right and the other wrong, based on incomplete premises propped up by ego. What's wrong with you guys? Whatever happened to "this too is why we need Tau"? How can you promote the use of logic as the central tenet of Tau, and yet fail to abide by it on the first occasion where a difference of views arises?

The most puzzling thing is that the debate has been exclusively centered on the artificially constrained question of who is right and who is wrong (as if it were an axiom that one had to be right and the other wrong, and that only one form of calculus could possibly express Tau; forget the Curry-Howard isomorphism), all the while completely ignoring the fact that it's entirely possible for both views to coexist happily within the project. Tau is a language, based on RDF, and meant to come with a formally defined grammar and set of ontologies. As HMC has stated earlier in this thread, even OWL-DL can be recovered over Tau, which means that, since both are expressed as RDF graphs, OWL-DL can be transformed into valid Tau code by mapping and graph transforms. Regardless of whether the Tau client uses MLTT+TFPL or MSOL over trees under the hood, it will still have to support OWL-DL, and programs written in the latter that are not undecidable should work. I take OWL-DL as an example, but Tau is a superset of OWL-DL, and everything in this superset should work equally well on any suitable logic.

By your latter argument, we could take any Turing-complete language and restrict it over time to decidable fragments. But how is this going to happen? I mean, how are we going to have collaborative development without decidability? Participants raise rules for a vote, but as long as we cannot understand what proposals really mean (namely the implications between them, their consequences, the rules they contradict, etc.), we may never reach the consensus we're looking for. Under an undecidable yet consistent logic, you can give me proofs for your claims about your code, but decidability is required for me to ask my own questions about your code.

The crux of the problem is that nobody knows exactly what the spec of Tau is.
I'm fixing this situation nowadays, writing a new and much better whitepaper.

We know that Tau, as a language, is a superset of other languages it will have to support to be useful at all, like OWL-DL. But we don't quite know where exactly its boundary lies. Is it a fundamental property of Tau to have limited recursion and no axiomatic law of the excluded middle? HMC will tell you it is, not because that was a requirement or something inherently desirable, but because shoehorning Tau into MLTT+TFPL implies that Tau should have these properties. The same goes for the other side: Ohad will tell you that it should be a fundamental property of Tau to have free recursion and the law of the excluded middle, but bounded arithmetic. Obviously, if everyone who comes along with a new calculus gets to decide what Tau is so that it works with that calculus, we are going to go round in circles forever. Tau shouldn't be whatever possible underlying forms of logic imply it should be. Tau should be that, and only that, which is required to express the family of programs that are both decidable and able to complete within bounded space (memory) and time on modern hardware, regardless of whether doing so implies unrolling loops or emulating multiplications. Since "modern hardware" is a moving target, programs should be shipped with the dimensions of the bounding box they are guaranteed to run in, together with other proofs of their properties, and should be able to run on any Tau client using any suitable form of logic. With a properly specified Tau language, you could have some Tau clients using MSOL over trees (assuming it's suitable) and others using MLTT+TFPL (assuming it's suitable), all capable of communicating over the tau-chain and crunching away happily. If that's not the case, we are doing something very wrong, and falling for the classic mistake of retrofitting the requirements to fit the model instead of creating the model based on the requirements.

Indeed, the design I'm laying out now will support many future logics (not only MSOL/MLTT!).

Of course, this is easier said than done. As the debate between Ohad and HMC has illustrated, there are multiple ways infinity can creep in, and different forms of logic enforce decidability by bounding the calculus along different axes: recursion, or arithmetic. For a spec to be truly generic, it should include both a high-level spec for developers (the language), which should be independent of the underlying logic, and a lower-level spec with basic mechanisms to handle multiple forms of calculus, which would include a form of generic continuation allowing the underlying logic to create deferred payloads that, strung together, would allow Turing completeness to be recovered. The underlying logic/implementation would know what to do when compiling the code, and how to package computation so that it runs in bounded time and space intrinsically while allowing unbounded computation extrinsically, with the blockchain used as the one and only carrier of infinity.

@HMC and @ohad: can we bury the hatchet and go back to the drawing board, so that Tau will no longer be bound to a single form of calculus? With such a model, both MLTT+TFPL and MSOL over trees could be implemented in parallel and tried by fire. Tau would also become more future-proof. After all, who knows: maybe a few years down the line, a new breakthrough in computer science will give rise to yet another form of calculus that turns out to be even better. It would be too bad if Tau were stuck in too specific a model and failed to evolve.

As above, I'm all for multilogic, and I even have a plan.
newbie
Activity: 50
Merit: 0
Catching up with latest developments. I've read some of the chat logs and the heated exchange between Ohad and HMC.

There are good arguments on both sides. HMC is right that MLTT + TFPL is the more conservative choice, since it's already in use in a host of theorem provers, formal verification frameworks, and languages. He is also right that MSOL's limitations with respect to arithmetic *could* (theoretically, though no example is known) turn out to be problematic for expressing some categories of computation, although it's unclear whether that would really be a problem for practical use of Tau. Ohad is right that the TFPL constraints and the non-axiomatic LEM make MLTT + TFPL difficult to program intuitively, and somewhat amputate expressiveness in a way that's difficult to fathom and (seemingly) undecidable. Both approaches have shortcomings and advantages, and it's an excellent thing that both are being researched, so that the project won't be stuck in a rut should one of the paradigms turn out to be unsuitable.

Now, where I'm really puzzled is that this difference of views has led to a complete breakdown of the project team, in spite of the fact that no party has been able to properly back their claims. There are good arguments on both sides, but the so-called "critical flaws" pointed out by each side are yet to be proven. Indeed, the limitations of MSOL over trees with respect to arithmetic could be problematic, but is it a real problem? Modern computers use bounded arithmetic operations over 32- or 64-bit registers, all of which can be emulated with basic bitwise logic operations, in a way that can be entirely flattened/unrolled and requires no loop, and can be broken down into several rounds to allow execution over multiple blocks/cycles, as was planned in the original Tau design to recover Turing completeness. If unbounded iteration can be recovered, why would arbitrary-precision arithmetic be unrecoverable, when the very thing to be recovered in both cases is exactly the same: unboundedness? Now, as for the claim that type checking as used in MLTT+TFPL is undecidable, making it very awkward to create proofs: it's also rather unsubstantiated. Is it really undecidable? Without proof, the argument is moot. And even if it really were, does it make it so awkward to create proofs in practice? Examples would be helpful. Surely the fact that it's being used in Idris and Agda should hint that it is at least somewhat usable.
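The claim above that bounded register arithmetic can be flattened into loop-free bitwise logic is easy to demonstrate. Below is a minimal sketch of a 4-bit ripple-carry adder written as straight-line boolean operations: no loops, no recursion, just one full-adder stage per bit, exactly the kind of unrolled circuit the paragraph describes.

```python
def xor(a, b):
    """Exclusive-or from the basic connectives."""
    return (a or b) and not (a and b)

def full_adder(a, b, cin):
    """One-bit full adder built from pure boolean connectives."""
    s = xor(xor(a, b), cin)
    cout = (a and b) or (cin and xor(a, b))
    return s, cout

def add4(a, b):
    """4-bit addition as a fully unrolled, loop-free circuit:
    each line below is one full-adder stage; nothing iterates."""
    s0, c = full_adder(a[0], b[0], False)
    s1, c = full_adder(a[1], b[1], c)
    s2, c = full_adder(a[2], b[2], c)
    s3, c = full_adder(a[3], b[3], c)
    return [s0, s1, s2, s3]          # final carry dropped: wraps mod 16

def to_bits(n):
    return [bool((n >> i) & 1) for i in range(4)]   # little-endian

def from_bits(bs):
    return sum(1 << i for i, b in enumerate(bs) if b)

assert from_bits(add4(to_bits(5), to_bits(9))) == 14
assert from_bits(add4(to_bits(9), to_bits(9))) == 2   # 18 mod 16
```

Scaling this pattern to 32 or 64 bits only makes the straight-line circuit longer; it never introduces a loop, which is the point being argued.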

It seems to me that both parties have good arguments, and that at the same time both also jumped the gun and decided that they were right and the other wrong, based on incomplete premises propped up by ego. What's wrong with you guys? Whatever happened to "this too is why we need Tau"? How can you promote the use of logic as the central tenet of Tau, and yet fail to abide by it on the first occasion where a difference of views arises?

The most puzzling thing is that the debate has been exclusively centered on the artificially constrained question of who is right and who is wrong (as if it were an axiom that one had to be right and the other wrong, and that only one form of calculus could possibly express Tau; never mind the Curry-Howard isomorphism), all the while completely ignoring the fact that it's entirely possible for both views to coexist happily within the project. Tau is a language, based on RDF, and meant to come with a formally defined grammar and set of ontologies. As HMC has stated earlier in this thread, even OWL-DL can be recovered over Tau, which means that, since both are expressed as RDF graphs, OWL-DL can be transformed into valid Tau code by mapping and graph transforms. Regardless of whether the Tau client uses MLTT+TFPL or MSOL over trees under the hood, it will still have to support OWL-DL, and programs written in the latter that are not undecidable should work. I take OWL-DL as an example, but Tau is a superset of OWL-DL, and everything in this superset should work equally well on any suitable logic.
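Since RDF graphs are just sets of (subject, predicate, object) triples, the "mapping and graph transforms" mentioned above can be sketched very directly. Everything in the `tau:` vocabulary below is purely hypothetical, invented for illustration; only the `rdf:`/`rdfs:`/`owl:` terms are real vocabulary.

```python
# An RDF graph as a set of (subject, predicate, object) triples.
owl_graph = {
    ("ex:Person",  "rdf:type",        "owl:Class"),
    ("ex:Student", "rdfs:subClassOf", "ex:Person"),
}

# Hypothetical OWL-DL -> Tau predicate mapping (tau:* is made up).
MAPPING = {
    "rdf:type":        "tau:isa",
    "rdfs:subClassOf": "tau:sub",
}

def transform(graph, mapping):
    """Rewrite predicates triple by triple; unmapped triples pass through
    unchanged. A real translation would also need structural transforms,
    but the triple-level shape is the same."""
    return {(s, mapping.get(p, p), o) for s, p, o in graph}

tau_graph = transform(owl_graph, MAPPING)
assert ("ex:Student", "tau:sub", "ex:Person") in tau_graph
```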

The crux of the problem is that nobody knows exactly what the spec of Tau is. We know that Tau, as a language, is a superset of other languages it will have to support to be useful at all, like OWL-DL. But we don't quite know where exactly its boundary lies. Is it a fundamental property of Tau to have limited recursion and no axiomatic law of the excluded middle? HMC will tell you it is, not because that was a requirement or something inherently desirable, but because shoehorning Tau into MLTT+TFPL implies that Tau should have these properties. The same goes for the other side: Ohad will tell you that it should be a fundamental property of Tau to have free recursion and the law of the excluded middle, but bounded arithmetic. Obviously, if everyone who comes along with a new calculus gets to decide what Tau is so that it works with that calculus, we are going to go round in circles forever. Tau shouldn't be whatever possible underlying forms of logic imply it should be. Tau should be that, and only that, which is required to express the family of programs that are both decidable and able to complete within bounded space (memory) and time on modern hardware, regardless of whether doing so implies unrolling loops or emulating multiplications. Since "modern hardware" is a moving target, programs should be shipped with the dimensions of the bounding box they are guaranteed to run in, together with other proofs of their properties, and should be able to run on any Tau client using any suitable form of logic. With a properly specified Tau language, you could have some Tau clients using MSOL over trees (assuming it's suitable) and others using MLTT+TFPL (assuming it's suitable), all capable of communicating over the tau-chain and crunching away happily. If that's not the case, we are doing something very wrong, and falling for the classic mistake of retrofitting the requirements to fit the model instead of creating the model based on the requirements.
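The "programs shipped with the dimensions of their bounding box" idea above can be sketched as a data structure plus a client-side admission check. Every name and field here is a hypothetical illustration of the proposal, not any actual Tau format; proof verification is deliberately left out of scope.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TauPackage:
    """Hypothetical shape of a shipped program: the code together with
    the declared bounding box it is guaranteed to run within."""
    code: bytes
    max_steps: int     # declared time bound
    max_memory: int    # declared space bound, in bytes
    proofs: tuple      # opaque certificates of the claimed properties

def admit(pkg, client_step_budget, client_memory):
    """A client admits a program only if its declared bounding box fits
    the client's own resources. Checking 'proofs' (that the code really
    respects its declared bounds) would happen here too."""
    return (pkg.max_steps <= client_step_budget
            and pkg.max_memory <= client_memory)

p = TauPackage(code=b"...", max_steps=10_000, max_memory=1 << 20, proofs=())
assert admit(p, client_step_budget=1_000_000, client_memory=1 << 30)
assert not admit(p, client_step_budget=1_000, client_memory=1 << 30)
```

Because the bounds travel with the program, any client, whatever logic it runs under the hood, can decide admission without understanding the code itself.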

Of course, this is easier said than done. As the debate between Ohad and HMC has illustrated, there are multiple ways infinity can creep in, and different forms of logic enforce decidability by bounding the calculus along different axes: recursion, or arithmetic. For a spec to be truly generic, it should include both a high-level spec for developers (the language), which should be independent of the underlying logic, and a lower-level spec with basic mechanisms to handle multiple forms of calculus, which would include a form of generic continuation allowing the underlying logic to create deferred payloads that, strung together, would allow Turing completeness to be recovered. The underlying logic/implementation would know what to do when compiling the code, and how to package computation so that it runs in bounded time and space intrinsically while allowing unbounded computation extrinsically, with the blockchain used as the one and only carrier of infinity.
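The "deferred payloads strung together" mechanism above can be sketched as follows: each block executes a bounded number of steps, and if the computation hasn't finished, it emits a continuation (the remaining state) for the next block to pick up. The function and parameter names are illustrative only; the point is that each block is intrinsically bounded while the chain of blocks carries the unboundedness.

```python
def run_block(state, step, done, budget):
    """Run at most 'budget' steps inside one block. Returns either
    ('done', result) or ('continue', state) -- the deferred payload the
    next block picks up. Each block is bounded; only the chain of
    blocks is unbounded."""
    for _ in range(budget):
        if done(state):
            return ("done", state)
        state = step(state)
    return ("continue", state)

# Toy "unbounded-looking" computation: count up to 37, 10 steps per block.
step = lambda s: s + 1
done = lambda s: s >= 37
state, blocks = 0, 0
while True:
    tag, state = run_block(state, step, done, budget=10)
    blocks += 1
    if tag == "done":
        break
print(blocks, state)  # → 4 37
```

Termination of each `run_block` call is trivially decidable (the loop is bounded by `budget`), while whether the overall chain ever finishes is a property of the extrinsic, block-by-block process, exactly the split the paragraph proposes.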

@HMC and @ohad: can we bury the hatchet and go back to the drawing board, so that Tau will no longer be bound to a single form of calculus? With such a model, both MLTT+TFPL and MSOL over trees could be implemented in parallel and tried by fire. Tau would also become more future-proof. After all, who knows: maybe a few years down the line, a new breakthrough in computer science will give rise to yet another form of calculus that turns out to be even better. It would be too bad if Tau were stuck in too specific a model and failed to evolve.

We are all in the same boat, hoping for a way to unleash logic upon the world and enable a new era of global rationality. It would be a pity if that grand vision didn't come true due to petty cat fights between project founders. You are two smart guys, and I know you know what really matters, and that it doesn't include petty ego matters.
hero member
Activity: 897
Merit: 1000
http://idni.org
Can you provide your description of SAFE?

Is it not simply a protocol that eliminates servers and farms, and thus provides a platform, written in Rust, for true decentralization? Then other apps, languages, and platforms can be written on top of it, with the benefit of NO SERVERS.

There is actually no blockchain.

Tau needs a blockchain?

Assume we build Tau on SAFE, and after some time we want to change SAFE's protocol itself (because we found a bug, or simply because we have a better one): that wouldn't be possible, as SAFE isn't self-defining. That'd be one example. And yes, Tau needs a blockchain to contain the definition of the protocol at any point in time.
hero member
Activity: 1008
Merit: 500
The Golem ICO starts on 11 Nov. Golem endeavors to create a distributed computing platform on Ethereum, similar to Zennet. Any opinion on the relative worth of this project?

To be honest, I don't think Ethereum will hold up for long, due to its Turing-complete nature. Indeed, Golem has evolved into something that looks like specific cuts from Zennet.

You are saying that you are working with Turing-complete systems too, so that's not really a counterargument.

tau is not turing complete.


Tau is a language, right?

So why not run it on the SAFE Network?

After MaidSafe opens up, why even have a blockchain?

There's a big difference between MaidSafe and Tau. It'll be much clearer in the upcoming paper.


Can you provide your description of SAFE?

Is it not simply a protocol that eliminates servers and farms, and thus provides a platform, written in Rust, for true decentralization? Then other apps, languages, and platforms can be written on top of it, with the benefit of NO SERVERS.

There is actually no blockchain.

Tau needs a blockchain?

Blockchain creates immutability of the network itself, and a public record. Smiley