
Topic: Turing completeness and state for smart contract - page 3. (Read 12240 times)

sr. member
Activity: 381
Merit: 255
gmaxwell, is the point of payment channels/level-2 layers not to enable more advanced uses of "smart contracts"?

From my understanding the Script language will be used to form contracts, but all the computation for them will sit outside the blockchain itself.

What I am missing, however, is the smart-contract construction on this layer and how exactly it will look. Is MAST going to be required before we have a "real" smart contract interface?
staff
Activity: 4284
Merit: 8808
On the pedantic points, I echo what tucenaber just said-- and I could not say it better.  (Also, see #bitcoin-wizards past logs for commentary about total languages. I also consider that a major useful point for languages for this kind of system).

People looking for "turing complete" smart contracts inside a public cryptocurrency network are deeply and fundamentally confused about what task is actually being performed by these systems.

It's akin to asking for "turing complete floor wax".   'What does that? I don't even.'

Smart contracts in a public ledger system are predicates-- Bitcoin's creator understood this. They take input-- about the transaction, and perhaps the chain-- and they accept or reject the update to the system.   The network of thousands of nodes all around the world doesn't give a _darn_ about the particulars of the computation; they care only that it was accepted.  The transaction is free to provide arbitrary side information to help it make its decision.

Deciding if an arbitrarily complex condition was met doesn't require a turing complete language or what not-- verifying a supplied solution is a problem in P, even when finding that solution is NP-hard.
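The predicate view above can be sketched in a few lines (illustrative Python, not any real Bitcoin API; a hash-lock condition stands in as the simplest possible predicate). Checking the supplied witness costs one hash, while finding a witness from scratch would be infeasible:

```python
# The "predicate, not program" point in miniature: nodes evaluate an
# accept/reject condition on side information supplied by the spender.
import hashlib

def predicate(witness: bytes, commitment: bytes) -> bool:
    """Accept the spend iff the witness hashes to the committed value."""
    return hashlib.sha256(witness).digest() == commitment

secret = b"side information supplied by the spender"
lock = hashlib.sha256(secret).digest()

assert predicate(secret, lock)          # verifying a witness: one hash
assert not predicate(b"wrong", lock)    # any other input is rejected
# Finding a preimage of `lock` from scratch would take ~2**256 work,
# yet checking a claimed preimage is trivial -- verification, not computation.
```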

In Bitcoin Script, we do use straight up 'computation' to answer these questions, because that is the simplest thing to do and, for trivial rule sets, acceptably efficient.  But when we think about complex rules-- having thousands and thousands of computers all around the world replicate the exact same computation becomes obviously ludicrous; it just doesn't scale.

Fortunately, we're not limited to the non-scalability-- and non-privacy-- of making the public network repeat computation just to verify it.  All we have to do is recognize that computation wasn't what we were doing from the very beginning; verification was!

This immediately gives a number of radical improvements:

"The program is big and I don't want to have to put it in the blockchain in advance." ->  P2SH, hash of the program goes into the public key, the program itself ends up being side information.
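The commit-then-reveal structure behind P2SH can be sketched as follows (illustrative Python; real P2SH commits with HASH160 = RIPEMD160(SHA256(script)) over an actual Script program, while plain SHA-256 and a placeholder byte string are used here):

```python
# Sketch of the P2SH idea: only a short hash of the program goes on-chain;
# the program itself is revealed as side information at spend time.
import hashlib

def script_commitment(script: bytes) -> bytes:
    return hashlib.sha256(script).digest()   # short, fixed-size commitment

big_program = b"<thousands of bytes of redemption conditions>"
commitment = script_commitment(big_program)  # only 32 bytes in the output

# At spend time the spender reveals the full program; nodes check it
# against the commitment before evaluating it.
assert script_commitment(big_program) == commitment
assert script_commitment(b"some other program") != commitment
```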

"The program is big but we're only going to normally use one Nth of it-- the branches related to everything going right"  -> MAST, the program is decomposed into a tree of ORs and the tree is merkleized. Only the taken OR branches ever need to be made public; most of the program is never published, which saves capacity and improves confidentiality.
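The MAST construction above can be sketched with a toy Merkle tree (illustrative Python; real proposals fix exact hashing, serialization, and padding rules that this ignores). Only the root is committed on-chain, and only the taken branch plus its Merkle path is ever revealed:

```python
# Toy merkleized tree of script branches: commit to all branches with one
# root, reveal a single branch with a logarithmic-size proof.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node if odd
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes needed to reconnect leaf `index` to the root."""
    level = [H(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))  # (sibling, leaf-is-right?)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_branch(leaf, path, root):
    h = H(leaf)
    for sibling, leaf_is_right in path:
        h = H(sibling + h) if leaf_is_right else H(h + sibling)
    return h == root

branches = [b"everything-went-right branch", b"timeout refund", b"arbitration"]
root = merkle_root(branches)                 # this alone is committed on-chain
proof = merkle_path(branches, 0)
assert verify_branch(branches[0], proof, root)   # one branch revealed, rest stay private
```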

"The program is big, and there is a fixed number of parties to the contract. They'll likely cooperate so long as the threat of the program execution exists."  -> Coinswap transformation; the entire contract stays outside of the blockchain entirely so long as the parties cooperate.

"The program is big, and there is a fixed number of parties to the contract, and I don't care if everything just gets put back to the beginning if things fail." -> ZKCP; run _arbitrary_ programs, which _never_ hit the blockchain and are not limited by its expressive power (so long as it supports hash-locked transactions and refunds).

"The program is kinda big, and we don't mind economic incentives for enforcement in the non-cooperative case"  -> challenge/response verification; someone says "I assert this contract accepts" and puts up a bond. If someone disagrees, they show up and put up a bond to say it doesn't. Now the first party has to prove it (e.g. by putting the contract on the chain) or they lose their bond to the second party; if they're successful, they get the bond from the second party to pay the cost of revealing the contract.
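The bond game above can be modeled as a toy payout table (all names and the flat bond amount are hypothetical; real schemes also deal with timeouts and fees). The chain only ever sees the contract when the parties disagree:

```python
# Toy model of challenge/response enforcement: honest claims that go
# unchallenged never put the contract on-chain; disputes are settled by
# running the contract once, with the loser's bond paying for it.

def settle(claim_accepts: bool, challenged: bool, contract_accepts: bool,
           bond: int) -> dict:
    """Return bond payouts for asserter A and challenger B."""
    if not challenged:
        return {"A": bond, "B": 0}          # unchallenged claim stands
    # Dispute: A must reveal and run the contract on-chain to prove the claim.
    if contract_accepts == claim_accepts:
        return {"A": 2 * bond, "B": 0}      # A proved it, wins B's bond
    return {"A": 0, "B": 2 * bond}          # A's claim was false, forfeits bond

assert settle(True, False, True, 100) == {"A": 100, "B": 0}   # no challenge
assert settle(True, True, True, 100) == {"A": 200, "B": 0}    # bad challenge
assert settle(True, True, False, 100) == {"A": 0, "B": 200}   # false claim
```

The key design point is that lying is never profitable for either side, so in the common cooperative case the contract never touches the chain at all.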

"The program is too big for the chain, but I don't want to depend on economic incentives and I want my contract to be private." ->  ZKP smart contracts; the PCP theorem proves that a program's execution can be proven probabilistically with an amount of data only logarithmic in the size of its transcript.  SNARKs use strong cryptographic assumptions to get non-interactive proofs for arbitrary programs which are constant size (a few hundred bytes). Slowness of the prover (and, in the case of SNARKs, trusted setup of the public key-- though for fixed sets of participants this can be avoided) limits the usefulness today, but the tech is maturing.

All of these radical improvements in scalability, privacy, and flexibility show up when you realize that "turing complete" is the wrong tool-- that what our systems do is verification, not computation.  This cognitive error confers no advantage, outside of marketing to people with a fuzzy idea of what smart contracts might be good for in the first place.

More powerful smart contracting in the world of Bitcoin will absolutely be a thing, I don't doubt. But the marketing blather around ethereum isn't power, it's a boat anchor-- a vector for consensus inconsistency, decentralization-destroying resource exhaustion, and incentive mismatches. Fortunately, the cognitive framework I've described here is well understood in the community of Bitcoin experts.
sr. member
Activity: 337
Merit: 252
There will be no public block chain with Turing complete scripts ever, because that would mean that absolutely any algorithm would be accepted in a script, including non-terminating ones. A smart contract with an infinite loop would be a very bad thing.

But, but Ethereum is Turing complete, isn't it???  No it is not; it can't be. It would be killed by DoS in no time. The language may be Turing complete in theory, but since the system must guarantee that all scripts terminate in finite time (I think they do it by requiring that each instruction has a cost), it is not Turing complete in practice. They could have used a total language in the first place, where you have a guarantee that every program terminates.
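The point about per-instruction costs bounding execution can be shown with a toy gas-metered interpreter (a hypothetical mini stack machine, nothing like the actual EVM instruction set): any program, looping or not, halts once its gas budget is spent.

```python
# A fee per instruction ("gas") guarantees termination: the interpreter
# kills any script, including an infinite loop, when its budget runs out.

class OutOfGas(Exception):
    pass

def run(program, gas: int):
    """program: list of ("op", arg) tuples for a toy stack machine."""
    stack, pc = [], 0
    while pc < len(program):
        if gas <= 0:
            raise OutOfGas("script killed after its gas budget")
        gas -= 1                      # every instruction costs one unit
        op, arg = program[pc]
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "jump":            # unconditional jump: loops are allowed
            pc = arg
            continue
        pc += 1
    return stack

infinite_loop = [("push", 1), ("jump", 0)]   # would spin forever...
try:
    run(infinite_loop, gas=1000)             # ...but gas bounds it
except OutOfGas:
    print("terminated: not Turing complete in practice")
```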

I know this isn't really an answer to the OP question but I wanted to point it out ;)
sr. member
Activity: 353
Merit: 253
So the "question" now might be:

is it better to have two different chains, i.e. bitcoin and rootstock, with the second being a superset, in terms of functionality, of the first, or is it better to just have the second (so just ethereum)?

I tend to think that it is better to split them, but I don't have any real evidence to support that assertion.

Some thoughts however:
pros (for splitting): two different chains can be tweaked differently (e.g. block time) to achieve different objectives.
cons (for splitting): a single chain is easier to maintain, debug, develop, and upgrade than two (this may be a strong con).

What do you guys think?
sr. member
Activity: 353
Merit: 253
If it becomes useful, I imagine there will soon be a eth-like sidechain on bitcoin.

Maybe a sidechain which is merge-mined with bitcoin. I don't see that as an impossible outcome...
At that point, what would the usefulness of Ethereum be?



By the way, it already exists. It's called rootstock. How is it that some people see it as a joke?

sr. member
Activity: 432
Merit: 251
If it becomes useful, I imagine there will soon be a eth-like sidechain on bitcoin.
sr. member
Activity: 412
Merit: 287
The simple answer is no - what you suggest will never happen. Despite both being blockchains, they have nothing to do with each other - they have totally different designs and goals.
sr. member
Activity: 353
Merit: 253
If ethereum technology becomes fundamental and useful, would it be wise to modify bitcoin to incorporate its fundamental changes, which are Turing completeness and the possibility for contracts to have a state (as well as an easier scripting language, such as Serpent)?

Best regards,
ilpirata79