
Topic: Layman's Journey to Understanding Zerocash - page 2.

legendary
Activity: 996
Merit: 1013
December 14, 2015, 06:16:31 AM
#19

Wright explicitly stated that he wasn't claiming the script can loop, but that factors orthogonal to the scripting can implement the paradigm of looping. His point is that the Bitcoin block chain, used as a stack, is already Turing-complete. And on that point he is correct, because the entropy of the block chain state is unbounded. Anything that can be done with Ethereum due to Turing-completeness, and which is not an attack on scripting node resources, can also theoretically (in terms of asymptotic computational complexity theory) be done on the Bitcoin block chain.

I agree with the last sentence quoted here. What the Ethereum
VM provides is a succinct and unified way of writing smart
contracts, but these can be implemented on the Bitcoin blockchain
as well. It's just more laborious and leads to a proliferation
of standards (colored coins, Mastercoin, etc.).

But my point was that even then, the contractual logic,
reading/writing data on the blockchain, and Turing-completeness
happen on a higher level than that of scripting. (Edit: thinking
about it, contractual logic can and should be executed in script, but it would need
blockchain-derived data handed down to it by the client or user.)


Based on the transcription snippet,
Craig Wright seems to envision a separate control stack that is
used by the script, and I doubt he had the blockchain in mind.
sr. member
Activity: 420
Merit: 262
December 14, 2015, 05:48:26 AM
#18

At roughly the 17 minute mark in this conference video, Wright correctly explains that due to unbounded recursion, the Bitcoin block chain scripting is effectively Turing-complete. Afaics, he is entirely correct and Nick Szabo is wrong, because although the scripting language stack can't loop within one transaction, one can use multiple transactions to simulate looping. This is precisely the point I made in my recent post wherein I explained that under composition it is impossible to prevent unbounded recursion and thus unbounded entropy. Review the Dr. Seuss proof of Turing-completeness. It doesn't matter what the called script answers; the calling script can always change the outcome. If you have the ability to store state on the block chain across multiple invocations of a script, then the block chain becomes the stack. Nick Szabo just demonstrated to me that he isn't as smart as I thought. Dr. Wright makes a relevant point that many people these days seem to forget: in machine code there is no virtual machine that controls what the instruction set can do in terms of which memory it can treat as a stack. Bitcoin's script instruction set can be viewed as machine code w.r.t. its ability to read and store state anywhere in the memory space of the block chain UTXO.


You can use the blockchain as a stack (using embedded metadata),
but then it is being manipulated by the client, not by the script.
The script works in the context of a single transaction and does not
know about the blockchain.

Just as a single transaction is a part of the entire blockchain,
the script engine is a submodule of the client. They are on
different logical levels*. The client can loop, the script can't.


_____
*) I use the term "logical level" in the same sense as Russell
and Whitehead did when they resolved the set-of-all-sets paradox
by establishing that it makes no sense to call the collection of
all sets a set, as it is on a "higher" logical level or context.


Wright explicitly stated that he wasn't claiming the script can loop, but that factors orthogonal to the scripting can implement the paradigm of looping. His point is that the Bitcoin block chain, used as a stack, is already Turing-complete. And on that point he is correct, because the entropy of the block chain state is unbounded. Anything that can be done with Ethereum due to Turing-completeness, and which is not an attack on scripting node resources, can also theoretically (in terms of asymptotic computational complexity theory) be done on the Bitcoin block chain.

But remember from my Overview section that intuitions from computational complexity theory are not an absolute guarantee of how things work out in a real-world implementation.

So there are real-world implementation differences in efficiency, practicality, atomicity, and storage/state per transaction when comparing Bitcoin and Ethereum. And I don't claim to be very knowledgeable about the scripting capabilities of either; rather I am just speaking from a high-level conceptual view.

For example, there are some protections gained from making the scripting deterministic (bounded with provable termination), such as that verification nodes can be sure the script will terminate (but even that alone doesn't prevent the script from consuming too many verification resources). Specifically, the Bitcoin scripting node algorithms are in (a non-boolean variant of) P. So in that sense there is an asymptotic complexity distinction between Bitcoin and Ethereum, except that Ethereum uses "gas" to be sure scripts terminate, so it is in P also (and thus Ethereum scripts are apparently not Turing-complete either, but don't quote me on that because I haven't studied Ethereum's scripting) [on further thought, Ethereum scripts are in some variant related to RP complexity, because termination due to "gas" is an "I don't know" result].
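To make the "gas" point concrete, here is a minimal sketch of a toy stack machine (my own invention, not the real Bitcoin Script or EVM opcode set) where a gas counter forces termination. The out-of-gas result is the "I don't know" outcome mentioned above.

Code:
# Toy illustration (not real Bitcoin Script or EVM): a gas counter
# guarantees the interpreter halts, at the cost of possibly
# answering "out of gas" instead of a verdict.

def run(program, gas_limit):
    """program: list of (op, arg) tuples over a tiny stack machine."""
    stack, pc, gas = [], 0, gas_limit
    while pc < len(program):
        if gas == 0:
            return None          # out of gas: forced termination, no verdict
        gas -= 1
        op, arg = program[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "JUMP":       # unbounded looping is expressible...
            pc = arg
            continue
        pc += 1
    return stack

# ...but an infinite loop still terminates once gas runs out:
print(run([("PUSH", 1), ("JUMP", 0)], gas_limit=50))       # -> None
print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None)], 50))  # -> [5]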

I am stating that Szabo and Maxwell have conflated Turing-completeness with differences in efficiency, practicality, atomicity, and storage/state per transaction.

Conflating an asymptotic attribute (computational complexity) with a set of non-asymptotic attributes (efficiency, practicality, atomicity, and per-txn storage/state) is akin to not understanding computational complexity. That is strange, because I know Szabo and Maxwell both do understand. They probably didn't intend to conflate. They've had their minds deep in many issues and just didn't take a few moments to think deeply about it. I happened to be writing about this, so my mind was very focused on the distinction.
legendary
Activity: 996
Merit: 1013
December 14, 2015, 04:25:56 AM
#17

At roughly the 17 minute mark in this conference video, Wright correctly explains that due to unbounded recursion, the Bitcoin block chain scripting is effectively Turing-complete. Afaics, he is entirely correct and Nick Szabo is wrong, because although the scripting language stack can't loop within one transaction, one can use multiple transactions to simulate looping. This is precisely the point I made in my recent post wherein I explained that under composition it is impossible to prevent unbounded recursion and thus unbounded entropy. Review the Dr. Seuss proof of Turing-completeness. It doesn't matter what the called script answers; the calling script can always change the outcome. If you have the ability to store state on the block chain across multiple invocations of a script, then the block chain becomes the stack. Nick Szabo just demonstrated to me that he isn't as smart as I thought. Dr. Wright makes a relevant point that many people these days seem to forget: in machine code there is no virtual machine that controls what the instruction set can do in terms of which memory it can treat as a stack. Bitcoin's script instruction set can be viewed as machine code w.r.t. its ability to read and store state anywhere in the memory space of the block chain UTXO.


You can use the blockchain as a stack (using embedded metadata),
but then it is being manipulated by the client, not by the script.
The script works in the context of a single transaction and does not
know about the blockchain.

Just as a single transaction is a part of the entire blockchain,
the script engine is a submodule of the client. They are on
different logical levels*. The client can loop, the script can't.


_____
*) I use the term "logical level" in the same sense as Russell
and Whitehead did when they resolved the set-of-all-sets paradox
by establishing that it makes no sense to call the collection of
all sets a set, as it is on a "higher" logical level or context.
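A minimal sketch of that logical-level distinction, assuming a hypothetical client and a toy per-transaction predicate (none of these names are real Bitcoin APIs): the script is a loop-free check over a single transaction, while the looping and the chain-as-stack bookkeeping live in the client.

Code:
# Hypothetical sketch of the "client loops, script can't" point.
# The script below is a pure per-transaction predicate; the client
# drives the recursion by writing state into successive transactions
# (embedded metadata standing in for e.g. OP_RETURN data).

def script_verify(tx_state):
    """Single-transaction check: no loops, no view of the chain."""
    return tx_state >= 0              # some bounded, terminating predicate

def client_loop(initial_state, step, done):
    chain = []                        # the blockchain playing the role of a stack
    state = initial_state
    while not done(state):            # the looping lives here, in the client
        tx = {"metadata": state}
        assert script_verify(tx["metadata"])
        chain.append(tx)              # each iteration is a new transaction
        state = step(state)
    return chain

# Example: "loop" ten times by chaining ten transactions.
txs = client_loop(0, step=lambda s: s + 1, done=lambda s: s >= 10)
print(len(txs))  # -> 10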

sr. member
Activity: 420
Merit: 262
December 13, 2015, 05:41:22 PM
#16
sr. member
Activity: 420
Merit: 262
December 13, 2015, 12:52:08 PM
#15
YarkoL, writing my response to you caused me to find several errors in the post you were commenting on. I have made further improvements to the prose based on your feedback, which I think make it easier to comprehend. On re-reading my prose, I observed numerous weak grammatical and semantic constructions, as well as incorrect logic illuminated by the correct logic in my reply to your comment about the containment rule. Thank you.

Wekkel et al., thank you for reading. I hope this thread will help laymen appreciate some of the complexity of the technology in cryptography, and appreciate it from some level of understanding rather than just as a magical black box.
legendary
Activity: 3108
Merit: 1531
yes
December 13, 2015, 11:07:49 AM
#14
looking forward to seeing where you wind up in your journey  Grin
sr. member
Activity: 420
Merit: 262
December 13, 2015, 09:54:06 AM
#13
Wow, I actually understood some of that. But only because I've
read some popular books dealing with the limits of computability,
otherwise it would have been all Greek to me.

Hopefully not every post will be as deep as that one, especially toward the end of this journey once I have the clarity of mastery of the subject matter. Even so, I did try to write in common-language terms as much as possible, but not to the n00b extreme of Randall Munroe's Thing Explainer, lol.

I hope to eventually have some posts where I summarize without so much technobabble terminology. I am first making sure I understand the subject matter. Forcing myself to write it down forces me to catch my mental errors. I typically like to do my learning all conceptually in my head, but I gain clarity and consistency of thought when serializing it out to a written format (which is a love-and-hate task... hate because it is so slow to articulate details and I'm not a highly gifted writer... love because it advances my knowledge and is a form of social sharing, given I am a gregarious extrovert with the capability to go introvert when programming).

So to sum it up, we have seen a very in-depth exposition of
the theoretical underpinnings behind one-way functions. Interesting
to see what all this is leading towards.

I assume you are aware that post is not done yet, and I still need to cover the remaining computational complexity classes. You are speaking to what I wrote thus far, mostly the Determinism section and the part of the NP section about hash functions.

Getting a little bit philosophical,
Quote
More abstractly, undecidability results from any rule of containment that can point to itself (cf. the last two paragraphs of the linked answer), because this unbounded recursion opens the generality to the universe of possibilities where the entropy is unbounded

would you agree that the "rule of containment that can point to itself" is equal to saying
"a containment so rich that it can model all rules"?

Bertrand Russell's Paradox essentially points out that a set whose containment rule requires it to contain any sets which contain it can't exist unless it is globally total/complete, meaning all possible sets with such rules must contain each other. That would require a static world that can't ever change. A static world can't exist (for numerous reasons). Russell's Paradox is similar to the paradox of perfectly dependently typed programming languages, which can model all the dynamic runtime dependencies of the program in the static type checking done at compile time of the source code. The problem is that only the expected inputs and outputs are allowed. But under composition of modules, new permutations of inputs and outputs are created from the sets of the modules, and these permutations weren't modeled by the modules when they were separately compiled. This is what I meant in the section where I was writing about Philip Wadler, S-modularity, the Expression Problem, and dependently typed languages. You can click the links in that section to dig more into that. For laymen, I am not going to try to explain that from first principles such as "what is a type in a programming language". That is just too much verbiage and I doubt they would benefit from it (they can always go digging with Google if they want to deepen their understanding of the technobabble terms). They need summaries.

Gödel's incompleteness theorem says that any formal system of arithmetic that is complete is thus inconsistent. This is conceptually the same as Russell's Paradox, in that a rule which attempts to contain everything that can contain itself is impossible unless it can contain everything, which can't exist.
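To make the self-reference point concrete, here is the classic diagonalization sketch in Python pseudocode (the function names are hypothetical; a total halting decider cannot actually be implemented):

Code:
# Sketch of the self-reference argument (hypothetical function names).
# Suppose halts(prog, arg) were a total decider. Feeding a program
# to itself builds the same kind of self-pointing "containment rule"
# as Russell's set, and it contradicts itself.

def halts(prog, arg):
    """Assumed total decider -- cannot actually exist."""
    raise NotImplementedError

def diagonal(prog):
    if halts(prog, prog):   # if prog halts when run on itself...
        while True:         # ...then loop forever
            pass
    return                  # otherwise halt

# diagonal(diagonal) halts iff it does not halt -- a contradiction,
# so no total halts() can exist. The rule that tries to decide every
# program, including the ones that point back at it, is exactly the
# rule that cannot exist.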

All of this is stating that we live in a relativistic world (i.e. where each thing can only make rules about its own perspective, not from total knowledge) by necessity, otherwise change wouldn't exist and thus nothing would exist. Socialism (a total rule not allowing for differing perspectives) is a denial of the existence of relativity and thus doomed to catastrophic failure. I have pointed out in my writings that the omniscience required for a centralized knowledge set would require the speed-of-light to be infinite (so that all real-time actions can be centrally decided without causing the local actors to pause and wait), but if the speed-of-light were infinite, then the past and the present would have no separation in spacetime and the universe would collapse into an infinitesimal point. Thus decentralization, decentralized version control, decentralized decision making, and open source are not philosophical decisions for me; rather they harmonize with the reality of the physics of existence, which requires decentralized change and relative perspective. Note CoinCube (who has a degree in Applied Math as well as a medical degree) and I had some discussions about this in the Economic Devastation thread, and he pointed out some math about how a totally undamped (i.e. underdamped in the differential-equations sense of a model of oscillation) decentralization spawns mutations too fast to anneal on any optimization. This points out that we still need leadership. Decentralization is positive, but up to a limit of the information capacity for change. I hope to PM CoinCube soon to see if he can come over to this thread. I doubt he is aware I created it.

But both Russell's Paradox and Gödel's incompleteness theorems, afaics, model containment per how it is defined. So if the test for containment is enumeration, then the limitations of total containment apply to enumeration. If the definition of containment is paradigm-shifted to some attribute other than enumeration, perhaps we can get more power into what can be put in an enumerated container. By more power I mean that if an application gains from enumeration, and containment can be defined orthogonally to enumeration, then the enumeration aspects can perhaps gain additional degrees of freedom. New concepts I am learning about are Super-Turing computation and Hypercomputation.

In discrete computation, even the reals or natural numbers are unbounded. In an analog computation described in functional limits, perhaps not so. I am not mathematical enough to speak about that until I've studied it.

The shift from discrete to analog can shift the assumptions about computation. For example (and I mentioned this in the Overview post), the quantum computing machine model breaks number-theoretic intractability for problems relying on factoring by employing Shor's algorithm, which (in a butchered summarization) uses parallelized (due to quantum mechanics) Fourier analysis to shift the search for potential factors from the time domain to the frequency domain, where the solution space is reduced to polynomial; e.g. RSA and ECC security is reduced from exponential to polynomial time.
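A rough illustration of that reduction, with the quantum step replaced by brute force (so it is exponential here, not polynomial; only the structure of Shor's reduction from factoring to period finding is shown):

Code:
# Illustration only: the classical reduction at the heart of Shor's
# algorithm. Factoring N reduces to finding the order r of a mod N;
# the quantum part (the QFT) finds r in polynomial time, whereas here
# we simply brute-force it to show how a period yields a factor.

from math import gcd
from random import randrange

def order(a, N):                      # exponential classically, polynomial on a QC
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N):
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d                  # lucky: a already shares a factor with N
        r = order(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            return gcd(pow(a, r // 2, N) - 1, N)

print(shor_classical_part(15))        # -> 3 or 5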

It appears that Super-Turing computation shifts the model of inclusion from discrete to analog or distributed fuzzy logic. I need to delve into it more before I will fully understand the implications and viability. Btw, I noted the prominent leading researcher Siegelmann is apparently female.

What I expect to remain the generative essence is that any rule of containment that contains everything that can contain itself (and note I need to correct what I wrote in the post to this improved wording), and that does not leak in the type of the computation model employed, would need to be so rich as to model everything that can be modeled, and thus can't exist because the world is open to unbounded composition.

Tangentially, I believe my improvement to Satoshi's design essentially hinges on a similar kind of paradigm shift, where the containment test for double-spends is shifted to an orthogonality via a separation-of-concerns.

That is why I warn people never to underestimate me. Who knows, before this thread is done I may have solved the open $1 million bounty problem of whether P equals NP. (I already have some ideas rolling around in my head of how to paradigm-shift the question. Even one of the prominent figures in the field pointed out that any meaningful advance would provide paradigmatic shifts of insight into complexity theory, otherwise it wouldn't be interesting and useful.) I encourage everyone to be creative. I don't claim any special ownership of creativity. I enjoy the creativity of others as well. Creativity comes from being in the right place at the right time (such as happening to read something that gives you a Eureka), and being busy learning and discovering new things improves the chances/opportunities for creativity to occur.
legendary
Activity: 996
Merit: 1013
December 13, 2015, 07:18:58 AM
#12
Wow, I actually understood some of that. But only because I've
read some popular books dealing with the limits of computability,
otherwise it would have been all Greek to me.

So to sum it up, we have seen a very in-depth exposition of
the theoretical underpinnings behind one-way functions. Interesting
to see what all this is leading towards.

Getting a little bit philosophical,
Quote
More abstractly, undecidability results from any rule of containment that can point to itself (cf. the last two paragraphs of the linked answer), because this unbounded recursion opens the generality to the universe of possibilities where the entropy is unbounded

would you agree that the "rule of containment that can point to itself" is equal to saying
"a containment so rich that it can model all rules"?
sr. member
Activity: 420
Merit: 262
December 13, 2015, 05:03:26 AM
#11
I had a huge error in the NP section which has now been corrected. I was conflating with NP the computational complexity class of EXPTIME coupled with P verification. The broad conceptual analogy was correct (in my groggy head), but I had previously messed up the articulation, which is now fixed. Also I have briefly explained why that kind of complexity class is the paradigm for public key cryptography. Note I discovered this error on my own. I was suffering from lack of sleep before (a recurrent issue for me at my current age of 50+, given how many excessive hours a day I am working).
sr. member
Activity: 420
Merit: 262
December 13, 2015, 03:20:33 AM
#10
I've made some edits to the prior post and I think the section on Determinism is more complete and more accurate now.
sr. member
Activity: 420
Merit: 262
December 12, 2015, 06:12:33 PM
#9
Computational Complexity Classes: P, NC, NP, E, #P, ⊕P

P, NP, E, #P, ⊕P, NC are some of the computational complexity classes that are upper bounded by 1) type of result, 2) machine model employed for the computation, and 3) polynomial time (algorithmic steps) consumption.

Class | Result type | Machine model
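As a concrete example of the "result type" versus verification distinction (my own example, using subset-sum): finding a solution may take exponential search, but verifying a claimed solution (a certificate) takes polynomial time, which is the shape of an NP problem.

Code:
# Subset-sum: the search is exponential in the worst case, but a
# claimed solution (certificate) is verified in polynomial time.

from itertools import combinations

def verify(numbers, target, certificate):
    """P-time check of a proposed certificate (a subset)."""
    return all(x in numbers for x in certificate) and sum(certificate) == target

def search(numbers, target):
    """Brute force: exponentially many candidate certificates."""
    for k in range(len(numbers) + 1):
        for subset in combinations(numbers, k):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = search(nums, 9)               # e.g. [4, 5]
print(cert, verify(nums, 9, cert))   # -> [4, 5] True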
legendary
Activity: 966
Merit: 1000
December 12, 2015, 02:21:07 PM
#8
Really beautiful technical explanation, lots of thanks for the beautiful and interesting explanation, one really worthwhile thread.
legendary
Activity: 996
Merit: 1013
December 12, 2015, 01:51:38 PM
#7

All good and waiting for the next installment  Smiley
sr. member
Activity: 420
Merit: 262
December 12, 2015, 01:48:17 PM
#6
Well, my layman's journey did not get very far.
(I remembered I left the oven on so had to go back home).

Apologies, I will try to bring it back to a layman's level of explanation in the next post. Arguing with mathtards over at cs.stackexchange can certainly fly over the heads of laymen.

Some of the first-principles stuff is difficult to explain to someone who has no programming and math experience at all. But eventually I will also make summaries that can make sense to laymen who don't even have those rudimentary skills.

I view myself as a layman relative to those who are (more, and probably formally) trained in the field of cryptography, such as Adam Back and Gregory Maxwell of Blockstream and the academics who are writing the white papers I will be trying to understand and explain in this thread.
legendary
Activity: 996
Merit: 1013
December 12, 2015, 01:46:29 PM
#5
Well, my layman's journey did not get very far.
(I remembered I left the oven on so had to go back home).
sr. member
Activity: 420
Merit: 262
December 12, 2015, 01:36:29 PM
#4
Edit: it is pitiful that these self-anointed "compsci" idiots have such high reputations (thus fooling the readers and spreading incorrect information as "authorities"). This is really fucking stoopid.

The mods over at cs.stackexchange deleted the entire comment trail wherein I proved that the given answer was incorrect. This must have embarrassed the shit out of them, because the answer had 75 upvotes from professors, PhD students, etc. The sparkling academic cathedral portion of our world is apparently getting dumber and more corrupt. Here follows the comment trail, which I saved before it was deleted (and see the code sketch of the two candidate algorithms after the trail).


Quote from: myself
Quote from: Gilles
Quote from: myself
Quote from: JeffE
Quote from: myself
Quote from: JeffE
Quote from: myself
Quote from: JeffE
Quote from: Raphael
Quote from: myself
Quote from: Raphael
Quote from: myself
Quote from: Raphael
Quote from: myself
Quote from: JeffE
Quote from: myself
Quote from: JeffE
Quote from: Shagun
Quote from: Carl Mummert
Quote from: Shagun
Quote from: cody
Quote from: danr
Quote from: Raphael
Quote from: Carl Mummert
Quote from: sepp2k
Quote from: gallias
Quote from: Alex ten Brink
Quote from: gallias
1. Take a random Turing machine and a random input. 2. Either the computation will go on for ever or it will stop at some point and there is a (constant) computable function describing each one of these behaviors. 3. Huh 4. Profit! I feel like, following your proof scheme, one could prove that termination is decidable. Did I miss something?

Watch out for what the Halting theorem states: it says that there exists no single program that can decide whether a given program halts. You can easily make two programs such that one or the other computes whether a given program halts: the first always says 'it halts', the second 'it doesn't halt' - one program is always right, we just can't compute which one of them it is!

That is exactly my point: how can you claim that a problem is decidable (cf. this thread's title) when you're not able to decide which one of the two algorithms is the good one?

That's not the same thing though. In the case of Alex's example neither of the algorithms will return the right result for all inputs. In the case of this question one of them will. You can claim that the problem is decidable because you know that there is an algorithm that produces the right result for all inputs. It doesn't matter whether you know which one that algorithm is.

The definition of "decidable" is that an algorithm exists to perform the task, not that we know which algorithm it is. As long as we can prove there is at least one suitable algorithm in existence then we know the problem is decidable.

@gallais Good that you raise this point; it is the pitfall most fall into. Think of it this way: The halting problem function wants to decide something for all Turing machines. Here, we only talk about one number. If you extend $f$ here to take a real number as parameter and ask the same question as for $\pi$, now that will be undecidable! The other way round, for any fixed Turing machine the halting problem may very well be decidable (and it is for many).

I am also having a bit of a problem to accept this answer. But I think my issue is that this answer is non-constructive, and I am more inclined to constructive mathematics. But I guess the normal definition of computable is computable in a classic sense, and personally such a definition feels like nonsense.

I agree with @danr: this proof is non-constructive, and it's always unsettling to have non-constructive proofs of statements about computability...

@carl: you pointed out that the definition of "decidable" is that an algorithm exists to perform the task, not that we know which algorithm it is. As long as we can prove there is at least one suitable algorithm in existence, then we know the problem is decidable. But can we say that the second algorithm exists without knowing the value of N? It's one thing to say that one of the 2 algorithms is always correct and another thing to say that the algorithm uses a constant which is not computable itself.

If $N$ exists, it is just a natural number, and every natural number is computable (because we can just hard-code it).

But doesn't our inability to enumerate and hence test for every natural number matter?

No. It doesn't.

@Raphael et al, the answer is incorrect. It is undecidable whether any algorithm will terminate when searching for a string of 0 values in the digits of π, because the input entropy is unbounded (because π is irrational, so the digit patterns never repeat), thus the problem is Turing-complete. Any algorithm you invent will, for each n, either find the string, or never find it & never terminate, and it is undecidable a priori which.

Perhaps you are conflating NP with this problem. NP is computable because we only run a deterministic verification (i.e. one that always terminates) on a non-deterministic algorithm, e.g. meaning that someone has told us to verify that a particular input n terminates within M number of algorithmic steps with a "yes" result, so verifying that is deterministic (we can terminate if we reach M steps). But this problem is asking to prove that (it is decidable that) n is always computable, which is not the same as asking to verify that a particular value is computable in a defined number of steps.

That is not what "Turing-complete" means. Your argument about algorithms that search for strings in the binary expansion of π does not apply to all algorithms. In particular, none of the algorithms in my answer search for strings in the binary expansion of π. Nevertheless, exhaustive case analysis implies that exactly one of them is always correct. Nobody knows which one, and perhaps nobody can ever know which one, but that doesn't matter. A correct algorithm exists, and therefore the problem is decidable.

No, I am not confusing anything with NP. This problem is not asking us to decide "whether n is always computable". The problem is asking us to prove that the problem "Given an integer n, is there a string of n consecutive zeros in the binary expansion of π?" is computable.

The question does not ask "whether n is always computable". Rather it states "Prove f(n) is computable" (not n but f(n)). Thus the question demands that our algorithm for f(n) is computable. Even the title of the question reaffirms the requirement that the algorithm be computable, "How can it be decidable whether π has some sequence of digits?".

Turing-completeness is the requirement for unbounded input, memory, time, etc., so as to be general enough to compute any computation. Since the expansion of the series for π is infinite and the patterns don't repeat, there is no algorithm that doesn't require unbounded steps in the general case of any value for input n. Thus f(n) is Turing-complete and thus is undecidable. QED. You shouldn't just foist nonsense on what is supposed to be an accurate site for learning about compsci.

Again, it does not seem as though you knew what you are talking about, at all. "Thus f(n) is Turing-complete and thus is undecidable" -- that's ... not even wrong.

The implication is that f(n) requires a Turing-complete programming language because it requires unbounded recursion to expand the digits of π. Here is a canonical quote for you, "When applied to a programming language, Turing-complete means that it can fully exploit the capabilities of a Turing complete computer". You do a disservice to this site with your abundant nonsense.

Although the digits of π are computable (and only if we ask for infinite digits do we get non-termination), the question asks to compute a search for the existence of a string of 0 digits, which means it is undecidable whether the computation halts or not. Thus the computation is not computable, because we don't know if it is still busy or looping forever. This is just fundamental computable function theory. If you mess this up, you need to go back to Compsci 101.

"f(n) requires a Turing-complete programming language because it requires unbounded recursion to expand the digits of π" -- that's a non-sequitur, and JeffE has proven that $f$ does not have to expand any digits. "If you mess this up, you need to go back to Compsci 101" -- agreed. See you there, I'll be your TA.

The only way that the 1st case of JeffE's answer is correct is if we can assume that every finite string of 0s is contained within the expanded digits of π, without needing to actually expand the digits. On further thought (not sleepless now), I suppose that is true because the expansion of π is unbounded, thus eventually every pattern is produced in the expansion. I don't recall if that has been formally proven & n isn't finite? JeffE hedges against that with the 2nd case, which says if there is a maximum N, then we enumerate to it. This hedge doesn't work because N is undecidable.

Thus either the first case is always true or his idea of avoiding actually expanding the digits is undecidable per the logic I presented. When I wrote my comments (for me, after being awake for 20+ hours nonstop working) I wasn't really paying attention to the point that JeffE was proposing not to do any computation (expansion) and always return true. So in that case, can we formally prove the first case? If yes, then JeffE's answer is correct. But I think the problem is that n isn't bounded. So I don't think that can be proven.

"N is undecidable" -- that statement does not make sense, and if it did mean what you think it does, it would still not make your case -- because we don't have to determine $N$ algorithmically.

You've almost got it. I am indeed hedging my bets by presenting an infinite sequence of algorithms. I don't know which one is correct; maybe the first one, maybe the second one with N=1234567890 hardcoded in, maybe the second one with N=104378942378493027478326457832 hardcoded in, or maybe the second one with some other value of N hardcoded in. I'm quite willing to believe that no one will ever know which of these algorithms is correct. But I don't have to know which algorithm is correct. All I have to know is that my infinite collection contains a correct algorithm.

You can't know N without running an algorithm to confirm it. Sorry guys. And you can't prove the first case either, cf. Yuval's comment that not all sequences of 0 are proven to occur in irrational numbers; besides, n is also unbounded (in addition to the sequence being unbounded), which afaik mucks up any attempt to prove the assumption of the first case. Thus afaics this answer is wrong. Raphael, ever faithful to your endearing personality defect, you continue to ASS-U-ME you know what I am thinking.

You can't know N. That's correct. But I don't have to know N.

You need to confirm N because, as Yuval pointed out, you can't assume any finite sequence of 0s will exist in a finite sequence of the expansion.

No, I don't need to confirm N. I don't need to confirm that any particular finite sequence occurs in π. I need to know that either all strings of 0's appear in π or there is some longest string of 0's in π. If the first alternative is true, the first algorithm is correct. If the second alternative is true, then for some value of N the second algorithm is correct. Either way, at least one of the algorithms is correct.

But you can't compute the second case without knowing the value of N for which n > N should return 0. f(n) must always return a correct value for any n. Without knowing N, f(n) in the second case can't return the correct value. I can't believe you are making me write this. What the heck could I possibly not be seeing? It seems so obvious to me.

You can algorithmically compute the correct return value for the second case and be sure to terminate for any finite value of n. But for the unbounded case of n, the second case is undecidable (the maximum N can't be known without confirming it, so you don't know whether to return 0 or 1), and the first case is unproven. What am I missing? Your logic is based on not needing to use an algorithm in the unbounded case, but it seems the error in your logic was forgetting that the return value of the second case matters in the unbounded case. 75 wrong upvotes. Oh my.

I have moved over to chat, but please note I am recording this discussion offsite in a popular forum, because I view chat as closed source and I believe in open source. Also I will post any final summary back to the comments on the page from whence we originated.

The error is precisely that you assume your proposed second case of f(n) will only be necessary for some finite N, and that it can be computed because N is a maximum finite value. But if it is true that N is finite, then your first proposed case for f(n) must always be invoked in the unbounded case. So the precise error is that it is undecidable which case should be invoked, because your logic requires that the 2nd case be invoked when the first case would fail. That proves your answer is flawed.

Let me rephrase to make it more concrete that the answer is broken. We must know N, because above values of N we need to return either 0 from the second case of the proposed f(n) or 1 from the first case of the proposed f(n). Because your logic assumes the second case of f(n) will be invoked when the first case is inapplicable. But this assumes the first case will always be applicable in the unbounded case, otherwise we must assume that the second case will be invoked with an unbounded value of n. Since we can't prove that the first case is always applicable, the second case is undecidable. QED. I could probably improve the proof if I wanted to spend more time on it, but this was an enormous waste of my precious time already. Adios to this site!!

Comments are not for extended discussion; this conversation has been moved to chat.

You deleted the entire comment trail because I showed that everyone here was wrong! I will be publishing the entire comment trail and pointing out how corrupt this cs.stackexchange is!
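For reference, here is the disputed answer rendered as a short Python sketch (my own rendering, not JeffE's actual code). The whole argument above is about whether one may claim f is computable merely because one member of this family is correct, without anyone knowing which.

Code:
# The answer under dispute, rendered as code. Exactly one of these
# candidate functions is the correct f(n): either every run of zeros
# occurs in pi (first candidate), or there is a longest run of some
# length N (one of the hardcoded candidates). Nobody needs to know
# which one it is for f to be computable; that is the non-constructive
# step the comment trail argues over.

def f_all(n):
    return 1                       # correct if every run of zeros occurs in pi

def make_f(N):
    def f(n):
        return 1 if n <= N else 0  # correct if the longest run has length N
    return f

candidates = [f_all] + [make_f(N) for N in range(10)]  # ...and so on, one per N
print([g(3) for g in candidates[:5]])                  # -> [1, 0, 0, 0, 1]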
sr. member
Activity: 420
Merit: 262
December 11, 2015, 07:26:20 PM
#3
Pertaining to the Computational Complexity Classes post: the negative votes one has to endure when posting insight that is too conceptually unifying and the readers are too thick to grasp that reality.

I've had other examples, but at least this one has come up from a -9 vote to a -2 lately (as more readers start to grasp I was correct all along).

Yet I have another one at -6 which I am very confident I am correct and the voters are idiots.

Edit: it is pitiful that these self-anointed "compsci" idiots have such high reputations (thus fooling the readers and spreading incorrect information as "authorities"). This is really fucking stoopid.


Edit#2: I need to start copying all the comments over here because the one berserk moderator at cs.stackexchange is deleting my information/work again!

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104814_50623

Quote from: myself
Quote from: David Richerby
Quote from: myself
Quote from: David Richerby
That isn't what nondeterminism means in computer science. Nondeterministic algorithms are not "unpredictable."

Nondeterminism is precisely unbounded entropy in computer science. Go review the Halting problem, undecidability, and Turing-completeness. The unpredictability in nondeterministic algorithms is due to not being able to predict if the algorithm will terminate. Period. All the downvoters are ignorant. Whereas randomization comes from entropy that is orthogonal to the input variables, i.e. under the control of an orthogonal source of entropy. I do hope people realize that entropy is disorder and all randomness is due to entropy. Geez. Even /dev/urandom is seeded with entropy.

You have completely misunderstood what nondeterminism means in computer science. It has nothing whatsoever to do with entropy. It does not mean what you think it means: that is why you think all the other answers are wrong. Perhaps the name was badly chosen but that's beside the point. It has a particular meaning in computer science, which is different from the meaning it has in other sciences. You are trying to use the wrong definition, and that is why everything seems completely wrong to you.

You can't seem to grasp that I am unifying the definition compatibly with the way it is used in computer science. The use of the term nondeterminism when talking about computational complexity theory (e.g. NP vs. RP vs. P) is clearly all about the entropy, as I have explained in my answer. That finite automata theory decided to abuse the one term "nondeterministic" is just an aberration. The existence of the three terms NP, RP, and P is strongly canonical. Also canonical because consistent with the universe of definitions for nondeterminism. Sorry dude, you are wrong.



Clearly NFA is only congruent with the rest of the universe of definitions for nondeterminism (including the more canonical computer science definition in complexity theory for NP, RP, P) if we realize that the N and F go together. Thus NFA is not referring to the subject of this question, which is nondeterminism, but rather to nondeterminism in the presence of finite entropy, which is another conceptual entity entirely different from nondeterminism. The mistake in your logic is assuming the N in NFA is distinct from the F, which it obviously isn't! Cripes.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104790_50623

Quote from: myself
I am appalled because, other than Alex ten Brink's answer, all the other answers are highly incorrect. And yet I observe readers upvoting those highly incorrect answers and downvoting mine. It is bewildering and flabbergasting. It shows that this topic is highly misunderstood. Computational complexity issues are a very high-IQ abstraction apparently. Thus apparently only a small percentage of readers really understand these issues. Otherwise explain to me how you think I am incorrect.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104857_50623
http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104859_6061

Quote from: myself
Quote from: Raphael
Quote from: myself
Quote from: Raphael
Quote from: myself
Quote from: Raphael
[Your] answer has nothing to do with how non-determinism is defined in automata resp. computability theory. You should stop flaming around and get a textbook. If you don't understand the other answers, I don't think we can help you in a comment.

Finite automata are not relevant to the question of non-determinism because finite automata have finite entropy! That is fundamental. The use of the word "non-determinism" for finite automata is a misnomer and a mirage. You can't just take a few researchers' error in appropriate naming and pretend to foist that on correct logic. Also I understand the other answers to the extent I have commented on them, e.g. I understand that finite state machines can't represent unbounded entropy nor generally run Turing-complete programs.

Quote from: myself
"The use of the term nondeterminism when talking about computational complexity theory [...] is clearly all about the entropy"

-- no, it's not.

Appeal to authority is not an argument. Please provide your reasoning or give up. Perhaps these quotes from Minsky can help you gain clarity.

I don't need any reasoning. I am only appealing to the formal definition of non-determinism in automata/computability theory. Which, I daresay, you have either never read or not understood.

The formal definition of a non-deterministic finite automaton (NFA) is finite input entropy (data: the 5-tuple). Thus every NFA can run on a deterministic Turing machine, i.e. it doesn't require a nondeterministic Turing-complete machine. Thus NFAs are not in the class of nondeterministic problems. The notion of "nondeterminism" in an NFA is that its determinism (while clearly present, since every NFA can be converted to a DFA) is not explicitly expanded — not the same as nondeterminism of computation.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104860_50623
http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104861_6061

Quote from: myself
As I explained already, the notion of non-determinism in NFAs couples the non-determinism with the finite entropy. Thus the non-determinism is a local concept of not expanding the determinism, as a form of compression or convenience; thus we don't say NFAs are non-deterministic, rather they possess the appearance of randomness to an oracle unwilling to compute the deterministic expansion. But it is all a mirage, because it can all be expanded deterministically since the entropy is not unbounded, i.e. it is finite. If you don't have an argument, I have no more time to waste on your nonsense.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104864_50623
http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104865_6061

Quote from: myself
The claimed "non-determinism" in NFAs is really randomness in the sense of my definition of the distinction between randomness & nondeterminism. My definition is that randomness is where some of the entropy is not under the control, knowledge (or desired non-explicit expansion, in the case of an NFA) of the input to the program or function. Whereas true nondeterminism is the inability to know the entropy in any case, because it is unbounded. This is precisely what distinguishes randomized from nondeterministic. So NFA should be an example of the former, not the latter as you claimed.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104858_6061

Quote from: myself
Why are my comments disappearing from your answer? Are you appealing to moderators to delete information?

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104868_50623

Quote from: myself
You all will eventually realize I am correct. There is no other formulation of the information set which is harmonious and congruent. My answer is correct as well as my comments. I have no idea how long it will take you to realize the truth. That isn't my job. It is rather bewildering that cliques of grown men can go off on political tirades of up voting each other's comments, downvoting correct answers, and flagging my comments for deletion by moderators. I am ashamed of you. I am now duplicating all these comments on a public forum to document this abuse.

http://cs.stackexchange.com/questions/367/how-can-it-be-decidable-whether-pi-has-some-sequence-of-digits?noredirect=1#comment9096_367
http://cs.stackexchange.com/questions/367/how-can-it-be-decidable-whether-pi-has-some-sequence-of-digits#comment104869_367

Quote from: myself
Quote from: Raphael
Quote from: myself
@Raphael, for the third time, since you've had the moderator delete this comment on the prior two occasions, the notation I have normally seen in the cryptography papers I read is {0}^n which is less ambiguous bcz {} is the notation for set. I am now duplicating these comments where I have 1000s of readers in order to document moderators abusing their authority and deleting information. Ah you are German, that entirely explains the control freak attitude. The link is from Eric S. Raymond the 150+ IQ genius who coined the moniker "open source".

I am a moderator. I removed your comments because they are redundant/obsolete, which is completely in line with our policy. Stop reposting them. (Did not you tell me that argument by authority is no argument at all?)

How is a comment about the notation you are using redundant? I am pointing out that you used a more ambiguous notation than what I've seen used. Why do you feel not only compelled but entitled to delete that comment? And you've removed it again for the 3rd time.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/6061?noredirect=1#comment104883_6061

Quote from: myself
Quote from: Raphael
I'm removing your comments because they are obsolete (literal copies from the comments below your own answer). They are also not information but misleading tirades. If you want to tell the world how wrong every book and article written about this ever is, start a blog. This is not the place.

So you are the mod here (oh lord help us) and thus abusing your position by being the biased judge in your own case? A highly ethical person would recuse themselves from a conflict of interest. Your characterization of the quality of my comments is laughable. You don't even seem to comprehend what nondeterminism is, and yet you are a PhD student in the Algorithms & Complexity group at the University of Kaiserslautern, Germany. An embarrassment to your university. Keep up the ad hominem against me & I will return it in kind.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104882_50623

Quote from: myself
Quote from: Raphael
We can agree on that: please stop trying to "teach" us, it's not helping anybody.

And one piece of advice: if everybody else -- including professors, doctors and graduates of (theoretical) computer science -- tells you that you are wrong, you may want to stop and consider whether they are right. As a help for this reflection process, have a look at this learner classification by Wolcott/Lynch (p36). In particular, note the common weaknesses of the "Biased Jumper".

You continue to appeal to authority. Also you don't know my qualifications. I am still waiting for you to present a coherent logic. I have presented all the coherent logic. Allow the open source peer review from many academics over a sufficient sample, then you will see this initial barrage of nonsense from your private clique here is overturned by reason. P.S. I am very familiar with the concept of the Dunning-Kruger syndrome. I do not like your "I know it all" and very inflexible attitude to learning new concepts. You've been condescending to others who asked reasonable questions.

http://cs.stackexchange.com/questions/5008/differences-and-relationships-between-randomized-and-nondeterministic-algorithms/50623?noredirect=1#comment104872_6079

Quote from: myself
Quote from: myself
-1. Randomness can be deterministic, if it inputs a seed with a bounded number of bits of possibilities (bounded entropy) instead of accessing new seeds an unbounded number of times. Nondeterminism is unbounded possibilities (entropy). Wikipedia defines nondeterministic algorithms to be those which can't be proven to terminate. Thus Wikipedia is not presenting two different definitions. Thus I am downvoting.

Quote from: reinierpost
There is nothing 'unreal' about running a nondeterministic algorithm, it just means that while running, choices need to be made for which the outcome isn't determined by the algorithm. The only difference between 1 and 2 is that in 1 you haven't said when the algorithm is considered to "succeed", whereas in 2 the algorithm must be of the kind that always says "yes" or "no" at the end of a run and you have a decision problem and you define the algorithm's answer to the problem as "yes" if and only if any one of its possible runs answers "yes". So 2 is just a special case of 1.

Everyone is conflating the difference between randomized and nondeterministic. This causes your comment to be muddled. The algorithm responds to the interaction of the input (variable) entropy and its source code's (invariant) internal entropy. Nondeterminism is unbounded entropy. Invariant entropy can even be internally unbounded, such as expanding the digits of π. Randomized means some of the entropy is not coupled to the input as defined (i.e. it may be coming from a system call to /dev/random, or simulated randomness, e.g. an NFA or a PRNG).
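To illustrate the point about NFAs having only finite, fully expandable "nondeterminism", here is a small subset-construction sketch (the example automaton is mine, chosen only for illustration):

Code:
# Subset construction: every NFA is equivalent to a DFA, so the
# "nondeterminism" of an NFA is finite and can be expanded away.
# The example automaton accepts strings over {0,1} whose
# second-to-last symbol is 1.

NFA = {   # state -> symbol -> set of next states
    "a": {"0": {"a"}, "1": {"a", "b"}},
    "b": {"0": {"c"}, "1": {"c"}},
    "c": {},
}
START, ACCEPT = "a", {"c"}

def nfa_to_dfa(nfa, start, accept):
    start_set = frozenset({start})
    dfa, todo = {}, [start_set]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for sym in "01":
            # union of all moves from the states in S: the "choices" collapse
            T = frozenset(q for s in S for q in nfa[s].get(sym, ()))
            dfa[S][sym] = T
            todo.append(T)
    accepting = {S for S in dfa if S & accept}
    return dfa, start_set, accepting

dfa, q0, acc = nfa_to_dfa(NFA, START, ACCEPT)
print(len(dfa))   # -> 4 deterministic states; no guessing left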
sr. member
Activity: 420
Merit: 262
December 11, 2015, 05:26:18 AM
#2
Computational Complexity Classes: Overview

An algorithm (or program) is a set of logical steps that transform some state (i.e. variables). If the algorithm is a function (i.e. not a procedure), then it will read the input state and write the output state, thus being a transformation from the input state to the output state. (I'm referring to a pure function which neither reads nor writes any global variables.)
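A tiny illustration of the distinction (the example names are mine):

Code:
counter = 0                # global state

def impure_step(x):        # a procedure: reads and writes global state
    global counter
    counter += x
    return counter

def pure_transform(state, x):   # a pure function: output depends only on its inputs
    return state + x

print(pure_transform(5, 2))     # always 7 for the same inputs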

A computational complexity class defines boundaries on the computation allowed in an algorithm. Some examples of boundaries are:

1. Result type:
sr. member
Activity: 420
Merit: 262
December 11, 2015, 12:20:52 AM
#1
Join me on a technical journey to try to understand Zerocash (not Zerocoin, which is a separate project, although Zerocash does name its anonymous units zerocoins), starting from first principles.

I am curious and want to broaden and deepen my understanding of the technologies in Zerocash. I want to share my learning experience and perhaps also obtain collaboration with the community in learning and elucidating in layman's terms.

Here is a high-level enumeration of technologies we need to understand in reverse order of what we need to understand first, i.e. where the former requires understanding of the latter:

  • Zerocash
  • zk-SNARKS
  • quadratic arithmetic programs (QAPs)[1]
  • quadratic span programs (QSPs)[1]
  • probabilistically checkable proofs (PCPs), as an alternative to QAPs or QSPs
  • span programs
  • computational complexity classes, e.g. P, NC, NP, E, #P, ⊕P

I recently realized that Zerocash is the only way to achieve anonymity without the (virtually impossible to guarantee) requirement to obscure our IP address.

This is a technical education and discussion-of-technology thread, so moderation is enabled to keep it focused on such (because I understand there are political vested interests regarding anonymity technologies in the altcoin arena).

P.S. sometimes I need a distraction from pure coding, and learning about theories and ideas is an outlet/diversion for me to relieve the stress or mental monotony of the coding process as compared to the "grand ideas" mental process.

[1]http://www.iacr.org/archive/eurocrypt2013/78810623/78810623.pdf
https://eprint.iacr.org/2012/215.pdf