I thought maybe I was going to set this aside and go back to programming, but something that might be important has caught my attention and I want to settle this issue before putting this thread on my back burner.
There is an incongruence in my explanation and conceptualization of the NP section of my incomplete second installment (at least what is there as of now, until I edit again to finish it), and it was confusing me. This incongruence might be an insight into solving the $1 million bounty for proving the relationship between P and NP. Probably not (given that many people smarter than me have thought about this problem extensively), but I want to either correct my understanding or correct complexity theory, whichever is in error.
The NP class includes problems such as asking whether the "traveling salesman problem" has a tour shorter than a specific input threshold distance. Computing the NP-hard (not NP) optimization version of the "traveling salesman problem" with an exhaustively precise, globally optimum search is deterministic in EXP (i.e. intractable exponential time: actually factorial N! for a naive brute-force search, but the improved exact algorithms currently known are still exponential, e.g. Held-Karp dynamic programming in O(N^2 * 2^N), where N is the number of cities input to the computation). Deterministic because it is guaranteed to find the optimum and terminate, but the problem is that the time it takes is asymptotically intractable and can't be realistically computed for very large N.
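To keep myself honest about what "exhaustive, deterministic, but intractable" means, here is a quick sketch I wrote; the distance matrix and function names are just made up for illustration and are not from anything upthread.

```python
# A minimal sketch of the exhaustive, deterministic search described above: it
# always terminates with the true optimum tour, but the running time grows
# factorially with the number of cities N.
from itertools import permutations

def exact_tsp(dist):
    """Brute-force optimum tour over a symmetric distance matrix `dist`.

    Tries all (N-1)! orderings of the cities after fixing city 0 as the start,
    so the cost is O(N!): deterministic and guaranteed optimal, but intractable
    for large N (exact dynamic programming still needs exponential time).
    """
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Toy example with 4 cities (hypothetical distances).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(exact_tsp(dist))  # -> ((0, 1, 3, 2), 18)
```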
There are other estimation algorithms for this problem that can produce less-than-perfectly-optimum answers in P (i.e. tractable polynomial time). These estimation algorithms are nondeterministic because the algorithms can continue searching forever, looking for an improvement on the best result computed so far. The nondeterminism is whether these estimation algorithms will ever find an improved result or not (relative to the current best result found), so it is undecidable when to terminate the algorithm. Converting the NP-hard problem to an NP problem, by asking whether there is a result more optimum than some input distance threshold, is analogous to asking for a consecutive run of 0 digits in the expansion of π, in that it is undecidable (i.e. not knowable a priori) how many steps the algorithm needs to execute to find the requested threshold, or whether the threshold will ever be found. Both problems are in the NP complexity class because if a result meeting the threshold is found, then it can be verified in P time (I need to explain more clearly that NP only requires verification in P).

My conceptualization is that the global search of the perfect deterministic algorithm has been converted to a localized perspective that is sampled over time for the nondeterministic estimation algorithm, and thus aliasing error results. By localization I mean observe how the various estimation algorithms sample some local heuristics, such as the number of times a city has already been visited, instead of performing a deterministic globalized search. Thus the perfectly optimized total/global containment rule (c.f. my upthread discussion about that), which is finite in N and thus possesses a bounded, finite entropy (yet requires intractable EXP time), is converted to a localized containment rule which thus opens the entropy to unbounded recursion (analogously to how the expansion of the digits of π is unbounded recursion) and is thus nondeterministic (yet requires only tractable P time). I should explain in the future the congruence of integer factorization to this model of explanation/conceptualization I have employed.
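As an illustration of what I mean by localized sampling versus verification in P, here is another sketch of my own (reusing the toy distance matrix from above; the greedy heuristic is just one example of a localized estimation algorithm):

```python
# Two ingredients from the paragraph above: a localized heuristic that samples
# the problem greedily rather than searching globally, and a polynomial-time
# verifier for the NP decision question "is there a tour shorter than the threshold?".

def nearest_neighbor_tour(dist):
    """Greedy heuristic: repeatedly hop to the closest unvisited city.

    Runs in O(N^2) (tractable P time) but only looks at local information,
    so the tour it returns is generally not the global optimum.
    """
    n = len(dist)
    tour, visited = [0], {0}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[last][c])
        tour.append(nxt)
        visited.add(nxt)
    return tour

def verify_threshold(dist, tour, threshold):
    """Polynomial-time verification of a certificate for the NP decision problem.

    Checking that a claimed tour visits every city exactly once and that its
    total length is below the threshold takes only O(N) work.
    """
    n = len(dist)
    if sorted(tour) != list(range(n)):
        return False
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return length < threshold

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour = nearest_neighbor_tour(dist)       # [0, 1, 3, 2]
print(verify_threshold(dist, tour, 21))  # True: the greedy tour happens to beat the threshold
```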
So conceptually I do understand the NP computational complexity class. It is proven that global optimization NP problems are NP-complete because the verification of such a nondeterministic global optimization problem is always reducible to a deterministic polynomial computation. Afaics an information-theoretic Pedersen commitment appears to satisfy the requirement of NP in that it can be verified with an algorithm in P. And information-theoretic security is predicated on the entropy of the solution space being unbounded (i.e. every inversion of the commitment is equiprobable), so the valid solution is the one that is revealed for verification.
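To keep the "verification is in P" claim concrete, here is a toy sketch of my own, with tiny insecure parameters that are not from any real system:

```python
# Toy Pedersen commitment, only to show that verifying an opening is a single
# polynomial-time computation: the verifier just redoes one exponentiate-and-
# multiply and compares. Parameters are deliberately tiny and insecure.

# Toy group: the order-q subgroup of Z_p^* with p = 23, q = 11.
# g and h generate that subgroup; log_g(h) is assumed unknown to the committer.
p, q = 23, 11
g, h = 4, 9

def commit(m, r):
    """C = g^m * h^r mod p, with message m and blinding factor r in Z_q."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def verify(C, m, r):
    """Opening check: recompute the commitment and compare. Clearly in P."""
    return C == commit(m, r)

C = commit(7, 3)
print(verify(C, 7, 3))  # True: the revealed (m, r) opens the commitment
print(verify(C, 6, 3))  # False: a different message does not open it with the same r
```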
Even one of the prominent figures in the field pointed out that any meaningful advance would provide paradigm-shifting insight into complexity theory; otherwise it wouldn't be interesting and useful.
So where I am driving with this line of conceptualization is that the "traveling salesman problem" is not NP-hard as claimed, because its perfectly optimum solution is computable in EXP, but the perfectly optimum solution for information-theoretic inversion is undecidable!! The definition of NP-hard is that these problems should be at least as hard to compute as every NP-complete problem. The error in complexity theory seems to be a conflation between the hardness of finding a result versus the hardness of verifying a result. In other words, there is no relationship between P and NP! It was because I first conceptualized nondeterminism as unbounded entropy that I was able to see this incongruence (that is, assuming I am not missing something and my conceptualization is not incorrect).
The statements "verifiable in polynomial time by a deterministic Turing machine" and "solvable in polynomial time by a non-deterministic Turing machine" are totally equivalent, and the proof can be found in many textbooks, for example Sipser's Introduction to the Theory of Computation, section 7.3.
7.3 The Class NP
---8<---
Theorem 7.17
---8<---
We show how to convert a polynomial time verifier to an equivalent polynomial time NTM [nondeterministic Turing machine] and vice versa. The NTM simulates the verifier by guessing the certificate.
Thus, as we can see, what is proved is insufficient (if it is correct that verifying a Pedersen commitment is in NP), because inverting an information-theoretic Pedersen commitment is always undecidable: every guess is equiprobable to be correct, so there are no solutions, and thus the verification can't be converted to a guess on an NTM.
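For contrast, here is how I read the conversion in Theorem 7.17 when it does apply to an ordinary NP problem; the subset-sum style example and all the names are just my own sketch, not Sipser's:

```python
# The NTM "guesses" a certificate nondeterministically and verifies it in
# polynomial time; a deterministic machine can only simulate that guess by
# enumerating every possible certificate, which is what blows up to
# exponential time.
from itertools import product

def deterministic_simulation(verifier, x, cert_alphabet, max_len):
    """Simulate 'guess a certificate, then verify' by exhaustive enumeration.

    The verifier runs in polynomial time on each (x, certificate) pair, but
    the number of candidate certificates is |alphabet|^max_len, so the whole
    simulation is exponential in the certificate length.
    """
    for length in range(max_len + 1):
        for cert in product(cert_alphabet, repeat=length):
            if verifier(x, cert):
                return cert  # an accepting branch of the NTM exists
    return None              # no certificate of bounded length works

# Hypothetical example: x is a target sum, a certificate is a subset of a
# fixed list of numbers, and the verifier checks it in polynomial time.
numbers = [3, 5, 11, 17]

def subset_sum_verifier(target, cert):
    """Check in P time that `cert` is a valid subset of `numbers` summing to target."""
    return (all(c in numbers for c in cert)
            and len(set(cert)) == len(cert)
            and sum(cert) == target)

print(deterministic_simulation(subset_sum_verifier, 16, numbers, len(numbers)))  # -> (5, 11)
```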
So let's assume that Pedersen commitments are not supposed to be in NP. How could we change the definition of NP so that it would exclude Pedersen commitments? We would have to require that every verification can be converted to a guess on an NTM in order to exclude Pedersen commitments. In that case, NP is not complete for polynomial-time, deterministic verification. So what is this pointing to as a deeper insight into computational complexity theory?
The unifying concept (generative essence) appears to be that NP arises not from problems being hard, but because they are undecidable due to unbounded entropy. This leads to a conjecture that intractable problems (at least given the machine model available) are equivalent to unbounded entropy and thus can't be in P. Thus Donald Knuth's intuition expecting a bounded solution would be incorrect.
This would tell us something very profound about our universe. I speculate rather brashly that it ties into the Second Law of Thermodynamics, i.e. that the entropy of the universe is always trending toward maximum. So the edge of our universe need only be intractable to be unbounded. The implication is that we can't contain infinity within itself and be total. Just as an infinite speed of light would collapse the past, present, and future into an infinitesimal point in spacetime, if intractable problems were not unbounded entropy then we could compute the edge of the universe and thus compute nothing.
Now to turn that into some kind of defensible proof, assuming it is even correct, which is not yet certain. I look forward to replies educating me on my myopia.
Edit: I realized my mistake. The Pedersen commitment is not information-theoretically binding. So this means the computation of a guess is deterministic, because any guess always succeeds. And thus Pedersen commitments are actually in P, not NP. I still believe there is some insight to glean from the observation (conjecture or proven?) that converting deterministic intractable problems into tractable problems makes them nondeterministic and unbounded in entropy, but I am not yet seeing how to frame it in a way that proves anything. Perhaps proving that the conversion must be unbounded would prove P ≠ NP. I need to contemplate that.
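Concretely, continuing the toy sketch from above with the same made-up parameters: for every alternative message there exists a blinding factor that opens the same commitment. Producing it here cheats by using the trapdoor log_g(h), which exists mathematically even though nobody is supposed to know it; the point is only to exhibit that such openings exist for every message.

```python
# Toy continuation of the earlier Pedersen sketch (same insecure parameters),
# illustrating the "any guess succeeds" point: every message m_new has some r'
# that opens the very same commitment C.
p, q = 23, 11
g, h = 4, 9
t = 8                  # trapdoor: discrete log of h base g (4^8 mod 23 == 9)
t_inv = pow(t, -1, q)  # inverse of t modulo the group order q

def commit(m, r):
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def fake_opening(m, r, m_new):
    """Return r' so that (m_new, r') opens the same commitment as (m, r)."""
    return (r + (m - m_new) * t_inv) % q

C = commit(7, 3)
r_new = fake_opening(7, 3, m_new=2)
print(commit(2, r_new) == C)  # True: every message has a valid, equiprobable opening
```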
P.S. I think I roughly conceptualize how Zerocash works. The program to prove and verify is converted to a boolean circuit and encoded into a quadratic span program, and the pairings (which originate from the insight of the ⊕P complexity class, due to the completeness of the problem of computing the permanent for the binary case) are obfuscated in ECC fields, so that the computation is homomorphic and the proof only validates if the computation matches the public key. In other words, it is impossible to factor the encoded pairings such that one could create a public key for a program and then fake having run the program when they really didn't. I will explain this more clearly in the future when I have more time to study. The ECC math is beyond what I was educated to know, so it will take me some time to digest it.
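For my own notes, the only piece I feel I can sketch so far is the very first step (flattening a circuit into quadratic constraints), and even this is just my rough reading rather than the actual Zerocash construction; the witness layout, names, and the stand-in field prime are all mine.

```python
# Rough sketch of arithmetization: a boolean circuit is flattened into quadratic
# constraints of the form <a,w> * <b,w> = <c,w> over a witness vector w.
# Here the "program" is a single AND gate: out = x AND y.

def dot(vec, w):
    return sum(a * b for a, b in zip(vec, w))

def check_constraints(constraints, w, modulus):
    """Accept iff every quadratic constraint <a,w>*<b,w> = <c,w> holds mod the field size."""
    return all((dot(a, w) * dot(b, w) - dot(c, w)) % modulus == 0
               for a, b, c in constraints)

# Witness layout: w = [1, x, y, out]  (the leading 1 lets constants be expressed).
constraints = [
    # x is boolean:  x * (1 - x) = 0
    ([0, 1, 0, 0], [1, -1, 0, 0], [0, 0, 0, 0]),
    # y is boolean:  y * (1 - y) = 0
    ([0, 0, 1, 0], [1, 0, -1, 0], [0, 0, 0, 0]),
    # AND gate:      x * y = out
    ([0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]),
]

FIELD = 2**31 - 1  # stand-in prime; real systems use the pairing group order
print(check_constraints(constraints, [1, 1, 1, 1], FIELD))  # True:  1 AND 1 == 1
print(check_constraints(constraints, [1, 1, 0, 1], FIELD))  # False: claims 1 AND 0 == 1
```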