1. You wrote in footnote 2, "To save space, Σ_{j=0}^{outputs} ∆_j is revealed as an unsigned integer at the transaction level." Wouldn't such a sum allow some of the ∆_j to be negative and others positive, offsetting each other (e.g., ∆_1 = −5 and ∆_2 = +5 still sum to 0)? I presume you reveal the ∆ to prove the sum of the squares isn't negative, but it appears that the optimization in footnote 2 is flawed.
Yeah. Footnote removed.
2. There appears to be an unbalanced parenthesis (one too many, or one missing) in "fuzzbits ≈ (curvebits − reservedbits)/2) − valuebits".
Removed.
3. It appears you avoided factoring fuzzValue into a sum of three squares by inventing a heuristic (to factor into a sum of two squares) to save "50% storage and computation requirements", but at the cost of reducing the security of the value blinding by 1/3 to 1/2 of the bits. Why is this a wise tradeoff?
Finding the actual 3 squares for a specific random integer can get too expensive at these bit levels. Some (maybe all) algorithms require factoring (or similar expense), and it is not convenient for a user to have to factor 252-bit numbers to generate a transaction output (regardless of a coin network's scalability). I will do more background reading to see if there is a better alternative.
Worse than that, even if 3 exact squares are found, one of the three square roots is likely to turn out to be very small (fewer than 32 bits). For the proofs to work, none of the roots can be blinded; the commitments have to be to the actual roots. Then either the prover has to generate a new random number and run large-integer factoring again (and again...), or the attacker will know that one of the 3 committed values is cheap to break. At that point the smallest square might as well be sent in plaintext and save the 50% overhead anyway. To solve that problem, [38] suggests using a probabilistic ECC system instead of a deterministic one, which doubles the cost of all commitments (though there are probably some new methods to push some of that cost back down) and introduces the need for additional explicit equality proofs and other complexities.
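To make the cost point concrete, here is a deliberately naive brute-force three-square search. It is not the paper's method and not one of the factoring-grade algorithms referred to above; it is only a sketch of why exhaustive search is not an option at these sizes, since its work grows exponentially with the bit length of the input.

```python
from math import isqrt

def three_squares_bruteforce(n):
    """Naive search for a decomposition n = a^2 + b^2 + c^2.
    Instant for toy inputs, but the nested search grows exponentially
    with the bit length of n, so it is unusable at ~252-bit values."""
    for a in range(isqrt(n), -1, -1):
        r1 = n - a * a
        for b in range(isqrt(r1), -1, -1):
            r2 = r1 - b * b
            c = isqrt(r2)
            if c * c == r2:
                return a, b, c
    return None  # only happens for n of the form 4^k(8m + 7)

print(three_squares_bruteforce(338))  # toy demo; prints (17, 7, 0)
```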
How do you know this heuristic will terminate with less computational cost than factoring?
There's a good chance that the first random number will just work.
If not, all the heuristic does is: two square roots, two squares, one difference, one addition and two comparisons. Plain ultra-fast ops to repeat. Nothing like factoring, and much less work than one ECC multiplication.
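For concreteness, here is a minimal sketch of that retry loop. It assumes one plausible reading of the heuristic (greedily peel off two integer square roots and accept the leftover as ∆ when it is below some bound); FUZZ_BITS and DELTA_MAX are hypothetical placeholders, not parameters taken from the paper.

```python
import secrets
from math import isqrt

FUZZ_BITS = 220        # hypothetical bit width of a random fuzzValue (placeholder)
DELTA_MAX = 1 << 56    # hypothetical acceptance bound on the leftover Delta (placeholder)

def try_two_squares_plus_delta(v):
    """One attempt at v = x1^2 + x2^2 + delta with a small, non-negative delta."""
    x1 = isqrt(v)                  # first integer square root
    r = v - x1 * x1                # what is left after the first square
    x2 = isqrt(r)                  # second integer square root
    delta = r - x2 * x2            # leftover after the second square
    if 0 <= delta <= DELTA_MAX:    # accept only a small, non-negative Delta
        return x1, x2, delta
    return None

def pick_fuzz_value():
    """Retry with a fresh random number until the cheap check passes."""
    while True:
        v = secrets.randbits(FUZZ_BITS)
        hit = try_two_squares_plus_delta(v)
        if hit is not None:
            return v, hit
```

Under this reading the leftover ∆ naturally comes out at only about a quarter of the bit length of v, and a failed attempt costs only the handful of integer operations listed above before retrying.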
4. As another consequence of discarding factoring, how do you know that providing one equation in one unknown will not reveal fuzzValue? Does that equation have provably many solutions?
It does have many solutions, as there are many more integers close (within ∆) to a sum of two squares than there are integers that are exactly sums of two squares, and many more sums of two squares than there are single squares. An easy way to think about it is to set ∆ = 1 and see how many sums of squares work with it (4+9+1=14, 9+16+1=26, 4+16+1=21, 16+25+1=42, 4+25+1=30, ...); the list is bounded only by the modulus of the system.
A proof of exactly how well [sum_of_two_squares, sum_of_two_squares + n] covers the integers would be good to include in the paper (it might also help to calibrate a better upper limit for ∆), but difficult to come up with.
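Short of a proof, the coverage is easy to probe empirically. The following is only an illustrative sketch (exhaustive and slow, nothing like the transaction-time heuristic), run over an arbitrary small window with a few arbitrary ∆ bounds:

```python
from math import isqrt

def is_sum_of_two_squares(m):
    """Exhaustively test whether m = a^2 + b^2 for some integers a, b >= 0."""
    a = 0
    while a * a * 2 <= m:
        b2 = m - a * a
        b = isqrt(b2)
        if b * b == b2:
            return True
        a += 1
    return False

def covered(n, delta_max):
    """Is n within delta_max (from above) of some sum of two squares?"""
    return any(is_sum_of_two_squares(n - d) for d in range(min(delta_max, n) + 1))

# Fraction of a small window covered for a few Delta bounds:
for delta_max in (0, 1, 4, 16):
    hits = sum(covered(n, delta_max) for n in range(1, 5001))
    print(delta_max, hits / 5000)
```

A fixed window like this is not a proof, of course; the density of sums of two squares thins out slowly as numbers grow (on the order of 1/sqrt(log n) by the Landau-Ramanujan theorem), so a real calibration of the upper limit for ∆ would still need the kind of argument described above.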
5. Since the security of x2 is lowered by 1/3 of the bits, how do you know that finding this additional variable, thus providing two equations with two unknowns, will not catastrophically weaken the security via some clever algorithm, such as one for solving systems of non-linear equations? Your cited resource [38] pointed out that prior attempts to be clever and avoid factoring into a sum of three (or four) squares led to a solution that was attackable from both the hardness of the discrete logarithm and the hardness of integer factoring simultaneously, and thus required higher bit widths to compensate, which degraded the efficiency of the algorithm. Isn't the burden of proof on your whitepaper to explain why your cleverness has not introduced an analogous vulnerability? I am just really doubting this attempt to avoid factoring into 3 squares. I would naively tend to trust the author of the cited resource [38], given the breadth of knowledge and peer review that apparently went into it.
The goal in those sources is to provide a proof for one very specific integer; they cannot opt out of the integer that the higher-level problem hands them. For application in transactions with fuzz blinding, CCT is not constrained in that way, and can simply pick a different random number at the output level. This is actually what is done at a low level in many NIZK proofs of small magnitude. It is done in the NIZKP of source [38] itself, in section 4.22.2, under the heading "ZK-proof of membership in a much-widened interval": if the random w doesn't work, the algorithm just tries another w until it succeeds. This does not necessarily weaken the system, although, as you mentioned, the burden of proof is on the whitepaper. I think solving your point 4 is sufficient.
Also as per my response to point 3, in a deterministic ECC system, using the actual three squares can reveal the smallest one of them.