
Topic: bitcoins with homomorphic value (validatable but encrypted) - page 2.

sr. member
Activity: 404
Merit: 362
in bitcoin we trust
Recap: at this point the proof message is {a, H(a'), r} or {a, a', r}; the remaining values are derived: r' is copied from r (or vice-versa), h_i is computed from the sub-proof 1 (v_i=0 proof/forgery) verification relation and cross-checked against the sub-proof 2 (v_i=1 proof/forgery) verification relation, c is computed as H(a,h_i), and c' comes from the KDF seeded with the non-dependent values.

There is one more promising optimization which might take it to 3+2m; I need to verify its feasibility given the dependencies.

OK, I guess I was thinking about it wrong last night; the dependencies don't matter, as a' is public, so it's simple: compute a' from the second verification relation (analogous to computing a public key, but this time we know the public key h_i from computing the first verification relation). Second verification relation: a' =? h^r*(h_i/g)^-c". Now send a new value t=H(a_0',...,a_50'). So that's 3+2m, which is 1+(3+20*2+3+3*2)*32 = 1665 bytes for m=20, e=3 and a one-byte signed offset. Or 3+2m is (3+51*2)*32 = 3360 bytes for full precision with e=0, or perhaps (3+30*2)*32 = 2016 bytes with a clear-text 21-bit exponent, or even (3+27*2)*32 = 1824 bytes.

Coins with different public exponents can still be publicly audited (just raise the smaller coin to the power 2^abs(e1-e2), which is a multiply by 2^abs(e1-e2) in EC form). The exponent won't really need to move except over a multi-year time-frame as bitcoins get smaller. 27 bits is lots of precision, giving a decimal precision of 8 digits, eg 1c on $1mil or $1 on $100mil etc. Just about plausible, though maybe forcing a mix of different-precision coins (reducing privacy), would be 20 bits and a clear-text exponent at (4+20*2)*32 = 1408 bytes.

I think 27 bits is a nice precision balance without the complexity of the encrypted exponent, so I'll focus on a clear-text exponent and 27-bit precision, though the encrypted exponent is a valid optimization at the implementation level.

Encrypted-value coins (in both the mantissa-only and the mantissa-plus-encrypted-exponent form) remain UTXO-compactable after spends, which fits bitcoin's UTXO compaction model.

Clear-text fees are still validatable: just publish f=fee and then include g^f*h^0=g^f with the other validation. Clear-text value payments are similarly validatable: just publish v and compute g^v*h^x. They take the space of one ECS (EC-Schnorr) signature, the same as DSA, with no range proof required. What you sign is a proof of knowledge of the discrete log of h^x.
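
To make the audit concrete, here is a minimal sketch (my own illustration in Python, with tiny toy parameters that are not remotely secure, and a hypothetical commit() helper): the verifier just checks that the product of the input commitments equals the product of the output commitments times g^f, with the sender arranging the blinding keys so they balance.

```
# Toy sketch of the clear-text fee audit (illustration only, NOT secure):
# commitments are g^v * h^x in a tiny Schnorr group; a real system would use
# a 256-bit group or an elliptic curve.
p, q = 2039, 1019                 # p = 2q + 1, toy subgroup order q
g, h = pow(2, 2, p), pow(3, 2, p) # two bases; log_g(h) must be unknown in practice

def commit(v, x):
    """Commitment to value v under blinding/private key x: g^v * h^x mod p."""
    return (pow(g, v, p) * pow(h, x, p)) % p

# Sender: inputs a, b; outputs spend and change; clear-text fee f.
a, b, f = 40, 25, 1
spend = 50
change = a + b - f - spend              # values balance: a + b = spend + change + f
ka, kb, ks = 111, 222, 100              # blinding keys (random in a real system)
kc = (ka + kb - ks) % q                 # pick change blinding so the keys balance too

A, B = commit(a, ka), commit(b, kb)
X, C = commit(spend, ks), commit(change, kc)

# Verifier: sees only A, B, X, C and the published fee f (g^f = g^f * h^0).
assert (A * B) % p == (X * C * pow(g, f, p)) % p
print("clear-text fee audit ok")
```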

Adam
sr. member
Activity: 404
Merit: 362
in bitcoin we trust
I was thinking of the algorithm due to Berry Schoenmakers (the reference is unpublished), but which Brands describes on p129 of chapter 3.

So the only remaining flexibility with that algorithm is to minimize bitcoin precision; it's still currently say 40 bits with the bitcoin price up to $500 and precision down to 1c US. A bit close to the limit if the price rises. Bitcoin actually uses 51 bits (1c precision up to $1m per coin). Anyway you need to run that proof, which involves 40 or 51 subproofs [...]. I can't see that working out at under 14kB to 18kB, probably a lot worse.

An update on this with actual numbers for the size, and it's a bit better than I thought. This is from the above Brands reference, the algorithm due to Schoenmakers. Let's call bitcoin's precision m=51. x is a secret value and p is a commitment to the coin's value (like a Pedersen commitment): p = g^v*h^x. g and h are bases such that the discrete log of h wrt g is unknown (log_g(h) = y, aka g^y = h, with y unknown). I use exponentiation notation because I prefer it, but the same works in EC.

v = bitcoin value; the bits of v are v_0 ... v_50 (ie so that v = sum 0..50 (2^i*v_i), with v_0 the LSB).

Choose m=51 random values in Z_n (where n is the order of the group), call them x_0, ..., x_50, and set x = sum 0..50 (2^i*x_i) mod n. The primary commitment is p = g^v*h^x.

Now make m=51 auxiliary commitments: for i=0..50 calculate h_i = g^(v_i)*h^(x_i).

Prove for each auxiliary commitment that v_i = 0 or 1 as follows. If v_i = 0, compute the v_i=0 proof and forge the v_i=1 proof; if v_i = 1, compute the v_i=1 proof and forge the v_i=0 proof. The construction is such that you can only ever forge one of the two sub-proofs.

If v_i=0:

prove v_i=0: k=random, a=h^k, c=H(a,h_i), c'=random, r=k+c'*x_i mod n.
forge v_i=1: r'=random, c"=c-c', a'=h^r'*(h_i/g)^-c".

else if v_i=1:

forge v_i=0: c'=random, r=random, a=h^r*h_i^-c', c=H(a,h_i).
prove v_i=1: k=random, a'=h^k, c"=c-c', r'=k+c"*x_i mod n.

The sub-proof message is (h_i, a, c', r, a', r'): six 32-byte values.

sub-proof verification is the same for v_i = 0 or v_i = 1 (as the verifier must not be able to tell the value of v_i).

check a =? h^r*h_i^-c' and a' =? h^r'*(h_i/g)^-c" (where the verifier computes c = H(a,h_i) and c" = c-c').

Finally the verifier also checks p =? prod 0..50 h_i^(2^i) (computed in the group).

This works because x=sum 2^i*x_i and v=sum 2^i*v_i.
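
For concreteness, here is a minimal runnable sketch of the protocol above (my own Python illustration with a tiny toy Schnorr group and a 4-bit value instead of m=51; the parameters and helper names are assumptions of the sketch, not anything specified in the thread):

```
# Toy sketch of the Schoenmakers/Brands bit-decomposition range proof described
# above, in the same exponentiation notation - illustration only, NOT secure.
import hashlib, random

p, q = 2039, 1019                    # toy Schnorr group: p = 2q + 1
g, h = pow(2, 2, p), pow(3, 2, p)    # toy bases; log_g(h) must be unknown in practice

def H(*vals):
    data = b"".join(v.to_bytes(4, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def bit_subproof(v_i, x_i):
    """Return (h_i, a, c1, r, a2, r2) showing h_i = g^v_i * h^x_i hides a bit."""
    h_i = (pow(g, v_i, p) * pow(h, x_i, p)) % p
    if v_i == 0:
        k = random.randrange(q)
        a = pow(h, k, p)                          # real proof of log_h(h_i)
        c = H(a, h_i)
        c1 = random.randrange(q)                  # c' in the post
        r = (k + c1 * x_i) % q
        r2 = random.randrange(q)                  # forge the v_i = 1 branch
        c2 = (c - c1) % q
        a2 = (pow(h, r2, p) * pow(h_i * pow(g, -1, p) % p, (-c2) % q, p)) % p
    else:
        c1, r = random.randrange(q), random.randrange(q)   # forge the v_i = 0 branch
        a = (pow(h, r, p) * pow(h_i, (-c1) % q, p)) % p
        c = H(a, h_i)
        k = random.randrange(q)
        a2 = pow(h, k, p)                         # real proof of log_h(h_i / g)
        c2 = (c - c1) % q
        r2 = (k + c2 * x_i) % q
    return h_i, a, c1, r, a2, r2

def verify_subproof(h_i, a, c1, r, a2, r2):
    c = H(a, h_i)
    c2 = (c - c1) % q
    ok1 = a == (pow(h, r, p) * pow(h_i, (-c1) % q, p)) % p
    ok2 = a2 == (pow(h, r2, p) * pow(h_i * pow(g, -1, p) % p, (-c2) % q, p)) % p
    return ok1 and ok2

# Toy run with a 4-bit value instead of m = 51 bits.
m, v = 4, 0b1011
bits = [(v >> i) & 1 for i in range(m)]
xs = [random.randrange(q) for _ in range(m)]
x = sum((2 ** i) * xs[i] for i in range(m)) % q
P = (pow(g, v, p) * pow(h, x, p)) % p            # primary commitment p = g^v * h^x

proofs = [bit_subproof(bits[i], xs[i]) for i in range(m)]
assert all(verify_subproof(*pr) for pr in proofs)

prod = 1
for i, pr in enumerate(proofs):                  # check p =? prod h_i^(2^i)
    prod = (prod * pow(pr[0], 2 ** i, p)) % p
assert prod == P
print("toy range proof verified")
```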

So adding up! Each value is 32 bytes (for 256-bit values/compressed points and a 128-bit security margin), where m is the mantissa in bits, presuming for now that g and h are shared parameters. We have p, plus m times (h_i, a, c', r, a', r'), so that is 1+6m = 307 values, ie 307*32 = 9824 bytes. The other values c = H(a,h_i) and c" = c-c' are computed.

However I spent the weekend on size optimization:

1. note that either r'=random (v_i = 0 case) or r=random (v_i = 1 case), and both are public values, so instead of choosing them randomly we can reuse one for the other: r'=r (v_i=0 case) or r=r' (v_i=1 case). We can't use a seed and KDF to create r' or r respectively, because that would reveal whether v_i = 0 or 1. So then we get 1+5m = 256 values = 8192 bytes.

2. the same argument applies to c' being random and public; in this case we can use a public seed of all the non-dependent values, so then we have 1+4m = 205 values = 6560 bytes.

3. we can send H(a) and H(a') instead (H truncated to 128 bits according to Schnorr, though Brands cautions that one must be careful with this depending on the complexity of the formula proven, so some more things have to be checked) and modify the sub-proof verification step to check H(a) =? H(h^r*h_i^-c') and H(a') =? H(h^r*(h_i/g)^-c"). So then 1+3m equivalent values, ie 154*32 = 4928 bytes.

4. instead of H(a) from 3, send a (we can't do both H(a) and H(a') as we need a): we can compute the public key h_i from the extended Schnorr signature/representation proof (analogous to the way you can compute the public key from a DSA signature). As we're verifying a =? h^r*h_i^-c', we can rearrange and compute h_i = (h^r*a^-1)^(c'^-1). We need to verify the h_i values, but we can do that with one hash h' = H(p,h_0,...,h_50). So then we have 1.5+2.5m, ie 129*32 = 4128 bytes (or 2+3m, ie 155*32 = 4960 bytes, without optimization 3).
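
A small sketch of the h_i recovery in optimization 4, in the same toy group as the earlier sketch (again my own illustration, not production code):

```
# Toy sketch of optimization 4: recover h_i from (a, c', r) instead of sending
# it, then cross-check all recovered h_i against one hash h' = H(p, h_0, ..., h_50).
p, q = 2039, 1019
g, h = pow(2, 2, p), pow(3, 2, p)

def recover_h_i(a, c1, r):
    # from a = h^r * h_i^(-c'):  h_i = (h^r * a^(-1))^(c'^(-1) mod q)
    base = (pow(h, r, p) * pow(a, -1, p)) % p
    return pow(base, pow(c1, -1, q), p)

# quick self-check against a directly computed v_i = 0 transcript
x_i, c1, k = 123, 77, 501
h_i = pow(h, x_i, p)                    # v_i = 0, so h_i = h^(x_i)
a, r = pow(h, k, p), (k + c1 * x_i) % q
assert recover_h_i(a, c1, r) == h_i
```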


Recap: at this point the proof message is {a, H(a'), r} or {a, a', r}; the remaining values are derived: r' is copied from r (or vice-versa), h_i is computed from the sub-proof 1 (v_i=0 proof/forgery) verification relation and cross-checked against the sub-proof 2 (v_i=1 proof/forgery) verification relation, c is computed as H(a,h_i), and c' comes from the KDF seeded with the non-dependent values.


There is one more promising optimization which might take it to 3+2m; I need to verify its feasibility given the dependencies.


Also note one could reduce m, eg to 24 bits, and include an exponent e as well, so that the bitcoin value is m*2^e. 24 bits is enough for 1c precision up to $1.6m. The exponent can be public (and signed as an auxiliary message to the proof). Some ambiguity about the range of the value can be achieved by using an unencrypted mantissa offset o=random(-4,4) and, from the original mantissa, instead encoding m'=m*2^(-o), e'=e+o, so the coin value is the same: m'*2^(e') == m*2^(e).

The exponent itself could even be homomorphically encrypted, proven to lie in a range, and compared for equality using an equality proof showing that the e1 and e2 values are the same, or differ by an unencrypted offset: q1=g^e1*h^z1 and q2=g^e2*h^z2. To compare with an offset, do a range proof on e1 and e2 as above; the encrypted range could be eg o/3 (in multiples of 8 bits), with the remaining bits from the public offset. To verify the offset you'd prove q1 has the same e value as q2/g^o. As we only need to cover 25 bits, a 3-bit range proof for the exponent is enough, which is quite small: recalling (2+3m)*32, with m=3 that is 352 bytes.
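
For what the exponent-equality proof could look like, here is a hedged toy sketch (my own; the q1/q2 naming follows the paragraph above, everything else is an assumption of the sketch): if the committed exponents differ by the public offset o, then q1/(q2/g^o) is a pure power of h, and a plain Schnorr proof of knowledge of its discrete log to base h convinces the verifier without revealing the exponents.

```
# Toy sketch: proving two exponent commitments hide values that differ by a
# public offset o.  q1 = g^e1 * h^z1, q2 = g^e2 * h^z2 with e2 = e1 + o, so
# q1 / (q2 / g^o) = h^(z1 - z2) and a Schnorr PoK of its log to base h suffices.
import hashlib, random

p, q = 2039, 1019
g, h = pow(2, 2, p), pow(3, 2, p)

def H(*vals):
    data = b"".join(v.to_bytes(4, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

e1, o = 11, 3
z1, z2 = random.randrange(q), random.randrange(q)
q1 = (pow(g, e1, p) * pow(h, z1, p)) % p
q2 = (pow(g, e1 + o, p) * pow(h, z2, p)) % p

D = (q1 * pow(g, o, p) * pow(q2, -1, p)) % p    # = q1 / (q2 / g^o) = h^(z1 - z2)
w = (z1 - z2) % q

k = random.randrange(q)                         # prover: Schnorr PoK of log_h(D)
A = pow(h, k, p)
c = H(A, D)
s = (k + c * w) % q

assert pow(h, s, p) == (A * pow(D, c, p)) % p   # verifier
print("exponent offset proof ok")
```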

You could probably set m=20 and e=31/8=4 for (2+3m+2+3e)*32 = 2432 bytes.


Finally, some interesting implications arise: you can add coins without owning them (owning them is knowing the private keys x_i). You can pay someone by adding your coin to their coin and broadcasting (or sending privately) the private key encrypted with their public key. Anyone can verify publicly that the coin adds up. You can't overflow mod n to attack, because there are by definition fewer than 21mil BTC. You can randomize a coin by adding a 0-value like g^0*h^x. The way you add is to multiply the coins together (or point addition in EC terminology). The new private key is the sum of the old private key and the new one. No one not involved can tell whether that was a 0-valued red-herring or a 1c payment or $100. The network itself could pre-emptively add the coin without the recipient doing anything, and compact them, if the encryption is also a proof. (Ie I try to add a new value v to your coin g^v*h^x; it comes with a range proof proving v is positive and doesn't wrap mod n, and to do that I encrypt the key x with your public encryption key pk (which was signed by the person who sent you the input), and I also prove that the person with the private key corresponding to pk can decrypt to the same value as x (which anyone can verify), proving equality of discrete log.) If you had to, you could probably reuse the coin public key for encryption (like sharing a Schnorr signature public key and an Elgamal public key; you need to be careful but there is some analysis).
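
As a toy illustration of the coin-addition point (my own sketch, same toy parameters and caveats as above): multiplying the commitments adds the values, the combined private key is the sum of the old keys, and multiplying in a zero-valued coin re-randomizes without changing the value.

```
# Toy sketch of homomorphic coin addition (illustration only, NOT secure).
p, q = 2039, 1019
g, h = pow(2, 2, p), pow(3, 2, p)

def coin(v, x):
    return (pow(g, v, p) * pow(h, x, p)) % p    # g^v * h^x

v1, x1 = 30, 400          # your coin (you know x1)
v2, x2 = 12, 555          # their coin (they know x2)

# Anyone can add the coins by multiplying the group elements (point addition
# in EC form) without knowing either private key.
combined = (coin(v1, x1) * coin(v2, x2)) % p
assert combined == coin(v1 + v2, (x1 + x2) % q)     # new private key = x1 + x2

# Re-randomize with a zero-valued coin g^0 * h^x3: the value is unchanged and
# an observer cannot tell a zero-valued red-herring from a real payment.
x3 = 777
assert (combined * coin(0, x3)) % p == coin(v1 + v2, (x1 + x2 + x3) % q)
```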

For inputs, you only need to refer to the outputs (and prove knowledge of the private key) without having to do a new range proof, because by definition all the bitcoins in existence can't wrap if added up. You do need fresh range proofs when you split a coin (to spend part of it). You can split the private key too (x = x1+x2).

To create a coin receiving address you just produce a zero-valued coin and prove knowledge of discrete log wrt h (because g^0*h^x = h^x).  Then people can send you coins without being able to spend them.

For mining you prove knowledge of representation g^25*h^x which is efficient (no range proof because 25 is public).  Then you split the coin with range proofs when you spend (or when the mining pool divides up the reward to the pool workers).

You maybe no longer need an address (hash of public key) that signs, because the coin private key is a signature key, but you do need an encryption public key.

A type of taint mitigation is to make a 0-value or low-value spend to some statistical number of addresses together with your real spend.

I really need to implement the crypto as a library or stand-alone demo program and double-check some things are not confused, but this appears quite practical.

The bloat caused may not be so bad, because there is some privacy gain that may reduce the incentive to have multiple addresses or use mixes.

Adam
legendary
Activity: 1470
Merit: 1006
Bringing Legendary Har® to you since 1952
Correct me if I am wrong here, please.
You are wrong. But you're asking in the wrong place— go see the nice article on wikipedia.
I did see the article on wikipedia, and it says that full homomorphic encryption is a scientists' wet dream and it has never been achieved.

Well technically it was achieved by Gentry and a few related improvements on that, however those schemes are extremely far from practical.
So for practical usage it has not really been achieved yet. That's what I wanted to say.

Anyway, it would be extremely cool to have homomorphic encryption as it would enable ultimate blockchain compression (there was a topic here claiming that).
sr. member
Activity: 404
Merit: 362
in bitcoin we trust
Correct me if I am wrong here, please.
You are wrong. But you're asking in the wrong place— go see the nice article on wikipedia.
I did see the article on wikipedia, and it says that full homomorphic encryption is a scientists' wet dream and it has never been achieved.

Well technically it was achieved by Gentry, plus a few related improvements on that, however those schemes are extremely far from practical. Ie megabyte keys, a bajillion machine cycles per encrypted operation, etc. Still, it was a nice result that proves it is actually possible, which was uncertain in the 30+ years since it was posed as a question by Rivest et al. They even somewhat recently released a library, so one can download it and try out how expensive it is.

Anyway, for homomorphically encrypted coin values you don't need fully homomorphic encryption (ie you don't need both additively homomorphic & multiplicatively homomorphic; additive only is enough). In fact that's even easy, and there are several additively homomorphic encryption systems, like Elgamal and Paillier. The hard part is efficiently preventing the user adding n, the order of the group, to their balance for massive-scale fraud.
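
As a small illustration of the additive-only case (my own toy sketch, not a recommendation): "exponential" ElGamal puts the value in the exponent, so ciphertexts multiply to an encryption of the sum; the catch is that decryption needs a discrete-log search over the value range, which is one reason commitment-based designs are attractive here.

```
# Toy sketch of additively homomorphic ("exponential") ElGamal - illustration only.
import random

p, q = 2039, 1019
g = pow(2, 2, p)
sk = random.randrange(1, q)
pk = pow(g, sk, p)

def enc(m):
    k = random.randrange(1, q)
    return pow(g, k, p), (pow(g, m, p) * pow(pk, k, p)) % p

def add(c1, c2):                       # componentwise multiply = add the plaintexts
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def dec(c):                            # recover g^m, then search for small m
    gm = (c[1] * pow(pow(c[0], sk, p), -1, p)) % p
    return next(m for m in range(q) if pow(g, m, p) == gm)

assert dec(add(enc(7), enc(5))) == 12
print("additive homomorphism ok")
```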

Adam
legendary
Activity: 1470
Merit: 1006
Bringing Legendary Har® to you since 1952
Correct me if I am wrong here, please.
You are wrong. But you're asking in the wrong place— go see the nice article on wikipedia.
I did see the article on wikipedia, and it says that full homomorphic encryption is a scientists' wet dream and it has never been achieved.
staff
Activity: 4284
Merit: 8808
Correct me if I am wrong here, please.
You are wrong. But you're asking in the wrong place— go see the nice article on wikipedia.
legendary
Activity: 1470
Merit: 1006
Bringing Legendary Har® to you since 1952
The starting point is it is known in the literature that you can do additively homomorphic encryption,
Wait a moment.

Wasn't homomorphic encryption a theoretical academic thing that has not been proved to even exist (or be possible to perform) yet ?

Correct me if I am wrong here, please.
full member
Activity: 126
Merit: 110
Andrew Miller
There's a section in this bibliography about ZK range proofs, but I'm not familiar with any of them. Maybe there's something useful in here.
http://www.cs.ut.ee/~lipmaa/crypto/link/zeroknowledge/pok.php
sr. member
Activity: 404
Merit: 362
in bitcoin we trust
I don't think anyone knows how to do an efficient ZK proof for things like less-than, or bit extraction, or mod x.

I believe there is a description [...] from chapter 3 of Brands' PhD thesis/MIT press book "Rethinking Public Key Infrastructures", which is available for free download. It's a component of ZK range proofs or ZK less-than.

http://www.credentica.com/the_mit_pressbook.html

Uh oh, I think I made a mistake in the reference to the t parameter - it's the precision of the range, not the number of most significant bits. That's not as nice, but still perhaps just about practical. (I was incorrectly thinking that by changing the value range to 0..2^254-1 and using the step-wise-addition can't-wraparound argument made in this thread, I had it cracked and would make the proof smaller, but it's actually worse and the wrong direction.)

I was thinking of the algorithm due to Berry Schoenmakers (the reference is unpublished), but which Brands describes on p129 of chapter 3.

To adapt for that, it's back to what I had in mind previously, which I was not that happy with the efficiency of. Crap.

Now that I've explained the thing prematurely (before it's actually nicely efficient, as it turns out), here's the explanation of where it got to in the previous version.

Choose value x with encrypted value X=xG+yH (or X=g^x*h^y in non-EC syntax) and then the proof as described in Brands' p129 write-up of Schoenmakers, with reference to the OR proof (more easily comprehensible from [...]). However I temporarily thought t was 2, which would've been sweet. So the only remaining flexibility with that algorithm is to minimize bitcoin precision; it's still currently say 40 bits with the bitcoin price up to $500 and precision down to 1c US. A bit close to the limit if the price rises. Bitcoin actually uses 51 bits (1c precision up to $1m per coin). Anyway you need to run that proof, which involves 40 or 51 subproofs, each including one Schnorr proof (2x ECDSA-sized at least, because of the need to prove x_i = 0 or x_i = 1 by the generic OR construct of creating one real ZKP with a fairly-chosen challenge c (eg c=H(params)) and then two forged/simulated ZKP transcripts with forged c_1 and c_2, where c = c_1+c_2 mod n). I can't see that working out at under 14kB to 18kB, probably a lot worse.

[EDIT: the general OR simulation argument should be one forged transcript (with c_1 calculated backwards from the simulated protocol run), then c = H(params), and finally c_2 chosen so that c = c_1+c_2 mod n, with a real ZKP using c_2.]

There was also another unpublished idea by Schoenmakers I mentioned that is more direct, but it doesn't work in EC and needs a non-standard p, q choice. There's also an idea by someone to use a number other than 2 in the \alpha = \sum \alpha_i 2^i mod q line. Lipmaa also has a clever idea involving a generic argument about subset sums ("Additive combinatorics and discrete logarithm based range protocols"), but the advantage over Camenisch's previous result is not game-changing for our purposes. There were dozens more; maybe I'll post a fuller list of papers later on.

Adam
sr. member
Activity: 404
Merit: 362
in bitcoin we trust
How do you intend to do a proof that the upper two bits are zero?

I believe there is a description/footnote/reference in chapter 3 of Brands' PhD thesis/MIT press book "Rethinking Public Key Infrastructures", which is available for free download. It's a component of ZK range proofs or ZK less-than.

http://www.credentica.com/the_mit_pressbook.html

I'll write it up presently otherwise.  (Going to be AFK for a bit).

Adam
sr. member
Activity: 404
Merit: 362
in bitcoin we trust
How do you intend to do a proof that the upper two bits are zero?

I don't think anyone knows how to do an efficient ZK proof for things like less-than, or bit extraction, or mod x. There are generic ZK schemes like Pinocchio and TinyRAM but they're expensive. What do you have in mind?

Aye, aye, I'm getting there, give me a chance :)

The existing ZKPs of less-than use a proof of a bit being 0 or 1 as a building block (that's why they are inefficient: they do that dozens of times depending on the ranges). Ie to prove x <= 5 = 101b (modulo, for illustration, n=257, a 9-bit prime), they prove for x=000000101b first that x < 100b, by proving the top 6 bits are 0, and then that xG-4G = 01b <= 1, by proving the top 8 bits of xG-4G are 0.

Adam
full member
Activity: 126
Merit: 110
Andrew Miller
How do you intend to do a proof that the upper two bits are zero?

I don't think anyone knows how to do an efficient ZK proof for things like less-than, or bit extraction, or mod x. There are generic ZK schemes like Pinocchio and TinyRAM but they're expensive. What do you have in mind?
sr. member
Activity: 404
Merit: 362
in bitcoin we trust
How do validating nodes sum input and output values to determine a fee? This is the only part that doesn't seem clear to me. Doesn't the network need to know sum(outputs) - sum(inputs)?

Unlike currently, you would have to communicate the fee explicitly and have it validate as the inputs summing to the outputs. (It is shown in my second post, where f is communicated while the inputs a,b and outputs x (spend) and c (change) are in encrypted form, and yet it can be audited that A+B=X+C+fG.)

Adam
sr. member
Activity: 404
Merit: 362
in bitcoin we trust
Will post more crypto level details [...] presently.

So if you assume the existence of a ZK proof of knowledge of x (with the syntax that x_i is the i'th bit of x, starting from 0 for the LSB), ZKPoK{(x):x_i=0}, then we can check the two most significant bits are 0 with ZKPoK{(x):x_255=0 and x_254=0}, by using it twice.

This proof will work on values in some group; we use an EC instance of a Schnorr group (like DSA: same keys and parameters, but a simpler signature; DSA is a Schnorr signature variant that offers no advantages over the original AFAIK, and many disadvantages that I mentioned in the OP). So we will call the value x, where the top two bits must be 0 (x_255 = x_254 = 0), the bits x_{253..202} are the value in satoshis (same as existing precision), and the remaining bits x_{201..0} are the private key.

xG will be public and is also the coin public key (slightly different from an existing bitcoin address public key). Now people can verify that encrypted input values (A,B) add up to encrypted output values (X,C) and fee f, where X is the encrypted spend, C is the encrypted change and f is the fee, because A=aG, and A+B=X+C+fG because aG+bG=xG+cG+fG. This enforces that a+b mod n = x+c+f mod n. The sender must include ZKPoK{(a,b):a_255 = a_254 = 0 and b_255 = b_254 = 0}. The sender must also encrypt x and send it to the recipient so that he can prove information about it when he in turn spends it. f is public because anyone must be able to collect it and attach it to their address via a mining event.

When the recipient spends xG he will have to similarly prove that x_255 = x_254 = 0.

The reason we need two bits is because n is not a power of two. Say for simplicity n = 250. Now imagine a=3, b=1; we have to watch out for fraud by adding n, because a+b+n = 254 and a+b+n mod n = 4, so with x=127, c=126, f=1, checking only the top bit one could forge value by n: a+b+n = x+c+f mod n, ie 3+1 = 127+126+1 mod 250. The attacker has increased his value by 250 (minus 1 fee) without using any values with msb != 0. If we prove the top two bits are 0 then we can prevent this attack for up to 3 inputs (because 3*64<250, but not 4 inputs, because 4*64>250). So for more than 3 inputs we also prove that each intermediate calculation in a 3-ary expression tree has its two msbits = 0. Eg where Z(x) is shorthand for ZKPoK{(x):x_255=0 and x_254=0}: Z(a),Z(b),Z(c),Z(a+b+c),Z(d),Z(e),Z(f),Z(d+e+f),Z(g),Z(h),Z(i),Z(g+h+i),Z((a+b+c)+(d+e+f)+(g+h+i)). So as mentioned in the OP there must be k+log3(k) pairs of proofs for k inputs (pairs because there is one for x_255=0 and one for x_254=0).
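
A small arithmetic sketch of that wrap-around forgery (my own illustration, using the toy n=250 numbers above; the balance check is done on clear-text values here purely for illustration, whereas the real check happens on the group elements aG+bG = xG+cG+fG, and top_two_bits_zero() merely stands in for the ZKPoK):

```
# Toy demonstration of why the two-msb-zero proofs are needed.
n = 250
a, b = 3, 1                      # honest input values
x, c, f = 127, 126, 1            # forged outputs: a + b + n = x + c + f

# The homomorphic balance check alone passes, because the addition is mod n:
assert (a + b) % n == (x + c + f) % n

def top_two_bits_zero(v, bits=8):        # stand-in for ZKPoK{v_255 = v_254 = 0}
    return v < 2 ** (bits - 2)           # ie v < 64 for this 8-bit toy modulus

# The range proofs catch the forgery: 127 and 126 do not have their top two bits zero.
assert top_two_bits_zero(a) and top_two_bits_zero(b)
assert not (top_two_bits_zero(x) and top_two_bits_zero(c))
print("wrap-around forgery detected by the bit proofs")
```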

Finally, notice that proving knowledge is a kind of signature, so in principle you could pay to another person's existing balance, with their approval, rather than transfer control of a value to a new address. Ie say your recipient already has a balance y and you want to add x to it for them: you disclose x to them (by encrypting it for their public key, say) and then they can add it, and therefore you can replace coin addresses by existing balances to save on signatures and keys.

eg ZKPoK[Y]{(a,b,x,y,c),f:a+b=x+c+f and Z(a) and Z(b) and Z(y)}, E(x)

where [Y] indicates an auxiliary signature of some information combined with the PoK.  So the sender is binding his spend of X to say it can be added to existing balance Y (where the sender does not know y). 

Now the block validation will allow the recipient to replace his balance with Y'=Y+X=(x+y)G.  As the sender encrypts the value x for the recipient he can now make onwards transfers.

Taint mixing is also possible (though not that cheaply at scale) by spending dust to some number of other users as a kind of ring-transfer. (A ring signature is a scheme where you can implicate, without their permission, other signers as possible originators of a signature, where the originator wants to hide their identity among the possible authors.) So that's a bit like coinjoin, but you don't need the active collaboration of the other users. If the E(values) are off-chain (eg sent direct to the recipient) the recipient may or may not even be able to use the extra value: if he doesn't know the dust value he can reject it by referring only to his previous balance, but that does not disprove that he could still have it in reserve - it's hard to prove a negative. We could alternatively attach it to the chain (at some size cost), maybe even encrypted with a proof that the recipient can decrypt it. (It is easy to prove that xG and xH use the same x, where H=yG with y unknown to the prover, so EC Elgamal could give a provably decryptable value, in which case the software can incorporate the coin and block the owner from using earlier versions.)
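
A hedged sketch of the equality-of-discrete-logs proof alluded to there (a Chaum-Pedersen-style proof, my own toy illustration): it shows two elements xG and xH use the same x without revealing it, which is the ingredient that would make an ElGamal-encrypted key provably decryptable by the intended recipient.

```
# Toy sketch of an equality-of-discrete-logs (DLEQ) proof - illustration only.
import hashlib, random

p, q = 2039, 1019
g = pow(2, 2, p)
h2 = pow(g, random.randrange(1, q), p)   # second base "H = yG"; y unknown to the prover in practice

def H(*vals):
    data = b"".join(v.to_bytes(4, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q)
A, B = pow(g, x, p), pow(h2, x, p)       # claim: the same x is behind both

k = random.randrange(1, q)               # prover
t1, t2 = pow(g, k, p), pow(h2, k, p)
c = H(t1, t2, A, B)
s = (k + c * x) % q

assert pow(g, s, p) == (t1 * pow(A, c, p)) % p    # verifier
assert pow(h2, s, p) == (t2 * pow(B, c, p)) % p
print("DLEQ proof ok")
```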

(Dust is a value, as now, that is considered small; but because x_{201..0} is chosen randomly by the sender and is not computable without the ability to take discrete logs, it has to be communicated to the recipient in order to be of value to them.)

With mining, the original coin is of known value 25 bitcoins (and no dust); however the recipient can still spend it securely, without people working out his current balance by elimination, by keeping dust as change (which he may never spend, as it has truly negligible fractional nano-satoshi value).

Adam
legendary
Activity: 905
Merit: 1012
How do validating nodes sum input and output values to determine a fee? This is the only part that doesn't seem clear to me. Doesn't the network need to know sum(outputs) - sum(inputs)?
sr. member
Activity: 404
Merit: 362
in bitcoin we trust
I have been researching this for a few months on and off, because it seems like an interesting construct in its own right, a different aspect of payment privacy (eg from a user perspective, or for auditable but commercially sensitive information if we expect commercial entities to use smart contracts), but also because, beyond its obvious direct use, it may enable the realization of some features that we have not thought of yet, or perhaps improve the efficiency of zerocoin-like ideas (I don't see how yet, but it seems related).

The starting point is that it is known in the literature that you can do additively homomorphic encryption, and there are also zero-knowledge proofs of less-than. (Proving E(a)+E(b)=E(a+b) is not enough; you also have to prove that the attacker did not add n to his balance during the process, as the addition is modulo n, the order of the group, not normal arithmetic.) It's more efficient to do less-than a power of 2, but arbitrary values are possible by composition (all values are buildable from power-of-2 ranges after all).

Both of these (homomorphic add and ZK less-than proofs) are based on established conservative crypto assumptions; however the generic ZKP of less-than is big (the number of digital-signature-sized proofs is proportional to v, where v=log2(n/vmax)+1 = log2(n)-log2(vmax)+1; in bitcoin log2(n) = 256, and vmax depends on the encoding, but there are 21 million BTC < 2^51 satoshis). And potentially also slow, as it involves verifying v signatures.

Originally I was thinking that would work out to be embarrassingly slow and big (something like zerocoin), so I held off discussing it if and until I could find a practical, efficient method. There is also an efficient approximate less-than ZKP by Berry Schoenmakers (that he never bothered to write up in a paper, natch). However that seemed to me to rest on a more non-standard assumption, based on choosing non-standard p & q for the Schnorr group, and also not to work over elliptic curves, and so not with ECDSA (only Schnorr, of which DSA is a variant).

However finally I think I saw the missing step: the way bitcoin uses and validates coin values, you only need to prove the two most significant bits of each coin are 0, and use an encoding which sets vmax<2^254 (ie increase bitcoin precision from 51 to 254 bits; the less significant extra bits beyond the < 2^51 satoshis are the private key). That gives an encoding with 203 bits of security, which is greater than the security of ECDSA over P256, which offers 128 bits.

And so there is then finally a non-embarrassing-efficiency way to do it with EC-Schnorr signatures, at the cost of only 2 ECS sigs (same cost and size as ECDSA) per input and output where #input < 4 and #output < 4. For #input > 3 you need to also show eg ZKPoK{(a+b+c,d):a+b+c<2^254,d<2^254}, and the same if #output > 3. So 2k+2log3(k) signatures for k inputs or outputs. (3 because 3*2^254 < n.)

Btw there are good reasons to use ECS over ECDSA IMO; it's still conservative and simpler, and anyway DSA is based on it. Because it's simpler (no *k^-1 step) it's more flexible and easily supports multiparty (n of n) and even threshold signatures (k of n), which allows multisig in the space of one ECS signature (and without even disclosing the existence of a k-of-n, nor how big k or n is); there are also some arguments that ECS is more secure than ECDSA in its assumptions about hash properties. To do multiparty with ECDSA is a research topic; even multiparty DSA is ridiculously complex and depends on the security of a homomorphic encryption instance big enough to hold temporary results involving powers of q, eg the Paillier cryptosystem with big keys, and threshold DSA on the even more complex Damgard-Jurik extended Paillier scheme. The flexibility of ECS makes it suitable for many things, eg the zero-knowledge selective disclosure and blinding certification features of Brands certificates, based on the representation problem (which is a kind of generalization of Pedersen commitments, itself a generalization of Schnorr). There are a huge number of things you can do with Brands certificates towards smart contracts that preserve the privacy of the attributes of a person relying on the smart contract, by using ZK proofs of boolean formulae etc.
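
To illustrate the n-of-n multiparty point, here is a deliberately naive toy sketch of two-party Schnorr aggregation (my own; a real protocol must additionally defend against rogue-key and nonce-reuse attacks, MuSig-style): the joint key and nonce are just sums, and the result verifies as one ordinary Schnorr signature of the usual size.

```
# Toy sketch of naive 2-of-2 Schnorr signature aggregation - illustration only.
import hashlib, random

p, q = 2039, 1019
g = pow(2, 2, p)

def H(*vals):
    data = b"".join(v.to_bytes(4, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x1, x2 = random.randrange(1, q), random.randrange(1, q)    # two signers' keys
X = pow(g, (x1 + x2) % q, p)                               # joint public key

k1, k2 = random.randrange(1, q), random.randrange(1, q)    # per-signer nonces
R = pow(g, (k1 + k2) % q, p)                               # joint nonce
msg = 42
c = H(R, X, msg)
s = (k1 + c * x1 + k2 + c * x2) % q                        # sum of partial signatures

# Verified exactly like a single-signer Schnorr signature, same size.
assert pow(g, s, p) == (R * pow(X, c, p)) % p
print("2-of-2 aggregate signature ok")
```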

Also, for the cost of an extra signature per value, you can even have unconditional value privacy. (Ie a hypothetical all-powerful entity able to perform discrete log with minimal effort still cannot tell how much money you paid.) This is because, like an OTP, all possible values are equally possible: eg with a Pedersen commitment over two base points G, H, ie xG+yH, there are n possible solutions covering all possible values of x and y (where x is a secret key and y is a value you prove things about). The powerful adversary can just solve and find all the possibilities, but your publicly recorded ZKPs do not show which x value you knew, nor which y was the value you transferred.

Will post more crypto level details and perhaps an openSSL based implementation presently.

Adam