I've recently come across something called "functional encryption" and thought it might have useful applications for solving trust problems in complex transactions (almost any situation requiring a third party to provide a service).
The description I found says, roughly: the secret key enables the key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program, the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program.
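Just to pin down the moving parts for myself, here's a rough sketch of the interface I think that description implies. The names are all mine and nothing here is a real cryptographic construction:

```python
# Sketch of the functional-encryption interface as I understand it.
# All type names and method shapes here are my own labels, not a real library.
from typing import Callable, Generic, NewType, Tuple, TypeVar

Plaintext = TypeVar("Plaintext")
Output = TypeVar("Output")

PublicKey = NewType("PublicKey", bytes)
MasterSecretKey = NewType("MasterSecretKey", bytes)
FunctionKey = NewType("FunctionKey", bytes)   # tied to one specific function f
Ciphertext = NewType("Ciphertext", bytes)

class FunctionalEncryptionScheme(Generic[Plaintext, Output]):
    """decrypt(sk_f, encrypt(pk, x)) should yield f(x) and reveal nothing else about x."""

    def setup(self) -> Tuple[PublicKey, MasterSecretKey]:
        raise NotImplementedError

    def keygen(self, msk: MasterSecretKey,
               f: Callable[[Plaintext], Output]) -> FunctionKey:
        raise NotImplementedError

    def encrypt(self, pk: PublicKey, x: Plaintext) -> Ciphertext:
        raise NotImplementedError

    def decrypt(self, sk_f: FunctionKey, ct: Ciphertext) -> Output:
        raise NotImplementedError
```

The part that matters for what follows is that a function key sk_f only ever lets its holder learn f(x) from a ciphertext, never x itself.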
So my understanding of this is that you can take any function F, convert it into a functional encryption scheme (complete with public key cryptography) and use it to compute arbitrary outputs on a given set of inputs WITHOUT revealing any of the internal workings of F (not to mention your inputs). So, for example, you could define a function that adds 10 to an input, and all of the logic for the program, including its execution, would be encrypted.

Now suppose for a moment I've understood this correctly (and I probably haven't), couldn't this be used by third-party mediation services to provide verifiable proof of the signing services they provide for Bitcoin transactions? Essentially, you would create a function containing transaction-validation logic plus an ECDSA private key for signing transactions (roughly the sketch below), turn it into a functional encryption scheme, and bam: you now have a deterministic third-party service WITHOUT trust. You could then throw away the private key completely, give away the functional encryption scheme, and it would be limited to providing exactly the service it defines. If you then made a new deposit into an address protected by that ECDSA private key, you would know beforehand exactly what services the address provided, and indeed what it COULD reasonably provide, which would help protect against blackmail attacks on your money as well as against the service simply disappearing.
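To make that concrete, here's roughly the plaintext logic I imagine handing to such a scheme before encryption. This is just a sketch: the Withdrawal fields, the rule thresholds, and the use of the python `ecdsa` package are illustration only, it signs a toy message rather than a real transaction hash, and in the proposed setup this whole body, key included, would live inside the encrypted program instead of sitting in plaintext Python.

```python
# Plaintext sketch of the kind of "function" I have in mind, before any encryption.
from dataclasses import dataclass
from typing import Optional

from ecdsa import SECP256k1, SigningKey

# Generated once, baked into the program, then the plaintext copy is destroyed.
EMBEDDED_KEY = SigningKey.generate(curve=SECP256k1)

@dataclass
class Withdrawal:
    destination: str       # address the signed transaction would pay
    amount_satoshi: int    # amount being withdrawn
    unlock_height: int     # earliest block height the rules allow

def mediated_sign(w: Withdrawal, current_height: int) -> Optional[bytes]:
    """Deterministic validation: return a signature only if the hard-coded rules pass."""
    if current_height < w.unlock_height:
        return None                          # time-lock rule not met
    if w.amount_satoshi > 10_000_000:
        return None                          # example per-withdrawal cap
    message = f"{w.destination}:{w.amount_satoshi}:{w.unlock_height}".encode()
    return EMBEDDED_KEY.sign(message)        # the signature is the only thing revealed
```

Anyone depositing to the address derived from EMBEDDED_KEY could read these rules in advance, and if the whole thing really were wrapped in functional encryption (or whatever the right primitive turns out to be), running it would reveal nothing beyond the signatures it chooses to emit.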
Other than that, I can't comment on the practicality. I have heard it's slow, but how slow would it have to be before that's a deal-breaker? Even if it took a whole week for one of these functions to return an output, the result would still be a decentralized, self-contained service, with none of the shoddiness of third-party trust and vastly improved security. What other limitations are there? Aside from the speed, is the program executed in a way that lets it be reverse engineered? I mean, if we have homomorphic encryption, maybe "functional encryption" is exactly what it sounds like, and we can start building more secure services.
Anyway, time to sleep. Let me know if these ideas are going anywhere.