
Topic: Invoices/Payments/Receipts proposal discussion

legendary
Activity: 1526
Merit: 1129
A discussion of why the payment protocol should default to using BIP32 extended public keys instead of individual addresses, and a debunking of several common objections:

http://bitcoinism.blogspot.com/2014/01/business-accounting-and-bitcoin-privacy.html

Good article, thanks for writing it.
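
For anyone wondering what "BIP32 extended public keys instead of individual addresses" buys you in practice: the merchant can hand an accountant (or a web server) just the extended public key, and fresh per-invoice addresses fall out of non-hardened child derivation with no private keys involved. A rough sketch of CKDpub using the Python ecdsa package (my own illustration, not code from the article; the spec's edge-case checks are omitted and a real wallet should use a maintained BIP32 library):

Code:
import hmac, hashlib
from ecdsa import SECP256k1, VerifyingKey

def ckd_pub(parent_pubkey: bytes, chain_code: bytes, index: int):
    """BIP32 CKDpub: derive the i-th non-hardened child of an extended public key."""
    assert index < 0x80000000, "public derivation only works for non-hardened children"
    data = parent_pubkey + index.to_bytes(4, "big")            # serP(K_par) || ser32(i)
    I = hmac.new(chain_code, data, hashlib.sha512).digest()
    IL, child_chain = I[:32], I[32:]
    parent_point = VerifyingKey.from_string(parent_pubkey, curve=SECP256k1).pubkey.point
    child_point = SECP256k1.generator * int.from_bytes(IL, "big") + parent_point
    child_key = VerifyingKey.from_public_point(child_point, curve=SECP256k1)
    return child_key.to_string("compressed"), child_chain      # address i = hash of this key

The holder of the matching extended private key can spend from every derived address; an auditor holding only the xpub can merely watch them.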
legendary
Activity: 1120
Merit: 1149
A discussion of why the payment protocol should default to using BIP32 extended public keys instead of individual addresses, and a debunking of several common objections:

http://bitcoinism.blogspot.com/2014/01/business-accounting-and-bitcoin-privacy.html

Or if you want to be able to reveal the address publicly without sacrificing privacy: Stealth Addresses
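
A very rough sketch of the stealth-address idea, for the curious (my own illustration using the Python ecdsa package, not code from any proposal): the payer does an ECDH exchange against the recipient's published scan key and tweaks the recipient's spend key into a one-time destination that observers can't link back to the published address.

Code:
import hashlib
from ecdsa import SigningKey, SECP256k1

def one_time_payment_key(scan_pub, spend_pub):
    """Payer side: derive a one-time destination from the recipient's published keys."""
    ephemeral = SigningKey.generate(curve=SECP256k1)                       # fresh per payment
    shared = scan_pub.pubkey.point * ephemeral.privkey.secret_multiplier   # ECDH point
    tweak = int.from_bytes(
        hashlib.sha256(int(shared.x()).to_bytes(32, "big")).digest(), "big")
    pay_point = spend_pub.pubkey.point + SECP256k1.generator * tweak
    # The ephemeral public key is published alongside the payment; the recipient,
    # who holds the scan private key, recomputes the tweak, finds the output, and
    # can spend it with (spend private key + tweak).
    return ephemeral.get_verifying_key(), pay_point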
legendary
Activity: 1400
Merit: 1009
A discussion of why the payment protocol should default to using BIP32 extended public keys instead of individual addresses, and a debunking of several common objections:

http://bitcoinism.blogspot.com/2014/01/business-accounting-and-bitcoin-privacy.html
legendary
Activity: 3430
Merit: 3071
I was seeking clarification on this, and the reality is that it is no greater a danger to your privacy when paying than what everyone does today, and it increases the attack sophistication needed to replace a genuine merchant payment address with the address of an attacker.

The danger of any MITM attack with the Payment Protocol is not a privacy leak, but someone using a faked or stolen certificate in combination with further attack code infecting the webmerchant itself, all coming together to allow the attacker to supply their own address for the user to pay, instead of the address belonging to the webmerchant. The Payment Protocol could be tricked into validating the authenticity of the faked/stolen certificate, and in turn incorrectly verifying the address as one supplied by the webmerchant. The Payment Protocol can't solve this problem; it's an issue with the inherent design of the CA system. But pulling it off successfully is more difficult to engineer than attacking the webmerchant payment methods being used today, hence the barrier to hijacking payments through HTML web pages is higher with the Payment Protocol.

Personal information (like your real name, postal address and the Bitcoin address you pay from) is now, and will continue to be, transmitted to the merchant over plain old HTTP, or over SSL-encrypted HTTPS sessions if they're using what is currently considered best practice. What level of security the webmerchant uses to store this information on their servers is another factor. These things have nothing to do with the Payment Protocol; it's between you and them: your browser and OS, their webstore software and webserver configuration.
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
I skimmed the whole thread.

Mike Hearn's FAQ helped explain the reasoning behind this: PKI is hard, apparently.

I think the hostility to this stems from the fact that the CA system is known to be broken. An attacker will not try to change the "common name"; they will change the payment address: the very thing being hidden by this proposal.

With money on the line, attackers may even be well-funded, purchasing one of those corporate CA spoofing appliances in order to go after a high-value target.

I was also wondering why it was stated as fact that self-signed certificates allow the MITM attack. Later in the thread, somebody even pointed out that CAs themselves use self-signed certificates. The difficulty, hinted at in this link, is that X.509 certificates do not allow the end-user to trust a specific CA for a specific domain. All CAs can sign for all domains, which makes trying to be my own CA for *.economicprisoner.com a very bad idea.

In my experimentation with OpenPGP, I noticed that most people have shockingly lax certificate verification practices. They are confused when you ask for their public key, since you can simply search for their e-mail address, or their short key ID is included in their mail signature. They don't realize that it is trivial to generate collisions for both of those identifiers. There has essentially been a gentleman's agreement not to do that, as far as I can tell. This tells me that even if a "bricks and mortar" business had the fingerprint of their self-signed cert prominently displayed, nobody would actually check it.

PS: To the people hung up on SSL: You can have Certs without using SSL.

TL;DR: I still don't like it, but I don't really have a better suggestion at the moment. Namecoin would not work, since the first person to grab a specific .bit domain cannot be forced to surrender it in the event of a trademark dispute (unless you can get the holder into court somehow).

staff
Activity: 4172
Merit: 8419
No I am not saying that at all.
Then what you are saying makes no sense. If an unsigned receipt is valid then why would a signed receipt be invalid just because a CA was compromised?
staff
Activity: 4172
Merit: 8419
now my receipt is technically "invalidated" because they revoked that certificate.
uhhh.  So you're saying that every receipt that doesn't have a cryptographic signature on it (meaning basically every receipt which has ever been created in the history of mankind) is "invalid"?

The CA model is weak and lame and mildly exploitative. But what alternative are you suggesting? This is an optional feature, and if it's used it's certainly no worse than if it's not used.

The protocol itself is specifically designed to be extensible to other authentication types, but at the moment there don't appear to be any actually useful alternatives; as they arise, future extensions can add support for them... and you still have the option of not using the authentication.
legendary
Activity: 1498
Merit: 1000
Let's back up now. This is why we have SSL: we can kinda prove that the page we got with the address came from the valid source we want to pay.

No you can't.  The authentication in SSL is ephemeral.  At no point do you ever possess a document signed by their key.

That is why I said kinda, but if you connected over SSL to a site, and a CA says you are connected to the right server, you can use that to prove that the page was coming from them. Yes, of course, if a man in the middle sets up a fake CA and forces all other CA requests to go through him, then it isn't.
kjj
legendary
Activity: 1302
Merit: 1025
Let's back up now. This is why we have SSL: we can kinda prove that the page we got with the address came from the valid source we want to pay.

No you can't.  The authentication in SSL is ephemeral.  At no point do you ever possess a document signed by their key.
legendary
Activity: 1120
Merit: 1149
And forget courts— though perhaps someday it might matter in those too— it's also about making the case in the court of public opinion. Certainly plenty of people have made fraudulent scam complaints (against business competitors or just for fun); strong evidence for a contract protects both parties and the public in general.

Real-world example: piuk told me that blockchain.info was moving to CoinJoin transactions for their send-shared service so that there would be transactions in the blockchain that could be used to help stop people from fraudulently claiming the service never forwarded their coins. (they don't keep any logs after all)

With the payment protocol they could have just given the customer a signed receipt.
staff
Activity: 4172
Merit: 8419
2) Once the payment has been made, there's no proof that you actually made that purchase.
So I guess the blockchain is just for show and the signing/verifying messages will be removed now.
Currently, you can prove that you made a payment,  but you can't prove that the merchant owns the address that you sent the funds to.
Or what the terms of the agreement were.  "It was a donation!"

And forget courts— though perhaps someday it might matter in those too— it's also about making the case in the court of public opinion. Certainly plenty of people have made fraudulent scam complaints (against business competitors or just for fun); strong evidence for a contract protects both parties and the public in general.

Making strong cryptographic evidence of contracts is an important part of building infrastructure that enables people to freely contract without depending on things like courts to enforce their agreements. Less use of trust and subjective calls and more use of math.
member
Activity: 116
Merit: 10
2) Once the payment has been made, there's no proof that you actually made that purchase.

So I guess the blockchain is just for show and the signing/verifying messages will be removed now.

Currently, you can prove that you made a payment,  but you can't prove that the merchant owns the address that you sent the funds to.
sr. member
Activity: 358
Merit: 250
...

Ok, I'll try to summarize what the payment protocol is trying to solve (at least, the way I see it, correct me if I'm wrong).

Right now, when you want to buy alpaca socks, you go to the website, click on the product, click pay and then you're presented with a QR code or a link that you can use to make a payment to the merchant.

There are two problems with this.

1) There's no telling whether that QR code or link was modified en route to your computer (a man-in-the-middle attack);
2) Once the payment has been made, there's no proof that you actually made that purchase.

The first one isn't even a truly big issue. Most likely the webpage you're visiting is already running over SSL. The second one, however (proof of purchase), is a big one. If you want to file a dispute (and would want to take the merchant to small claims court), then you want to show the judge some kind of receipt. That's exactly what the payment protocol provides: a signed payment request (tied to the merchant's identity), containing the payment details and the address(es) that you sent your payment to.

That's the gist of it. It has other features: you can supply a refund address when you make your payment, and you can optionally get an acknowledgement.

At no point in this communication is the customer required to identify himself (other than the practical need to supply a shipping address for the product, of course).

I'm sure there are more things possible, but that's in layman's terms what it does.

In two words: Consumer protection.

Finally, a rational real-world business use case for this protocol!

This will clear the courts of frivolous alpaca socks lawsuits. You know, the ones where someone buys alpaca socks, gets scammed by a man-in-the-middle attack, and then sues the online store in small claims court when the site claims they never received payment. Now we'll finally have a cryptographically signed receipt so that we can subpoena the certificate issuer and get to the bottom of the whole mess as soon as possible.

(sorry Riplin, couldn't resist. It is a good summary of the protocol though!)
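
For what it's worth, the "signed payment request (tied to the merchant's identity)" in the quoted summary is just a protobuf plus an ordinary X.509 signature check. A condensed sketch of the wallet-side check in Python (assumptions of mine: a paymentrequest_pb2 module generated from the BIP70 .proto, an RSA merchant key, and the cryptography package; validating the certificate chain up to a trusted root is left out):

Code:
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
import paymentrequest_pb2   # protoc output from the BIP70 .proto (assumed to exist)

def verify_payment_request(raw: bytes) -> bool:
    req = paymentrequest_pb2.PaymentRequest()
    req.ParseFromString(raw)
    certs = paymentrequest_pb2.X509Certificates()
    certs.ParseFromString(req.pki_data)
    merchant_cert = x509.load_der_x509_certificate(certs.certificate[0])
    signature, req.signature = req.signature, b""   # BIP70 signs the request with an empty signature field
    try:
        merchant_cert.public_key().verify(
            signature, req.SerializeToString(),
            padding.PKCS1v15(), hashes.SHA256())
        return True    # still need to walk the chain to a trusted root and check the identity
    except Exception:
        return False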

legendary
Activity: 1988
Merit: 1012
Beyond Imagination
Why trust CAs when you already have the most authoritative CA in the world: the blockchain

Please go and implement the code to turn the blockchain into a certificate authority, get Bitcoin-using businesses and people to use it as their primary identity, the one users use to also secure all other communications with those businesses and people, and get back to us so we can add it to the payment protocol specification.

Of course, that's already been done (Namecoin), but no one uses it.

Wow, that's a good observation! I will have a look at Namecoin; actually I always wondered what the practical use of Namecoin is  Tongue
legendary
Activity: 1120
Merit: 1149
Why trust CAs when you already have the most authoritative CA in the world: the blockchain

Please go and implement the code to turn the blockchain into a certificate authority, get Bitcoin-using businesses and people to use it as their primary identity, the one users use to also secure all other communications with those businesses and people, and get back to us so we can add it to the payment protocol specification.

Of course, that's already been done (Namecoin), but no one uses it.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
Maybe it is not (anyway, I don't think bitcoin will be used as a payment medium at large scale due to liquidity problems), but judging from the extensive amount of explanation in BIP70, I can't say it is a minor change
It's a new, additional thing, not a change. It's also pretty simple: BIP70 without the motivation boilerplate fits on a single page. Effectively you're faulting something for being well documented.  It's not well documented because it's complex or risky; it's well documented so that other people will have a maximally easy time implementing it correctly.

The implementation in bitcoin-qt is under 1000 lines of code, not counting UI message text. The patch between the git revision where payment request support was integrated in late July and now is 23,615 lines long, and 26,059 lines for the patch going one further version back. The patch between current git and v0.8.5 is 114,129 lines long... so in terms of pure lines-of-code change complexity it's probably about 3% or so of what will be in 0.9 vs 0.8.5.  Not really a smart metric, but since it's largely a free-standing feature the impact on other things is even smaller.

Even if I change one line of code, I could cause a hard fork Wink 

Yesterday I read the core development update from the Bitcoin Foundation; it is very straightforward, but it includes a specific link to Mike's post about this new feature. Judging from the number of posts debating this new feature, it sure raised lots of new concerns about security and privacy

I worked with CAs some years ago, and I know they are a group of greedy enterprises who try all possible tricks to get people to pay money for their certificates; it is very business oriented

http://en.wikipedia.org/w/index.php?title=Certificate_authority&action=view&section=6#Providers

Why trust CAs when you already have the most authoritative CA in the world: the blockchain
kjj
legendary
Activity: 1302
Merit: 1025
And so the message must be sent to the user, unhashed but encrypted in an SSL session, between the merchant and the user. Otherwise the process described above couldn't work. Ok.

No, it doesn't need to be encrypted.  Encryption protects the privacy of the message.  If the parties wish to keep the communication private, it should be encrypted, but encryption isn't necessary for authentication, which is what DSA does.

Because any change to the message will invalidate the signature, the message is effectively tamper-evident.  You simply reject any messages that don't have a valid signature, so any messages that aren't rejected are known to have come from the signing key without changes.

Or in the case of someone who illegitimately obtains the private key and certificate of the merchant, someone you possibly or definitely cannot sue (successfully, anyway). I see though how this means zero direct communication with the CA for the user.... even the list of valid (and revoked) certificates must get updated with browser and/or OS updates, and so again, not directly from the CA.

That will generally leave a trail.

Also, we have CRLs and OCSP to help us keep track of which certificates are no longer valid despite having a trusted signature.  A CRL is a batched process that involves just downloading the current list from time to time (this is technically communication with the CA, but not of the sort that gets anyone riled up).  OCSP requires communication with the CA, but this can be done by the server instead of the client (stapled OCSP), and at no time involves sending message details to anyone (the protocol carries only enough data to identify the certificate).
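
As a tiny illustration of the batched CRL path (my own sketch with the Python cryptography package; fetching the list from the distribution point named in the certificate, and verifying the CRL's own signature and freshness, are left out):

Code:
from cryptography import x509

def is_revoked(cert: x509.Certificate, crl: x509.CertificateRevocationList) -> bool:
    # A certificate is revoked if its serial number appears in the issuer's CRL.
    return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None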

It's certainly the case that the 1-1 SSL connection where the unhashed Payment Details are sent to the user is a weak link. It would be interesting to know what can be done to mitigate the risk of this part of the scheme being eavesdropped on. Certainly the purpose is to prevent the information being changed by a MITM, but I understand it's still possible to read this information despite the SSL encryption. Or is this also wrong? The technology press has reported otherwise, but it would be nice to hear a different take on what the risks of this are.

Well, SSL certainly has weaknesses, but in general, SSL really does allow secure (private) communication.  But it turns out that PKI is really, really hard.  Corporate proxies can use wildcard certificates (signed by actual CAs in some cases) to actively MITM all SSL connections passing through.  Governments can do the same thing, and some have speculated that they might use (or have used) their power to, ahem, request that CAs sign bogus certs that look perfectly correct.  If you run into a situation like that, or if you are merely careless and fail to notice that you've been hijacked by a null-prefix certificate, you can end up having a totally secure communication with an attacker instead of whoever you thought you were talking to.

This comes up a lot in debates about how browsers should handle self-signed certificates.  How does a user know that a certificate that has been presented is the right one unless a CA has signed it?  Unless the fingerprint has been sent out of band, and then actually checked, they have no way to know.  See here.
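
And "the fingerprint has been sent out of band, and then actually checked" just means comparing a hash of the certificate against the one the business published. A minimal sketch with the Python cryptography package (function name is mine; openssl x509 -noout -fingerprint -sha256 does the same thing):

Code:
from cryptography import x509
from cryptography.hazmat.primitives import hashes

def cert_fingerprint(pem: bytes) -> str:
    """SHA-256 fingerprint of a certificate, for manual out-of-band comparison."""
    cert = x509.load_pem_x509_certificate(pem)
    return cert.fingerprint(hashes.SHA256()).hex(":")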
legendary
Activity: 3430
Merit: 3071
No, the payment details are needed for verification.

You sign by putting (private key, message) into the signing function, and you get back a signature.  You verify by putting (public key, signature, message) into the verification function, and you get back a boolean.

Inside both of those functions, the message is hashed, and the hash is used internally.*

The idea is that the merchant can say "I will ship item X to you, if Y bitcoins show up at address Z".  They then sign that message (which involves hashing the message and then, ahem, "multiplying"** the hash by their private key) and then they send you (public key [in certificate form], signature, message).

You then stuff (public key, signature, message) into the verification function, and it will hash the message, then "multiply" the message hash by the public key, and then check to see if the product it calculated bears the proper relationship to the signature**.  It then returns TRUE or FALSE.

At this point, if the verification returned TRUE, you know with certainty that the message in question was signed by the private key that corresponds to the public key you are looking at.

And so the message must be sent to the user, unhashed but encrypted in an SSL session, between the merchant and the user. Otherwise the process described above couldn't work. Ok.


Note that you explicitly do not know who signed the message yet.  This is where PKI comes in.  I wrote a longer post about PKI earlier, and I won't rehash it all.  But basically, you repeat the signature verification process, but this time the message to be checked is the certificate presented by the merchant, and the public key used for verification is from a certificate provided by someone that you explicitly (or implicitly) trust to only sign certificates that bear verifiable contact information tying the certificate to a real world entity (someone you can sue, basically).

Or in the case of someone who illegitimately obtains the private key and certificate of the merchant, someone you possibly or definitely cannot sue (successfully, anyway). I see though how this means zero direct communication with the CA for the user.... even the list of valid (and revoked) certificates must get updated with browser and/or OS updates, and so again, not directly from the CA.

When that is done, you now know that the message was actually signed by a real world entity which has been validated by the CA that you trust for such things.  The last step is to compare the entity information from the certificate against the entity information you were expecting to deal with.  Without this step, an attacker could potentially get a perfectly valid and verified certificate for "Bob's Malware Farm, LLC." and present it to you after hijacking your attempt to log in to your bank's website.  Hopefully, you'll notice that the certificate presented wasn't the "Global Megabank Savings and Theft" certificate that you were expecting and you won't give the attacker your account info.

It's certainly the case that the 1-1 SSL connection where the unhashed Payment Details are sent to the user is a weak link. It would be interesting to know what can be done to mitigate the risk of this part of the scheme being eavesdropped on. Certainly the purpose is to prevent the information being changed by a MITM, but I understand it's still possible to read this information despite the SSL encryption. Or is this also wrong? The technology press has reported otherwise, but it would be nice to hear a different take on what the risks of this are.
kjj
legendary
Activity: 1302
Merit: 1025
Ok, I'm following. So this means that the unhashed Payment Details are not required to verify the signature, contrary to my presumption before. Any light to shed on why the Payment Details are used to generate signatures at all? They themselves form no part of the verification. Is it useful (for security?) to have some piece of "unique to this transaction" data to generate the signature? I appreciate that the hashed Payment Details are not being sent to the CA, so if anything they're more vulnerable when sent SSL encrypted, but not hashed, to the user from the merchant (as well as from the user to the merchant in the first instance). There must be a good reason to hash the Payment Details and have them specifically signed and sent to the user for verification against the CA certificate (which the user already possesses locally via a web browser or their OS)!

Sorry about all this, but I'm quite capable of misinterpreting documents like BIP 70 that are pretty concisely defined in their descriptions, now that I can take another look having talked it over.

No, the payment details are needed for verification.

You sign by putting (private key, message) into the signing function, and you get back a signature.  You verify by putting (public key, signature, message) into the verification function, and you get back a boolean.

Inside both of those functions, the message is hashed, and the hash is used internally.*

The idea is that the merchant can say "I will ship item X to you, if Y bitcoins show up at address Z".  They then sign that message (which involves hashing the message and then, ahem, "multiplying"** the hash by their private key) and then they send you (public key [in certificate form], signature, message).

You then stuff (public key, signature, message) into the verification function, and it will hash the message, then "multiply" the message hash by the public key, and then check to see if the product it calculated bears the proper relationship to the signature**.  It then returns TRUE or FALSE.

At this point, if the verification returned TRUE, you know with certainty that the message in question was signed by the private key that corresponds to the public key you are looking at.

Note that you explicitly do not know who signed the message yet.  This is where PKI comes in.  I wrote a longer post about PKI earlier, and I won't rehash it all.  But basically, you repeat the signature verification process, but this time the message to be checked is the certificate presented by the merchant, and the public key used for verification is from a certificate provided by someone that you explicitly (or implicitly) trust to only sign certificates that bear verifiable contact information tying the certificate to a real world entity (someone you can sue, basically).

When that is done, you now know that the message was actually signed by a real world entity which has been validated by the CA that you trust for such things.  The last step is to compare the entity information from the certificate against the entity information you were expecting to deal with.  Without this step, an attacker could potentially get a perfectly valid and verified certificate for "Bob's Malware Farm, LLC." and present it to you after hijacking your attempt to log in to your bank's website.  Hopefully, you'll notice that the certificate presented wasn't the "Global Megabank Savings and Theft" certificate that you were expecting and you won't give the attacker your account info.

* The hash function forces the input to the actual signing function to be a known size, which is very useful to the signing function.  Hash functions aren't perfect, but the structure of the message makes it impossible to find working pre-images.  This is a subtle point, but all of our structured signatures would remain safe, even if the hashing function in the signing function were found to be generally insecure. 

** I'm doing a lot of handwaving here.  The signing function isn't really multiply, and I'm not going into detail on how signatures are verified in reality.  This is something you can look up if you want to, for the purposes of this discussion, we can just assume that these functions exist and work correctly.
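
A toy end-to-end run of the sign/verify flow described above, in Python with the ecdsa package (purely illustrative; a real Payment Request is signed with the merchant's certified X.509 key, not a freshly generated one):

Code:
from ecdsa import SigningKey, SECP256k1, BadSignatureError

sk = SigningKey.generate(curve=SECP256k1)     # merchant's private key
vk = sk.get_verifying_key()                   # the public key that travels in the certificate

message = b"I will ship item X to you if Y bitcoins show up at address Z"
signature = sk.sign(message)                  # hash-then-sign under the hood

print(vk.verify(signature, message))          # True: (public key, signature, message) checks out
try:
    vk.verify(signature, message.replace(b"address Z", b"address ATTACKER"))
except BadSignatureError:
    print("any change to the message invalidates the signature")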
staff
Activity: 4172
Merit: 8419
Maybe it is not (anyway, I don't think bitcoin will be used as a payment medium at large scale due to liquidity problems), but judging from the extensive amount of explanation in BIP70, I can't say it is a minor change
It's a new, additional thing, not a change. It's also pretty simple: BIP70 without the motivation boilerplate fits on a single page. Effectively you're faulting something for being well documented.  It's not well documented because it's complex or risky; it's well documented so that other people will have a maximally easy time implementing it correctly.

The implementation in bitcoin-qt is under 1000 lines of code, not counting UI message text. The patch between the git revision where payment request support was integrated in late July and now is 23,615 lines long, and 26,059 lines for the patch going one further version back. The patch between current git and v0.8.5 is 114,129 lines long... so in terms of pure lines-of-code change complexity it's probably about 3% or so of what will be in 0.9 vs 0.8.5.  Not really a smart metric, but since it's largely a free-standing feature the impact on other things is even smaller.