I think a deterministic k-value will be less prone to errors (like the Android repeated k-value bug). However, RFC 6979 seems more complex than it needs to be. Why can't I just take the 256-bit private key, concatenate it with the 256-bit hash I'm about to sign, and apply SHA-256 to the resulting 512-bit string
k = sha256(private_key || hash_to_sign)
and (assuming 0 < k < n) use the result as my k?
I don't believe it would be "wrong" to do that, but I would use HMAC-SHA256(private_key, hash_to_sign) over sha256(private_key || hash_to_sign) for a variety of reasons; more on that below. The author of RFC 6979 took a very cautious approach: the multiple rounds of HMAC combined with the initialization steps are designed to harden against leakage of information in the event that partial compromises of the underlying algorithm occur in the future. Good security looks not just at today's attack vectors but at likely future ones, based on a detailed understanding of how the security of the algorithm is likely to be degraded through cryptanalysis. I know enough to know "don't roll your own crypto".

The standard is made more complex because it is designed to handle any HMAC, any digest size, any key, any curve parameters, and any message digest. For Bitcoin all of those are fixed, so many of the edge cases cannot occur. This lets you refactor the standard into a simplified version which only works with "Bitcoin" parameters (256-bit hash, 256-bit private key, secp256k1 curve, HMAC-SHA256). If you take a simple implementation and add in the various hardenings, I doubt it will end up much simpler than a refactored RFC 6979.
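To make that concrete, here is a sketch of RFC 6979's nonce derivation specialized to the Bitcoin case, roughly following the generation procedure in section 3.2 of the RFC. Because qlen = hlen = 256, the bits2int step is a plain byte-to-integer conversion and the inner loop needs only one HMAC call per candidate. Treat this as an illustration, not a vetted implementation:

```python
import hashlib
import hmac

# secp256k1 group order n; for Bitcoin, qlen = hlen = 256 bits.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def rfc6979_k(private_key: int, hash_to_sign: bytes) -> int:
    """Derive a deterministic nonce k per RFC 6979, specialized to
    HMAC-SHA256 with a 256-bit key, 256-bit hash, and secp256k1."""
    x = private_key.to_bytes(32, "big")                  # int2octets(x)
    # bits2octets(H(m)): interpret the hash as an integer, reduce mod n.
    h = (int.from_bytes(hash_to_sign, "big") % N).to_bytes(32, "big")

    V = b"\x01" * 32                                     # step b
    K = b"\x00" * 32                                     # step c
    K = hmac.new(K, V + b"\x00" + x + h, hashlib.sha256).digest()  # step d
    V = hmac.new(K, V, hashlib.sha256).digest()                    # step e
    K = hmac.new(K, V + b"\x01" + x + h, hashlib.sha256).digest()  # step f
    V = hmac.new(K, V, hashlib.sha256).digest()                    # step g

    while True:                                          # step h
        V = hmac.new(K, V, hashlib.sha256).digest()
        k = int.from_bytes(V, "big")                     # bits2int(V)
        if 1 <= k < N:
            return k
        # Out-of-range candidate (astronomically rare for secp256k1):
        # update K and V and try again, as the RFC specifies.
        K = hmac.new(K, V + b"\x00", hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
```

Note how little of the general standard survives once the parameters are fixed: the candidate-assembly loop collapses to a single HMAC, yet the "useless" retry branch at the bottom must stay, because that is exactly the kind of hardening you would be tempted to drop in a hand-rolled scheme.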
Why HMAC-hash over a simple hash? HMACs are not vulnerable to length-extension attacks, and they can reduce the impact of collisions in the underlying hash. As an example, MD5 is cryptographically broken, yet HMAC-MD5 remains robust, with no practical preimage or collision attacks (the fastest known attack is brute force).
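The structural difference is easy to see in code. A bare sha256(key || msg) exposes the hash's full internal state in its output, which is what makes length extension possible; HMAC nests two hash invocations with derived inner and outer keys so that state is never exposed. A minimal Python sketch (the key and message bytes are arbitrary stand-ins):

```python
import hashlib
import hmac

key = b"\x01" * 32   # stand-in for a 256-bit private key
msg = b"\x02" * 32   # stand-in for the 256-bit message hash

# Naive construction: the digest IS the hash's internal state, so an
# attacker who sees it can extend key||msg with chosen suffixes.
naive = hashlib.sha256(key + msg).digest()

# HMAC construction: H((key ^ opad) || H((key ^ ipad) || msg)).
# The outer hash hides the inner state, defeating length extension.
robust = hmac.new(key, msg, hashlib.sha256).digest()
```

Both produce 32-byte digests, but only the HMAC form carries the security proof that lets even a weakened underlying hash (as with HMAC-MD5) remain usable as a PRF.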
That said, I wouldn't recommend anyone use HMAC-MD5; I'm just thinking ahead to the day when SHA-2 might be the next MD5.

One reason to use RFC 6979 instead of some "roll your own" scheme is repeatability. If your hardware device implements RFC 6979, an outsider can audit it by having the hardware and another RFC 6979-compatible client generate the same transactions and comparing the signatures. If a device uses deterministic signatures AND implements BIP32 with a user-supplied seed, it becomes a lot more difficult to build in backdoors to steal from the user. While any custom implementation could also be tested, having five different implementations just means five times the work to test them all.

We could come up with a simplified deterministic signature protocol as a BIP and encourage all developers to standardize around it, but even if successful it is very unlikely that "BIP XYZ" will ever be implemented in any major crypto libraries. RFC 6979 is making some progress on that front: Bouncy Castle currently supports it, and with enough adoption OpenSSL, the .NET Framework, Mono, etc. will as well. This will mean a wider community looking at the codebase, and more eyes are always better.
Simple version: when I look at an existing standard, I ask "Is there any reason why this standard can't be used?" For RFC 6979 the only possible answer is "it is too complex", and for me that isn't sufficient to throw it out in favor of another standard. Most users (or even developers) will not be implementing RFC 6979 by hand; they will hopefully be using well-tested libraries. Test vectors and the deterministic nature, combined with the avalanche effect, make it a remote chance that you could "get it wrong" and still pass unit tests.
Of course this topic wouldn't be complete without: