Archive for the ‘Cryptography’ Category

Some Thoughts On Secure Random Numbers

Sunday, October 30th, 2016

Many protocols and cryptographic primitives such as DSA rely on secure random numbers. For example, the Signal messenger and its quite intricate protocol were recently analyzed and a paper about the analysis published[1]. The authors write that they “do not consider faulty/malicious random number generators” in their threat model, but also state that if “[…] generated randomness is compromised or bad, Signal offers no protection”. Weak random numbers are one of the most common cryptographic weaknesses. A widely deployed, compromised random number generator even allows for bulk data collection and decryption[2].

In my mind this problem is relatively easy to avoid. For one, protocols can be modified not to leak raw output of the random number generator, which makes practical attacks on the RNG much harder. For example, instead of sending a nonce consisting of raw RNG output, which would give an eavesdropper information to determine the state of the RNG, one could send the SHA-512 of the RNG output. To ensure that this is correctly implemented, the protocol participants can exchange the raw RNG output later, once secure communication has been established, and verify that the nonce that was used was indeed SHA-512(rng output) and not the raw RNG output itself. For primitives that require random numbers for operation (e.g., signing) one can use a secure hash of the message and the private key as a unique, deterministic and secure random number (this is what Ed25519 does, IIRC). My point is that most of these problems seem avoidable, and systems can be hardened so that they do not completely fail even if the random number generator has been compromised.
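A minimal sketch of both ideas in Python; the function names are my own illustration, not taken from any particular protocol:

```python
import hashlib
import os

def hashed_nonce():
    """Send SHA-512(rng output) as the nonce; keep the raw RNG output
    around so it can be revealed and checked over the secure channel later."""
    raw = os.urandom(32)                  # raw RNG output, never sent directly
    nonce = hashlib.sha512(raw).digest()  # only the hash goes on the wire
    return raw, nonce

def deterministic_nonce(private_key: bytes, message: bytes) -> bytes:
    """Derive the per-message 'random' number from the private key and the
    message, so no RNG is needed at signing time (Ed25519-style)."""
    return hashlib.sha512(private_key + message).digest()

# Later, once a secure channel exists, a peer can check the commitment:
raw, nonce = hashed_nonce()
assert hashlib.sha512(raw).digest() == nonce
```

Even if the RNG state were partially known, the eavesdropper only ever sees a hash of its output, and the deterministic variant removes the signing-time RNG entirely.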

The other point is that random numbers should be generated by the OS and incorporate as many different sources as possible. Linux, OpenBSD and other operating systems offer APIs that return secure random numbers collected from various sources of entropy in the system. The latest generation of x86 chips even has RNG instructions that generate secure random numbers (caveats: [3,4]). I think the more entropy from different sources the better (caveat: [5]), and personally I think systems like HAVEGED can be helpful as an additional source of entropy. One counter-argument I have heard is that nobody knows how good this CPU entropy really is, whereas entropy extracted from hard-disk latency (and similar sources) is supposedly better understood. However, for other sources of entropy we don’t really know that either. The classic paper on entropy extraction from disk-drive latency[6] demonstrates the technique only for certain drives, and I’m willing to wager that all of these results would look quite different for modern SSDs. As far as I know, quantifying the quality of entropy sources on computers is an open research problem. I think adding entropy from CPU sources, as HAVEGED does, is helpful, especially in situations like device startup where not a lot of entropy is available from other sources.
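The mixing idea can be sketched in a few lines of Python. This is my own crude stand-in, not how HAVEGED or the kernel pool actually work; the assumption is that hashing concatenated inputs never reduces entropy, so a bad source cannot hurt as long as at least one source is good:

```python
import hashlib
import os
import time

def mix_entropy_sources() -> bytes:
    """Hash several independent sources into one 64-byte seed."""
    pool = hashlib.sha512()
    pool.update(os.urandom(64))                        # OS entropy pool
    pool.update(time.time_ns().to_bytes(8, "little"))  # coarse wall clock
    # CPU timing jitter, loosely in the spirit of HAVEGED:
    for _ in range(1000):
        pool.update(time.perf_counter_ns().to_bytes(8, "little"))
    return pool.digest()
```

In practice one would feed such extra sources into the OS pool (e.g. by writing to /dev/random on Linux) rather than maintain a separate pool in the application.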

References

[1] Katriel Cohn-Gordon, Cas Cremers, Benjamin Dowling, Luke Garratt, and Douglas Stebila, “A Formal Security Analysis of the Signal Messaging Protocol”, https://eprint.iacr.org/2016/1013.pdf

[2] Daniel J. Bernstein, Tanja Lange, Ruben Niederhagen. “Dual EC: a standardized back door.” Pages 256–281 in The new codebreakers: essays dedicated to David Kahn on the occasion of his 85th birthday, edited by Peter Y. A. Ryan, David Naccache, Jean-Jacques Quisquater. Lecture Notes in Computer Science 9100, Springer, 2015. ISBN 978-3-662-49300-7.

[3] https://plus.google.com/+TheodoreTso/posts/SDcoemc9V3J

[4] Becker et al., CHES ’13: Proceedings of the 15th International Conference on Cryptographic Hardware and Embedded Systems, pages 197–214, DOI 10.1007/978-3-642-40349-1_12, http://sharps.org/wp-content/uploads/BECKER-CHES.pdf

[5] http://blog.cr.yp.to/20140205-entropy.html

[6] D. Davis, R. Ihaka, and P. Fenstermacher, “Cryptographic Randomness from Air Turbulence in Disk Drives,” Advances in Cryptology — CRYPTO ’94 Proceedings, Springer-Verlag, 1994, pp. 114–120.

License Key Copy Protection

Saturday, February 2nd, 2008

I had written, long before I had a blog, about doing copy protection the right way (read: so it requires some effort to remove it). With the more recent programming frameworks a couple of things have changed. For one, all the new programming frameworks (.NET, Java) ship with cryptography support, which means it is far simpler to incorporate a license key scheme based on digital signatures. Personally I like using 512-bit DSA (Digital Signature Algorithm) for these purposes, because the key is long enough to stop amateurs from computing the secret key and short enough that a signature encoded as Base32 can still be typed in by someone. In the software you obviously only include your public key for verifying the signature of the license.
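To make the scheme concrete, here is a toy DSA license-key sketch in Python. The parameters are far too small for real use (a real deployment would use at least the 512-bit parameters mentioned above, via the framework’s crypto API); they are chosen only so the arithmetic is easy to follow:

```python
import base64
import hashlib

# Toy DSA parameters: q divides p-1, and g = 2^((p-1)/q) mod p has order q.
p, q, g = 2039, 1019, 4

def H(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(x: int, license_text: bytes, k: int) -> str:
    """Vendor side: sign the license data with private key x.
    (k must be a fresh secret per signature; it is a parameter here
    only so the demo is deterministic.)"""
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (H(license_text) + x * r) % q
    raw = r.to_bytes(2, "big") + s.to_bytes(2, "big")
    return base64.b32encode(raw).decode()   # short, typeable license key

def verify(y: int, license_text: bytes, key: str) -> bool:
    """Client side: only the public key y = g^x mod p ships with the app."""
    raw = base64.b32decode(key)
    r, s = int.from_bytes(raw[:2], "big"), int.from_bytes(raw[2:], "big")
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    v = (pow(g, H(license_text) * w % q, p) * pow(y, r * w % q, p) % p) % q
    return v == r
```

With real 512-bit parameters the two signature components are 160 bits each, which Base32-encodes to a key of 64 characters, still short enough to type from a printed license sheet.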

One issue with byte-code languages is that they are fairly trivial to disassemble (and produce very human-readable code). Microsoft even includes an MSIL disassembler with .NET. Therefore the code needs to be obfuscated. One thing to try (not all obfuscator programs support this) is to make the names as human-unfriendly as possible. Renaming classes to “A” and “B” is nice, but renaming them to “XfGkoAlPPqzz” and “XfGkoBlPPqzz” makes the code even harder to read.
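Such near-identical names are easy to generate mechanically; a tiny sketch of my own (any real obfuscator would do this as part of its renaming pass):

```python
def confusable_names(count: int, stem: str = "XfGkoPPqzz") -> list:
    """Generate identifiers that differ only in a single character buried
    in the middle, so renamed classes are hard to tell apart at a glance."""
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    mid = len(stem) // 2
    return [stem[:mid] + c + stem[mid:] for c in letters[:count]]
```

Calling `confusable_names(2)` yields names like “XfGkoAPPqzz” and “XfGkoBPPqzz”, which blur together far more than “A” and “B” do when reading disassembled code.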

With a signature-based license key the hackers have only a few options left, since they can’t write a key generator. For one, they could patch a different public key into the software, one for which they know the corresponding private key, and generate signatures with that key. It’s therefore important to use a hash of the real public key in some other ways in the program. For example, one could hash the public-key string with SHA/MD5 and use the result as a key for decrypting some data with a symmetric cipher such as AES. Another idea is to put the hash (or a checksum using the hash of some data and the public key) into some saved user data in order to prevent data exchange between cracked and legitimate versions of the software.
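The tying principle can be sketched as follows. To keep the sketch dependency-free I use an XOR keystream derived from SHA-256 as a stand-in for AES, and the key material is a made-up placeholder; the point is only that a patched public key yields garbage data:

```python
import hashlib

EMBEDDED_PUBLIC_KEY = b"-----demo public key bytes-----"  # shipped with the app

def unmask(blob: bytes, public_key: bytes) -> bytes:
    """(Un)mask application data with a keystream derived from the
    public key, so patching the key silently corrupts the data."""
    stream = b""
    counter = 0
    while len(stream) < len(blob):
        stream += hashlib.sha256(public_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(blob, stream))

# Masking and unmasking are the same XOR operation:
secret = unmask(b"important resource data", EMBEDDED_PUBLIC_KEY)
assert unmask(secret, EMBEDDED_PUBLIC_KEY) == b"important resource data"
assert unmask(secret, b"attacker key") != b"important resource data"
```

A cracker who swaps in his own public key now also has to find and re-mask every piece of data derived from the original key.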

The second option is to find where the accept/reject decision is made in the program and to patch this comparison. Note that API calls into the framework are fairly easy to find, even in obfuscated code. You should therefore keep the API calls (for the crypto functions as well as for displaying any kind of error message like “invalid license key”) far away from the comparison operation, so the hackers have to dig through more code.

Also, I found that using a command pattern makes for fairly unintelligible byte-code. Consider creating a class that encapsulates commands and has some Execute() method that can also return which command to execute next. Now, if you have a couple of commands in a List that are executed in a loop by invoking their respective Execute() methods, that is already a bit harder to follow. Consider writing commands that can change entries in some global hash-table of strings, an If-Command, a Goto-Command, a MessageBox-Command, a Verify-Signature-Command, etc. With all those implemented, you can encode a little program by putting those commands in a list. The important thing now is to encode the accept/reject decision (and one or two important parts of your software unrelated to the license key) using the command pattern, i.e. to write little programs in your “command-pattern programming language”. The If-Command used for the accept/reject decision will then be the same spot where every other condition in your code is tested, and it can therefore not be patched without destroying other parts of the program. This forces the hackers to understand your command pattern and to figure out which bytes they have to change to turn this into an unconditional jump. I found this fairly easy, clean and straightforward to program (and debug) in a high-level language, yet fairly hard to understand in obfuscated byte-code.
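A minimal sketch of such a command-pattern interpreter, in Python for brevity (the original post has .NET/Java in mind, and all command names here are my own illustration):

```python
# Global hash-table of strings that the commands read and write.
state = {}

class SetCmd:
    def __init__(self, key, value):
        self.key, self.value = key, value
    def execute(self, pc):
        state[self.key] = self.value
        return pc + 1                      # fall through to the next command

class IfCmd:
    """The single shared branch point: the license accept/reject and every
    other condition in the program all run through this one comparison."""
    def __init__(self, key, value, target):
        self.key, self.value, self.target = key, value, target
    def execute(self, pc):
        return self.target if state.get(self.key) == self.value else pc + 1

class GotoCmd:
    def __init__(self, target):
        self.target = target
    def execute(self, pc):
        return self.target

def run(program):
    pc = 0
    while 0 <= pc < len(program):
        pc = program[pc].execute(pc)

# A tiny "program": accept the license only if the signature check passed.
program = [
    SetCmd("sig_ok", "yes"),   # stand-in for a Verify-Signature-Command
    IfCmd("sig_ok", "yes", 4), # the shared branch point
    SetCmd("status", "rejected"),
    GotoCmd(5),
    SetCmd("status", "accepted"),
]
run(program)
```

Patching IfCmd.execute into an unconditional jump would break every other condition routed through it, which is exactly the point.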