In my mind this problem is relatively easy to avoid. For one, protocols can be modified not to leak raw output of the random number generator, which makes practical attacks on the RNG much harder. For example, instead of sending a nonce consisting of raw RNG output, which would give an eavesdropper information that could help determine the state of the RNG, one could send the SHA-512 of the RNG output. To ensure that this is implemented correctly, the protocol participants can exchange the raw RNG output later, once secure communication has been established, and verify that the nonce that was used really was sha512(rng) and not the raw RNG output itself. For primitives that require random numbers during operation (e.g., signing), one can use a secure hash of the message and the private key as a unique, deterministic and secure random number (this is what Ed25519 does, IIRC). My point is that most of these problems seem avoidable, and systems can be hardened so that they do not fail completely even if the random number generator has been compromised.
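The hashed-nonce idea can be sketched in a few lines. This is a minimal illustration, not any particular protocol: the function names are made up, and `os.urandom` stands in for whatever RNG the protocol uses.

```python
import hashlib
import os

def make_nonce():
    """Generate raw RNG output, but expose only its SHA-512 on the wire."""
    rng_output = os.urandom(32)                   # raw RNG output, kept secret for now
    nonce = hashlib.sha512(rng_output).digest()   # this is what gets sent
    return rng_output, nonce

def verify_nonce(rng_output, nonce):
    """Run later, over the secure channel, once rng_output is revealed."""
    return hashlib.sha512(rng_output).digest() == nonce

secret, public_nonce = make_nonce()
assert verify_nonce(secret, public_nonce)
```

An eavesdropper sees only a hash of the generator's output, while the later reveal lets the peer confirm that the nonce was not raw RNG state.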

The other point is that random numbers should be generated by the OS and incorporate as many different sources as possible. Linux, OpenBSD and other operating systems offer APIs that return secure random numbers collected from various sources of entropy in the system. The latest generation of x86 chips even has RNG instructions that generate secure random numbers (caveats: [3, 4]). I think the more entropy from different sources the better (caveat: [5]), and personally I think systems like HAVEGED can be helpful as an additional source of entropy. One counterargument I heard was that nobody knows how good this CPU entropy really is. However, we don't know that for other sources of entropy, such as hard-disk latency, either. The most sensible paper showing entropy extraction from disk-drive latency [6] demonstrates it only for certain drives, and I'm willing to wager that all of these results would look quite different for modern SSDs. As far as I know, quantifying the quality of entropy sources on computers is an open research problem. I think adding entropy from CPU sources, as HAVEGED does, is helpful, especially at device startup and in other situations where not much entropy is available from other sources.
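From application code, consuming the OS-pooled entropy rather than any single raw source looks like this. A minimal sketch: on Linux, `os.urandom` is backed by the kernel's pooled CSPRNG (via `getrandom(2)` or `/dev/urandom`), and the `secrets` module wraps the same source.

```python
import os
import secrets

# Prefer the OS's pooled RNG over any single entropy source:
# the kernel mixes many sources (interrupt timing, disk latency,
# rdrand where available) into one pool for you.
key = os.urandom(32)           # 256 bits from the kernel CSPRNG
token = secrets.token_hex(16)  # convenience wrapper over the same source

assert len(key) == 32
assert len(token) == 32        # 16 bytes, hex-encoded
```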

References

[1] Katriel Cohn-Gordon, Cas Cremers, Benjamin Dowling, Luke Garratt, and Douglas Stebila, “A Formal Security Analysis of the Signal Messaging Protocol”, https://eprint.iacr.org/2016/1013.pdf

[2] Daniel J. Bernstein, Tanja Lange, Ruben Niederhagen. “Dual EC: a standardized back door.” Pages 256–281 in The new codebreakers: essays dedicated to David Kahn on the occasion of his 85th birthday, edited by Peter Y. A. Ryan, David Naccache, Jean-Jacques Quisquater. Lecture Notes in Computer Science 9100, Springer, 2015. ISBN 978-3-662-49300-7.

[3] https://plus.google.com/+TheodoreTso/posts/SDcoemc9V3J

[4] Becker et al., CHES '13: Proceedings of the 15th International Conference on Cryptographic Hardware and Embedded Systems, pp. 197–214, DOI 10.1007/978-3-642-40349-1_12, http://sharps.org/wp-content/uploads/BECKER-CHES.pdf

[5] http://blog.cr.yp.to/20140205-entropy.html

[6] D. Davis, R. Ihaka, and P. Fenstermacher, “Cryptographic Randomness from Air Turbulence in Disk Drives,” Advances in Cryptology — CRYPTO ’94 Proceedings, Springer-Verlag, 1994, pp. 114–120.



Linux introduced /dev/random, a character device that exposes a random number generator in the kernel. The generator keeps an estimate of the number of bits of noise in its entropy pool. Random numbers are created by applying a secure hash function to the data in the pool. When read, the /dev/random device will only return as many random bytes as the estimated number of bits of noise in the entropy pool allows. The operating system adds entropy from all sorts of sources, such as disk latency. However, if you are running Linux on an embedded device (or the “internet of things”) or in a cloud instance, there may not be a “real” disk drive, and whatever latency is measured may not contain good enough entropy. Another important source of entropy is the clock-cycle timer (rdtsc), which is read at various points to measure the time between interrupts and other frequently and somewhat randomly occurring events. The rdtsc instruction is often virtualized on cloud machines, which can make it harder to get good entropy. Anyway, the problem of getting random numbers in low-entropy situations is important enough that Intel added an on-chip random number generator (rdrand) to its newer processors, but many cloud instances emulate older CPUs, and embedded systems don't necessarily run on x86 chips that have rdrand.
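The kernel's entropy estimate that gates reads from /dev/random is directly observable. A Linux-only sketch (the procfs path and its semantics are kernel-specific and the reported number is only the kernel's own estimate):

```python
def entropy_estimate_bits():
    """Return the kernel's current estimate of pooled entropy, in bits (Linux only)."""
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read().strip())
```

On a freshly booted VM or embedded box this number can sit near zero, which is exactly the low-entropy condition described above.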

So far, so good. For now, I don't have an embedded system to experiment with, but servers in the cloud have a similar problem. Can the rdtsc instruction still be used to get entropy in a virtual machine? I found this, played around a bit with the program listed there, and also learned about the existence of haveged (I'll get back to that). I started a micro-instance (1 CPU, 512 MB RAM) at a popular cloud service and found that, yes indeed, the rdtsc instruction seems to be virtualized. The average tick count between two consecutive rdtsc instructions (over millions of calls) is far too small to account for whatever else is going on on the machine (unless, of course, I somehow got a 512 MB machine all to myself, which is unlikely). On the other hand, the least significant bit of the difference of two consecutive rdtsc calls looked pretty random, but didn't pass randomness tests. Adding von Neumann whitening to the LSBs obtained from pairs of rdtsc calls actually yields output that is sufficiently random to pass the FIPS 140-2 randomness tests (I used the implementation in rngtest), failing only about 1 in 1000 (for comparison: a test run with entropy from /dev/random failed about 1 in 10,000). So in theory this should be okay to use as an entropy source, but it should probably still be combined with other sources of entropy.
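Von Neumann whitening itself is tiny: take the bits in pairs, emit 0 for a (0, 1) pair, 1 for a (1, 0) pair, and discard equal pairs, which removes bias from independent-but-biased bits. A sketch, with `time.perf_counter_ns()` standing in for the rdtsc differences (the original experiment read rdtsc directly from C):

```python
import time

def timer_lsb_bits(n):
    """Stand-in bit source: LSBs of consecutive timer differences."""
    bits = []
    prev = time.perf_counter_ns()
    while len(bits) < n:
        now = time.perf_counter_ns()
        bits.append((now - prev) & 1)
        prev = now
    return bits

def von_neumann(bits):
    """Debias a bit stream: keep the first bit of unequal pairs, drop equal pairs."""
    out = []
    for b1, b2 in zip(bits[0::2], bits[1::2]):
        if b1 != b2:        # (0,0) and (1,1) pairs are discarded
            out.append(b1)  # (0,1) -> 0, (1,0) -> 1
    return out

whitened = von_neumann(timer_lsb_bits(10000))
```

Note the cost: whitening discards at least half the input bits (more if the source is heavily biased), which is part of why a fast raw source matters.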

Coming back to other existing solutions: it turns out there's an entropy-gathering daemon called haveged that can add entropy to the pool to remedy the low-entropy conditions that can occur on servers, but possibly also on embedded devices and in cloud instances. The method exploits modifications of internal volatile hardware state as a source of uncertainty. Modern superscalar processors feature various hardware mechanisms that aim to improve performance: caches, branch predictors, TLBs and much more. The state of these components is volatile and cannot be directly monitored in software, yet every invocation of the operating system modifies thousands of these binary volatile states. The general idea of extracting entropy from rdtsc still works; the only thing that still makes me wonder a bit is that the full timer value (not just the LSB) is used and that the whitening method is rather complex.

I learned a lot, but there's still more to explore. Cloud machines probably have more going on than embedded systems, so I'm still not convinced that this will work on embedded devices. I'll also try to wrap my head around the whitening function in haveged. It's still being developed, and maybe ARM support will be added some day. Then the daemon could be used on smart phones to improve the security of their random number generation.


The Fundamental Clustering Problem Suite (Ultsch, 2005) contains several synthetic data sets covering a variety of clustering problems that algorithms should be able to solve. The suite contains the data and ground-truth labels as well as labels produced by k-Means, Single Linkage, Ward's Linkage and a Self-Organizing Map (SOM). I decided to play around with it a bit: I converted the SOM activation matrix to labels using k-Means, and added Spectral Clustering, Self-Tuning Spectral Clustering, and an EM Gaussian Mixture model. I was particularly curious how well Spectral Clustering would do. Determining the true number of clusters from the underlying data is an entirely different problem, so in all cases the number of clusters was specified unless otherwise noted. The regular Spectral Clustering used the Ng-Jordan-Weiss algorithm with a kernel sigma of 0.04 after linear scaling of the inputs. The Self-Tuning Spectral Clustering used k=15 nearest neighbors. I also used Random Forest "clustering", an extension of the classification algorithm that generates a distance matrix from the classification tree ensemble. The distance matrix was then used in single linkage to obtain cluster labels. Random Forest is a special case, as it derives the distance matrix from classification using Gini impurity, a metric built for classification, not clustering.
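To make the baseline concrete, here is a minimal pure-Python k-Means (Lloyd's algorithm) run on a toy two-blob example. It is a stand-in for the library implementations used in the experiments, with a naive deterministic initialization instead of k-means++; the data is made up, not from the FCPS suite.

```python
def kmeans(points, k, iters=50):
    """Minimal Lloyd's algorithm on a list of coordinate tuples."""
    # naive deterministic init: spread initial centers over the input order
    centers = [points[i * (len(points) - 1) // max(k - 1, 1)] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        labels = [min(range(k),
                      key=lambda c: sum((p - q) ** 2
                                        for p, q in zip(pt, centers[c])))
                  for pt in points]
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [pt for pt, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return labels

blob_a = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1)]
blob_b = [(5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
labels = kmeans(blob_a + blob_b, k=2)
```

On well-separated blobs like this, the mean is a perfect cluster representative; the interesting failures below all come from data sets where no placement of the means yields the right partition.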

**Results**

The Atom data set contains two clusters, essentially two balls, one contained within the other. The difficulty is the difference in variance and that the classes are not linearly separable. Not surprisingly, k-Means cannot separate these clusters, as it would somehow have to choose the same mean for both (the center of both balls). Single Linkage just works. I'm a bit puzzled why Ward's Linkage fails. SOM cannot separate the clusters (could this be an artifact of using k-Means to cluster the activation matrix?). Spectral Clustering was constructed specifically to deal with this kind of case; its failure to separate the balls could stem from my using a constant kernel sigma on all data sets. The localized scaling of Self-Tuning Spectral Clustering just works. EM cannot model this either, due to the means (centers) of the distributions being the same. Random Forest just fails, because the distance matrix, which is generated by randomly permuting the data to obtain a second class for "classification", will be built from two overlapping classes.

The Chainlink data set contains two clusters, essentially two interlocked rings that are not linearly separable. k-Means has no way of separating this, as there is no placement of the means that would result in a correct label assignment. Single Linkage just works, connecting all the nearest neighbors into clusters. I'm not sure why Ward's Linkage fails. SOM cannot separate the clusters (could this be an artifact of using k-Means to cluster the activation matrix? In the original paper the algorithm solved this, so maybe I did something wrong). Spectral Clustering was made for this kind of manifold case. I'm a bit surprised how well EM works here; I would have expected a result similar to k-Means.
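Why Single Linkage "just works" on ring-shaped clusters can be shown with a small sketch: merge the closest pair of clusters repeatedly (Kruskal-style, over sorted pairwise edges) until k clusters remain. The toy data here is two concentric circles, a 2-D stand-in for the interlocked 3-D rings; nothing below comes from the FCPS suite.

```python
import math
from itertools import combinations

def single_linkage(points, k):
    """Single-linkage clustering via union-find over sorted pairwise edges."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted(combinations(range(len(points)), 2),
                   key=lambda e: math.dist(points[e[0]], points[e[1]]))
    clusters = len(points)
    for i, j in edges:           # merge closest pairs first
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            clusters -= 1
            if clusters == k:
                break
    roots = sorted({find(i) for i in range(len(points))})
    relabel = {r: n for n, r in enumerate(roots)}
    return [relabel[find(i)] for i in range(len(points))]

angles = [i * math.pi / 8 for i in range(16)]
inner = [(math.cos(t), math.sin(t)) for t in angles]           # radius 1
outer = [(3 * math.cos(t), 3 * math.sin(t)) for t in angles]   # radius 3
labels = single_linkage(inner + outer, k=2)
```

Because neighboring points within each ring are closer than any point on the other ring, all within-ring merges happen before any between-ring merge, so the two rings come out as the two clusters.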

The Engytime data set contains two Gaussian mixtures that are fairly close to each other. This data is primed for EM-style algorithms, and indeed EM performs best on this data set. k-Means performs reasonably well and picks up on the different variances. Single Linkage simply connects all the points into one big cluster and leaves two outliers to form the second cluster. I'm a bit surprised by the SOM and have no explanation for why the clusters were separated the way they were. I would have thought that the Gaussian Mixture model would have performed even better on this one; given that the data was generated using two Gaussians, this should have been a home run for the algorithm.

Golfball contains no clusters and is one giant blob. I chose k=6 to see what the algorithms would do. Single Linkage is the only algorithm that produces a somewhat sensible result: it connects all points into one cluster and then leaves 5 points to form the remaining clusters. It would be evident from a dendrogram that there are no clusters in this data set. All other algorithms assign labels one way or the other; most produce evenly sized areas on the ball.

Hepta contains clearly defined clusters with different variances. The clusters are clearly separated, and hence no algorithm has problems separating them. SOM mislabels a few cases, and I'm not sure why.

Lsun contains clusters with different variances and inter-cluster distances. The hard part is separating out the cluster at the bottom.

Target contains two clusters in the middle and a few outliers. The data is not linearly separable, but it’s also interesting to see how the algorithms deal with the outliers.

Tetra contains four dense, almost touching clusters.

The Two Diamonds data set contains two touching clusters whose borders are defined by density. As one would expect, Single Linkage fails on this data set and simply lumps everything together. Ward's Linkage was made to prevent exactly this kind of problem and, not surprisingly, performs better. The other algorithms have no problem picking up on the two dense blobs and separate them perfectly or close to perfectly.

The Wingnut data set contains two blocks and examines the density-vs-distance trade-off of the algorithms. Every method that uses distance (or something that could be interpreted as such) will have the clusters “bleed” into each other.

**Note** that my results for SOM differ a bit from Ultsch's original paper; quite possibly I did something wrong, though I haven't figured out what. Still… a fun weekend.

*References*

**Ultsch, A.**: Clustering with SOM: U*C. In: *Proc. Workshop on Self-Organizing Maps*, Paris, France (2005), pp. 75–82.


Basically, he built a classifier predicting, from some innocuous (but possibly correlated) variables, the likelihood of somebody having a felony offense. The classifier isn't meant to be used in practice (from eye-balling the Precision/Recall curve in the talk slides, I estimate an AUC of about 0.6-ish; not too great), but it was built to start a discussion. It turns out that courts have upheld the use of profiling in some cases as “reasonable suspicion,” a legal standard for the police to stop somebody and investigate. This could lead to “predictive policing” being taken even further in the future. Since the model outputs a score, Jim also discusses the trade-off of where the prediction of such a model may be actionable; he calls it the Tyranny/Anarchy Trade-Off (a catchy name).
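For readers unfamiliar with the metric: AUC is the probability that a randomly chosen positive example scores higher than a randomly chosen negative one (ties counting half), so 0.5 is coin-flipping and 0.6 is only mildly better. A minimal sketch of that definition; the labels and scores below are made up for illustration and have nothing to do with the actual classifier from the talk.

```python
def auc(labels, scores):
    """Probability a random positive outscores a random negative (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```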

Having done statistical work in criminal justice before, I think predictive analysis can be helpful in many areas of policing and criminal justice in general (e.g., parole supervision). On the other hand, I find profiling and supporting “reasonable suspicion” with statistical models unconvincing. I think the courts will have to figure out a minimum reliability standard for such predictors, and hopefully they'll set the threshold far higher than what the ‘felony classifier’ achieves. There are just too many ways for using a statistical model for “reasonable suspicion” to go wrong. Even if variables of protected classes (gender, ethnicity, etc.) are not used directly, there may be correlated variables (hair color, income, geographic area), as discussed in Jim's talk. Even more problematic, in my mind, would be variables that do not change, or hardly ever change, as they would lead to the same people being hassled over and over again. Also, the training data from which these models are built is biased, since everybody in it has, by definition, been arrested before. It's beyond me how one could correct for this sample bias in a reliable way. Frankly, I don't think policing by profiling (statistical or otherwise) can be done well, and hopefully courts will eventually recognize that.


Uninstalling the AVG toolbar component finally solved the Chrome start-page problem for me.

Man, what a piece of work. I'm not alone in thinking that AVG did something bad to its users here.
