Archive for the ‘Statistics’ Category

Random Number Generation on low entropy computer systems

Monday, October 13th, 2014

Lately, I’ve gotten interested in random number generation. Generating random numbers that are secure enough for cryptographic use is hard.

Linux introduced /dev/random, a character device backed by a random number generator in the kernel. The generator keeps an estimate of the number of bits of noise in the entropy pool. From this entropy pool, random numbers are created by applying a secure hash function to the data in the pool. When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool. The operating system adds entropy from all sorts of sources, such as disk latency. However, if you are running Linux on an embedded device (or “internet of things”) or in a cloud instance, there may not be a “real” disk drive, and whatever latency is measured may not contain good enough entropy. Another important source of entropy is the clock-cycle counter (rdtsc), which is read at various points to measure the time between interrupts and other frequently and somewhat randomly occurring events. The rdtsc instruction is often virtualized in cloud machines, which could make it harder to get good entropy. In any case, the problem of getting random numbers in low-entropy situations is important enough that Intel added an on-chip random number generator (rdrand) to their newer processors, but many cloud instances emulate older CPUs, and embedded systems don’t necessarily run on x86 chips that have rdrand.
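
As a quick sanity check, the kernel’s current entropy estimate can be read directly from procfs; a small snippet for that (Linux only, standard proc interface):

```python
# Read the kernel's estimate of the entropy currently in the pool (in bits).
# This is the same estimate /dev/random uses to decide how much it will return.
with open("/proc/sys/kernel/random/entropy_avail") as f:
    print("Estimated entropy in the pool (bits):", int(f.read()))
```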

So far, so good. For now, I don’t have an embedded system to experiment with, but servers in the cloud have a similar problem. Can the rdtsc instruction still be used to get entropy in a virtual machine? I found this, played around a bit with the program listed and also learned about the existence of haveged (I’ll get back to that). I started a micro-instance (1 CPU, 512 MB RAM) in a popular cloud service and found that, yes indeed, the rdtsc instruction seems to be virtualized. The average tick count between two consecutive rdtsc instructions, measured over millions of pairs, is far too small to account for whatever else is going on on the machine (unless, of course, I somehow got a 512 MB machine all to myself, which is unlikely). On the other hand, the least significant bit of the difference between two consecutive rdtsc calls looked pretty random, but didn’t pass randomness tests. Applying von Neumann whitening to the LSBs obtained from pairs of rdtsc calls produces output that is sufficiently random to pass the FIPS 140-2 randomness tests (I used the implementation in rngtest) and only fails about 1 in 1,000 blocks (for comparison: a test run with entropy from /dev/random failed about 1 in 10,000). So in theory, this should be okay to use as an entropy source, but maybe it should still be combined with other sources of entropy.
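
For illustration, here is a minimal sketch of the von Neumann step, with Python’s high-resolution timer standing in for rdtsc (Python can’t issue the instruction directly, so treat this as a toy, not the actual experiment):

```python
import time

def timer_lsb():
    """Least significant bit of the delta between two consecutive timer reads
    (a stand-in for the LSB of the difference of two rdtsc values)."""
    t1 = time.perf_counter_ns()
    t2 = time.perf_counter_ns()
    return (t2 - t1) & 1

def von_neumann_bits(n, max_pairs=1_000_000):
    """Debias the raw bit stream: pair 01 -> 0, pair 10 -> 1, 00/11 -> discard.
    The cap guards against timers too coarse to ever produce differing bits."""
    out = []
    for _ in range(max_pairs):
        if len(out) >= n:
            break
        a, b = timer_lsb(), timer_lsb()
        if a != b:
            out.append(a)
    return out

print("".join(str(bit) for bit in von_neumann_bits(64)))
```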

Coming back to existing solutions: it turns out there’s an entropy-gathering daemon called haveged that can add entropy to the pool to remedy low-entropy conditions, which can occur on servers, but possibly also on embedded devices and in cloud instances. The method exploits modifications of internal volatile hardware state as a source of uncertainty. Modern superscalar processors feature various hardware mechanisms that aim to improve performance: caches, branch predictors, TLBs and much more. The state of these components is volatile and cannot be directly monitored in software. On the other hand, every invocation of the operating system modifies thousands of these binary volatile states. So the general idea of extracting entropy from rdtsc carries over; the only things that still make me wonder a bit are that the full timer value (not just the LSB) is used and that the whitening method is rather complex.
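
To make the general principle a bit more concrete, here is a toy sketch (emphatically not haveged’s actual algorithm; in CPython much of the jitter comes from the interpreter rather than the hardware, and the SHA-256 conditioning step is my own simplification):

```python
import hashlib
import time

def jitter_sample(work=1024):
    """Time a loop with data-dependent branches; its duration depends on
    branch-predictor and cache state that software cannot observe directly."""
    acc = 0
    t0 = time.perf_counter_ns()
    for i in range(work):
        if (acc ^ i) & 1:                      # hard-to-predict branch
            acc = (acc * 7 + i) & 0xFFFFFFFF
        else:
            acc = (acc >> 1) ^ i
    return time.perf_counter_ns() - t0

def gather(n_bytes=32, samples=256):
    """Concatenate many timing samples and condition them with a hash."""
    deltas = bytearray()
    for _ in range(samples):
        deltas += jitter_sample().to_bytes(8, "little")
    return hashlib.sha256(deltas).digest()[:n_bytes]

print(gather().hex())
```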

I learned a lot, but there’s still more to explore. Cloud machines probably have more going on than embedded systems, so I’m still not convinced that this will work on embedded devices. I’ll try to wrap my head around the whitening function in haveged. It’s still being developed, and maybe ARM support will be added some day. Then the daemon could be used on smartphones to improve the security of random number generation.


Strengths and Weaknesses of various Clustering Algorithms

Sunday, April 27th, 2014

Clustering is one of the most fascinating areas in the data mining field for me. For example, it can be shown that under a few simple and fairly straightforward requirements it is impossible to find a clustering function that satisfies all of them (Kleinberg’s impossibility theorem for clustering). That means that some trade-offs and relaxations have to be made in all existing clustering algorithms. Every clustering algorithm will have a weakness and will fail to cluster some data sets in a sensible manner.
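
Roughly, Kleinberg’s theorem states that no clustering function f, which maps a distance function d on a point set S to a partition of S, can satisfy all three of the following properties at once:

```latex
\begin{itemize}
  \item \textbf{Scale-invariance:} $f(\alpha \cdot d) = f(d)$ for every $\alpha > 0$.
  \item \textbf{Richness:} every partition of $S$ equals $f(d)$ for some distance function $d$.
  \item \textbf{Consistency:} if $d'$ shrinks distances within the clusters of $f(d)$ and
        stretches distances between them, then $f(d') = f(d)$.
\end{itemize}
```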

The Fundamental Clustering Problem Suite (Ultsch, 2005) contains several synthetic data sets covering a variety of clustering problems that algorithms should be able to solve. The suite contains the data and ground-truth labels as well as labels produced by k-means, single linkage, Ward’s linkage and a self-organizing map (SOM). I decided to play around with it a bit, converted the SOM activation matrix to labels using k-means, and added Spectral Clustering, Self-Tuning Spectral Clustering and an EM-fitted Gaussian mixture model. I was particularly curious how well Spectral Clustering would do. Determining the true number of clusters from the underlying data is an entirely different problem, so in all cases the number of clusters was specified unless otherwise noted. The regular Spectral Clustering used the Ng-Jordan-Weiss algorithm with a kernel sigma of 0.04 after linear scaling of the inputs. The Self-Tuning Spectral Clustering used k=15 nearest neighbors. I also used Random Forest “clustering”, an extension of the classification algorithm that generates a distance matrix from the tree ensemble; the distance matrix was then fed into single linkage to obtain cluster labels. Random Forest is a special case, as it derives the distance matrix from classification using the Gini criterion, a metric built for classification, not clustering.
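
For the algorithms that scikit-learn covers, the comparison setup looks roughly like the sketch below (a sketch under assumptions, not my exact code: sklearn’s implementations stand in for the labels shipped with the suite, the sigma-to-gamma conversion assumes the usual exp(-d^2 / (2 sigma^2)) kernel, and Self-Tuning Spectral Clustering and the Random Forest proximities are left out because sklearn has no drop-in equivalent):

```python
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
from sklearn.mixture import GaussianMixture

def cluster_all(X, k, sigma=0.04):
    """Run the comparable sklearn algorithms on one FCPS data set X with k clusters."""
    X = MinMaxScaler().fit_transform(X)              # linear scaling of the inputs
    labels = {}
    labels["kmeans"] = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    labels["single_linkage"] = AgglomerativeClustering(
        n_clusters=k, linkage="single").fit_predict(X)
    labels["ward"] = AgglomerativeClustering(
        n_clusters=k, linkage="ward").fit_predict(X)
    # NJW-style spectral clustering; gamma = 1 / (2 * sigma^2) for sigma = 0.04
    labels["spectral"] = SpectralClustering(
        n_clusters=k, affinity="rbf", gamma=1.0 / (2 * sigma ** 2),
        assign_labels="kmeans").fit_predict(X)
    labels["em_gmm"] = GaussianMixture(
        n_components=k, covariance_type="full").fit(X).predict(X)
    return labels
```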

Results


Big Data, Predictive Policing and the Tyranny/Anarchy Trade-Off

Saturday, August 17th, 2013

Bloomberg had an interesting article today about a talk on the implications and privacy trade-offs of predictive policing and profiling. Jim Adler’s “felon classifier” is also described in his blog.

Basically, he built a classifier that predicts, from some innocuous (but possibly correlated) variables, the likelihood of somebody having a felony offense. The classifier isn’t meant to be used in practice (from eyeballing the precision/recall curve in the talk slides, I estimate an AUC of roughly 0.6; not too great), but it was built to start a discussion. It turns out that courts have upheld the use of profiling in some cases as “reasonable suspicion,” a legal standard for the police to stop somebody and investigate. This could lead to “predictive policing” being taken even further in the future. Since the model outputs a score, Jim also discusses the trade-off of where the prediction of such a model should become actionable; he calls it the Tyranny/Anarchy Trade-Off (a catchy name 🙂).

Having done statistical work in criminal justice before, I think predictive analysis can be helpful in many areas of policing and criminal justice in general (e.g., parole supervision). On the other hand, I find profiling and supporting a “reasonable suspicion” with statistical models unconvincing. I think the courts will have to figure out a minimum reliability standard for such predictors, and hopefully they’ll set the threshold far higher than what the “felony classifier” is producing. There are just too many ways for a statistical model of “reasonable suspicion” to go wrong. Even if variables of protected classes (gender, ethnicity, etc.) are not used directly, there may be correlated variables (hair color, income, geographic area), as discussed in the talk Jim gave. Even more problematic in my mind would be variables that never or hardly ever change, as they would lead to the same people being hassled over and over again. Also, the training data from which these models are built is biased, since everybody in it has, by definition, been arrested before. It’s beyond me how one could correct for this sample bias in a reliable way. Frankly, I don’t think policing by profiling (statistical or otherwise) can be done well, and hopefully courts will recognize that eventually.


On Statistical Significance Testing

Tuesday, March 26th, 2013

I found a good read about the fallacies of statistical significance testing. See also here.

Frequentist vs. Bayesian Inference

Monday, November 19th, 2012

I found a good article discussing the differences between the Frequentist and Bayesian approaches to inference.

Classification with inputs that change over time – P2P Loan Data

Saturday, October 6th, 2012

Predicting whether a loan will default is a tricky task. It involves many variables and incomplete information, and time is a component of the problem. Loans may perform for a while before they default. Some loans may even become late but then recover back to the regular payment schedule. It’s an interesting application for statistics.

The LendingClub website, a service offering peer-to-peer lending, provides an interesting data set: historical data on loan performance as well as data for new loans. I’ve been playing around a bit with the data and built a model to predict whether a loan is a good investment. The LendingClub data is available for download, and a data dictionary can also be found on the website.

First, we need to define the outcome we want to predict. A loan can be in several states, some being “current”, others being “defaulted”, “late” or even on a “performing payment plan”. Conservatively, I defined all loans that were not “paid off” as bad. Loans that are “current” were excluded, as they can still default in the future. Loans that are “late” are considered bad, because the borrower ran into problems. The model I’m trying to build is basically for a conservative investor looking for loans that will simply be paid back without a hitch. With the usual statistical techniques a model can be built, and its performance can be measured by 10-fold cross-validation or by evaluating the model on a hold-out set. The real outcome of a prediction will, of course, only be available after about three years, when a loan is fully paid off. As the measure to optimize I chose AUC. A 10-fold cross-validation estimates the performance of my model at 0.698, which is not too bad. The predictions implicitly make a few assumptions. The first is that the future performance of loans will be similar to the historical performance of similar loans; I’m assuming a stationary distribution and independent, identically distributed loans, which is not completely true in reality, but hopefully close enough 🙂 Also, inflation expectations were not taken into account, but I’m limiting my model to 36-month loans to make that more manageable.
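
The evaluation step looks roughly like the sketch below (synthetic data stands in for the encoded loan features, and gradient boosting is just an example classifier, not necessarily the model I ended up with):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data; in practice X holds the encoded LendingClub loan
# features and y the good/bad label defined above (1 = bad loan).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.85, 0.15], random_state=0)

model = GradientBoostingClassifier()
scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print("10-fold CV AUC: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))
```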

I won’t go into the details of how I encoded the variables and which variables I’m using. I discovered that I can extract information from the textual variables in the loans. The “Loan Description”, a free-text field where potential borrowers can leave comments or answer questions, is quite predictive. The difficult part is using that information in practice. A loan is in the “funding” state for two weeks, during which investors can ask questions and invest in the loan. Many loans get fully funded before the two-week period is over, some without any question or comment on the loan. New information may become available in the Loan Description field that changes the classification. That means, however, that the prediction may change over time, positively or negatively, after an investment decision has been made. Not ideal, but the variables are quite powerful, so I’m still looking for a good solution.
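
One straightforward way to fold the Loan Description text into a model is a bag-of-words/TF-IDF encoding; the toy sketch below illustrates the idea (the descriptions, labels and choice of logistic regression are made up for illustration, not the encoding I actually use):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the free-text field and the good/bad label (1 = bad loan).
descriptions = [
    "Consolidating two credit cards into one payment.",
    "Need funds for home repairs, stable job for 10 years.",
    "",                                      # many loans have no description at all
    "Short on cash this month, will pay it back soon.",
]
labels = [0, 0, 1, 1]

text_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
text_model.fit(descriptions, labels)
print(text_model.predict_proba(["paying off medical bills"])[:, 1])
```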

I made the ratings my program produces for LendingClub loans public. I will update them occasionally (i.e., whenever I feel like it). If you have suggestions on how to use the textual variables, leave a comment.

Preserving Privacy in Big Data

Wednesday, August 1st, 2012

Interesting blog post on Differential Privacy. I wasn’t aware of this specific privacy model.

Puzzling Outcomes In Controlled Experiments

Friday, July 6th, 2012

A really interesting paper on A/B testing and experiments in online environments just got accepted to KDD 2012:

puzzlingOutcomesInControlledExperiments.pdf

Summary:

  • Don’t make changes to your application if your average customer’s lifetime value will decline. Understand the change, consider alternative hypotheses, and watch several metrics. Ensure that your findings align with the long-term strategy, so that long-term growth is not sacrificed for short-term financial gain. Example: Bing once had a bug that served poor search results, and distinct queries went up 10% while the CTR on advertisements went up 30%.
  • Ensure that your statistical results are trustworthy. Incorrect results may cause bad ideas to be deployed; good ideas may be ruled out by mistake.
  • An upward trend for a newly launched feature does not imply that users like the feature more (delayed effect and primacy effect).
  • Often, running an experiment longer does not provide extra statistical power. Pick a duration and stick to it. Do not stop tests early (unless you use algorithms that tell you when you have enough statistical confidence to stop your test). The simulation after this list illustrates why naive early stopping is a problem.
  • Re-run your experiment if you get surprising results. Investigating the underlying reasons is often worth it.
  • Watch for the carryover effect. Run A/A experiments. If you use bucketing techniques to assign participants to experiments, rerun the experiment with a larger test group and with local randomization.
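
A small simulation of the stopping-rule point: repeatedly checking an A/A test for significance at p < 0.05 and stopping at the first “significant” result inflates the false positive rate far beyond the nominal 5% (the batch sizes and counts below are arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_batches, batch_size = 1000, 20, 500
false_positives = 0

for _ in range(n_experiments):
    a = np.empty(0)
    b = np.empty(0)
    for _ in range(n_batches):
        a = np.concatenate([a, rng.normal(size=batch_size)])
        b = np.concatenate([b, rng.normal(size=batch_size)])   # same distribution: an A/A test
        if stats.ttest_ind(a, b).pvalue < 0.05:                 # peek after every batch
            false_positives += 1
            break

print("False positive rate with peeking:", false_positives / n_experiments)
```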

Risk Assessment of Rare Events in adversarial Scenarios

Tuesday, June 21st, 2011

The RAND Corporation just published an interesting paper exploring the benefits of using risk prediction to reduce the screening required at airports. You might have noticed various attempts to establish some kind of fast lane or trusted traveler program. Obviously, this is a very sensitive topic and probably hard to get right. Screening certain groups of the population more than others (“profiling”) is generally frowned upon and also not a good idea for rare event detection (see “Strong profiling is not mathematically optimal for discovering rare malfeasors”), but what hasn’t been examined much is identifying people who can be considered more “safe” than others. The paper explores that idea and shows that, even under the assumption that the bad guys will try to subvert the program, there can be benefits to implementing it. The paper is a bit sparse on mathematical details. Certainly an interesting idea, though.

Paper: Assessing the Security Benefits of a Trusted Traveler Program in the Presence of Attempted Attacker Exploitation and Compromise

European Court of Justice ruling (indirectly) on what cannot be used in Insurance Risk Models

Tuesday, March 1st, 2011

Insurers cannot charge different premiums to men and women because of their gender, the European Court of Justice (ECJ) has ruled.

I’m not sure what to think of this. For one, insurance is not about fairness; it’s about risk. An insurance company should be able to use whatever reliable information is available to determine the true risk and price policies accordingly. From what I’ve read, it seems that young men cost roughly 50% more to insure than young women. This might not be true on an individual level, but it is true across the entire pool of people. On the other hand, if all reliable information could be used purely to price risk, then health insurance would naturally be more expensive for people with, e.g., known genetic disorders. That wouldn’t be fair either. Legislating what can and cannot be used in which circumstances will be a hard trade-off. In the intermediate term, this ruling will probably lead to models that use all sorts of other variables to work around it in order to get an adequate risk score.