European Court of Justice ruling (indirectly) on what cannot be used in Insurance Risk Models

March 1st, 2011

Insurers cannot charge different premiums to men and women because of their gender, the European Court of Justice (ECJ) has ruled.

I’m not sure what to think of it. For one, insurance is not about fairness; it’s about risk. An insurance company should be able to use whatever reliable information is available to determine the true risk and price policies accordingly. From what I’ve read, young men cost roughly 50% more to insure than young women. This might not hold on an individual level, but it is true across the entire pool of people. On the other hand, if all reliable information could be used, health insurance would naturally become more expensive for people with, e.g., known genetic disorders. That wouldn’t be fair either. Legislating what can and cannot be used in which circumstances will be a hard trade-off. In the intermediate term, this ruling will probably lead to models that pull in all sorts of proxy variables to work around it and still arrive at an adequate risk score.

Mining of Massive Datasets

December 11th, 2010

Anand Rajaraman and Jeff Ullman wrote a book called Mining of Massive Datasets that can be downloaded for free (PDF, 340 pages, 2MB). It takes an algorithmic point of view and focuses on mining very large amounts of data, as frequently found on the web, that do not fit in main memory.

Edit: Fixed URL

Ideas on communicating risks and probabilities to the general public

December 4th, 2010

I found an interesting article on how to communicate risks and probabilities to the public.

Birthday Paradox

October 17th, 2010

Here’s an interesting real-world example of the Birthday Paradox: a lottery number combination repeats itself. Obligatory XKCD link.
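
For the curious, the computation behind the paradox fits in a few lines of Python. This is a minimal sketch; the 6-of-49 lottery format below is my own assumption for illustration, not taken from the article:

from math import comb

def collision_probability(n_draws, n_outcomes):
    """Chance that at least two of n_draws uniform, independent draws
    from n_outcomes equally likely outcomes coincide."""
    p_all_distinct = 1.0
    for i in range(n_draws):
        p_all_distinct *= (n_outcomes - i) / n_outcomes
    return 1.0 - p_all_distinct

# Classic birthday setting: 23 people, 365 days -> about 0.507
print(collision_probability(23, 365))

# Hypothetical 6-of-49 lottery: comb(49, 6) is roughly 14 million
# combinations, yet after only ~4,400 independent draws a repeated
# combination is already more likely than not.
print(collision_probability(4400, comb(49, 6)))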

Elo Scores and Rating Contestants

August 5th, 2010

Kaggle has a new and interesting competition on building a chess rating algorithm that performs better than the official Elo rating system currently in use. Entrants build their own rating systems based on the results of more than 65,000 historical chess games and then test their algorithms by predicting the results on a holdout set of 7,800 games.
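
For reference, the baseline the contestants are trying to beat is remarkably simple. Here is a minimal sketch of the Elo update rule, assuming a fixed K-factor of 32 (real implementations vary K with rating and experience):

def expected_score(rating_a, rating_b):
    """Expected score of A against B under the Elo (logistic) model."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a, rating_b, score_a, k=32.0):
    """Update both ratings after one game.
    score_a: 1.0 for an A win, 0.5 for a draw, 0.0 for a loss."""
    e_a = expected_score(rating_a, rating_b)
    rating_a += k * (score_a - e_a)
    rating_b += k * ((1.0 - score_a) - (1.0 - e_a))
    return rating_a, rating_b

# An upset: the 1600-rated player beats the 1700-rated player and
# gains about 20 points, which the loser gives up.
print(elo_update(1600.0, 1700.0, 1.0))

The whole system is memoryless and treats every rating as a point estimate, which is precisely where more sophisticated approaches have room to improve on it.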

Looks like an interesting problem. The only other related work that comes to mind is Microsoft’s TrueSkill(tm) Ranking System, built and published for the Xbox to match players of similar skill in online games. In the original NIPS paper, the authors showed that TrueSkill outperformed Elo.

GraphLab & Parallel Machine Learning

July 11th, 2010

Interesting article: GraphLab: A New Framework for Parallel Machine Learning

From the abstract:

Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large scale real-world problems.

Given all the talk about MapReduce, Hadoop, etc., this seems like a logical next step toward making it much easier to scale data mining to large data sets.
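
To get an intuition for the abstraction, here is a toy sketch in Python. This is my own illustration, not the actual GraphLab API, and it ignores all the consistency and parallelism machinery: an asynchronous PageRank where each vertex has an update function and only vertices whose inputs changed get rescheduled.

from collections import deque

def async_pagerank(out_links, damping=0.85, tol=1e-6):
    # Build reverse adjacency so each vertex can read its in-neighbors.
    in_links = {v: [] for v in out_links}
    for u, targets in out_links.items():
        for v in targets:
            in_links[v].append(u)
    rank = {v: 1.0 / len(out_links) for v in out_links}
    queue = deque(out_links)
    queued = set(out_links)
    while queue:
        v = queue.popleft()
        queued.discard(v)
        # The "update function": recompute v's rank from its neighbors.
        new = (1.0 - damping) / len(out_links) + damping * sum(
            rank[u] / len(out_links[u]) for u in in_links[v])
        # Reschedule dependents only if v actually changed: this is the
        # sparse, asynchronous scheduling idea from the abstract.
        if abs(new - rank[v]) > tol:
            rank[v] = new
            for w in out_links[v]:
                if w not in queued:
                    queue.append(w)
                    queued.add(w)
    return rank

print(async_pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]}))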

PHP configuration using htaccess on 1and1 shared hosting

June 15th, 2010

I had some problems setting PHP configuration values on 1and1 shared hosting, and the way suggested in their FAQ, using php.ini, didn’t work for me. Here are the settings in .htaccess that did:

# Tell 1and1's Apache to serve .php files with the PHP 5 handler
AddType x-mapp-php5 .php

# PHP 4, Apache 1
<IfModule mod_php4.c>
php_value magic_quotes_gpc 0
php_value register_globals 0
php_value session.auto_start 0
</IfModule>

# PHP 4, Apache 2
<IfModule sapi_apache2.c>
php_value magic_quotes_gpc 0
php_value register_globals 0
php_value session.auto_start 0
</IfModule>

# PHP 5, Apache 1 and 2
<IfModule mod_php5.c>
php_value magic_quotes_gpc 0
php_value register_globals 0
php_value session.auto_start 0
</IfModule>

The AddType line instructs the server to run .php files with PHP 5, and the IfModule blocks above turn off the magic quotes, register globals, and session auto-start features for whichever PHP/Apache combination is active.

Energy efficient data mining algorithms

February 28th, 2010

I was a bit amused to read about this new algorithm that IBM Research developed and that was sold as “energy efficient” in their press release. This is good marketing, because the average journalist and reader might not understand the impact of the improvement: it just sounds a lot better to be green and save energy than to improve computational complexity…

Alternative measures to the AUC for rare-event prognostic models

February 16th, 2010

How can one evaluate the performance of prognostic models in a meaningful way? This is a very basic yet interesting problem, especially in the context of predicting very rare events (base rates below 10%). How reliable is the model’s forecast? The question is of particular importance when the stakes are high – think criminal psychology, where models forecast the likelihood of recidivism for the criminally insane (Quinsey 1980). There are a variety of ways to evaluate a model’s predictive performance on a hold-out sample, and some are more meaningful than others. Error rates, for example, are only meaningful when you also consider the base rate of your classes and the trivial classifier: with a 5% base rate, always predicting the majority class already achieves 95% accuracy. This often gets confusing when you are dealing with very imbalanced data sets or rare events. In this blog post, I’ll summarize a few alternative evaluation methods for predictive models that are particularly useful when dealing with rare events or low base rates in general.

The Receiver Operating Characteristic (ROC) is a graphical measure that plots the true positive rate against the false positive rate, so that the user can decide where to place the cut-off for the final classification decision. To summarize the whole curve in a single, reportable number, the area under the curve (AUC) is generally used.
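
One way to make the AUC concrete is its probabilistic interpretation: it equals the probability that a randomly chosen positive case gets a higher score than a randomly chosen negative one. Here is a minimal sketch computing it directly from that definition (fine for illustration; it is O(n_pos * n_neg), so use a rank-based implementation for large samples):

def auc(scores, labels):
    """AUC as the probability that a random positive outscores a
    random negative; ties count one half."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Toy examples with a 20% base rate:
print(auc([0.9, 0.2, 0.4, 0.3, 0.1], [1, 0, 0, 0, 0]))   # 1.0: perfect ranking
print(auc([0.9, 0.95, 0.4, 0.3, 0.1], [1, 0, 0, 0, 0]))  # 0.75: one negative outranks the positive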


Spam Filtering by Learning a Pattern Language

January 26th, 2010

The New Scientist describes a new method for spam detection that works by learning patterns. It exploits the spammers’ most powerful weapon – the automatic generation of many similar messages by automated means (i.e., from some grammar in a formal language) – and turns it against them. The article reports that a pattern can reliably be learned from about 1000 examples captured from a bot, allowing the method to classify new messages accurately and with zero false positives. This sounds really exciting, given my full spam folder.

However, I’m a bit cautious. The article is sparse on technical details, so I might be making some wrong assumptions here. First, the reported zero false positives presumably refer to discriminating spam from that particular spam grammar against other messages; at least that’s how I understand it. Second, it seems from the article that they learn from positive examples only. Overall, the technique sounds to me like they are learning a pattern language. Pattern languages are a class of grammars that overlap with the linear and context-sensitive grammars of the Chomsky hierarchy. Unfortunately, they don’t have a real Wikipedia page, so I’ll try to give a bit of background. The closest example I can give right now is regular expressions with back-references – not an accurate description of all possible patterns, but close enough for an example.
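
To make that concrete, here is a toy illustration of my own (not from the article) of how a back-reference enforces the consistent substitution of a pattern variable:

import re

# A pattern like "x1, buy x2 now: x1" has variables that must be
# substituted consistently. The back-reference \1 forces the second
# occurrence of x1 to match the first -- something a plain regular
# language cannot express.
pattern = re.compile(r"(\w+), buy (\w+) now: \1")

print(bool(pattern.fullmatch("Alice, buy pills now: Alice")))  # True
print(bool(pattern.fullmatch("Alice, buy pills now: Bob")))    # False: inconsistent x1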

I don’t know how the specific technique mentioned in the article works in detail, but I’ve learned two things about learning grammars from text: (a) we can’t learn all linear or context-sensitive languages, only the class of pattern languages; (b) learning patterns without negative examples leads to over-generalization very quickly.

While I haven’t worked on grammar learning in a long while, the only algorithm I’m aware of is the Lange-Wiehagen algorithm (Steffen Lange and Rolf Wiehagen: Polynomial-time inference of arbitrary pattern languages. New Generation Computing, 8(4):361–370, 1991). It is not a consistent learner, but it can learn all pattern languages in polynomial time. There might be better ones available by now, but grammar learning is not that popular in the machine learning community at the moment. I’m sure there are other interesting applications besides spam filtering. Maybe it’s time for a revival.

Overall, it sounds like a promising new anti-spam technique, but I’d like to see some more realistic testing done. There are some obvious ways for spammers to make learning these patterns harder, but either way I’m curious – maybe the inventors of this technique discovered a better way to learn patterns? Maybe by using some problem-specific domain knowledge?