Debugging and Evaluating Predictive Models

The recent revelation that Moody’s mis-rated CPDOs due to a computer glitch shows how tricky it is to get a model right from construction to production. Interestingly, S&P, which gave identical ratings to many of these instruments, stands by its independently developed model, which makes me wonder how thoroughly models are tested in practice. It also made me wonder whether I would catch such a bug while moving a model into production.

Most data-mining and modeling work is done in your favorite stats package (SPSS, SAS, etc.), but for production use someone has to re-implement whatever equation came out of it to produce the risk scores. When I do data-mining work I often use tools from different authors or vendors, since no single tool can do everything I need. For example, I have had a lot of success with predictive modeling using Support Vector Machines, and some newer algorithms I have implemented in Matlab myself. That means I spend a lot of time converting data back and forth between different programs (e.g. translating how a missing value is indicated). I usually do this with some Perl scripts I wrote, but the whole process is error prone, especially since the final model then has to be codified in yet another language (Java, C, C#/.Net or whatever) so it can be incorporated into the project’s software. It takes a while to get right, because more often than not an error is not obvious (read: I have had bad experiences with subtle errors during black-box testing). The following is my checklist for debugging the process (probably not complete enough to catch everything):
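As a sketch of the conversion problem (the token sets here are my own assumptions, not a universal standard): every tool has its own idea of what a missing value looks like in a CSV file, and flag values like 999 add another layer, so I find it safest to normalize everything explicitly on import rather than trust the next tool’s defaults.

```python
# Hypothetical normalizer for one cell of a CSV exported from some stats
# package. Both the missing-value tokens and the flag values are examples;
# in practice you have to check what your specific tools actually emit.
MISSING_TOKENS = {"", "NA", "NaN", "?", "."}   # R, SAS, Weka-style encodings
FLAG_VALUES = {999.0, -999.0}                  # "special" numeric codes

def parse_cell(cell):
    """Map the various missing/flag encodings to a single Python None."""
    token = cell.strip()
    if token in MISSING_TOKENS:
        return None
    value = float(token)
    if value in FLAG_VALUES:
        return None
    return value

row = ["1.5", "NA", "999", "0.3"]
print([parse_cell(c) for c in row])  # [1.5, None, None, 0.3]
```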

  • Do the results mean what you think they mean? Which score values indicate a good or a bad case? (large/small scores, or +1/-1 …)
  • Features: when exporting/importing the data, is the order of the features the same? Is the classification label in the right place? This is a lot of fun when you export data from SPSS/R/STATA to Matlab (which does not support named columns in a matrix – better get all those indexes right)
  • Were missing values treated the right way when building the model? There are many ways to deal with them, and you may be facing MAR (“missing at random”), MCAR (“missing completely at random”), NMAR (“not missing at random”), non-response, imputation (single and multiple) etc.
  • Did I deal with special values correctly? I’m not talking about NULL values in the database, but “flag values” such as 999 that indicate a certain condition
  • When exporting/importing data, are special values (e.g. missing, flag and categorical values) handled correctly? Every program encodes them differently, especially when you use comma-separated text (CSV)
  • Is the scaling of the data the same? Are the scaled values of the new data larger/smaller than what the classifier was trained on? How are these cases handled?
  • Are the algorithm parameters the same, e.g. a kernel-sigma in a support vector model?
  • Is the new data from the same distribution? (When I hear “similar”, then it usually does not work in my experience 🙂 ) Check mean and variance for a start. Sometimes the difference can be subtle (e.g. a male-only model applied to females; this can work, depending on what was modeled). Was the data extracted the same way with the same SQL query? Was the query translated correctly into SQL? Was the result prepared the same way (recodes, scaling)?
  • In my code I check for each attribute whether it is within the range of my training set. If not, it’s either a bug (scaling?) or the model can’t reliably predict for this case.
  • Compute some simple test cases with your stats package and compare the results against your production code. I have had a lot of success with white-box tests of recode tables etc.
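The per-attribute range check from the list above could look something like this minimal sketch (function names are my own): record the min/max of each feature on the training set, then refuse to blindly score any case that falls outside those ranges, since that often signals a scaling bug or an extrapolation the model was never validated for.

```python
# Hypothetical range guard for a production scorer.

def fit_ranges(training_rows):
    """Record the observed min/max per feature from the training set."""
    mins = list(training_rows[0])
    maxs = list(training_rows[0])
    for row in training_rows[1:]:
        for i, v in enumerate(row):
            mins[i] = min(mins[i], v)
            maxs[i] = max(maxs[i], v)
    return mins, maxs

def out_of_range(row, mins, maxs):
    """Return the indexes of features outside the training range."""
    return [i for i, v in enumerate(row) if not mins[i] <= v <= maxs[i]]

train = [[0.0, 1.0], [1.0, 3.0], [0.5, 2.0]]
mins, maxs = fit_ranges(train)
print(out_of_range([0.7, 2.5], mins, maxs))  # [] -> safe to score
print(out_of_range([0.7, 9.9], mins, maxs))  # [1] -> flag for review
```

Whether an out-of-range case should be rejected, clamped, or merely logged is a judgment call that depends on what was modeled.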

As for model evaluation, I’ve read some reports in the past (not financial scorings, though) where testing was done on the training set. Obviously model quality should be assessed on a hold-out data set that has not been used for training or parameter tuning. Model quality in the machine learning community is still often evaluated using error rate, but lately the Area under the Receiver Operating Characteristic curve has become popular (often abbreviated as AuROC, AUROC, AROC or ROC), which I found to be especially useful for imbalanced datasets. In meteorology a lot of thought has been put into the evaluation and comparison of the performance of different predictive models; Wilks’ cost curves and Brier skill scores look really interesting. In some models, although the predictor is trained on a dichotomous variable, it is really predicting some risk over time – and should be evaluated using survival analysis (e.g. higher risk scores should lead to sooner failure etc.). In survival analysis a different version of the AuROC is used, called the concordance index. I’ll post some of my thoughts on all the evaluation scores some time in the future.
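For the curious, the AUROC has a handy interpretation that makes it easy to sanity-check a production implementation: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Wilcoxon/Mann-Whitney statistic). A minimal sketch, with made-up scores:

```python
# Hypothetical brute-force AUROC: count, over all positive/negative pairs,
# how often the positive outscores the negative (ties count half).

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0]
print(auroc(scores, labels))  # 5/6 ~ 0.833, comfortably above chance (0.5)
```

Because it only depends on the ranking of the scores, it is insensitive to monotone rescaling and to class imbalance in a way that raw error rate is not, which is why it works well on imbalanced datasets.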
