Just got back home from NIPS. I found the following papers pretty interesting:
- Random Features for Large-Scale Kernel Machines
- On Ranking in Survival Analysis: Bounds on the Concordance Index
- Efficient Inference for Distributions on Permutations
I spent my workshop time in the Learning Problem Design and the Security workshops. I also dropped by “Statistical Networks” briefly, but there’s room for improvement in my current understanding of Gibbs sampling and the like. The consensus in the problem design workshop seemed to be that machine learning must become more modular. There was also agreement that applying machine learning in the real world requires some magic for transforming the problem into “features”, and some more magic for transforming the prediction into something useful. That was stating the obvious a bit, but not much progress has been made in making ML more accessible. I wrote about choosing the right features before; currently it’s more of an art than a science.

One thing I took from the security workshop was that features must be easy to construct, since most detection apps must run in real-time. At the same time, we are interested in features the attacker can hardly influence (think Received headers in spam emails, which cannot be suppressed), yet these must still be cheap to compute.
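To make the Received-header idea concrete, here is a minimal sketch of what such a feature extractor could look like, using Python’s standard `email` module. The sample message and the particular features (hop count, earliest relay) are made up for illustration; a real spam filter would of course use a richer feature set.

```python
# Hypothetical example: extract attacker-resistant features from
# an email's trace headers. Each relay prepends its own Received
# header, so the sender cannot suppress the ones added by honest
# servers along the way -- and reading them is cheap.
from email import message_from_string

# Made-up raw message with two Received headers (folded lines
# continue with leading whitespace, as in real messages).
raw = (
    "Received: from mail.example.org (mail.example.org [203.0.113.5])\n"
    " by mx.mydomain.test; Mon, 10 Dec 2007 12:00:00 -0800\n"
    "Received: from sender-pc ([198.51.100.23])\n"
    " by mail.example.org; Mon, 10 Dec 2007 11:59:58 -0800\n"
    "From: someone@example.org\n"
    "Subject: hello\n"
    "\n"
    "body text\n"
)

msg = message_from_string(raw)
received = msg.get_all("Received")

features = {
    # Number of relays the message passed through.
    "num_hops": len(received),
    # The bottom Received header is the earliest hop, i.e. the
    # machine closest to the actual sender.
    "first_relay": received[-1].split()[1],
}
```

Running this on the sample message yields `num_hops` of 2 and `first_relay` of `sender-pc` -- exactly the kind of signal an attacker finds hard to fake without controlling every relay in between.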
Also really cool was the “NIPS Elevator Process” joke paper about hungry scientists on the way to lunch (not to be confused with the Chinese restaurant process), and the party crashers at the Gatsby party. Sophie and some friends of hers simply joined the fun; the fun part was people taking her random answers about her research topic seriously 🙂 At one point I got mistaken for one of the party crashers myself, because my clothing didn’t fit in. I had actually planned on hitting a club in downtown Whistler, but didn’t get around to it in the end…